How we made one data layer serve the UI, the mocks, and the E2E tests

One data layer, three consumers

May 6, 2026
Riccardo Forina
Related topics:
APIs, Developer productivity, DevOps, Hybrid cloud
Related products:
Red Hat OpenShift

    This is the final installment of a four-part series. In part 1, we covered governance: how we made the code base AI-ready. In part 2, we covered delivery: the migration strategy. Part 3 described how we built a verification engine inside Storybook: real components, real network interception, and typed mock data.

    Storybook tests run against simulated APIs, but eventually, you will need to test against the real thing.

    This post describes the underlying architecture: a single typed data layer that serves the access management interface on Red Hat Hybrid Cloud Console at runtime, Storybook mocks during development, and a standalone CLI that seeds and cleanses real test environments. Three consumers, one source of truth.

    The problem with separate test infrastructure

    A common pattern is to treat test infrastructure as a separate codebase. The UI talks to the API using one set of types and clients. The test mocks use a different set, often inferred from documentation or copied from response examples. The end-to-end (E2E) tests create data through a third mechanism—sometimes shell scripts, a dedicated test harness, or manual setup.

    That creates three places to update when an API changes. It also means three places where a renamed field goes unnoticed until something breaks in a way that's hard to trace.

    We wanted a guarantee: When the API contract changes, you fix it in one place and every consumer updates automatically.

    One data layer, three consumers

    The API client packages are imported in exactly one place: the data layer. An ESLint rule enforces this: If any other file tries to import the SDK directly, the build fails.
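    As a sketch of how such a rule can be expressed, ESLint's built-in no-restricted-imports rule supports exactly this pattern. The package glob and paths below are illustrative guesses, not the project's actual configuration:

```javascript
// .eslintrc fragment (illustrative): forbid direct SDK imports everywhere
// except the data layer. The package pattern "@redhat-cloud-services/*" and
// the "src/data/**" path are assumptions; substitute your own SDK and layout.
module.exports = {
  rules: {
    "no-restricted-imports": ["error", {
      patterns: [{
        group: ["@redhat-cloud-services/*"],
        message: "Import API clients through the data layer, not directly.",
      }],
    }],
  },
  overrides: [
    {
      // The data layer is the one place allowed to touch the SDK.
      files: ["src/data/**"],
      rules: { "no-restricted-imports": "off" },
    },
  ],
};
```

    With this in place, a direct SDK import anywhere outside the data layer fails linting, which in turn fails the build.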

    From there, the types flow to three consumers:

    • The UI imports data-fetching hooks from the data layer. The hooks call the API clients. Components never see the raw SDK.
    • The Storybook mocks import types from the same data layer. The handler factories described in part 3 use these types to construct responses that match the actual API shape. When the SDK updates a field name, the factory breaks at compile time.
    • The CLI imports the same API client constructors. It calls the same factory functions the UI uses, just pointed at a real API instead of a browser cache.

    One change propagates to all three. No separate type definitions, copied response shapes, or drift.

    The seeder/cleaner CLI

    E2E tests that depend on pre-existing data are fragile. If someone deletes the test role from staging, the tests fail. If two continuous integration (CI) runs execute simultaneously, they corrupt each other's data. If a test creates data and then crashes before cleanup, the environment accumulates orphaned data.

    We built a CLI tool that solves all three problems. It reuses the application's own API clients to create and destroy test data against real environments.

    • Seeding works from a declarative JSON fixture. The fixture defines roles, groups, workspaces, and the relationships between them. The CLI reads the fixture, prefixes every resource name with a unique identifier (like ci-1234__), and creates the resources through the API. It then writes a name-to-UUID mapping file that the tests consume.
    • Cleanup finds every resource with a matching prefix and deletes it. Cleanup runs even if the tests fail: it's unconditional, so a test crash doesn't leave orphaned data.
    • Isolation comes from the prefix. Two CI runs with different prefixes can execute against the same environment without interfering. Each run sees only its own data. When the run finishes, cleanup removes everything with that prefix.
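    The prefixing and ownership checks can be sketched as pure functions; the ci-<id>__ format follows the example above, but the helper names are illustrative:

```typescript
// Per-run prefix, e.g. "ci-1234__". Two CI runs with different run IDs
// produce disjoint namespaces in the same shared environment.
const runPrefix = (runId: string): string => `ci-${runId}__`;

// Applied to every resource name in the fixture before creation.
const prefixed = (runId: string, name: string): string =>
  `${runPrefix(runId)}${name}`;

// Cleanup keys off the same prefix: anything that matches belongs to
// this run and is safe to delete, even after a crashed test.
const ownedByRun = (runId: string, resourceName: string): boolean =>
  resourceName.startsWith(runPrefix(runId));
```

    Cleanup then becomes "list everything, delete what ownedByRun matches," which is why it can run unconditionally without touching another run's data.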

    Because the CLI uses the same API clients as the UI, it validates the web layer by proxy. The CLI has its own test suite that exercises the same API client code the UI calls at runtime. When a CLI test fails because a required field changed or a response shape shifted, it's telling you the UI has the same problem. The CLI acts as a canary for API contract changes, and its test suite indirectly covers code paths that otherwise require a browser to test.

    One investment, multiple returns

    A traditional E2E suite tests endpoints in isolation: call the API, check the response, and move on. This verifies contracts, but it doesn't verify that the product works.

    Our E2E tests are user journeys. They authenticate as an actual user, navigate to an actual page, create a role, and assign it to a group. Every test exercises the UI, API contracts, the permission model, and the business logic in a single pass, because that is what a user does.

    This means a single suite covers multiple concerns with one investment. When a field is renamed, the seeder breaks. When a permission rule changes, the journey fails at the step that performs the access check. When a response shape shifts, the UI renders incorrectly and the assertion catches it. The failure always reflects the user experience, rather than just what an engineer thought to check.

    Architecture changes across the stack are normal in a multi-team product. Journey-based E2E tests ensure those changes are caught in staging before they reach customers. The suite is an early warning system for the entire stack: not because we built a monitoring tool, but because user journeys inherently cross team boundaries.

    What's next: From CI signal to operational monitoring

    The E2E tests already behave like health probes. They authenticate as real users, navigate real pages, and create and verify real data. The next step is plugging these tests into the observability dashboards that operations teams already monitor, so the same suite that runs in CI also monitors the deployed product. While we have not shipped this feature yet, the architecture supports it. The investment in the test infrastructure was made once, and the benefits continue to grow.

    What I'd tell you if you're building this

    Share the data layer between your application and your test tools. If your test seeder uses different types than your UI, you're maintaining two things that should be one. When they drift, you get false confidence.

    Make seeding declarative and cleanup automatic. A JSON fixture that describes what to create, a prefix for isolation, and unconditional cleanup at the end. The alternative is a staging environment that degrades until someone rebuilds it manually.
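    The "unconditional cleanup" part is ultimately a try/finally around the test run. A minimal sketch, with illustrative function names rather than the team's actual CLI:

```typescript
// Seed -> run -> cleanup lifecycle. Cleanup sits in a finally block,
// so it executes even when the test run throws: a crashed test leaves
// no orphaned data behind.
async function withSeededData(
  seed: () => Promise<void>,
  cleanup: () => Promise<void>,
  run: () => Promise<void>,
): Promise<void> {
  await seed();
  try {
    await run();
  } finally {
    await cleanup(); // unconditional
  }
}
```

    Anything less than a finally (or its equivalent in your CI pipeline) eventually leaks data into the shared environment.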

    Use E2E tests to validate your dependencies, not just your code. If your tests call actual APIs, they're already testing the backend. Treat that as a feature, not a side effect.

    Build test infrastructure that can grow. The same tests that run in CI can run against production. The same data layer that serves mocks can serve a seeder. Design for reuse from the start.

    Try Red Hat Hybrid Cloud Console at console.redhat.com.

    Learn more

    • Red Hat Hybrid Cloud Console
    • Inventory Groups are now Workspaces
    • Read part 1: Engineering an AI-ready code base: Governance lessons from the Red Hat Hybrid Cloud Console
    • Read part 2: How we rewrote a production UI without stopping it
    • Read part 3: How we turned Storybook into a behavioral verification engine
