This is the final installment of a four-part series. In part 1, we covered governance: how we made the code base AI-ready. In part 2, we covered delivery: the migration strategy. Part 3 described how we built a verification engine inside Storybook: real components, real network interception, and typed mock data.
Storybook tests run against simulated APIs, but eventually, you will need to test against the real thing.
This post describes the underlying architecture: a single typed data layer that serves the access management interface on Red Hat Hybrid Cloud Console at runtime, Storybook mocks during development, and a standalone CLI that seeds and cleanses real test environments. Three consumers, one source of truth.
The problem with separate test infrastructure
A common pattern is to treat test infrastructure as a separate codebase. The UI talks to the API using one set of types and clients. The test mocks use a different set, often inferred from documentation or copied from response examples. The end-to-end (E2E) tests create data through a third mechanism—sometimes shell scripts, a dedicated test harness, or manual setup.
That creates three places to update when an API changes. It also means three places where a renamed field goes unnoticed until something breaks in a way that's hard to trace.
We wanted a guarantee: When the API contract changes, you fix it in one place and every consumer updates automatically.
One data layer, three consumers
The API client packages are imported in exactly one place: the data layer. An ESLint rule enforces this: If any other file tries to import the SDK directly, the build fails.
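A rule like this can be expressed with ESLint's built-in no-restricted-imports. The sketch below is illustrative: the package pattern, file paths, and message are hypothetical stand-ins, not the project's actual configuration.

```javascript
// .eslintrc.js sketch: ban direct SDK imports everywhere except the data layer.
// The package name pattern below is a hypothetical example.
module.exports = {
  rules: {
    'no-restricted-imports': [
      'error',
      {
        patterns: [
          {
            group: ['@redhat-cloud-services/*-client*'],
            message: 'Import API clients only through the data layer.',
          },
        ],
      },
    ],
  },
  overrides: [
    {
      // The data layer itself is the one place allowed to import the SDK.
      files: ['src/data/**'],
      rules: { 'no-restricted-imports': 'off' },
    },
  ],
};
```

With this in place, a direct SDK import anywhere outside src/data fails lint, which is what turns the convention into a guarantee.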
From there, the types flow to three consumers:
- The UI imports data-fetching hooks from the data layer. The hooks call the API clients. Components never see the raw SDK.
- The Storybook mocks import types from the same data layer. The handler factories described in part 3 use these types to construct responses that match the actual API shape. When the SDK updates a field name, the factory breaks at compile time.
- The CLI imports the same API client constructors. It calls the same factory functions the UI uses, just pointed at a real API instead of a browser cache.
One change propagates to all three. No separate type definitions, copied response shapes, or drift.
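The shape of that shared layer can be sketched as follows. All names here (Role, RoleApi, createRoleFetcher) are illustrative, not the real SDK: the point is that the data layer is the only module that knows the client interface, and every consumer receives the same typed function.

```typescript
// Illustrative data-layer sketch: one typed module, three consumers.

interface Role {
  uuid: string;
  name: string;
}

// The client interface the data layer depends on. The UI passes the real
// SDK client, Storybook passes one backed by mocked network responses,
// and the CLI passes one pointed at a real test environment.
interface RoleApi {
  listRoles(): Promise<{ data: Role[] }>;
}

// Factory shared by all three consumers. If the SDK renames a field,
// this is the single place that fails to compile.
export function createRoleFetcher(api: RoleApi) {
  return async (): Promise<Role[]> => {
    const response = await api.listRoles();
    return response.data;
  };
}
```

Because components, mocks, and the CLI all go through functions like this one, a contract change surfaces as one compile error in one file rather than three silent drifts.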
The seeder/cleaner CLI
E2E tests that depend on pre-existing data are fragile. If someone deletes the test role from staging, the tests fail. If two continuous integration (CI) runs execute simultaneously, they corrupt each other's data. If a test creates data and then crashes before cleanup, the environment accumulates orphaned data.
We built a CLI tool that solves all three problems. It reuses the application's own API clients to create and destroy test data against real environments.
- Seeding works from a declarative JSON fixture. The fixture defines roles, groups, workspaces, and the relationships between them. The CLI reads the fixture, prefixes every resource name with a unique identifier (like ci-1234__), and creates the resources through the API. It then writes a name-to-UUID mapping file that the tests consume.
- Cleanup finds every resource with a matching prefix and deletes it. Cleanup runs even if the tests fail: it's unconditional, so a test crash doesn't leave orphaned data.
- Isolation comes from the prefix. Two CI runs with different prefixes can execute against the same environment without interfering. Each run sees only its own data. When the run finishes, cleanup removes everything with that prefix.
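The seeding step can be sketched like this. The fixture shape and the createRole callback are assumptions for illustration, not the real CLI's interfaces.

```typescript
// Illustrative seeder sketch: prefix every resource name for isolation,
// create it through the API, and record a name-to-UUID mapping.

interface Fixture {
  roles: { name: string }[];
}

export async function seed(
  fixture: Fixture,
  prefix: string,
  // Stand-in for the real API client's create call.
  createRole: (name: string) => Promise<{ uuid: string }>,
): Promise<Record<string, string>> {
  const mapping: Record<string, string> = {};
  for (const role of fixture.roles) {
    // Two CI runs with different prefixes never collide.
    const prefixed = `${prefix}__${role.name}`;
    const { uuid } = await createRole(prefixed);
    // Tests look resources up by their unprefixed fixture name.
    mapping[role.name] = uuid;
  }
  return mapping;
}
```

Cleanup is then a query for everything carrying the prefix, which is why it can run unconditionally without knowing what the tests actually created.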
Because the CLI uses the same API clients as the UI, it validates the web layer by proxy. The CLI has its own test suite that exercises the same API client code the UI calls at runtime. When a CLI test fails because a required field changed or a response shape shifted, it's telling you the UI has the same problem. The CLI acts as a canary for API contract changes, and its test suite indirectly covers code paths that otherwise require a browser to test.
One investment, multiple returns
A traditional E2E suite tests endpoints in isolation: call the API, check the response, and move on. This verifies contracts, but it doesn't verify that the product works.
Our E2E tests are user journeys. They authenticate as an actual user, navigate to an actual page, create a role, and assign it to a group. Every test exercises the UI, API contracts, the permission model, and the business logic in a single pass, because that is what a user does.
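A journey like that can be sketched as an ordered sequence of steps. The Journey interface below is a hypothetical stand-in for whatever browser driver the suite uses; the step names and the page path are illustrative, not the real test code.

```typescript
// Hedged sketch of a journey-style E2E test against a hypothetical driver.

interface Journey {
  login(user: string): Promise<void>;
  goto(path: string): Promise<void>;
  createRole(name: string): Promise<void>;
  assignRoleToGroup(role: string, group: string): Promise<void>;
  visibleRoles(): Promise<string[]>;
}

export async function roleAssignmentJourney(j: Journey, prefix: string): Promise<void> {
  const roleName = `${prefix}__e2e-role`;
  await j.login('e2e-user'); // exercises authentication and the permission model
  await j.goto('/iam/user-access/roles'); // exercises routing and the UI (path is illustrative)
  await j.createRole(roleName); // exercises the API contract
  await j.assignRoleToGroup(roleName, `${prefix}__e2e-group`); // exercises business logic
  const roles = await j.visibleRoles();
  if (!roles.includes(roleName)) {
    throw new Error(`role ${roleName} not visible after creation`);
  }
}
```

A failure at any step points to the layer that broke, but every run covers all of them.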
This means a single suite covers multiple concerns with one investment. When a field is renamed, the seeder breaks. When a permission rule changes, the journey fails at the access-check step. When a response shape shifts, the UI renders incorrectly and the assertion catches it. The failure always reflects the user experience, rather than just what an engineer thought to check.
Architecture changes across the stack are normal in a multi-team product. Journey-based E2E tests ensure those changes are caught in staging before they reach customers. The suite is an early warning system for the entire stack: not because we built a monitoring tool, but because user journeys inherently cross team boundaries.
What's next: From CI signal to operational monitoring
The E2E tests already behave like health probes. They authenticate as real users, navigate real pages, create and verify real data. The next step is plugging these tests into the observability dashboards that operations teams already monitor. This ensures the same suite that runs in CI also monitors the deployed product. While we have not shipped this feature yet, the architecture supports it. The investment in the test infrastructure was made one time, and the benefits continue to grow.
What I'd tell you if you're building this
Share the data layer between your application and your test tools. If your test seeder uses different types than your UI, you're maintaining two things that should be one. When they drift, you get false confidence.
Make seeding declarative and cleanup automatic. A JSON fixture that describes what to create, a prefix for isolation, and unconditional cleanup at the end. The alternative is a staging environment that degrades until someone rebuilds it manually.
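The unconditional-cleanup part reduces to a try/finally around the test phase. This is a minimal sketch with hypothetical function names, assuming resources can be listed and deleted by name.

```typescript
// Sketch: run the tests, then delete every resource carrying this run's
// prefix, even when the test phase throws.

export async function runWithCleanup(
  prefix: string,
  listNames: () => Promise<string[]>,
  deleteByName: (name: string) => Promise<void>,
  testPhase: () => Promise<void>,
): Promise<void> {
  try {
    await testPhase();
  } finally {
    // Cleanup is unconditional: a crash in testPhase still reaches here,
    // so the environment never accumulates orphaned data.
    const names = await listNames();
    for (const name of names.filter((n) => n.startsWith(prefix))) {
      await deleteByName(name);
    }
  }
}
```

Because cleanup selects by prefix rather than by remembering what was created, it also removes resources left behind by an earlier crashed run with the same prefix.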
Use E2E tests to validate your dependencies, not just your code. If your tests call actual APIs, they're already testing the backend. Treat that as a feature, not a side effect.
Build test infrastructure that can grow. The same tests that run in CI can run against production. The same data layer that serves mocks can serve a seeder. Design for reuse from the start.
Try Red Hat Hybrid Cloud Console at console.redhat.com.
Learn more
- Red Hat Hybrid Cloud Console
- Inventory Groups are now Workspaces
- Read part 1: Engineering an AI-ready code base: Governance lessons from the Red Hat Hybrid Cloud Console
- Read part 2: How we rewrote a production UI without stopping it
- Read part 3: How we turned Storybook into a behavioral verification engine