
Harness engineering: Structured workflows for AI-assisted development

Getting consistent results from AI-assisted coding starts before the first line of code

April 7, 2026
Marco Rizzi
Related topics:
Artificial intelligence, Developer productivity
Related products:
Red Hat AI

    Building AI development workflows taught me a key lesson: the AI writes better code when you design the environment it works in, a practice some are calling harness engineering. The secret is structured context rather than free-form tickets. This is the journey that led me to this approach and the two techniques that worked.

    The problem: Vague in, vague out

    I was working on a multi-repository project that included a Rust backend, a TypeScript frontend, Helm charts, and performance tests. I wanted to use AI to accelerate the software development lifecycle (SDLC). The first attempts went like this: paste a Jira feature description into an AI coding tool, tell it to "implement this," and hope for the best.

    The results were unpredictable. Sometimes the AI would succeed. Other times it would hallucinate file paths, invent APIs that didn't exist, or modify the wrong module entirely. The failure mode was always the same: the AI was guessing about the code base instead of looking at it.

    The fundamental issue wasn't the AI's coding ability. Instead, it was the input I provided. A free-form Jira ticket like "add CSV export for SBOMs" leaves the solution space wide open. Which service handles software bill of materials (SBOMs)? What's the existing export pattern? Where do the tests go? Without answers to these questions, the AI fills in the blanks with suboptimal solutions.

    The fix: A two-phase workflow

    I split the problem into two distinct phases, planning and implementation, each with its own constraints. Constraining the solution space at each phase made the output more consistent.

    Phase 1: The repository impact map

    Before any tasks are created, I have the AI scan the actual code base and produce a repository impact map grounded in the repository as the single source of truth.

    The AI uses Language Server Protocol (LSP) and Model Context Protocol (MCP) servers to inspect the repository structure: it finds symbols, traces references, and searches for patterns. Then it produces a map like this:

    trustify (backend):
      changes:
        - Add CSV serialization for SBOM query results
        - Add GET /api/v2/sbom/export endpoint
    
    trustify-ui (frontend):
      changes:
        - Add "Export CSV" button to SBOM list toolbar

    The important step: A human reviews this map before anything else happens. If the AI picked the wrong module, missed a repository, or invented a nonexistent endpoint pattern, I catch it here to prevent bad tasks and poor code.

    This checkpoint catches a common class of errors early: structurally incorrect plans that produce code that compiles but solves the wrong problem.
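    The review gate is easier to enforce when the map itself is machine-checkable. Below is a minimal sketch in Python (the helper names are hypothetical, and the article's own stack is Rust and TypeScript; Python is used here purely for illustration) that parses a map in the format above and flags structural problems before a human ever reviews it:

```python
from dataclasses import dataclass, field

@dataclass
class RepoImpact:
    """One repository's entry in the impact map."""
    name: str
    changes: list[str] = field(default_factory=list)

def parse_impact_map(text: str) -> list[RepoImpact]:
    """Parse the indentation-based impact map into structured records."""
    repos: list[RepoImpact] = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped == "changes:":
            continue
        if stripped.startswith("- "):
            if not repos:
                raise ValueError("change listed before any repository")
            repos[-1].changes.append(stripped[2:])
        elif stripped.endswith(":"):
            repos.append(RepoImpact(name=stripped[:-1]))
    return repos

def validate(repos: list[RepoImpact], known_repos: set[str]) -> list[str]:
    """Return human-readable problems; an empty list means the map passes the gate."""
    problems: list[str] = []
    for repo in repos:
        base = repo.name.split(" ")[0]  # drop the "(backend)" annotation
        if base not in known_repos:
            problems.append(f"unknown repository: {base}")
        if not repo.changes:
            problems.append(f"{base}: no changes listed")
    return problems
```

    Running a check like this before the review means the human checkpoint can focus on whether the plan is right, not on whether it is well-formed.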

    Phase 2: The structured task template

    Once the impact map is approved, each unit of work becomes a Jira task that follows a strict template:

    ## Repository
    trustify
    
    ## Description
    Add a CSV export endpoint for SBOM query results.
    
    ## Files to Modify
    - `modules/sbom/src/service.rs`—add CSV serialization method
    - `modules/sbom/src/endpoints.rs`—add GET handler
    
    ## Implementation Notes
    Follow the existing JSON export pattern in `SbomService::export_json()`.
    Reuse the `QueryResult` type from `modules/sbom/src/model.rs`.
    
    ## Acceptance Criteria
    - [ ] GET /api/v2/sbom/export?format=csv returns valid CSV
    - [ ] Existing JSON export still works
    
    ## Test Requirements
    - [ ] Integration test in `modules/sbom/tests/` following existing test patterns

    Every field is intentional:

    • Repository scopes the AI to a single repository to avoid cross-repo confusion.
    • Description summarizes what this task refers to.
    • Files to Modify lists real paths found during the analysis phase, not guesses.
    • Implementation Notes reference actual symbol names and existing patterns. When the AI reads "Follow the existing JSON export pattern in `SbomService::export_json()`", it can look up that function and mimic its structure.
    • Acceptance Criteria give the AI a concrete checklist to verify against.
    • Test Requirements specifies the test coverage required for this task.

    This template is the contract between planning and implementation. The AI implementing the task doesn't need to make architectural decisions. Those constraints are established in the impact map.
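    Because the template is strict, conformance can be linted automatically before a task ever reaches the implementing AI. Here is an illustrative, stdlib-only sketch (the function names are my own, not part of any tool the article describes) that checks the required sections and that Files to Modify actually lists backticked paths:

```python
import re

REQUIRED_SECTIONS = [
    "Repository", "Description", "Files to Modify",
    "Implementation Notes", "Acceptance Criteria", "Test Requirements",
]

def split_sections(task_body: str) -> dict[str, str]:
    """Split a task description on '## ' headings into {section: content}."""
    sections: dict[str, str] = {}
    current = None
    for line in task_body.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

def lint_task(task_body: str) -> list[str]:
    """Report structural problems before the task is handed to the implementing AI."""
    sections = split_sections(task_body)
    problems = [f"missing section: {name}"
                for name in REQUIRED_SECTIONS if name not in sections]
    # Paths must be concrete, backticked file references, not prose guesses.
    files = sections.get("Files to Modify", "")
    if not re.search(r"`[^`]+`", files):
        problems.append("Files to Modify lists no backticked paths")
    return problems
```

    A linter like this turns the template from a convention into a contract that the tooling can enforce.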

    Why this works

    The underlying principle is simple: The more you constrain the solution space, the more predictable the output becomes.

    When the AI plans against real code, it produces grounded plans. When it implements against a structured spec with real file paths and symbol names, it produces targeted code changes.

    I iterated on this, tightening the feedback loop each time. Early versions of the planning skill would dump a wall of analysis without a review checkpoint. As a result, the AI would rush from "here's what I found" straight to creating tasks. Adding the explicit pause for human review of the impact map was a small change that reduced errors noticeably.

    Similarly, the task template evolved. The first version had a generic Notes field. Renaming it to Implementation Notes and requiring references to actual code symbols changed the results: vague descriptions in that field signaled that the planning phase wasn't grounded in real analysis.

    The takeaway

    If you're building AI-assisted development workflows, don't start by writing code. Start with harness engineering by designing the environment:

    • Make the AI look at real code before planning. A repository impact map built from symbol analysis grounds every downstream task in real code.
    • Give the AI structured constraints to implement against. File paths, symbol names, existing patterns to follow: the more specific the input, the more consistent the output.
    • Put a human review checkpoint between planning and implementation. Catching a wrong assumption in a three-line impact map costs far less than catching it in a pull request.
    • Invest in the quality of your Jira features. A well-written feature description with clear scope, concrete acceptance criteria, and relevant context directly improves the plans and code the AI produces. The same structure in, structure out principle applies to the requirements themselves.
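    The handoff from an approved impact map to structured tasks can itself be automated. As a hedged sketch, assuming the map has been reduced to a simple repo-to-changes mapping (a simplification of the format shown earlier), each change becomes one task skeleton for a human to complete:

```python
# Skeleton mirroring the task template from the article; placeholder
# bullets mark the fields a human (or a later AI pass) must fill in.
TASK_TEMPLATE = """## Repository
{repo}

## Description
{change}

## Files to Modify
- (fill in real paths from symbol analysis)

## Implementation Notes
- (reference an existing pattern by symbol name)

## Acceptance Criteria
- [ ] (observable behavior to verify)

## Test Requirements
- [ ] (test location, following existing test patterns)
"""

def tasks_from_map(impact_map: dict[str, list[str]]) -> list[str]:
    """One structured task skeleton per change in the approved impact map."""
    return [TASK_TEMPLATE.format(repo=repo, change=change)
            for repo, changes in impact_map.items()
            for change in changes]
```

    Generating the skeletons mechanically keeps every task inside the contract; only the repository-specific details require judgment.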

    The shift from prompt engineering to harness engineering marks a change in how we treat AI in the development cycle. When you stop treating the AI as a magic box and start treating it as a component within a structured environment, your results become predictable and reproducible. By grounding the AI in your code base and providing clear implementation constraints, you turn a flighty assistant into a reliable contributor.

    Structure in, structure out.

    Next steps

    • Make the repository the single source of truth, including conventions. Move style guides, naming rules, and architectural decisions into the repo itself (for example, CLAUDE.md or CONVENTIONS.md). When the AI reads the code base, it picks up the rules automatically—no copy-pasting into prompts.
    • Treat the harness as software you maintain. Skills, prompts, and MCP configurations are code. Version them, review them in PRs, and refactor them when they drift. A stale prompt rots just like a stale test.
    • Close the feedback loop with the agent. When a generated plan or PR is wrong, trace the mistake back to the input: a missing constraint, a vague acceptance criterion, a symbol the AI couldn't see. Fix the harness, not just the output.
    • Expand the agent's toolbox through MCP. Each new MCP integration (such as continuous integration (CI) status, deployment logs, and runtime metrics) gives the AI another source of real data.
