
Vibes, specs, skills, and agents: The four pillars of AI coding

March 30, 2026
Rich Naszcyniec
Related topics:
Artificial intelligence, Automation and management, Developer productivity, DevOps
Related products:
Red Hat AI Inference Server, Red Hat AI, Red Hat OpenShift AI

    Picture this: I'm sitting at my desk, the glow of the monitor illuminating a fresh project. I type a quick "vibe" into my AI-powered editor: "Build me a clean, reactive dashboard for monitoring cluster health." Within seconds, code begins to pour onto the screen. It is a thrilling moment: for a heartbeat, it feels like the AI has read my mind. 

    But as I look closer, the excitement fades. The authentication is missing. The data-fetching logic is using a library I explicitly deprecated last month. The vibe was right, but the execution was off-key.

    I call this the encoding/decoding gap: that frustrating space between what I intend (my internal encoding) and what the AI produces (its decoding of my intent). When the gap is narrow, the delivery is smooth. When it is wide, I spend more time fixing AI errors than I would have spent writing the code myself.

    From prompts to orchestration

    In my previous article, How spec-driven development improves AI coding quality, I showed how moving from pure vibe coding to structured specifications dramatically improves AI output. In that piece, I introduced the spec-driven framework as a way to ground AI in reality. In this follow-up post, we're moving beyond writing better prompts to a complete four-pillar orchestration system: vibes, specs, skills, and agents.

    In my experience, many developers remain stuck in a binary mindset. They either rely entirely on the vibe—an intuitive, conversational flow—or they attempt to write monolithic, rigid requirements that the AI eventually ignores. 

    Why settle for inconsistent results? I believe the future of software engineering lies in the deliberate balance of these four elements.

    Scenario in practice

    Picture Jennifer, a lead developer at a growing fintech startup. She spent her first week with a new AI coding agent feeling like she was vibing perfectly, until a major bug revealed the agent had used the wrong encryption standard because she hadn't specified one. Jennifer realized that while her vibe was high-velocity, her spec was non-existent. By shifting to a spec-driven approach, she didn't just fix the bug. She created a repeatable pattern that her whole team could follow.

    The four-pillar system

    Not all developers are aware of vibes, specs, skills, and agents. Even among those who are, competing definitions and levels of granularity can create confusion. I have seen teams struggle because they confuse a skill with a spec, or they try to use an agent without giving it the necessary specs to succeed. I now structure my projects by treating these as distinct but interconnected layers.

    The core philosophy I want to share with you is this:

    • Use vibes for intuitive exploration and idea-sharing.
    • Use specs for precise, authoritative instructions.
    • Use skills to add specific capabilities to an agent.
    • Use agents as the interactive, autonomous engines that translate your specs into your code.

    I don't want to disparage vibe coding. It is a fantastic starting point for prototyping. But as I argued in my previous article, the spec is the anchor. It minimizes the gaps. When I achieve harmony between all four pillars, I don't just get code that works. I get code that feels like I wrote it myself, running on reliable infrastructure like Red Hat OpenShift.

    Defining the four pillars

    Before I dive into the mechanics, let me define what I mean by these four terms. In my daily workflow, I treat them as a stack.

    • Vibes: Conversational, intuitive interaction. For example, "I'm thinking of a tool that does X, what do you think?" It's high-level, flexible, and essential for the "interacting" phase of development.
    • Specs (specifications): The what and the how. They are static, human-readable, and AI-effective blueprints. I never build a feature without a spec.
    • Skills: Modular "how-to" packages. If a spec says what to do, a skill provides the capability to do it (for example, how to deploy to OpenShift or how to run a security audit).
    • Agents: The software entities (such as Cursor, Claude Code, or custom bots on Red Hat OpenShift AI) that use specs and skills to actually perform the work.

    Agent types in practice

    I've found that the agent pillar is where most of the confusion lives today. Not all agents are created equal. I generally categorize them into three types. Choose your tool based on the task at hand.

    | Agent type | Description | Common tools and platforms | Best for | Choose this type when |
    | --- | --- | --- | --- | --- |
    | Interactive / chat agents | Conversational; doesn't usually see the whole repository unless told. | Cursor Composer, Claude Code, Gemini | Rapid iteration, "what-if" scenarios. | You want fast feedback in your editor. |
    | IDE-integrated agents | Lives in the editor; reads/writes files; runs local terminal commands. | Cursor Agent, Claude Code, Kiro autonomous mode | Feature implementation, refactoring. | You are working inside a codebase and need deep context. |
    | Autonomous agents | Self-running loops; plans, acts, observes, and replans. | AutoGPT, Claude Code (non-interactive), custom agents on Red Hat OpenShift AI | Complex migrations, end-to-end feature builds. | You have a solid spec and want hands-off generation. |

    I often start with an interactive agent to explore an idea, then move to an IDE-integrated agent once I've codified that idea into a spec. For mature deployments or repetitive tasks, I might hand the spec to an autonomous agent running on Red Hat OpenShift AI.

    Scenario in practice

    David is a DevOps engineer tasked with migrating 50 microservices to a new logging standard. He tried doing it manually with a chat AI, but it was too slow. I suggested he create a logging migration spec and a regex refactoring skill. He fed these into an autonomous agent. Instead of spending weeks on the migration, the agent finished most of the work in an afternoon. David only had to review the final pull requests.

    Bridging communication gaps: The what/how split

    In my previous article, I introduced a concept that has since become my North Star: the what/how split. This split is essential. If you want 95% accuracy or more from an AI, you must separate the goal from the implementation details.

    The what includes the vision, functional requirements, user story, and success criteria. It answers the questions: What are we building, and why?

    The how covers constraints, project structure, security standards, concurrency models, and testing requirements. It defines how to build the software within a specific environment.

    I have observed that when I mix these two in a single prompt, the AI loses focus. It might nail the what but ignore the how, or vice versa. By modularizing my specs, I create a cleaner encoding for the AI to decode.

    Avoid the monolith with modular specs

    I am a firm believer that the days of the 100-page PDF specification are over. I don't want to see them, and the AI doesn't want to read them. 

    Instead, I recommend creating multiple, small, focused Markdown files in a specs directory:

    • what-vision.md: High-level goals.
    • how-security.md: Authentication, secrets handling, and OpenID Connect (OIDC) requirements.
    • how-observability.md: Guidance on using OpenTelemetry and where to send logs.
    • how-standards.md: Corporate standards for code and architecture.
    • how-testing.md: Coverage targets and mocking strategies.

    This modularity allows me to load only the relevant context for a specific task. If I'm building a UI component, I don't need to feed the how-concurrency.md spec to the AI.
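    Loading only the relevant context can itself be automated. The sketch below is a hypothetical helper (the `select_specs` function and its topic-matching rule are my own illustration, not part of any agent framework) that follows the what-/how- naming convention above: every "what" spec always rides along, while a "how" spec is included only when it matches the task at hand.

    ```python
    from pathlib import Path

    def select_specs(spec_dir: Path, topics: list[str]) -> list[Path]:
        """Return all what-* specs plus any how-* spec matching a topic.

        Hypothetical sketch: assumes specs follow the what-/how- filename
        convention described in this article (e.g. how-security.md).
        """
        selected = []
        for spec in sorted(spec_dir.glob("*.md")):
            name = spec.stem  # e.g. "how-security"
            if name.startswith("what-"):
                selected.append(spec)  # the "what" always loads
            elif any(topic in name for topic in topics):
                selected.append(spec)  # only the relevant "how" specs load
        return selected
    ```

    Building a UI component would then call `select_specs(Path("specs"), ["standards"])` and never touch how-concurrency.md.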

    Move beyond prompts with skills

    This is the newest pillar in my workflow. While a spec tells the AI what the end state should look like, a skill gives the AI the procedural knowledge to get there.

    I follow the official standards defined at agentskills.io. A skill is not just a prompt; it is a directory containing a SKILL.md file. This file must contain YAML frontmatter followed by Markdown instructions.

    The mechanical depth of skills

    Here is how I structure a skill. Let's say I want my agent to know how to deploy applications specifically to Red Hat OpenShift using the oc command-line interface (CLI).

    The following example shows the directory structure for a skill:

    .claude/skills/deploy-openshift/
    ├── SKILL.md
    ├── deploy-template.yaml
    └── validate-route.sh

    The following shows the contents of SKILL.md:

    ---
    name: deploy-to-openshift
    description: Procedures for deploying containerized apps to Red Hat OpenShift using the 'oc' CLI and standard templates.
    version: 1.0.0
    tags: [devops, redhat, openshift, deployment]
    ---
     
    # Deploy to Red Hat OpenShift Skill
     
    ## Context
    Use this skill when the user or a spec requires deploying a service to a Red Hat OpenShift cluster. 
     
    ## Instructions
    1.  **Authentication**: Check if the agent is logged in using `oc whoami`. If not, prompt the user for a login command.
    2.  **Project Selection**: Ensure the correct project (namespace) is set using `oc project <name>`.
    3.  **Apply Templates**: Use the `deploy-template.yaml` found in this skill directory.
    4.  **Validation**: After deployment, run `validate-route.sh` to ensure the application is accessible.
     
    ## Constraints
    - Never delete an existing namespace unless explicitly instructed.
    - Always use image streams instead of raw Docker images where possible.

    How agents consume skills

    Skills are consumed exclusively by agents. You, the human, do not run a skill. You equip your agent with it. In my workflow, this happens in two ways:

    • User-requested (during the interacting phase): While chatting with Cursor or Claude Code, I might say, "Load my deploy-to-openshift skill and use it to push this current build."
    • Autonomous (during the instructing phase): When I give an autonomous agent a spec that mentions deployment, the agent scans its skills directory, discovers deploy-to-openshift via the YAML frontmatter, and loads it automatically.
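    The autonomous discovery step can be sketched in a few lines. This is a minimal illustration, not any agent's real implementation: `discover_skills` is a hypothetical function, and the hand-rolled parser handles only flat `key: value` YAML frontmatter like the SKILL.md example above.

    ```python
    from pathlib import Path

    def discover_skills(skills_dir: Path) -> dict[str, dict[str, str]]:
        """Index skills by name from the YAML frontmatter of each SKILL.md.

        Hypothetical sketch: scans <skills_dir>/*/SKILL.md, reads the
        frontmatter block between the two '---' lines, and keys the
        result on the skill's declared name.
        """
        skills = {}
        for skill_md in skills_dir.glob("*/SKILL.md"):
            lines = skill_md.read_text().splitlines()
            if not lines or lines[0].strip() != "---":
                continue  # no frontmatter; not a valid skill
            meta = {}
            for line in lines[1:]:
                if line.strip() == "---":
                    break  # end of frontmatter
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
            if "name" in meta:
                skills[meta["name"]] = meta
        return skills
    ```

    An agent pointed at `.claude/skills/` would find `deploy-to-openshift` this way and load its instructions only when a spec mentions deployment.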

    Scenario in practice

    Suppose James, a senior developer, is working on a complex multicloud project. He created a library of skills for various cloud providers. When he switches the project target from a generic Kubernetes provider to Red Hat OpenShift, he doesn't have to rewrite his prompts. He simply points his agent to the OpenShift skill folder. The agent "learns" the new deployment nuances instantly. This ensures the how of the deployment matches the infrastructure perfectly.

    Instructing vs. interacting: The two modes of AI engagement

    I have realized that I operate in two distinct modes when working with AI. Knowing which mode I am in is essential for staying efficient.

    Interacting (the vibe/coaching phase)

    In this phase, I explore and use the AI to refine my ideas. I use the AI as a coach to improve my specs. Tools like Claude Code now offer a plan mode, which is helpful for this type of interaction.

    I recommend asking the AI to score your specifications. For example: "On a scale of 0-10, how effective would an autonomous agent be at generating this feature based only on this what-vision.md file? What is missing?" This creates a coaching loop where the AI helps me encode my intent more clearly.

    Instructing (the execution phase)

    Once the spec is a 9 or a 10, I switch to instructing mode. I stop chatting and start commanding. I point the agent at the specs and say, "Execute."

    During this phase, I expect the agent to:

    • Read the relevant modular specs.
    • Discover and apply the necessary skills.
    • Generate code that meets the what and adheres to the how.
    • Run tests and fix errors autonomously.

    How different agents execute specs

    Different agents require different levels of guidance when it comes to specs.

    Interactive and chat agents such as Cursor Composer, Claude Code, and Windsurf do not automatically load your specs folder. I have found that I must explicitly tell them to use these resources. For example, I might prompt the agent: "Referencing all files in the specs directory, especially how-security.md, please implement the login controller."

    Similarly, IDE-integrated agents offer better file context but still require explicit prompting. This ensures they don't default to pre-trained behaviors instead of following your project-specific "how" specifications.

    Autonomous agents, such as custom builds on Red Hat OpenShift AI, are the most effective for spec execution. I can configure these agents to automatically ingest every file in the specs and skills directories at startup. They operate with a spec-first mentality, which minimizes the need for repetitive instructions.

    Scenario in practice

    Imagine Rana, an AI architect. She built a custom autonomous agent hosted on Red Hat OpenShift AI using the Red Hat AI Inference Server. Because she runs the agent on her own infrastructure, she could hard-code the spec discovery logic. Every time the agent starts a new task, it first reads the specs directory to understand the project's DNA. Rana found that this reduced code reviews by 40% because the AI stopped making generic mistakes.

    Benefits of spec co-evolution

    One of the most exciting aspects of this four-pillar system is what I call spec co-evolution. I don't view specs as static documents like the waterfall models of old. I want my specs to evolve with the project.

    I accomplish this through a lessons learned coaching loop. I instruct my agents to maintain a LessonsLearned.md file. When an agent encounters a bug, a build failure, or a correction, I instruct it to identify the cause, record the fix in LessonsLearned.md, and suggest an update to the relevant how-xxx.md specification.

    This process turns every off-key moment into a permanent improvement in my encoding. The AI isn't just writing code; it helps me write better instructions for the future.
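    The recording half of that loop is simple enough to sketch. The `record_lesson` helper below is my own hypothetical illustration of what an agent's instruction might boil down to; it appends a dated entry in the same "Ref YYYY-MM-DD" style the spec excerpt below uses.

    ```python
    from datetime import date
    from pathlib import Path

    def record_lesson(log_file: Path, cause: str, fix: str) -> str:
        """Append a dated lessons-learned entry and return it.

        Hypothetical sketch: creates LessonsLearned.md on first use,
        then appends one bullet per correction the agent makes.
        """
        entry = f"- *Ref {date.today().isoformat()}*: {cause} Fix: {fix}"
        existing = log_file.read_text() if log_file.exists() else "# Lessons Learned\n"
        log_file.write_text(existing.rstrip() + "\n" + entry + "\n")
        return entry
    ```

    The second half of the loop, suggesting an update to the relevant how-xxx.md spec, stays a human-reviewed step in my workflow.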

    Sample: how-security.md (excerpt)

    To give you a concrete example, here is a snippet of a "how" spec I might use in a real project.

    # HOW: Security and Authentication
     
    ## Standards
    - All API endpoints MUST be protected by JWT validation via the Red Hat build of Keycloak.
    - Secrets MUST NEVER be hardcoded; use environment variables mapped to OpenShift Secrets.
     
    ## Implementation Patterns
    - Use the `@Authorized` decorator on all controller methods.
    - Pass the `X-Correlation-ID` header to all downstream microservices.
     
    ## Lessons Learned (Auto-Updated)
    - *Ref 2026-03-01*: Ensure the JWT 'aud' claim is checked against the service-id to prevent token reuse across namespaces.

    Red Hat ecosystem integration: The infrastructure enabler

    While the vibes might happen in your local IDE, the agents and code need a home. This is where the Red Hat OpenShift family of products is essential.

    In my experience, enterprise AI workflows require more than just a laptop. They need a platform that provides specific capabilities.

    I use Red Hat OpenShift AI to train, serve, and orchestrate autonomous agents. It provides lifecycle management for the models that power the vibes.

    Red Hat AI Inference Server is based on vLLM and allows for high-performance, local inference. This is critical when I'm working with sensitive specs that cannot leave my corporate network.

    Using Red Hat OpenShift as the infrastructure enabler helps ensure that the balance between vibes and specs isn't broken by environment inconsistencies. The agent that writes the code in an IDE uses the same specs and skills as the agent running an autonomous migration on OpenShift AI.

    Scenario in practice

    Consider Michael, a DevOps engineer at a global bank. He used to struggle with "works on my machine" AI issues. By deploying a shared agent platform on Red Hat OpenShift AI, he ensured that every agent across the global team read from the same how-security.md master spec hosted in a central Git repository. This resulted in a 100% compliance rate for security standards across three continents.

    When to use which: A quick reference

    I know this is a lot to digest. Refer to the following table to help you decide where to focus.

    | Situation | Primary pillar | Engagement mode | Outcome |
    | --- | --- | --- | --- |
    | Exploring a new library | Vibes | Interacting | Understanding and vibe check |
    | Defining API boundaries | Specs (what) | Instructing | Functional blueprint |
    | Enforcing authentication patterns | Specs (how) | Instructing | Guardrails and consistency |
    | Repeating a deployment | Skills | Agent autonomy | Reliable, reusable execution |
    | Refining an existing spec | AI coaching | Interacting | Higher-fidelity encoding |
    | Scaling code generation | Agents | Instructing | High-volume, accurate code |

    Evolving to shared specs and skills as team assets

    I believe the ultimate goal is for specs and skills to become team assets. Much like how we share utility libraries today, I see a future where we share skill directories.

    Imagine joining a new team and receiving access to a skills folder instead of a week of onboarding documentation. Your local AI agent instantly knows how the team handles database migrations, how they style their React components, and how they deploy to OpenShift.

    The encoding of the team's collective knowledge is available for your agent to "decode" into code from day one. That is the power of the four-pillar system.

    Scenario in practice

    Consider Lisa, a lead architect who started a skill registry for her department. She realized that five teams were prompting their AI agents separately on how to use the company's internal API. She created a single internal-api-skill and distributed it. Code quality improved, and junior developers learned company best practices by watching the AI apply the shared skill.

    The strategic value: Productivity, quality, and scaling

    Why go through all this effort? Why not just keep vibing? The strategic payoff is three-fold:

    • Productivity: I spend less time fixing AI and more time reviewing it. The high-fidelity output from a spec-driven agent is better than the guesswork of a vibe-only agent.
    • Quality: My code is more traceable. If a bug occurs, I can check the spec to see if my instruction was wrong or if the agent execution failed. This accountability is impossible with only vibes.
    • Scaling: One developer can only vibe so much. A team can share a modular library of specs and skills, allowing everyone to operate at the level of the team's best architect.

    Find your balance

    The transition from vibe coding to spec-driven orchestration is not about working harder. It is about working with more intent.

    Building directly on the spec-driven foundation I outlined in my previous blog, I didn't just prompt harder. I orchestrated the four-pillar system of vibes, specs, skills, and agents. I moved from asking an AI for code to directing a system of intelligence.

    There is satisfaction in watching an autonomous agent deliver a feature that is 100% compliant with your security specs, uses your custom deployment skills, and runs perfectly on Red Hat OpenShift while you focus on the next big architectural challenge. This balance is available to every developer willing to start today.

    Start by taking your most frequent vibe and turning it into a modular spec. Then, take a repetitive task and package it as a skill. Experiment. See how narrowing that encoding/decoding gap changes your relationship with your AI tools.

    The next time your agent delivers code that feels like you wrote it yourself, you'll know exactly what I mean.

    Research references and further reading:

    • How spec-driven development improves AI coding quality: My 2025 article.
    • AgentSkills.io: The official standard for skill directory structures.
    • Red Hat OpenShift AI documentation: Infrastructure for hosting and serving agents.
    • The vibe coding concept popularized by Andrej Karpathy.
