Note
PatchPatrol is a community-driven open source project and not supported by Red Hat.
Enterprise development teams deploying applications on Red Hat OpenShift face a unique set of challenges. Container security vulnerabilities can cascade across cluster nodes, affecting multiple applications simultaneously. When your applications run in production environments that serve millions of users, a single security flaw or performance regression can significantly affect downstream systems.
The enterprise code review challenge
Traditional manual code reviews struggle to scale with the complexity of modern cloud-native applications. Security teams often discover vulnerabilities weeks after deployment, when fixing them costs 30 times more than catching issues during development. Meanwhile, platform teams struggle to enforce consistent security policies across dozens of microservices managed by different teams.
Enterprise developers working with Red Hat's ecosystem need security scanning that understands both application-level vulnerabilities and container-specific risks. You need code quality checks that consider how running in containerized environments affects performance. Most importantly, you need this analysis integrated directly into your existing CI/CD pipelines without disrupting workflows.
PatchPatrol helps to address these enterprise challenges by bringing AI-powered security and quality analysis directly into the commit process. This ensures every code change is evaluated against enterprise security standards before it reaches your OpenShift clusters.
Dual-mode intelligence: Code quality and security
PatchPatrol operates in two specialized modes. Each mode is powered by AI prompts and domain knowledge tailored for enterprise development workflows.
Code quality mode serves as your default analysis engine. It examines code structure and naming conventions while following established best practices. The system reviews test coverage and documentation completeness, helping teams maintain the high standards expected in enterprise environments. Beyond surface-level checks, it identifies potential bugs and performance issues that could affect application reliability. It also helps maintain consistency with project standards and provides constructive feedback to help developers.
Security mode focuses on threat detection and vulnerability prevention. The system performs OWASP Top 10 2021 vulnerability detection, scanning for hardcoded secrets and credentials that could compromise your applications in production. It identifies injection vulnerabilities including SQL injection, XSS, and command injection attacks while reviewing authentication and authorization logic for potential weaknesses. Each finding is mapped to relevant CWE categories and compliance frameworks, providing guidance that security teams can use to fix issues immediately.
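To make the categories above concrete, here is a minimal, self-contained sketch of the kind of patterns security mode is designed to flag: a hardcoded credential (CWE-798) and SQL built by string concatenation (CWE-89), alongside the parameterized remediation. The function names and code are illustrative only, not taken from PatchPatrol itself.

```python
import sqlite3

API_KEY = "sk-live-abc123"  # would be flagged: hardcoded secret (CWE-798)

def find_user_unsafe(conn, name):
    # Would be flagged: user input concatenated into SQL (CWE-89)
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Remediation: parameterized query; the driver escapes the value
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload returns every row through the unsafe
# path, but no rows through the parameterized one.
payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 1 (all rows leak)
print(len(find_user_safe(conn, payload)))    # 0
```

Catching the unsafe variant at commit time, before it reaches a cluster, is exactly the gap this mode is meant to close.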
Flexible backend architecture
Unlike monolithic AI tools, PatchPatrol provides multiple backend options so teams can choose the approach that fits their security requirements and performance needs.
Local models prioritize privacy and data sovereignty, essential for enterprise teams handling sensitive code. The ONNX backend provides high-accuracy models with custom fine-tuning. The llama.cpp backend runs code-optimized models like Granite 3B/8B and CodeLlama directly on your infrastructure. With zero network calls, your code never leaves your environment. The built-in model registry handles automatic downloads and updates, eliminating the complexity of manual model management.
Cloud models optimize for performance and convenience when data privacy requirements permit external API calls. Gemini API integration provides access to Google's latest AI capabilities with instant analysis that requires no local hardware investment. These cloud-based models stay updated with the latest improvements and can scale to handle teams of any size without infrastructure planning.
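The local-versus-cloud trade-off can be pictured as a simple backend registry that dispatches a diff to whichever engine a team selects. This is an illustrative sketch of the general pattern only; the names and functions below are hypothetical and do not reflect PatchPatrol's actual internals.

```python
from typing import Callable, Dict

def review_with_llama(diff: str) -> str:
    # Local backend: analysis happens on your own hardware,
    # so the diff never leaves the machine.
    return f"[local llama.cpp] reviewed {len(diff)} chars, zero network calls"

def review_with_gemini(diff: str) -> str:
    # Cloud backend: faster setup, but the diff is sent to an external API.
    return f"[cloud gemini] reviewed {len(diff)} chars via external API"

BACKENDS: Dict[str, Callable[[str], str]] = {
    "llama": review_with_llama,    # privacy-first default
    "gemini": review_with_gemini,  # opt-in when policy permits
}

def review(diff: str, backend: str = "llama") -> str:
    # Teams pick the backend that matches their data-sovereignty rules.
    return BACKENDS[backend](diff)

print(review("--- a/app.py\n+++ b/app.py", backend="llama"))
```

The design choice matters for compliance: defaulting to the local path means a misconfigured pipeline fails private rather than leaking code to an external service.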
Zero-friction integration
We built PatchPatrol with the developer experience in mind. It integrates into existing development workflows without disrupting your process.
Pre-commit hooks integrate into your pre-commit configuration, reviewing changes before they enter your repository. The configuration below shows a dual-hook setup where the first hook performs quality analysis using local CI models with strict enforcement (--hard flag), while the second hook runs security analysis using cloud models with a lower threshold for security findings:
repos:
  - repo: https://github.com/4383/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args: [--model=ci, --hard]
      - id: patchpatrol-review-message
        args: [--mode=security, --model=cloud, --threshold=0.2]

CI/CD integration works with any continuous integration system, including Red Hat OpenShift Pipelines and Jenkins. This example demonstrates a GitHub Actions step that performs comprehensive security analysis. The review-complete command analyzes both code changes and commit messages. The --hard flag causes the pipeline to fail if critical security issues are detected:
- name: AI Code & Security Review
  run: |
    pip install patchpatrol[all]
    patchpatrol review-complete --mode security --model quality --hard

One-command setup makes configuration easy. Run pip install to download PatchPatrol and its dependencies. The review command downloads and caches the necessary AI models on the first run, making it ready for immediate use:
pip install patchpatrol[all]
patchpatrol review-changes --model ci  # Models download automatically

How PatchPatrol supports your team
PatchPatrol helps development teams maintain higher quality and code security standards without slowing down their release cycles.
Democratizing expert-level review
Not every team has access to senior engineers or security experts for thorough code review. PatchPatrol democratizes this expertise by providing consistent, high-quality feedback for teams of any size. Junior developers receive mentorship-quality guidance, while senior developers gain an additional safety net against oversight.
Scaling quality without scaling friction
Traditional code review processes don't scale as teams grow. Adding more reviewers increases coordination overhead and potential inconsistency. PatchPatrol provides consistent, instant feedback that scales with team size without creating human bottlenecks.
Security as a first-class citizen
Security can't be an afterthought. By integrating OWASP-based security analysis directly into the commit workflow, PatchPatrol makes security review as natural as syntax checking. Teams catch vulnerabilities before they reach production, reducing both risk and remediation costs.
Choice and control
PatchPatrol respects that different teams have different needs:
- Privacy-conscious teams can use local models with zero data transmission.
- Performance-focused teams can use cloud APIs for instant analysis.
- Hybrid approaches allow different models for different sensitivity levels.
- Gradual adoption through soft mode allows teams to ease into AI-assisted review.
How PatchPatrol works
PatchPatrol matches specialized AI models with your project needs to provide clear, detailed feedback directly in your workflow.
Specialized AI prompts
PatchPatrol uses prompts designed for technical accuracy and enterprise standards rather than generic AI responses. These prompts are based on the OWASP Top 10 and industry security frameworks to help identify vulnerabilities. The system also incorporates software engineering best practices to analyze code structure, file types, and change patterns. Every review provides feedback with severity scores to help your team prioritize fixes.
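As a rough illustration of what mode-specific prompting looks like, the sketch below assembles a review prompt from a diff plus per-mode guidance and a severity-scoring instruction. The template and guidance strings are assumptions for illustration; PatchPatrol's real prompts are not reproduced here.

```python
# Hypothetical per-mode guidance, mirroring the two modes described above.
GUIDANCE = {
    "quality": "Check structure, naming, test coverage, and documentation.",
    "security": ("Check for OWASP Top 10 issues, hardcoded secrets, and "
                 "injection flaws; map each finding to a CWE category."),
}

def build_prompt(mode: str, diff: str) -> str:
    # Combine the role, mode-specific guidance, scoring rubric, and the
    # diff under review into a single prompt string.
    return (
        f"You are an enterprise code reviewer operating in {mode} mode.\n"
        f"{GUIDANCE[mode]}\n"
        "Score each finding from 0.0 to 1.0 by severity.\n"
        f"Diff under review:\n{diff}"
    )

print(build_prompt("security", "+password = 'hunter2'"))
```

The severity-scoring instruction is what makes threshold flags like --threshold meaningful downstream: findings below the cutoff can be filtered out mechanically.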
Intelligent model selection
Different models excel at different tasks, so PatchPatrol includes a registry of specialized options. You can use IBM Granite 3B/8B for fast, accurate analysis or Meta CodeLlama 7B for specialized code understanding. For teams using cloud capabilities, Gemini 2.0 Flash provides multimodal analysis. The tool also supports custom, organization-specific models.
Rich, actionable output
PatchPatrol doesn't just say "this is bad." It gives line-by-line feedback with file and line references directly in your terminal. You can also receive structured JSON responses to integrate the tool with other automated systems. To help your team resolve issues, the output includes concrete steps for fixes and maps findings to compliance frameworks like SOC2, PCI-DSS, and GDPR.
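A structured JSON report like this is straightforward to post-process in automation. The sketch below filters findings by severity, mirroring the behavior of the --threshold flag. The JSON schema shown is an assumption for illustration; consult the PatchPatrol documentation for the actual output format.

```python
import json

# Hypothetical report shape: a list of findings with file/line references,
# a 0.0-1.0 severity score, and a CWE mapping.
raw = """
{
  "findings": [
    {"file": "app.py", "line": 42, "score": 0.9, "cwe": "CWE-798",
     "message": "Hardcoded credential"},
    {"file": "views.py", "line": 7, "score": 0.3, "cwe": "CWE-79",
     "message": "Possible reflected XSS"}
  ]
}
"""

def significant(report: dict, threshold: float):
    # Keep only findings at or above the severity cutoff.
    return [f for f in report["findings"] if f["score"] >= threshold]

report = json.loads(raw)
for f in significant(report, threshold=0.8):
    print(f"{f['file']}:{f['line']} [{f['cwe']}] {f['message']}")
```

With a high threshold only the hardcoded-credential finding survives, which is the behavior a gating CI job typically wants: block on the severe issue, log the rest.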
Getting started: Your first PatchPatrol review
Ready to experience the future of code review? Here's how to get started in under 5 minutes.
Quick installation
Choose your installation based on your deployment requirements. For teams prioritizing data privacy, install the llama variant to run models locally. For teams needing maximum performance with cloud access, use the gemini variant with your API key. The complete installation includes all backends and model types:
# For local models
pip install patchpatrol[llama]

# For cloud models
pip install patchpatrol[gemini]
export GEMINI_API_KEY="your-api-key"

# For everything
pip install patchpatrol[all]

First review
These commands demonstrate the three core review workflows. The first command analyzes your currently staged Git changes using local CI models. The second performs security-focused analysis on your most recent commit using cloud models. The third command provides comprehensive analysis of both code changes and commit messages, with a higher threshold that only reports the most significant findings:
# Review your current staged changes
patchpatrol review-changes --model ci
# Security analysis of your latest commit
patchpatrol review-commit --mode security --model cloud HEAD
# Comprehensive review (changes + message)
patchpatrol review-complete --model quality --threshold 0.8
Integration
This shell command automatically adds PatchPatrol to your existing pre-commit configuration. The --soft flag provides warnings without blocking commits, allowing teams to gradually adopt AI-assisted review without disrupting existing workflows:
# Add to your pre-commit config
cat >> .pre-commit-config.yaml << EOF
repos:
  - repo: https://github.com/4383/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args: [--model=ci, --soft]
EOF

The future of code quality is here
PatchPatrol provides automated quality assurance that enhances rather than replaces human expertise. By combining the consistency of automation with AI analysis, PatchPatrol helps teams:
- Ship faster without compromising quality.
- Catch issues earlier in the development lifecycle.
- Learn continuously from AI-powered feedback.
- Scale confidently with consistent review standards.
- Address vulnerabilities proactively through integrated security analysis.
Whether you're a solo developer looking to improve your code quality, a startup trying to establish best practices, or an enterprise team seeking to scale review processes, PatchPatrol adapts to your needs while maintaining higher standards of code quality and security.
As development teams face increasing pressure to deliver security-forward, high-quality software quickly, automated code review becomes essential. PatchPatrol bridges the gap between manual review processes and the demands of modern software development.
Ready to improve security in your enterprise development pipeline?
Integrate PatchPatrol with your existing Red Hat infrastructure to catch vulnerabilities before they reach production. Get started today with these resources.
- Explore the documentation: PatchPatrol GitHub repository
- Quick start: Run pip install patchpatrol[all] and integrate with your CI/CD pipelines.
- Join the discussion: GitHub Discussions
- Report issues: GitHub Issues
- Try Red Hat OpenShift AI: Start your free trial to explore AI-powered development workflows that complement your commit-level scanning.
- Scale your reviews with OpenShift: Containerize PatchPatrol to provide consistent code analysis across all your development teams.