
Spending transaction monitor: Agentic AI for intelligent financial alerts

March 30, 2026
Saurabh Agarwal, Theia Surette, Sid Kattoju, Yash Oza, RJ Johnson
Related topics:
Artificial intelligence, Automation and management, Data science
Related products:
Red Hat AI, Red Hat OpenShift AI

    Traditional transaction monitoring systems are capable, but they can be difficult to manage. Users must navigate rigid rule builders, write complex conditions, or rely on static thresholds that don’t adapt to real spending behavior. As personal finance data grows richer and more dynamic, these approaches start to feel outdated.

    The spending transaction monitor AI quickstart demonstrates a different approach: agentic AI. Instead of forcing users to configure rules manually, the system lets them describe alert conditions in natural language. It then uses autonomous AI agents to interpret, validate, and execute those rules against live transaction data. This AI quickstart showcases how agentic workflows can provide intelligent financial monitoring on infrastructure suited for enterprise environments.

    What are AI quickstarts?

    AI quickstarts are a curated catalog of ready-to-run, industry-focused AI solutions designed for rapid experimentation and extension. Instead of building complex architectures from scratch, these AI quickstarts provide a path to deployment on open, enterprise-grade platforms.

    The primary goal of the catalog is to provide an accessible environment where users can explore hands-on examples of how to use AI in real-world applications. By lowering the barrier to entry, these tools help organizations move straight into testing and refining their AI strategy.

    The spending transaction monitor is a prime example of this approach. It demonstrates how agentic AI can support sophisticated personal finance alerting systems.

    Why agentic AI for transaction monitoring?

    Conventional alerting systems have long relied on static logic characterized by predefined thresholds, hard-coded categories, and brittle rules that require constant manual updates. Agentic AI changes this model by shifting the focus from simple execution to active reasoning. Instead of following a rigid script, the system interprets the underlying goal of a task to deliver better results.

    A significant shift is the move toward natural language intent. This lets users describe their alerts in plain English—for instance, asking the system to notify them if dining expenses increase compared to the previous month—instead of navigating complex configuration menus. Using autonomous reasoning, AI agents translate that human intent into executable logic, analyze the context of a transaction, and validate the results for accuracy.

    This model also introduces adaptive intelligence into the financial workflow. Rather than remaining static, the system recommends new rules by learning from historical spending patterns. This approach makes it easier for users to start while enabling a much more expressive and intelligent level of alerting behavior.

    Key features

    The spending transaction monitor combines agentic AI with a modern cloud-native stack to deliver the following capabilities.

    Agentic rule creation

    AI agents parse natural language rules, generate Structured Query Language (SQL) queries, and validate them against real data. This process helps ensure correctness without manual intervention.

    Intelligent recommendations

    The system analyzes historical spending behavior to suggest alert rules that might be useful to users.

    LangGraph-orchestrated workflows

    Multi-step agent workflows are orchestrated using LangGraph. This setup supports structured reasoning, branching logic, and validation.

    Multi-channel notifications

    Alerts are delivered through configurable channels such as email or SMS, ensuring timely awareness.

    Kubeflow pipelines

    This AI quickstart includes a production-ready pipeline to train, save, register, and serve the alert-recommender model. This model uses live data derived from actual spending transactions found in the database.

    Architecture overview

    The solution is deployed on Red Hat OpenShift and integrates multiple components. Figure 1 shows a high-level overview.

    Users connect via Nginx and React to a FastAPI back end integrating Keycloak, AI services, PostgreSQL, and notification services for email and SMS.
    Figure 1: System architecture for the spending transaction monitor, highlighting how the AI agent acts as an intermediary between natural language user intent and backend transaction databases.

    Agentic workflows in action

    Agentic workflows allow the system to handle complex tasks by breaking them into smaller, manageable steps.

    The mechanics of agentic rule creation

    A LangGraph-powered agent pipeline manages the transition from a user’s simple request to a functional monitoring rule by coordinating several components in sequence, as illustrated in Figure 2. This workflow begins with the classification agent, which parses the user's natural language to identify the specific intent—such as spending thresholds, categories, merchants, or locations. Once categorized, the SQL generation agent translates that description into an executable database query.

    To ensure the system remains reliable, the pipeline includes rigorous quality controls. A validation agent immediately tests the newly generated query against sample transaction data to confirm it produces the expected results. Meanwhile, a similarity agent uses vector embeddings to cross-reference the new rule against the existing database. By identifying duplicate or overlapping logic before a rule is activated, this process keeps the monitoring system accurate and free of redundant alerts.

    User request flows to LangGraph Agent via Web UI to classify, validate, and save a rule using LLM Service and PostgreSQL.
    Figure 2: A sequence diagram illustrating the lifecycle of an agentic rule creation, from a user’s natural language request to the LangGraph-orchestrated validation and final storage in the PostgreSQL database.
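    The agent sequence in Figure 2 can be sketched as a state-passing pipeline. The snippet below is illustrative only: plain Python functions with keyword-matching stubs stand in for the quickstart's LLM-backed LangGraph nodes, and the function names and state keys are assumptions invented for the example, not the project's actual API.

```python
# Illustrative sketch of the rule-creation pipeline. Each "agent" is
# a plain function that reads and updates a shared state dict,
# mimicking LangGraph's state-passing model. In the quickstart these
# steps are LLM-backed LangGraph nodes; the keyword matching and
# state keys below are invented stand-ins.

def classify(state):
    # Classification agent: identify the intent category.
    text = state["request"].lower()
    state["intent"] = "spending_threshold" if "spend" in text else "other"
    return state

def generate_sql(state):
    # SQL generation agent: translate the intent into a query (stubbed).
    if state["intent"] == "spending_threshold":
        state["sql"] = "SELECT * FROM transactions WHERE amount > 500"
    return state

def validate(state):
    # Validation agent: confirm the generated query looks executable.
    state["valid"] = state.get("sql", "").startswith("SELECT")
    return state

def run_pipeline(request):
    state = {"request": request}
    for agent in (classify, generate_sql, validate):
        state = agent(state)
    return state

result = run_pipeline("Alert when I spend more than $500 in one transaction")
```

    The same pattern extends naturally to the similarity agent: it would be one more function in the chain, reading the state and annotating it with any overlapping existing rules.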

    Intelligent alert evaluation

    The spending transaction monitor also uses a second agentic workflow to evaluate incoming transactions against active alerts, as shown in Figure 3. This real-time process is managed by a LangGraph state machine that uses conditional routing to balance performance with correctness. The cycle begins with a route decision phase, where the system determines whether to reuse a cached SQL query or regenerate the logic for new variables.

    Next, the SQL execution layer runs the query against the most current transaction data to identify any matches. If the alert detection phase confirms that specific rule conditions have been met, the workflow moves into message generation. Rather than delivering a cryptic system notification, the agent produces a clear, human-readable alert message that explains why the notification was triggered. This automated yet reasoned approach ensures that users receive timely, understandable insights without the latency typically associated with manual data processing.

    New transactions trigger the LangGraph agent to store data and fetch rules from PostgreSQL, then execute SQL and send LLM-generated notifications.
    Figure 3: Functional workflow for the dynamic evaluation of active rules. The diagram highlights the automated loop, where the AI agent assesses each transaction against established monitoring logic to determine if an alert must be triggered.
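    The routing described above can be sketched as a small state machine. This is a hedged illustration in plain Python standing in for the quickstart's LangGraph graph; the cache, function names, and threshold logic are all invented for the example, with the SQL execution and LLM calls stubbed out.

```python
# Illustrative state machine for alert evaluation: route decision,
# SQL execution, alert detection, and message generation. Plain
# Python with stubs stands in for the quickstart's LangGraph graph
# and LLM calls; all names here are invented for the example.

SQL_CACHE = {}

def route(rule_id, rule_text):
    # Route decision: reuse cached SQL or regenerate it.
    if rule_id not in SQL_CACHE:
        # In the real system, an LLM regenerates the query here.
        SQL_CACHE[rule_id] = f"-- generated for: {rule_text}"
    return SQL_CACHE[rule_id]

def evaluate(transaction, rule_id, rule_text, threshold=500):
    sql = route(rule_id, rule_text)               # route decision
    matched = transaction["amount"] > threshold   # SQL execution (stubbed)
    if not matched:                               # alert detection
        return None
    # Message generation: a human-readable explanation of the trigger.
    return (f"Alert: your {transaction['merchant']} purchase of "
            f"${transaction['amount']} exceeded your ${threshold} limit.")

msg = evaluate({"merchant": "Gas Stop", "amount": 620},
               rule_id=1, rule_text="spend over $500")
```

    Because the cache is checked before any generation step, repeated transactions against the same rule skip straight to execution, which is the latency win the route decision phase is designed to capture.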

    Example alert rules

    The following table provides examples of how you can describe monitoring logic using natural language.

    Rule type    Example
    Spending     Alert when I spend more than $500 in one transaction.
    Category     Notify me if dining exceeds my 30-day average by 40%.
    Location     Alert for transactions outside my home state.
    Merchant     Alert for any gas station purchases over $100.

    These examples highlight how expressive and flexible natural-language alerting can be.
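    As a concrete illustration of what an executed rule can look like, the snippet below runs a hand-written SQL equivalent of the first rule against a toy in-memory SQLite table. The schema and query are assumptions made for this example; they are not the quickstart's actual PostgreSQL schema or its generated SQL.

```python
import sqlite3

# Toy in-memory table; the quickstart itself uses PostgreSQL, and
# this schema and data are invented for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, merchant TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, "Grocery Mart", 82.15), (2, "Laptop World", 1299.00), (3, "Cafe", 6.50)],
)

# A hand-written SQL equivalent of the first rule above:
# "Alert when I spend more than $500 in one transaction."
hits = conn.execute(
    "SELECT merchant, amount FROM transactions WHERE amount > 500"
).fetchall()
# hits → [("Laptop World", 1299.0)]
```

    The agentic pipeline's job is to produce queries like this one automatically from the natural-language descriptions in the table, and then validate them before activation.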

    Getting started

    To begin exploring the AI quickstart, follow the deployment instructions below for either a local environment or OpenShift.

    Local development

    # Clone the repository
    git clone https://github.com/rh-ai-quickstart/spending-transaction-monitor.git
    cd spending-transaction-monitor
    # Start all services
    make run-local
    # Set up test data
    pnpm setup:data

    Access the application

    You can access the components of the spending transaction monitor through your local browser. The primary interface is available via the frontend at http://localhost:3000. You can also explore the interactive API documentation at http://localhost:8000/docs.

    To verify that your agentic alerts are firing correctly, you can monitor the test email UI at http://localhost:3002. This tool captures and displays all outgoing notifications in a local environment.

    Test alert rules

    You can verify your setup by running the provided test scripts to see sample rules in action.

    # List sample alert rules
    make list-alert-samples
    # Interactive testing
    make test-alert-rules

    OpenShift deployment

    Use the following command to deploy the solution to your OpenShift cluster.

    # Quick deploy
    make build-deploy

    Deploying the ML pipeline

    Run the following commands to deploy the full machine learning pipeline and model serving.

    make build-all
    make deploy-with-ml-dspa

    For complete instructions, see the README.

    Technology stack

    The spending transaction monitor uses the following frameworks and tools to enable agentic AI.

    Layer              Technology
    Agentic AI         LangGraph + LangChain
    LLM integration    Llama Stack / OpenAI
    Vector search      pgvector
    Frontend           React + TypeScript + TanStack
    Backend            FastAPI + SQLAlchemy
    Database           PostgreSQL
    Authentication     Keycloak (OAuth2/OIDC)
    ML pipelines       Kubeflow / Data Science Pipelines

    Final thoughts

    The spending transaction monitor illustrates how agentic AI unlocks a new interaction model for financial applications—one where users express intent naturally, and autonomous agents handle the complexity. By combining LangGraph-driven workflows with a cloud-native stack on OpenShift, this AI quickstart demonstrates production-ready patterns for building intelligent, adaptive AI systems.

    If you’re exploring how agentic AI can simplify complex user workflows while maintaining enterprise rigor, this AI quickstart is a great place to start.

    Learn more

    • Project README: Setup and deployment
    • Developer guide: Development documentation
    • API documentation: API reference
    • LangGraph documentation: Agent orchestration framework

    Get started

    Dive into the AI quickstart catalog or explore our repository for more information. You can also try OpenShift AI in the Red Hat product trial center. This gives you 60-day no-cost access to a fully managed environment where you can test these production-grade tools.
