
Using the custom metrics policy in Red Hat 3scale API Management

May 25, 2021
Chamal Abeywardhana
Related topics:
APIs
Related products:
Red Hat 3scale API Management

    Developers use Red Hat 3scale API Management to manage APIs through a gateway called APIcast. APIcast includes many out-of-the-box policies that can be configured to extend its default behavior. A new addition to APIcast, the custom metrics policy, provides another way to track metrics that are valuable to your business via the 3scale analytics engine.

    The custom metrics policy was introduced in 3scale API Management 2.9. I was intrigued to learn how to configure this policy and wanted to understand its potential use cases. This article introduces one way to take advantage of the custom metrics policy, using a metric that tracks specific HTTP response status codes as an example scenario. By default, 3scale analytics capture response statuses only in coarse buckets such as 2xx and 4xx; if you want to capture an individual status code such as 203 or 403, you can use the custom metrics policy.

    Note: 3scale analytics represent only cumulative metrics. Therefore, complex metric tracking for averages, etc., should be done with other tools.

    Setting up the custom metrics policy

    This section covers the prerequisites for the scenario shown in this article, followed by the setup steps. The section after that walks through the end-to-end flow.

    Prerequisites

    To complete this tutorial you will need the following:

    • 3scale API Management 2.9 on-premises deployment.
    • A private backend API that returns HTTP status codes. For the purpose of this article, use the 3scale-private-api.
    • A binary download of the ngrok secure tunneling service.

    Setup

    Install and deploy the 3scale-private-api backend. This simple application returns a response with the status code that you specify in the request path. For instance, if the request URL ends in /status/203, the backend returns a 203 response.
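    If you just want to follow along without cloning the repository, a stand-in for such a backend takes only a few lines. The sketch below is not the 3scale-private-api code itself, only a minimal Python equivalent of its behavior: it parses /status/<code> from the request path and echoes that code back.

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatusEchoHandler(BaseHTTPRequestHandler):
        """Minimal stand-in for the 3scale-private-api backend: returns
        whatever status code appears in a /status/<code> request path."""

        def do_GET(self):
            path = self.path.split("?")[0].strip("/")  # drop any query string
            parts = path.split("/")
            try:
                code = int(parts[1]) if parts[0] == "status" else 200
            except (IndexError, ValueError):
                code = 400
            body = f"status code {code} set to the header".encode()
            self.send_response(code)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the console quiet
            pass

    if __name__ == "__main__":
        # Listen on port 3000, like the example application.
        HTTPServer(("", 3000), StatusEchoHandler).serve_forever()
    ```

    With this running, `curl http://localhost:3000/status/203` returns a 203 response, matching the behavior described above.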

    The example application listens for requests on port 3000. Expose that port to the internet via the ngrok application by executing the following command:

    Note: The public URL generated by ngrok is valid for only eight hours on the free tier.

    $ ./ngrok http 3000

    When you execute the previous command, you will get an output similar to the following:

    Forwarding                    http://8fcf5e08d900.ngrok.io -> localhost:3000
    Forwarding                    https://8fcf5e08d900.ngrok.io -> localhost:3000

    Copy the secure HTTPS URL from the output and specify it as your backend, pasting it into the Private Base URL field, as shown in Figure 1.

    The Private Base URL of the backend should be the HTTPS URL generated by ngrok.
    Figure 1: Choosing the private base URL for the backend.

    If the backend is not configured for your API product, configure it for the product as shown in the 3scale documentation.

    Configure a new metric on the product to be updated based on the response code. Create one metric, with a System name of the form status-HTTP-STATUS-CODE, for each HTTP status code you would like to capture, as shown in Figure 2.

    To set the status code you would like to have returned, in the "New metric" screen, set the System name to status-HTTP-STATUS-CODE.
    Figure 2: Setting the status that you would like the backend to return.

    Add the custom metrics policy to your service configuration, as shown in Figure 3.

    Under "Select a policy," choose "Custom Metrics."
    Figure 3: Selecting the custom metrics policy.

    Configure the policy as shown in Figure 4.

    The "Edit policy" screen offers several fields to configure the policy.
    Figure 4: Configuring the policy.
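    For reference, a policy-chain entry for the custom metrics policy has roughly the following shape. This is an illustrative sketch, not a copy of the configuration in Figure 4: field names follow the APIcast custom metrics rule format as I understand it (a liquid-templated metric name, an increment, and a condition), and you should verify them against the policy schema shown in your 3scale admin portal. The rule below would increment the status-203 metric whenever the upstream responds with a 203:

    ```json
    {
      "name": "custom_metrics",
      "version": "builtin",
      "configuration": {
        "rules": [
          {
            "metric": "status-{{ status }}",
            "increment": "1",
            "condition": {
              "combine_op": "and",
              "operations": [
                {
                  "left": "{{ status }}",
                  "left_type": "liquid",
                  "op": "==",
                  "right": "203"
                }
              ]
            }
          }
        ]
      }
    }
    ```

    Because the metric name itself is a liquid template, one rule per status code (with the matching condition) keeps each counter separate.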

    Update the policy chain and promote the service configuration to the API product's staging and production environments.

    End-to-end flow

    Make sure that the private API backend application is running and exposed via ngrok, and that the private base URL of the service backend is correctly configured to reflect the URL generated by ngrok.

    Make a request to APIcast with the correct credentials to invoke the configured service. Note that the following request URL matches the status-203 metric. Include the -k option if you are using self-signed certificates:

    $ curl "https://api-3scale-apicast-staging.apps-crc.testing:443/status/203?user_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -k

    You should receive a response similar to the following:

    status code 203 set to the header

    Make another request as follows to update the status-303 metric:

    $ curl "https://api-3scale-apicast-staging.apps-crc.testing:443/status/303?user_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -k

    Go to the product's Analytics > Traffic page to check whether the metrics for the returned status codes are updated accordingly.

    If you have added more metrics, they are also updated based on the status codes you pass in.
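    The two curl calls above generalize to any number of configured metrics. The sketch below builds the matching request URL for each status code; the base URL and user key are the placeholder values from this article's setup, so substitute your own before firing real requests (with curl, or urllib plus an SSL context for self-signed certificates).

    ```python
    from urllib.parse import urlencode

    # Placeholder values from this article's setup; substitute your own.
    APICAST_BASE = "https://api-3scale-apicast-staging.apps-crc.testing:443"
    USER_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

    def status_url(code, base=APICAST_BASE, user_key=USER_KEY):
        """Build the request URL that matches the status-<code> custom metric."""
        return f"{base}/status/{code}?{urlencode({'user_key': user_key})}"

    # One request per configured metric is enough to see each counter move.
    for code in (203, 303):
        print(status_url(code))
    ```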

    Additional notes

    To find out all the liquid variables available in each step of APIcast, you can use the Liquid Context Debug policy. This is useful if you want to use the available variables to write complex logical expressions. (We won't cover that topic in this article.)

    Note: The Liquid Context Debug policy does not work when you have more than one backend in your product configuration.

    Here is how to inspect the host header using a liquid expression:

    1. Add the Liquid Context Debug policy to the policy chain. The policy needs to be placed before the APIcast policy or the upstream policy in the policy chain.
    2. Make a request to the APIcast as you normally do.
    3. The response will be in JSON, with additional variables available in each context. Here is a snippet from an example request:
      "http_method": "GET",
      "already_seen": "ALREADY SEEN - NOT CALCULATING AGAIN TO AVOID CIRCULAR REFS",
      "remote_port": "52672",
      "remote_addr": "10.116.0.1",
      "current": {
        "host": "api-3scale-apicast-staging.apps-crc.testing",
        "original_request": {
          "current": {
            "host": "api-3scale-apicast-staging.apps-crc.testing",
            "query": "user_key=267c255c50b842ca0ebf9cf4ee2bfd1e",
            "path": "\/",
            "headers": {
              "host": "api-3scale-apicast-staging.apps-crc.testing",
              "x-forwarded-host": "api-3scale-apicast-staging.apps-crc.testing",
              "x-forwarded-for": "192.168.130.1",
              "user-agent": "curl\/7.66.0",
              "forwarded": "for=192.168.130.1;host=api-3scale-apicast-staging.apps-crc.testing;proto=https",
              "x-forwarded-port": "443",
              "x-forwarded-proto": "https",
              "accept": "*\/*"
            },
            "uri": "\/",
            "server_addr": "10.116.0.83"
          },
          "next": {
            "already_seen": "ALREADY SEEN - NOT CALCULATING AGAIN TO AVOID CIRCULAR REFS"
          }
        },
    4. Note the variables such as host, query, and headers available under the original_request variable. You can access those values using liquid tags in policies that support liquid values. For example, you can access the original hostname by using the liquid tag {{original_request['host']}}.
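    If you script against the debug output, walking the JSON mirrors those liquid lookups. A minimal sketch, using a trimmed version of the snippet above (the user_key value is the sample one from the snippet):

    ```python
    import json

    # Trimmed version of the Liquid Context Debug response shown above.
    debug_response = json.loads("""
    {
      "http_method": "GET",
      "current": {
        "host": "api-3scale-apicast-staging.apps-crc.testing",
        "original_request": {
          "current": {
            "host": "api-3scale-apicast-staging.apps-crc.testing",
            "query": "user_key=267c255c50b842ca0ebf9cf4ee2bfd1e"
          }
        }
      }
    }
    """)

    # In the debug dump, original_request wraps a "current" object that
    # holds host, query, headers, and so on.
    original = debug_response["current"]["original_request"]["current"]
    print(original["host"])   # the original hostname seen by the gateway
    print(original["query"])  # the original query string
    ```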

    Conclusion

    In this article, you have learned how to use the custom metrics policy to capture analytics for specific status codes, which 3scale API Management does not capture individually by default. You also learned how to use the Liquid Context Debug policy to find out which variables are available for use in APIcast policies.

    I hope this guide provided you with sufficient tools to implement a solution similar to the one described. Feel free to comment with any suggestions to improve this content.

    What’s next?

    Use this guide as a reference to track alternative metrics in the 3scale analytics, and don't hesitate to contact the support team or open an issue on the APIcast GitHub repository with questions.

    Last updated: November 6, 2023
