Custom policies in Red Hat 3scale API Management, Part 2: Securing the API with rate limit policies

May 4, 2021
Satya Jayanti
Related topics:
DevOps, Event-driven, Kubernetes
Related products:
Developer Toolset

    In Part 1 of this series, we discussed the policy framework in Red Hat 3scale API Management—adding policies to the APIcast gateway to customize API request and response behavior. In this article, we will look at adding rate limiting, backend URL protection, and edge limiting policies to the APIcast gateway. We'll also review which policies are appropriate to use for different use cases.

    API gateway as a reverse proxy

    One of the API gateway's chief responsibilities is securing the API endpoints. The API gateway acts as a reverse proxy, and all API requests flow through the gateway to the backend APIs. An API exposed through the API gateway is secured in the following ways:

    • Rate limiting controls the number of requests that reach the API by enforcing limits per URL path, method, or user, as well as per account plan. This is a standard feature of 3scale API Management, available through API packaging and plans. You can configure additional policies to limit allowed IP ranges, respond with rate limit headers, and shut down all traffic to the backend during maintenance periods.
    • Authentication provides a way to uniquely identify the requester and only allow access to authenticated accounts. This authentication can happen based on the requester's identity. 3scale API Management supports authentication via API (user) keys, application identifier and key pairs, or OpenID Connect (OIDC) based on OAuth 2.0.
    • Authorization lets you manage user and account access based on roles. This goes beyond authentication by looking into the user profile to determine if the user or group should have access to the resource requested. This is configured in 3scale API Management by assigning users and accounts to specific plans. More fine-grained access control can be provided for OIDC-secured services by inspecting the JWT (JSON Web Token) shared by the identity provider and applying role check policies.

    In this article, we will primarily focus on the different access control and rate-limiting options available through policies in APIcast.

    Applying the configuration samples

    To apply any of the configuration samples listed in this article, send a PUT request to the Admin API endpoint:

    https://<<3scale admin URL>>/admin/api/services/<<service id>>/proxy/policies.json
    

    Enter the following information:

    • Replace <<3scale admin URL>> with your 3scale Admin Portal URL.
    • Replace <<service id>> with the API product's service ID, which can be found on the product overview page in 3scale.
    • Pass the configuration JSON in the body of the request.
    • Pass the admin access token in the body of the request as well.

    Here is an example of the request:

    curl -v  -X PUT   -d 'policies_config=[{"name":"apicast","version":"builtin","configuration":{},"enabled":true}]&access_token=redacted'  https://red-hat-gpte-satya-admin.3scale.net/admin/api/services/18/proxy/policies.json
    

    If the request is successful, the Admin API sends an HTTP 200 response.
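If you prefer to script the update, the same PUT request can be sketched in Python using only the standard library. The URL, service ID, and access token below are placeholders; substitute your own values:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values; use your own Admin Portal URL, service ID, and access token.
ADMIN_URL = "https://your-tenant-admin.3scale.net"
SERVICE_ID = "18"
ACCESS_TOKEN = "redacted"

# The same minimal policy chain as the curl example above.
policies = [{"name": "apicast", "version": "builtin",
             "configuration": {}, "enabled": True}]

# The Admin API expects a form-encoded body with the policy chain as a JSON string.
body = urllib.parse.urlencode({
    "policies_config": json.dumps(policies),
    "access_token": ACCESS_TOKEN,
}).encode()

request = urllib.request.Request(
    f"{ADMIN_URL}/admin/api/services/{SERVICE_ID}/proxy/policies.json",
    data=body,
    method="PUT",
)
# Uncomment to send; a successful update returns HTTP 200.
# urllib.request.urlopen(request)
```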

    Anonymous access policy

    In 3scale API Management, you must use one of the three authentication methods to access an API. An anonymous access policy lets you bypass this requirement so the API request can be made without providing the authentication in the request.

    The policy can be used only if the service is set up to use either the API key method or the App_ID and App_Key Pair authentication mechanism. The policy will not work for APIs requiring OIDC authentication.

    To use this policy, the provider needs to create an application plan and an application with valid credentials. The policy then supplies these credentials to the API endpoint whenever a request arrives without any.
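Conceptually, the gateway injects the configured credentials only when the request carries none of its own. A toy Python sketch of that logic (illustrative only, not the actual APIcast implementation; the key value matches the sample configuration later in this section):

```python
# The user_key configured in the policy.
DEFAULT_USER_KEY = "16e66c9c2eee1adb3786221ccffa1e23"

def apply_default_credentials(query_params: dict) -> dict:
    """Return the request's query parameters, adding the default user_key
    only when the request supplied no credentials of its own."""
    if "user_key" in query_params:
        return query_params  # caller provided credentials; leave them alone
    return {**query_params, "user_key": DEFAULT_USER_KEY}
```

Requests that already carry a user_key pass through unchanged; anonymous requests are authenticated as the pre-created application.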

    When to use anonymous access

    You might consider using this policy in the following situations:

    • When API consumers cannot pass authentication credentials because they are legacy systems or otherwise unable to support them.
    • For testing purposes, so you can reuse existing credentials within the gateway.
    • In development or staging environments, to make it easier for developers of customer-facing applications to get API access.

    Please be aware of the following caveats:

    • The request is anonymous, so combine this policy with an IP check policy (discussed later in this article) to ensure the API endpoint is protected.
    • Be sure to provide rate limits and deny access to create, update, and delete operations to avoid misuse.
    • Avoid using this policy in production environments. If necessary, use it as a tactical solution until consumers can be migrated to use authenticated endpoints.

    How to configure anonymous access

    The anonymous access policy needs to be configured before the APIcast policy in the policy chain. The following is an example of the full configuration:

    [
      {
        "name": "default_credentials",
        "version": "builtin",
        "configuration": {
          "auth_type": "user_key",
          "user_key": "16e66c9c2eee1adb3786221ccffa1e23"
        }
      },
      {
        "name": "apicast",
        "version": "builtin",
        "configuration": {}
      }
    ]

    Maintenance mode policy

    Because the APIcast gateway acts as a reverse proxy, all authenticated requests are routed to the backend API. In certain instances, it is possible to prevent the request from reaching the backend by using the maintenance mode policy. This is helpful when the API won't accept requests due to scheduled maintenance, or if the backend API is down.
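While the policy is active, every request short-circuits at the gateway with the configured status and message, and nothing reaches the backend. A toy Python sketch of that behavior (illustrative only, using the status and message from the sample configuration below):

```python
# Configured response returned for every request during maintenance.
MAINTENANCE_STATUS = 503
MAINTENANCE_MESSAGE = "Service Unavailable - Maintenance. Please try again later."

def handle_request(request) -> tuple:
    """While maintenance mode is active, every request receives the
    configured response and is never forwarded to the backend API."""
    return (MAINTENANCE_STATUS, MAINTENANCE_MESSAGE)
```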

    When to use maintenance mode

    Consider using this policy when:

    • The backend API is down for maintenance.
    • You want to provide a more meaningful response to consumers when the API is unavailable or returning an Internal Server Error.
    • The API is being updated, or API policies are being changed.

    How to configure maintenance mode

    The maintenance policy needs to be configured before the APIcast policy in the policy chain. The following is an example of the full configuration:

    [
      {
        "name": "maintenance_mode",
        "version": "builtin",
        "configuration": {
          "message_content_type": "text/plain; charset=utf-8",
          "status": 503,
          "message": "Service Unavailable - Maintenance. Please try again later."
        }
      },
      {
        "name": "apicast",
        "version": "builtin",
        "configuration": {}
      }
    ]

    IP check policy

    API providers often need to allow API calls only from a certain set of pre-configured IPs, or to deny calls from a set of IPs to prevent misuse of the API. By default, the APIcast gateway exposes the API as an HTTP/HTTPS endpoint and does not deny or allow calls based on the requester's IP. Adding the IP check policy to the policy chain lets you configure this behavior.

    When to use IP check

    Here are some examples of when you might want to use this policy:

    • The API is intended for internal customers only, so the endpoint should accept requests only from a set of IPs or IP blocks within the network.
    • There is a known set of IPs that the provider has blocked for misusing the API, sending malicious requests, mounting DDoS attacks, and so on. The gateway can deny requests from these IPs before they reach the API.
    • The API provider wants to define an IP block that allows a set of partner IPs to access the API, while preventing requesters from outside the network from accessing it.

    Please note that the APIcast IP check operates at layer 7 of the OSI model; that is, the check happens at the application layer. For more robust protection, combine it with network-level controls, such as whitelisting or blacklisting by MAC address (layer 2) or firewall rules configured at layers 3 and 4.
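At its core, the whitelist check is CIDR matching against the configured ranges. A minimal Python sketch using the standard library's ipaddress module (the ranges here are illustrative, not the ones from the sample configuration):

```python
import ipaddress

# Illustrative allowed ranges; a real policy would use the "ips" list.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("1.2.3.4/32"),
    ipaddress.ip_network("10.0.0.0/8"),
]

def ip_allowed(client_ip: str) -> bool:
    """Whitelist semantics: allow only if the client IP falls inside
    one of the configured networks."""
    address = ipaddress.ip_address(client_ip)
    return any(address in network for network in ALLOWED_NETWORKS)
```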

    How to configure IP check

    The IP check policy needs to be configured before the APIcast policy in the policy chain.

    Apply the following configuration by sending it to the Admin API in a PUT request:

    [
      {
        "name": "ip_check",
        "version": "builtin",
        "configuration": {
          "error_msg": "IP address not allowed",
          "client_ip_sources": [
            "last_caller",
            "X-Forwarded-For",
            "X-Real-IP"
          ],
          "ips": [
            "1.2.3.4",
            "1.2.3.0/4"
          ],
          "check_type": "whitelist"
        }
      },
      {
        "name": "apicast",
        "version": "builtin",
        "configuration": {}
      }
    ]

    Edge limiting policy

    In 3scale, you can control the number of API calls to the backend API by setting limits in the application plans. The rate limit can be set according to the plan type, pricing model, or access control requirements. However, this rate limit does not take into account the backend API's throughput capacity, which can overwhelm the API as the number of applications and requests grows with the subscriber base. An edge limiting policy enforces a total number of requests in a given time period for the backend API across all applications, so that the backend API is protected. You can set rate limits on the number of requests per second or on the number of concurrent connections allowed.

    Application plans only enforce rate limits per application. Applications are shared across all the users for a particular account, so you can use the edge limiting policy to allow for user-specific rate limiting by uniquely identifying users through their JWT claim check or request parameters. This ensures no single account user can monopolize the call limit set for the application. Using liquid templates, you can set rate limits based on variables like remote IP address, headers, JWT variables, or URL path parameters.
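To see why a liquid key such as {{ jwt.sub }} yields per-user limiting, note that each distinct key value gets its own independent counter. A toy Python sketch (illustrative only, not the APIcast implementation):

```python
from collections import defaultdict

# One counter per resolved key value, e.g. the JWT "sub" claim per user.
request_counts = defaultdict(int)

def record_request(jwt_claims: dict) -> int:
    """Resolve the limiter key from the token's "sub" claim and increment
    that user's counter; each user is tracked independently."""
    key = jwt_claims["sub"]
    request_counts[key] += 1
    return request_counts[key]
```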

    The edge limiting policy uses the OpenResty lua-resty-limit-traffic library. The policy allows the following limits to be set:

    • leaky_bucket_limiters, based on the leaky bucket algorithm, which builds on the average number of requests plus a maximum burst size.
    • fixed_window_limiters, based on a fixed window of time: last n seconds.
    • connection_limiters, based on the concurrent number of connections.
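As a rough illustration of the fixed-window variant (a toy sketch, not the lua-resty-limit-traffic implementation): at most `count` requests are admitted per window of n seconds, and the counter resets whenever a new window begins.

```python
class FixedWindowLimiter:
    """Toy fixed-window limiter: admit at most `count` requests
    per `window` seconds, resetting when a new window begins."""

    def __init__(self, count: int, window: int):
        self.count = count
        self.window = window
        self.window_start = 0.0
        self.requests_seen = 0

    def allow(self, now: float) -> bool:
        """Pass the current time (e.g. time.monotonic()) as `now`."""
        if now - self.window_start >= self.window:
            self.window_start = now   # a new window begins
            self.requests_seen = 0
        if self.requests_seen < self.count:
            self.requests_seen += 1
            return True
        return False
```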

    Policies normally apply at the gateway level, but the edge limiting policy can be scoped to the service and apply to the total number of requests to the backend, irrespective of the number of gateways deployed. When multiple gateways share the edge limit, an external Redis database can be configured as shared storage.

    When to use edge limiting

    Here are some examples of when you might consider using this policy:

    • You want to set an overall limit on the backend API across all users, accounts, and applications.
    • You want to control throughput for popular APIs with hundreds of consumers. For example, if the API can handle 100 concurrent connections, the edge limiting policy can be set accordingly, and it will apply across all applications.
    • An application plan may have a rate limit of 10 requests per minute, but the number of applications using that plan depends on the number of consumers. If there are 1,000 applications, then theoretically, up to 10,000 requests could be allowed per minute. Setting the edge limit enforces an aggregate limit in line with the API's capacity.
    • An application plan allows 100 requests per minute, but a single client IP is making 100% of the requests, leaving other requesters on the same account unable to access the API. Setting a limit of 10 requests per minute per client IP ensures the plan is used fairly across the account.

    How to configure edge limiting

    The edge limiting policy needs to be configured before the APIcast policy in the policy chain. The following examples demonstrate some sample configurations.

    To set a concurrent connections rate limit globally with an external Redis storage database:

    [
      {
        "name": "rate_limit",
        "version": "builtin",
        "configuration": {
          "limits_exceeded_error": {
            "status_code": 429,
            "error_handling": "exit"
          },
          "configuration_error": {
            "status_code": 500,
            "error_handling": "exit"
          },
          "fixed_window_limiters": [],
          "connection_limiters": [
            {
              "condition": {
                "combine_op": "and"
              },
              "key": {
                "scope": "service",
                "name_type": "plain"
              },
              "conn": 100,
              "burst": 50,
              "delay": 1
            }
          ],
          "redis_url": "redis://gateway-redis:6379/1"
        }
      },
      {
        "name": "apicast",
        "version": "builtin",
        "configuration": {}
      }
    ]

    To set leaky bucket limiters for a client IP address:

    [
      {
        "name": "rate_limit",
        "version": "builtin",
        "configuration": {
          "limits_exceeded_error": {
            "status_code": 429,
            "error_handling": "exit"
          },
          "configuration_error": {
            "status_code": 500,
            "error_handling": "exit"
          },
          "fixed_window_limiters": [],
          "connection_limiters": [],
          "redis_url": "redis://gateway-redis:6379/1",
          "leaky_bucket_limiters": [
            {
              "condition": {
                "combine_op": "and"
              },
              "key": {
                "scope": "service",
                "name_type": "liquid",
                "name": "{{ remote_addr }}"
              },
              "rate": 100,
              "burst": 50
            }
          ]
        }
      },
      {
        "name": "apicast",
        "version": "builtin",
        "configuration": {}
      }
    ]

    To set a fixed-window limiter for a header match:

    [
      {
        "name": "rate_limit",
        "version": "builtin",
        "configuration": {
          "limits_exceeded_error": {
            "status_code": 429,
            "error_handling": "exit"
          },
          "configuration_error": {
            "status_code": 500,
            "error_handling": "exit"
          },
          "fixed_window_limiters": [
            {
              "window": 10,
              "condition": {
                "combine_op": "and"
              },
              "key": {
                "scope": "service",
                "name_type": "liquid",
                "name": "{{ jwt.sub }}"
              },
              "count": 100
            }
          ]
        }
      },
      {
        "name": "apicast",
        "version": "builtin",
        "configuration": {}
      }
    ]

    Conclusion

    In this article, we saw how policies can be used to fine-tune the rate limiting and access controls set up for the APIcast gateway. In the next article, we will explore advanced security policies that work in conjunction with OIDC to provide authorization controls.

    You can try out the 3scale API Management platform and create policies, as shown in this article, by signing up for free.

    Last updated: February 5, 2024
