Custom policies in Red Hat 3scale API Management, Part 2: Securing the API with rate limit policies

In Part 1 of this series, we discussed the policy framework in Red Hat 3scale API Management—adding policies to the APIcast gateway to customize API request and response behavior. In this article, we will look at adding rate limiting, backend URL protection, and edge limiting policies to the APIcast gateway. We’ll also review which policies are appropriate to use for different use cases.

API gateway as a reverse proxy

One of the API gateway’s chief responsibilities is securing the API endpoints. The API gateway acts as a reverse proxy, and all API requests flow through the gateway to the backend APIs. An API exposed through the API gateway is secured in the following ways:

  • Rate limiting controls the number of requests that reach the API by enforcing limits per URL path, per method, and per user or account plan. This is a standard feature of 3scale API Management, provided through API packaging and plans. You can configure additional policies to limit allowed IP ranges, respond with rate limit headers, and shut down all traffic to the backend during maintenance periods.
  • Authentication provides a way to uniquely identify the requester and allow access only to authenticated accounts. 3scale API Management supports authentication via API (user) keys, application identifier and key pairs, or OpenID Connect (OIDC) based on OAuth 2.0.
  • Authorization lets you manage user and account access based on roles. This goes beyond authentication by looking into the user profile to determine if the user or group should have access to the resource requested. This is configured in 3scale API Management by assigning users and accounts to specific plans. More fine-grained access control can be provided for OIDC-secured services by inspecting the JWT (JSON Web Token) shared by the identity provider and applying role check policies.

In this article, we will primarily focus on the different access control and rate-limiting options available through policies in APIcast.

Applying the configuration samples

To apply any of the configuration samples listed in this article, send a PUT request to the Admin API endpoint:

https://<<3scale admin URL>>/admin/api/services/<<service id>>/proxy/policies.json

When constructing the request:

  • Replace <<3scale admin URL>> with your 3scale Admin Portal URL.
  • Replace <<service id>> with the API product’s service ID, which you can find on the product overview page in 3scale.
  • Pass the policy configuration JSON in the body of the request.
  • Pass the admin access token in the body of the request as well.

Here is an example of the request:

curl -v -X PUT -d 'policies_config=[{"name":"apicast","version":"builtin","configuration":{},"enabled":true}]&access_token=redacted' https://red-hat-gpte-satya-admin.3scale.net/admin/api/services/18/proxy/policies.json

If the request is successful, the Admin API sends an HTTP 200 response.
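
To inspect the current policy chain before overwriting it, you can issue a GET request against the same endpoint. Here is a minimal example, with the access token passed as a query parameter and the same placeholders as above:

curl "https://<<3scale admin URL>>/admin/api/services/<<service id>>/proxy/policies.json?access_token=<<access token>>"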

Anonymous access policy

In 3scale API Management, you must use one of the three authentication methods to access an API. An anonymous access policy lets you bypass this requirement so API requests can be made without providing authentication credentials in the request.

The policy can be used only if the service is set up to use either the API key (user_key) method or the app_id/app_key pair authentication mechanism. The policy will not work for APIs requiring OIDC authentication.

To use the policy, providers need to create an application plan and an application with valid credentials. The policy then supplies these credentials to the API endpoint on behalf of any request that does not include credentials of its own.

When to use anonymous access

You might consider using this policy in the following situations:

  • When API consumers are legacy systems that cannot pass authentication credentials with their requests.
  • For testing purposes, so you can reuse existing credentials within the gateway.
  • In development or staging environments, to make it easier for developers of customer-facing applications to get API access.

Please be aware of the following caveats:

  • The request is anonymous, so combine this policy with an IP check policy (discussed later in this article) to ensure the API endpoint is protected.
  • Be sure to provide rate limits and deny access to create, update, and delete operations to avoid misuse.
  • Avoid using this policy in production environments. If necessary, use it as a tactical solution until consumers can be migrated to use authenticated endpoints.

How to configure anonymous access

The anonymous access policy needs to be configured before the APIcast policy in the policy chain. The following is an example of the full configuration:

[
  {
    "name": "default_credentials",
    "version": "builtin",
    "configuration": {
      "auth_type": "user_key",
      "user_key": "16e66c9c2eee1adb3786221ccffa1e23"
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]
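
With this policy chain applied, a request that carries no credentials should be admitted, because the gateway injects the configured user_key before the APIcast policy runs. A quick illustrative test (the staging gateway URL and path are placeholders):

curl -i "https://<<api staging URL>>/some/path"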

Maintenance mode policy

Because the APIcast gateway acts as a reverse proxy, all authenticated requests are routed to the backend API. In certain instances, it is possible to prevent the request from reaching the backend by using the maintenance mode policy. This is helpful when the API won’t accept requests due to scheduled maintenance, or if the backend API is down.

When to use maintenance mode

Consider using this policy when:

  • The backend API is down for maintenance.
  • You want to provide a more meaningful response to consumers when the API is unavailable or returning an Internal Server Error.
  • The API is being updated, or API policies are being changed.

How to configure maintenance mode

The maintenance policy needs to be configured before the APIcast policy in the policy chain. The following is an example of the full configuration:

[
  {
    "name": "maintenance_mode",
    "version": "builtin",
    "configuration": {
      "message_content_type": "text/plain; charset=utf-8",
      "status": 503,
      "message": "Service Unavailable - Maintenance. Please try again later."
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]
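
Once the policy is active, every request is short-circuited at the gateway and answered with the configured status and message. An illustrative test against a placeholder staging URL:

curl -i "https://<<api staging URL>>/some/path"

The gateway should respond along these lines:

HTTP/1.1 503 Service Unavailable

Service Unavailable - Maintenance. Please try again later.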

IP check policy

API providers often need to either allow API calls only from a certain set of pre-configured IPs, or to deny calls from a set of IPs to prevent misuse of the API. By default, the APIcast gateway exposes the API as an HTTP/HTTPS endpoint and does not allow or deny calls based on the requester’s IP. Adding the IP check policy to the policy chain lets you configure this behavior.

When to use IP check

Here are some examples of when you might want to use this policy:

  • The API is intended for internal consumers only, so the endpoint should accept requests only from a set of IPs or IP blocks within the network.
  • A known set of IPs has been blocked by the provider for misusing the API, for example through malicious requests or DDoS attacks. The gateway can deny requests from those IPs before they ever reach the API.
  • The API provider wants to allow a block of partner IPs to access the API while preventing requesters outside that network from accessing it.

Please note that the APIcast IP check operates at the application layer (layer 7 of the OSI model). For more robust protection, combine it with network-level controls, such as whitelisting or blacklisting by MAC address (layer 2) or firewall policies configured at layers 3 and 4.

How to configure IP check

The IP check policy needs to be configured before the APIcast policy in the policy chain.

Apply the following configuration by sending it in the body of a PUT request to the Admin API, as described earlier:

[
  {
    "name": "ip_check",
    "version": "builtin",
    "configuration": {
      "error_msg": "IP address not allowed",
      "client_ip_sources": [
        "last_caller",
        "X-Forwarded-For",
        "X-Real-IP"
      ],
      "ips": [
        "1.2.3.4",
        "1.2.3.0/4"
      ],
      "check_type": "whitelist"
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]
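
To deny a known set of addresses instead of allowing them, the same policy can be configured with check_type set to blacklist. The following is a minimal sketch; the IP values are placeholders:

[
  {
    "name": "ip_check",
    "version": "builtin",
    "configuration": {
      "error_msg": "IP address not allowed",
      "check_type": "blacklist",
      "ips": [
        "203.0.113.7",
        "198.51.100.0/24"
      ]
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]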

Edge limiting policy

In 3scale, you can control the number of API calls to the backend API by setting limits in the application plans. The rate limit can be set according to the plan type, pricing model, or access control requirements. However, this rate limit does not take into account the backend API’s throughput capacity, which can overwhelm the API as the number of applications and requests grows with the number of subscribers. An edge limiting policy enforces a total number of requests in a given time period for the backend API across all applications, so that the backend API is protected. You can set rate limits that enforce the number of requests per second or the number of concurrent connections allowed.

Application plans only enforce rate limits per application. Applications are shared across all the users for a particular account, so you can use the edge limiting policy to allow for user-specific rate limiting by uniquely identifying users through a JWT claim or request parameters. This ensures no single account user can monopolize the call limit set for the application. Using Liquid templates, you can set rate limits based on variables like remote IP address, headers, JWT variables, or URL path parameters.

The edge limiting policy uses the OpenResty lua-resty-limit-traffic library. The policy allows the following limits to be set:

  • leaky_bucket_limiters, based on the leaky bucket algorithm, which builds on the average number of requests plus a maximum burst size.
  • fixed_window_limiters, based on a fixed window of time: last n seconds.
  • connection_limiters, based on the concurrent number of connections.

Policies normally apply at the gateway level, but the edge limiting policy can be scoped to the service and applied to the total number of requests to the backend, irrespective of the number of gateways deployed. When multiple gateways share an edge limit, an external Redis database can be configured to store the shared counters.

When to use edge limiting

Here are some examples of when you might consider using this policy:

  • You want to set an overall limit on the backend API across all users, accounts, and applications.
  • You want to control throughput for popular APIs with hundreds of consumers. For example, if the API can handle 100 concurrent connections, the edge limiting policy can be set accordingly, and it will apply to all applications.
  • An application plan may have a rate limit of 10 requests per minute, but the number of applications using that plan depends on the number of consumers. If there are 1,000 applications, then theoretically, up to 10,000 requests could be allowed per minute. Setting the edge limit enforces an overall usage limit that matches the API’s capacity.
  • An application plan allows for 100 requests per minute, but a single client IP is making 100% of the requests, and other users of the same account are unable to get through. Setting a limit of 10 requests per minute per client IP ensures the plan is used fairly across the account.

How to configure edge limiting

The edge limiting policy needs to be configured before the APIcast policy in the policy chain. The following examples demonstrate some sample configurations.

To set a concurrent connections rate limit globally with an external Redis storage database:

[
  {
    "name": "rate_limit",
    "version": "builtin",
    "configuration": {
      "limits_exceeded_error": {
        "status_code": 429,
        "error_handling": "exit"
      },
      "configuration_error": {
        "status_code": 500,
        "error_handling": "exit"
      },
      "fixed_window_limiters": [],
      "connection_limiters": [
        {
          "condition": {
            "combine_op": "and"
          },
          "key": {
            "scope": "service",
            "name_type": "plain"
          },
          "conn": 100,
          "burst": 50,
          "delay": 1
        }
      ],
      "redis_url": "redis://gateway-redis:6379/1"
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]

To set leaky bucket limiters for a client IP address:

[
  {
    "name": "rate_limit",
    "version": "builtin",
    "configuration": {
      "limits_exceeded_error": {
        "status_code": 429,
        "error_handling": "exit"
      },
      "configuration_error": {
        "status_code": 500,
        "error_handling": "exit"
      },
      "fixed_window_limiters": [],
      "connection_limiters": [],
      "redis_url": "redis://gateway-redis:6379/1",
      "leaky_bucket_limiters": [
        {
          "condition": {
            "combine_op": "and"
          },
          "key": {
            "scope": "service",
            "name_type": "liquid",
            "name": "{{ remote_addr }}"
          },
          "rate": 100,
          "burst": 50
        }
      ]
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]

To set a fixed-window limiter for a header match:

[
  {
    "name": "rate_limit",
    "version": "builtin",
    "configuration": {
      "limits_exceeded_error": {
        "status_code": 429,
        "error_handling": "exit"
      },
      "configuration_error": {
        "status_code": 500,
        "error_handling": "exit"
      },
      "fixed_window_limiters": [
        {
          "window": 10,
          "condition": {
            "combine_op": "and"
          },
          "key": {
            "scope": "service",
            "name_type": "liquid",
            "name": "{{ jwt.sub }}"
          },
          "count": 100
        }
      ]
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]
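
Whichever limiter you choose, a client that exceeds the configured limit receives the status code defined in limits_exceeded_error. An illustrative exchange (the URL and key are placeholders):

curl -i "https://<<api staging URL>>/resource?user_key=<<user key>>"

HTTP/1.1 429 Too Many Requests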

Conclusion

In this article, we saw how policies can be used to fine-tune the rate limiting and access controls that are set up for the APIcast gateway. In the next article, we will explore using advanced security policies that can work in conjunction with OIDC for providing authorization controls.

You can try out the 3scale API Management platform and create policies, as shown in this article, by signing up for free.


Enhance application security by rotating 3scale access tokens

In Red Hat 3scale API Management, access tokens allow authentication against the 3scale APIs. An access token can provide read and write access to the Billing, Account Management, and Analytics APIs. Therefore, ensuring you are handling access tokens carefully is paramount.

This article explains how to enhance security by making access tokens ephemeral. By the end of the article, you will be able to set up 3scale to perform access token rotation. An external webhook listener service performs the actual token revocation. The rotation takes place automatically after a specific event triggers a webhook.

Note: This article does not cover access tokens used with the 3scale gateway as part of any OAuth or OpenID Connect flows.

A step-by-step guide to access token rotation

Example scenario: I want single-use access tokens so that after the token is used for creating an application via the 3scale API, it will automatically be revoked.

Prerequisites

Setting up the access tokens

Note: It is important to complete the following steps before proceeding to the “End-to-end flow in action” section.

For this tutorial, you can use the example application AccessTokenRevoker. However, it’s recommended to implement your own webhook listener service in your preferred language for a real-world scenario.

  1. Create a custom field definition on the Application object and make it hidden. Set the Name and Label fields, as shown in Figure 1. For this example, we will use token_value and Token Value, respectively. The Name field is important here because that is the parameter that will be parsed from the webhook object later.

    The custom sign-up form fields; Name and Label are defined and the Hidden checkbox is selected.

    Figure 1: Defining the custom sign-up form fields.

  2. Configure 3scale webhooks to deliver upon admin portal actions, specifically for the Application created event, as shown in Figure 2.

    3scale webhooks configuration; the Application created checkbox is selected.

    Figure 2: Configure 3scale webhooks.

  3. Deploy an application running as an external service to 3scale where the 3scale webhooks will be delivered and parsed; the application will subsequently make an API call to 3scale to rotate the access token. See the example application instructions to deploy and configure the application to follow along with the steps. Here is an example command that you can use to start the example application:
    node app.js --url "https://{TENANT | ACCOUNT}-admin.{WILDCARD_DOMAIN | 3scale.net}"
    
  4. The API call to 3scale should pass the value of the custom field Token Value as both the access_token and id parameters to the Personal Access Token Delete API. Here is an example of a curl request the application should implement to revoke the token successfully:
    curl -X DELETE "https://{TENANT | ACCOUNT}-admin.{WILDCARD_DOMAIN | 3scale.net}/admin/api/personal/access_tokens/${ACCESS_TOKEN}.json" -d "access_token=${ACCESS_TOKEN}"

End-to-end flow in action

  1. The admin user creates an access token with read/write permissions and scope for the Account Management API.
  2. The user creates an application from the 3scale API and is required to add a value to the field created in the “Setting up the access tokens” section. The value of this field should be equal to the access token created in step 1. The following is an example of a curl request to the 3scale API. You can use this to create a new application and rotate the token. Notice the token_value field contains the access token as a value in the request’s POST data:
    curl -X POST "https://{TENANT | ACCOUNT}-admin.{WILDCARD_DOMAIN | 3scale.net}/admin/api/accounts/${ACCOUNT_ID}/applications.xml" -d "access_token=${ACCESS_TOKEN}&plan_id=${PLAN_ID}&name=${APPLICATION_NAME}&description=demo&token_value=${ACCESS_TOKEN}"
    
  3. The Application created event triggers the 3scale webhook. Figure 3 is an example of what the webhook’s request body looks like. The token value is visible as part of the extra_fields object in req.body.event.object[0].application[0].extra_fields[0].token_value[0].

    Sample body for a webhook's request.

    Figure 3: Webhook’s request body.

  4. The external service listening for 3scale webhooks receives this object. It then parses the body for the custom field defined previously. In the example, the value of token_value is stored for the API call to 3scale.

Once the token has been deleted, it will no longer function. Cleaning it up from the hidden field on the application is optional at this point, given that the field is hidden and no longer valid.

Alternatively, in the setup phase, you can set the custom field definition to required instead of hidden. This prevents the user from creating an application without setting this important field. This way, the custom field is visible by default to developers if they have access to the developer portal. This can pose a security threat while the access token is still valid. As a further step, you can ensure that the field doesn’t render in HTML by customizing the liquid templates in the developer portal.

Supported webhook event triggers

It is possible to configure this workflow to fit the API provider’s requirements to rotate tokens after any of the supported webhook event triggers shown in Figure 4.

Events that trigger webhooks are listed under the categories Accounts, Users, Applications, and Keys.

Figure 4: Events that trigger webhooks.

Note: Remember that webhooks will be triggered for the same events that occur due to actions executed from the Developer portal.

Conclusion

In this article, you learned how 3scale users can use temporary access tokens to access all the features available via the 3scale API, keeping security in mind. Feel free to comment on this article with any suggestions for how to improve this content.


Deploy integration components easily with the Red Hat Integration Operator

Any developer knows that when we talk about integration, we can mean many different concepts and architecture components. Integration can start with the API gateway and extend to events, data transfer, data transformation, and so on. It is easy to lose sight of what technologies are available to help you solve various business problems. Red Hat Integration’s Q1 release introduces a new feature that targets this challenge: the Red Hat Integration Operator.

The Red Hat Integration Operator helps developers easily explore and discover components available in Red Hat OpenShift. One single Operator is now the entry point for getting the latest Red Hat Integration components. The Operator declaratively manages the components we want to enable, the namespaces to which we want to deploy, and their scope in the Red Hat OpenShift cluster using a Kubernetes custom resource.

Now in general availability, the Red Hat Integration Operator lets you choose and install any of the following Operators:

  • 3scale
  • 3scale APIcast
  • AMQ Broker
  • AMQ Interconnect
  • AMQ Streams
  • API Designer
  • Camel K
  • Fuse Console
  • Fuse Online
  • Service Registry

In this article, we will review the different components that the Red Hat Integration Operator manages and see how to install the Operator in OpenShift with OperatorHub. Finally, we will install the 3scale APIcast and AMQ Streams Operators using a customized Installation resource.

Installing the Red Hat Integration Operator with OperatorHub

To get started using the Red Hat Integration Operator on OpenShift with OperatorHub, follow these steps:

  1. In the OpenShift Container Platform web console, log in as an administrator and navigate to the OperatorHub.
  2. Type in Red Hat Integration to filter within the available operators.
  3. Select the Red Hat Integration Operator.
  4. Check the Operator details and click Install.
  5. On the Install Operator page, accept all of the default selections and click Install.
  6. When the install status indicates the Operator is ready to use, click View Operator.
Locating Red Hat Integration in the OpenShift OperatorHub via a filter search

Figure 1: Searching for Red Hat Integration Operator in the OpenShift OperatorHub

Now you are ready to install Operators for Red Hat Integration components. For more information on installing the Operator from OperatorHub, see the OpenShift documentation.

Installing Operators for Red Hat Integration components

On the Operator page, you will see the Operator has provided a new custom resource definition (CRD) called Installation. You can click Create Instance to produce a new custom resource (CR) to trigger the Operators’ installation. The Create Installation page displays the complete set of Red Hat Integration Operators available for installation. All Operators are enabled for installation by default.

If you prefer, you can configure the installation specification from the form or YAML views before performing the installation. Doing so lets you include or exclude Operators, change namespaces, and switch between an Operator’s supported modes.

Here is an example configuration for 3scale APIcast and AMQ Streams:

apiVersion: integration.redhat.com/v1
kind: Installation
metadata:
  name: rhi-installation
spec:
  3scale-apicast-installation:
    enabled: true
    mode: namespace
    namespace: rhi-3scale-apicast
  amq-streams-installation:
    enabled: true
    mode: namespace
    namespace: rhi-streams
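
If you prefer the command line, you can save the custom resource to a file and apply it with oc. A minimal example; the file name and target namespace are placeholders and should match wherever the Red Hat Integration Operator is installed:

oc apply -f rhi-installation.yaml -n <operator-namespace>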

When you navigate to the Installed Operators view, you should see the list of installed Operators and their managed namespaces (see Figure 2).

A list of installed Operators in the OpenShift Container Platform web console

Figure 2: Viewing installed Operators in the OpenShift Container Platform web console

The Red Hat Integration Operator also upgrades the Operators for the Red Hat Integration components it has installed. It defaults to an automatic approval strategy, so the components are upgraded to use the latest available versions.

Check the complete documentation for more information about installing the Red Hat Integration Operator.

Conclusion

In the previous sections, we explored the different components that the Red Hat Integration Operator can install using the Installation custom resource on OpenShift. We also reviewed a simple way to install the Operator using OperatorHub and used a customized file to install only two components in their namespace scope. As you can see, it is now easier than ever to get started with Red Hat Integration on OpenShift.

Red Hat Integration provides an agile, distributed, and API-centric solution that organizations can use to connect and share data between applications and systems in a digital world. Get started by navigating to your OpenShift OperatorHub and installing the Red Hat Integration Operator.

The Red Hat Integration Operator is also available in Red Hat Marketplace.

If you want to know more about the Red Hat Integration Operator, watch this video from Christina Lin.


Packaging APIs for consumers with Red Hat 3scale API Management

One of an API management platform’s core functionalities is defining and enforcing policies, business domain rate limits, and pricing rules for securing API endpoints. As an API provider, you sometimes need to make the same backend API available for different consumer segments using these terms. In this article, you will learn about using Red Hat 3scale API Management to package APIs for different consumers, including internal and external developers and strategic partners. See the end of the article for a video tutorial that guides you through using 3scale API Management to create and configure the packages that you will learn about in this article.

About Red Hat 3scale API Management

3scale API Management is a scalable, hybrid cloud API management framework that is part of the Red Hat Integration product portfolio. Figure 1 is a simplified view of the 3scale API Management framework.

The flow of interactions between the API consumer, API manager, and API gateway in 3scale API Management.

Figure 1: A high-level view of 3scale API Management.

The API Manager is the framework’s control plane. It provides an interface for the API provider to create users, accounts, policies, and services. The API Gateway (APIcast) enforces the policies for APIs in the data plane.

Policies for API access

We can use 3scale API Management to create consumer segments where distinct policies are enforced for the same API. For example, let’s say we need to expose a single API endpoint to three different consumer audiences: internal developers, external developers, and strategic partners. Table 1 shows a sample scenario of the packages that we could create for each audience.

Table 1: A basic application plan for three audiences
Package             | Rate limits             | Pricing rules  | Features
Internal developers | None                    | Free           | Internal
External developers | 10 calls per minute     | $0.01 per call | Basic
Strategic partners  | 1 million calls per day | $100 per month | Premium

Note: Although the rate limit is set to “None” for internal developers, it is better to set a high rate limit to prevent distributed denial-of-service (DDoS) attacks. Additionally, while the rate limit for strategic partners is expressed per day, it would be better to set it on a per-minute basis. Doing that would prevent overloading systems with heavy loads in short bursts.

Figure 2 shows the API packages from Table 1.

A visual representation of the API packages from Table 1.

Figure 2: API packages for internal developers, external developers, and strategic partners.

The Rate limits policy, shown in Figure 2, enforces call limits on APIs. Limits are defined for each method, and the same package can enforce different limits for each API method. Pricing rules are used to enable metering and chargeback for API calls. Pricing rules are defined for each API method, and the same package can enforce different pricing rules for each API method. Finally, the Features policy lets us define multiple features for each package. 3scale API Management adds metadata tags to each package to uniquely identify and map its available features.

3scale API Management’s packaging scenario is common, and most API management platforms support something similar. In the following sections, we will look at the different types of plans available from 3scale API Management.

Application plans

Application plans establish the rules (limits, pricing, features) for using an API. Every application request to the API happens within the constraints of an application plan. Every API in 3scale API Management must have at least one defined application plan. It is also common to define multiple plans to target different audiences using the same API. Figure 3 shows the relationship of the API to application plans, consumer audiences, and policies.

A single API with three different audiences and their respective application plans and policies.

Figure 3: A single API can have multiple application plans enforcing different policies for different users.

Each consumer application is mapped uniquely to a single application plan. When the application requests an API, 3scale API Management applies the rate limits and pricing rules for that application and updates its usage statistics. Application plans are the lowest granularity of control available in 3scale API Management. Most packaging requirements can be met by using one or more application plans per API.

Beyond application plans

In some cases, we need to use specialized plans to define policies for multiple application plans for an API or developer account. A default plan is available to all API providers, but specialized plans—which define complex relationships between services, applications, and accounts—must be explicitly enabled. The decision to use one or more specialized plans should be considered during the API design phase and documented in detail to avoid unexpected outcomes. The next sections introduce service plans and account plans.

Service plans

We can use service plans to subscribe consumers to APIs. Service subscriptions are enabled by default in 3scale API Management, and only one service plan is enabled per subscription. A service plan provides service-level features and plans for all applications consuming the service under that plan.

As an example, the plan described in Table 2 adds a new feature to the application plans we developed in the previous section.

Table 2: Adding new features to the three basic application plans
Package             | Rate limits             | Pricing rules  | Features
Internal developers | None                    | Free           | Internal, developers
External developers | 10 calls per minute     | $0.01 per call | Basic, developers
Strategic partners  | 1 million calls per day | $100 per month | Premium, partners

We could set up the new features individually for each application plan. However, it would be better to define the “default” service plan features and enable corresponding features in the application plans as required.

Table 3 describes a more complex scenario, where the API provider needs to provide two or more application plans for partners.

Table 3: Multiple application plans
Package                        | Rate limits             | Pricing rules  | Features
Strategic Partners Bronze Plan | 100,000 calls per day   | $30 per month  | Premium, partners, bronze, developers
Strategic Partners Silver Plan | 500,000 calls per day   | $60 per month  | Premium, partners, silver, testing
Strategic Partners Gold Plan   | 1 million calls per day | $100 per month | Premium, partners, gold, production

In this case, the API provider could allow a single partner account to sign up for multiple plans. For example, a strategic partner could use the bronze plan for applications in development, the silver plan for quality assurance (QA), and the gold plan for production applications. To provide the partner with standard pricing across all application plans, we could use the service plan described in Table 4.

Table 4: Introducing a service plan
Service plan                    | Set up fees | Pricing rules  | Features
Strategic Partners Premium Plan | $50         | $100 per month | Premium, partners, customers

Figure 4 shows a typical scenario using service plans and application plans in tandem.

A single API with two different service plans: One for developers and one for strategic partners.

Figure 4: Combining service plans and application plans in 3scale API Management.

To recap, consider using custom service plans for these types of use cases:

  • Custom features that multiple application plans can inherit.
  • A custom trial period that is applicable across multiple application plans for the same API.
  • Set up fees or fixed fees that are applicable across multiple application plans for the same API.

Account plans

Account plans are used to apply subscription criteria to consumer accounts. Instead of managing API access, like application plans and service plans, this plan packages accounts and applies the account plan across all the APIs accessed by a given account. Account plans create “tiers” of usage within the developer portal, allowing you to distinguish between grades of support, content, and other services that partners at different levels receive.

Let’s say that an API provider wants to cater to three different partner levels, with policies for each, as shown in Table 5.

Table 5: A sample account plan for three levels of partners
Package                        | Set up cost | Monthly cost   | Features
Strategic Partners Bronze Plan | Free        | $30 per month  | Premium, partners, bronze, no support
Strategic Partners Silver Plan | $50         | $60 per month  | Premium, partners, silver, standard support
Strategic Partners Gold Plan   | $100        | $100 per month | Premium, partners, gold, 24/7 support, dedicated account

The provider chooses to charge a fixed monthly cost and setup cost instead of charging per API or application plan. In this case, it makes sense to have a plan operate at the account level so that the same policies apply to all the APIs and applications associated with that account. The API provider could also create different setup costs and support plans for different sets of customer accounts. Figure 5 illustrates the relationship between account plans and APIs.

Internal developers receive a basic plan, while external developers and strategic partners receive a standard trial plan.

Figure 5: Use an account plan to apply the same policies to all APIs and applications associated with an account.

3scale API Management provides a default account plan for all developer accounts. The default plan ensures that application access is controlled through individual service and application plans. If you needed to define features for a set of developer accounts independent of the number of applications, you might consider implementing an account plan. Account plans also work well when the setup fee, usage fee, or the length of a trial period is fixed for the account regardless of the number of APIs subscribed.

Watch the video

Watch the following video for a guide to using 3scale API Management to package and combine API plans for a variety of consumers.

Conclusion

As you have seen, it is possible to accomplish many complex API packaging scenarios using 3scale API Management and the right combination of account plans, service plans, and application plans. This article discussed strategies for packaging a single API backend endpoint. 3scale API Management also supports an API-as-a-product functionality that lets us package multiple backend APIs using the same policies and plans. My next article in this series introduces the API-as-a-product functionality and use cases.


Custom policies in Red Hat 3scale API Management, Part 1: Overview

API management platforms such as Red Hat 3scale API Management provide an API gateway as a reverse proxy between API requests and responses. In this stage, most API management platforms optimize the request-response pathway and avoid introducing complex processing and delays. Such platforms provide minimal policy enforcement such as authentication, authorization, and rate-limiting. With the proliferation of API-based integrations, however, customers are demanding more fine-tuned capabilities.

Policy frameworks are key to adding new capabilities to the API request and response lifecycle. In this series, you will learn about the Red Hat 3scale API Management policy framework and how to use it to configure custom policies in the APIcast API gateway.

Policy enforcement with 3scale API Management

APIcast is 3scale API Management’s default data-plane gateway and policy enforcement point for API requests and responses. Its core functionality is to enforce rate limits, report methods and metrics, and use the mapping paths and security specified for each API defined in the 3scale API manager.

APIcast is built on NGINX. It is a custom implementation of a reverse proxy using the OpenResty framework, with modules written in Lua. Most NGINX functionality is implemented using modules, which are controlled by directives specified in a configuration file.

APIcast enforces API configuration rules that are set in the 3scale API manager. It authenticates new requests by connecting to the service API exposed by the API manager. It also allows access to the backend API and reports usage. Figure 1 presents a high-level view of APIcast in 3scale API Management’s API request and response flow.

A diagram of APIcast in the 3scale API Management request and response flow.

Figure 1: APIcast in the 3scale API Management API request and response flow.

The default APIcast policy

A default APIcast policy interprets the standard configuration and provides API gateway functionality. The default policy acts as the API’s entry point to the gateway and must be enabled for all APIs configured in 3scale API Management. The APIcast policy ensures that API requests are handled using the rules configured in the API manager. The configuration is provided to the APIcast gateway as a JSON specification, which APIcast downloads from the 3scale API Management portal.

Each HTTP request passes through a sequence of phases. A distinct type of processing is performed on the request in each phase. Module-specific handlers can be registered in most phases, and many standard NGINX modules register their phase handlers so that they will be called at a specific stage of request processing. Phases are processed successively, and phase handlers are called once the request reaches the phase.

To customize request processing, we can register additional modules at the appropriate phase. 3scale API Management provides standard policies that are pre-built as NGINX modules and can be plugged into each service’s request.

Custom APIcast policies

In addition to the default policy, 3scale API Management provides custom policies that can be configured to each API. Using modules and configurations, these policies provide custom features for handling API requests and responses. Using custom modules makes the APIcast gateway highly customizable. It is possible to add custom processing and functionality on demand without modifying API gateway code or writing any additional code.

Note: See the chapter on APIcast policies in the 3scale API Management documentation for a list of all the standard policies that are available to be configured directly with APIcast.

Policy chaining

Policies must be placed in order of execution priority, and this placement is called policy chaining. Policy chains affect the default behavior of any combination of policies. The default APIcast policy should be part of the policy chain. If a custom policy needs to be evaluated before the APIcast policy, it must be placed before that policy in the chain. Figure 2 shows an example of the policy order as defined in the policy chain.

A diagram of the custom policy chain.

Figure 2: A custom URL-rewriting policy in the policy chain.

For example, take a scenario using a URL-rewriting policy to change the URL path from /A/B to /B. Placing the URL-rewriting policy before the APIcast policy ensures that the path is changed before gateway processing. Backend rules, mapping rules, and metrics all will be evaluated using the /B URL path.

If, on the other hand, the custom policy should be evaluated after the APIcast policy, you can reverse the order. As an example, if you wanted the mapping rules to be evaluated for /A/B, with the URL rewrite to /B applied afterward, then you would place the URL rewriting policy after the APIcast policy.
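
For reference, here is a minimal sketch of the first scenario (rewriting before the APIcast policy), expressed as the JSON policy chain that APIcast consumes; the regex and replacement values are illustrative:

[
  {
    "name": "url_rewriting",
    "version": "builtin",
    "configuration": {
      "commands": [
        {
          "op": "sub",
          "regex": "^/A/B",
          "replace": "/B"
        }
      ]
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]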

Configuring custom policies

There are two ways to add a new policy to an API. One option is to use the Policy section for each managed API in the 3scale API Management Admin Portal, where all of the standard policies are available to add. If you prefer to use the Admin API, then you can provide the policy as a JSON specification, which you can upload to the service configuration using the provided REST API.

You also can copy a set of policies as part of the service configuration from one running environment to another using the 3scale Toolbox. To verify the set of policies applied to a specific API, you can download the current configuration and configuration history from the administration portal or using the provided REST API.

Check the video below for a demonstration of the policy configuration in 3scale API Management.

Conclusion

3scale API Management provides multiple options for configuring custom API policies. This article introduced the 3scale API Management policy framework, custom policies, and policy chaining. I also presented a brief example of how to configure and view policies in 3scale API Management. Future articles in this series will look at the available policies, and I will introduce the developer toolset that you can use to create your own custom policies. In the meantime, you can explore 3scale API Management by signing up for a free 3scale API Management account.


Installing Red Hat OpenShift API Management

Red Hat OpenShift API Management is a new managed application service designed as an add-on to Red Hat OpenShift Dedicated. The service provides developers with a streamlined experience when developing and delivering microservices applications with an API-first approach.

OpenShift API Management was built on the success of 3scale API Management and designed to let developers easily share, secure, reuse, and discover APIs when developing applications. This article shows you how to install OpenShift API Management as an add-on to your OpenShift Dedicated cluster. As you will see, it takes less than 10 minutes to install, configure, administer, and be up and running with OpenShift API Management. Check the end of the article for the included video demonstration.

Step 1: Get OpenShift Dedicated

You will need an OpenShift Dedicated subscription to provision a cluster to run Red Hat OpenShift API Management.

Start by provisioning your cluster:

  1. Log in to your account on cloud.redhat.com/openshift.
  2. Open the Red Hat OpenShift Cluster Manager.
  3. Create a cluster by selecting Red Hat OpenShift Dedicated.
  4. Choose your cloud provider.
  5. Fill in all the cluster configuration details: cluster name, regions, availability zones, compute nodes, node instance type, CIDR ranges (these are optional, but cannot be modified at a later date), and so on.
  6. Submit your request and wait for the cluster to be provisioned.

Step 2: Select and configure an identity provider

Once your cluster is ready, you can configure your preferred identity provider (IDP).

To start, select your IDP from the drop-down menu. Options include GitHub, Google, OpenID, LDAP, GitLab, and other providers that support OpenID flows.

After you have configured your IDP, you will need to add at least one user to the dedicated-admins group. Click Add user within the Cluster administrative users section, then enter the username of the new dedicated-admin user.

Access levels vary per organization. Any user that requires full access should be defined as an administrator. As an administrator, you can use 3scale API Management and OpenShift’s role-based access control (RBAC) features to limit other users’ access.

Step 3: Install OpenShift API Management

The process to install OpenShift API Management is straightforward. From the cluster details page, you can navigate to the Add-ons tab and find the services available to you. Here’s the setup:

  1. Go to the Add-ons tabs on the cluster details page.
  2. Choose Red Hat OpenShift API Management and hit Install.
  3. Provide the email address for the account that will receive service-related alerts and notifications.
  4. Click Install again to allow the Operators to install and configure the service.

After installing the add-on, you can access the service via the Application Launcher menu located in the OpenShift Dedicated console’s top-right corner. The Application Launcher provides direct access to the OpenShift API Management and Red Hat single sign-on (SSO) technology service UIs.

Watch the video demonstration

Check out the following video to see the OpenShift API Management installation steps in action:

Using OpenShift API Management

There you go! Getting OpenShift API Management installed and running on your OpenShift Dedicated cluster is as simple as the steps described here. Now you can start using the service. Check out the following resources to get going:


5 steps to manage your first API using Red Hat OpenShift API Management

Companies are increasingly using hosted and managed services to deliver on application modernization efforts and reduce the burden of managing cloud infrastructure. The recent release of Red Hat OpenShift API Management makes it easier than ever to get your own dedicated instance of Red Hat 3scale API Management running on Red Hat OpenShift Dedicated.

This article is for developers who want to learn how to use Red Hat’s hosted and managed services to automatically import and manage exposed APIs. We’ll deploy a Quarkus application on OpenShift Dedicated, then use OpenShift API Management to add API key security. See the end of the article for a video demonstration of the workflow described.

Prerequisites

This article assumes that you already have the following:

  • Access to a cloud.redhat.com account.
  • An existing OpenShift Dedicated cluster or the ability to deploy one.
  • Entitlements to deploy the Red Hat OpenShift API Management add-on.
  • A development environment with:

Step 1: Obtain an OpenShift Dedicated cluster

Using a hosted and managed service like OpenShift API Management makes this step straightforward. See this video guide to obtaining an OpenShift Dedicated cluster and installing the OpenShift API Management add-on. You can also find instructions in this article and in the OpenShift API Management documentation.

Once you’ve obtained your OpenShift Dedicated cluster and installed the Red Hat OpenShift API Management add-on, we can move on to the next step.

Step 2: Create a project using the OpenShift CLI

Logging into an OpenShift Dedicated cluster via the OpenShift command-line interface requires a login token and URL. You can obtain both of these by logging into the OpenShift console via a web browser and using the configured IdP. Click Copy Login Command in the dropdown menu displayed under your username in the top-right corner. Alternatively, navigate directly to https://oauth-openshift.$CLUSTER_HOSTNAME/oauth/token/request and use your web browser to obtain a valid login command.

Once you have a token, issue a login command, then create a new project:

  1. $ oc login --token=$TOKEN --server=$URL
  2. $ oc new-project my-quarkus-api

Step 3: Deploy the Quarkus application to OpenShift

The Java application you’ll deploy for this demonstration is based on the example from the Quarkus OpenAPI and Swagger UI Guide. It’s a straightforward CRUD application that supports using a REST API to modify an in-memory list of fruits. You’ll find the source code in this GitHub repository.

Our application’s codebase differs slightly from the Quarkus OpenAPI and Swagger UI Guide example. I made the following changes:

  1. Set quarkus.smallrye-openapi.store-schema-directory in application.properties.
  2. Updated .gitignore to exclude openapi.json and openapi.yaml.
  3. Added the quarkus-openshift extension.

These modifications create a local copy of the OpenAPI spec in JSON format and include tooling that simplifies the deployment process.
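
For reference, the first change amounts to a single entry in application.properties; the directory value shown here is illustrative and may differ from the repository:

# src/main/resources/application.properties (directory value is an example)
quarkus.smallrye-openapi.store-schema-directory=deploy/openapi

If the quarkus-openshift extension is not already declared in the pom, it can be added with ./mvnw quarkus:add-extension -Dextensions="openshift".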

Build and deploy the Quarkus application

Start by cloning the repository to your local environment:

$ git clone https://github.com/evanshortiss/rhoam-quarkus-openapi

Issue the following command to start a local development server and view the Swagger UI at http://localhost:8080/swagger-ui:

$ ./mvnw quarkus:dev

Enter this command to build and deploy the application on your OpenShift Dedicated cluster:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true -Dquarkus.openshift.expose=true

The build progress will be streamed from the OpenShift build pod to your terminal. You can also track the build logs and status in the project’s Builds section in the OpenShift console, as shown in Figure 1.

The Builds section in the OpenShift console.

Figure 1: Viewing build logs in the OpenShift console.

Once the build and deployment process is complete, the URL to access the application will be printed in your terminal. Use this URL to verify that the application’s OpenAPI spec is available at the /openapi?format=json endpoint. It’s important to verify that the JSON response is returned. You’ll need it to import the API to 3scale API Management and automatically generate the 3scale API Management ActiveDocs. Figure 2 shows an example of the response returned by this endpoint.

The OpenAPI spec for Quarkus Fruits.

Figure 2: Viewing the OpenAPI specification in JSON format.

Step 4: Apply Service Discovery annotations

Next, we’ll import the API into 3scale API Management using its Service Discovery feature. For this step, we need to apply a specific set of annotations and labels to the service associated with the Quarkus application. The Service Discovery annotations and labels are documented here.

Use the following commands to apply the necessary annotations:

$ oc annotate svc/rhoam-openapi "discovery.3scale.net/description-path=/openapi?format=json"
$ oc annotate svc/rhoam-openapi discovery.3scale.net/port="8080"
$ oc annotate svc/rhoam-openapi discovery.3scale.net/scheme=http

Add the discovery label using the following command:

$ oc label svc/rhoam-openapi discovery.3scale.net="true"

Verify the label and annotations using:

$ oc get svc/rhoam-openapi -o yaml

The output should be similar to the sample displayed in Figure 3.

The YAML file for the Quarkus Fruits API service definition.

Figure 3: The Quarkus API Service definition in YAML format with annotations and labels.
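
Based on the commands above, the relevant metadata section of the service should look roughly like the following sketch (other fields omitted):

metadata:
  name: rhoam-openapi
  labels:
    discovery.3scale.net: "true"
  annotations:
    discovery.3scale.net/description-path: /openapi?format=json
    discovery.3scale.net/port: "8080"
    discovery.3scale.net/scheme: http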

Step 5: Use Service Discovery to import the API

At this point, you can import the Quarkus Fruits API and manage it using 3scale API Management’s Service Discovery feature. Use the OpenShift Dedicated application launcher to navigate to the 3scale API Management console. Figure 4 shows the application launcher in the top-right corner of the OpenShift Dedicated console.

The OpenShift Dedicated console and application launcher with API Management selected.

Figure 4: Using the application launcher to access 3scale API Management.

Import the API

Log in to 3scale API Management using your configured IdP, and click the New Product link on the dashboard. Perform the following steps on the New Product screen:

  1. Select Import from OpenShift (authenticate if necessary).
  2. Choose the my-quarkus-api namespace from the Namespace dropdown.
  3. Choose the rhoam-openapi service from the Name dropdown.
  4. Click the Create Product button.

Figure 5 shows the new product screen in 3scale API Management.

The 3scale API Management dialog to create a new product.

Figure 5: Creating a new product in 3scale API Management.

At this point, you should be redirected back to the 3scale API Management dashboard. If your new API isn’t listed in the APIs section after a few moments, try refreshing the page. Once the API has been imported and listed on the dashboard, expand it and click the ActiveDoc link. Select rhoam-openapi on the subsequent screen to view the live documentation that was generated from the OpenAPI specification, as shown in Figure 6.

The 3scale API Management ActiveDocs page.

Figure 6: Viewing the generated ActiveDocs in 3scale API Management.

Create an Application Plan in 3scale API Management

Next, you’ll need to configure an Application Plan to interact with the API via a protected route:

  1. Choose Product: rhoam-openapi from the top navigation bar.
  2. Select Applications > Application Plans from the menu on the left.
  3. Click the Create Application Plan link.
  4. Enter “RHOAM Test Plan” in the Name field.
  5. Enter “rhoam-test-plan” in the System name field.
  6. Click the Create Application Plan button.
  7. Click the Publish link when redirected to the Application Plans screen.

Figure 7 shows the dialog to create a new application plan in 3scale API Management.

The 'Create Application Plan' screen in 3scale API Management.

Figure 7: Creating an application plan in 3scale API Management.

Configure a developer account to use the application

Now that you’ve created an Application Plan, you’ll need to sign up a developer account to use the application. Typically, an API consumer signs up using your API Developer portal. For the purpose of this demonstration, you will manually provide the default Developer account with API access:

  1. Select Audience from the top navigation bar.
  2. Select the Developer account from the Accounts list.
  3. Click the 1 Applications link from the breadcrumb links at the top of the screen.
  4. Click the Create Application link and you’ll be directed to the New Application screen.
  5. Select RHOAM Test Plan as the Application Plan.
  6. Enter “RHOAM Test Application” in the Name field.
  7. Enter a description of the API.
  8. Click Create Application.

Once the application is created, you’ll see that an API key is listed under the API Credentials section, as shown in Figure 8. Take note of the key.

The 3scale API Management Application Details screen.

Figure 8: Creating an application for a user generates an API key.

Test the application

Use the top navigation bar to navigate back to the Quarkus API’s product page, then open the Integration > Configuration section. The Staging APIcast section should include an example cURL command for testing, as shown in Figure 9. Copy this command and add /fruits to the URL, for example: https://my-quarkus-api-3scale-staging.$CLUSTER_HOSTNAME:443/fruits?user_key=$YOUR_API_KEY

The 3scale API Management API configuration screen.

Figure 9: The example cURL command now contains a valid API key.

Issuing the cURL command or pasting the URL into a web browser returns the list of fruits from the Quarkus API. Congratulations: You’ve deployed a Quarkus-based REST API on OpenShift and protected it using Red Hat 3scale API Management.
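
For reference, the request and response should look something like the following; the hostname, key, and fruit entries are illustrative:

curl "https://my-quarkus-api-3scale-staging.$CLUSTER_HOSTNAME:443/fruits?user_key=$YOUR_API_KEY"

[{"name":"Apple","description":"Winter fruit"},{"name":"Pineapple","description":"Tropical fruit"}]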

Video demonstration: Red Hat OpenShift API Management

If you want to go over the steps in this article again, see this video guide to using Red Hat OpenShift API Management, Quarkus​, and 3scale API Management to automatically import and manage exposed APIs.

Summary and next steps

If you’ve made it this far, you have successfully:

  • Provisioned an OpenShift Dedicated cluster.
  • Installed the Red Hat OpenShift API Management add-on.
  • Deployed a Quarkus application on your OpenShift Dedicated cluster.
  • Applied custom labels and annotations to a service using the OpenShift CLI.
  • Imported the Quarkus API into 3scale API Management and protected it using API key security.

Now that you’ve learned the basics of OpenShift Dedicated and 3scale API Management, why not explore other OpenShift Dedicated and Red Hat OpenShift API Management features? Here are some ideas:

  • Familiarize yourself with the single sign-on instance that’s included with your Red Hat OpenShift API Management add-on. You could consider using Red Hat’s single sign-on (SSO) technology instead of API key security to protect routes using OpenID Connect. (SSO is accessible from the OpenShift Dedicated application launcher.)
  • Learn more about OpenShift and your cluster by following a quickstart from the OpenShift web console’s developer perspective.
  • Delete the unprotected route to the Quarkus API using the OpenShift console or CLI (see the sketch below). This was the route you used to view the OpenAPI in JSON format.
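If you use the CLI for that cleanup, a minimal sketch looks like the following; the project and route names are placeholders, so list the routes first to find the unprotected one:

# Placeholders: substitute your project (namespace) and the unprotected route's name
$ oc get routes -n <your-project>
$ oc delete route <unprotected-route-name> -n <your-project>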

Happy coding!

OpenID Connect integration with Red Hat 3scale API Management and Okta

This article introduces you to using Red Hat 3scale API Management for OpenID Connect (OIDC) integration and compliance. Our goal is to secure an API in 3scale API Management using JSON Web Token (JWT), OIDC, and the OAuth 2.0 Authorization Framework. We will set up the integration using Okta as our third-party OpenID Connect identity provider. An important part of the demonstration is establishing the 3scale API Management gateway’s connection with Okta.

Note: This article is not a deep dive into OIDC or OAuth 2.0. I won’t cover the details of authentication and authorization flows. Toward the end of the article, you will see how to obtain an access token, which you will need to execute a request against a protected API.

Prerequisites

For demonstration purposes, we will use the hosted versions of 3scale API Management and Okta. If you don’t have accounts already, begin by creating free accounts at 3scale.net and okta.com.

Setting up the 3scale API Management OIDC integration

Our first step is to create the simplest possible REST API for the integration. We’ll use the 3scale API Management platform with an API backend pointing to the hosted Echo API: https://echo-api.3scale.net:443.

As an alternative to this setup, you could use a different backend or a self-managed APIcast instance. This article showcases OIDC authentication, so you can adapt the rest of the settings to your own use case.
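Before wiring anything into 3scale API Management, you can sanity-check the backend directly; the Echo API simply reflects each request back as JSON (the exact response fields may vary):

$ curl "https://echo-api.3scale.net:443/test"
# Expect a JSON document echoing the request: the HTTP method, path,
# query arguments, and headers the Echo API received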

Figure 1 shows the OIDC settings in 3scale API Management.

The dialog screen to enter the 3scale API Management OIDC settings for Okta authentication.

Figure 1: Enter the 3scale API Management OIDC settings for Okta authentication.

Note that the settings include AUTHENTICATION, AUTHENTICATION SETTINGS, and OPENID CONNECT (OIDC) BASICS. The OpenID Connect issuer URL is set to an Okta custom authorization server named “default.” Okta’s built-in org authorization server cannot be extended with custom claims, so we use this custom authorization server for the example.
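You can preview what the gateway will discover from that issuer by querying the authorization server’s discovery document yourself. The Okta domain below is a placeholder, and /oauth2/default is the issuer path for the custom authorization server named default:

$ curl "https://<your-okta-domain>/oauth2/default/.well-known/openid-configuration"
# Returns the issuer, the authorization and token endpoint URLs, the supported
# scopes, and the JWKS URI used to fetch the token-signing keys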

Overview of the 3scale API Management Okta integration

So far, we have employed OpenID Connect’s .well-known/openid-configuration endpoint to connect 3scale API Management with Okta. The 3scale API Management gateway determines what it needs from the OpenID Connect issuer URL, which we’ve just defined. Before going further, let’s clarify what we want to accomplish. The diagram in Figure 2 illustrates the step-by-step process for integrating 3scale API Management and Okta.

A diagram of the 3scale API Management and Okta integration.

Figure 2: An overview of the 3scale API Management and Okta integration.

Our goal is to call a protected API resource from 3scale API Management and use Okta for user-delegated access. Step 1 assumes that we’ve retrieved a JSON Web Token from the Okta authorization server, as defined in the OIDC specification. We will experiment with the OIDC authorization flow later.

After calling the API in Step 2, 3scale API Management verifies the JSON Web Token in Step 3. If the token is valid, 3scale API Management dispatches the request to the backend server, as shown in Step 4.

Verifying the client application ID is essential for the request to succeed. In the next sections, we will look closely at the mechanics of that verification.

Verify and match the JWT claim

The 3scale API Management gateway secures every request by checking the associated JSON Web Token for the following characteristics (a quick way to inspect these claims yourself is sketched after the list):

  • Integrity: Has the JWT been tampered with by a malicious user (signature check)?
  • Expiration: Has the token expired?
  • Issuer: Was the token issued by an authorization server known to the 3scale API Management gateway?
  • Client ID: Does the token contain a claim that matches a client application ID known to the 3scale API Management gateway?
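Here is a rough way to decode the token payload locally and look at those claims. It assumes a bash-like shell with base64 and jq installed and the access token stored in $JWT; it only inspects claims and does not verify the signature, which the gateway checks against the issuer’s published keys:

# Decode the JWT payload (the second, base64url-encoded segment) and show
# the issuer, expiration, and client ID claims
payload=$(printf '%s' "$JWT" | cut -d '.' -f2 | tr '_-' '/+')
# Restore the padding that base64url encoding strips
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 --decode | jq '{iss, exp, appid}'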

The next step is to match the 3scale API Management client with the correct JWT claim.

In the 3scale API Management settings, set the ClientID Token Claim field to appid, as shown in Figure 3.

The dialog to set the ClientID Token Claim.

Figure 3: Set the ClientID Token Claim to appid.

This configuration tells 3scale API Management which JWT claim to match against a registered client application. For this demonstration, I decided to use appid rather than the default azp claim: the Okta authorization server requires a custom claim anyway, and azp is often misunderstood and misused.

Configuring Okta

Next, let’s head over to the Okta admin portal to configure the Okta authorization server and OpenID Connect application. This configuration allows a client application to request a JSON web token on behalf of a user. Recall that we’re using a custom authorization server (named default) to add the appid JWT claim. The value assigned to this claim will be the Okta client application ID.

Configure the Okta authorization server

As shown in Figure 4, we use the Authorization Servers dialog to add a new claim to the default authorization server.

The Authorization Servers dialog in Okta admin includes the option to add the appid claim to Okta's default authorization server.

Figure 4: Add the appid claim to the default authorization server.

In OpenID Connect, two tokens are usually issued in response to the authorization code flow: the ID token and the access token. We will use the access token to access the protected resource from 3scale API Management, so we only need to add the custom claim to that token type.

Create the OIDC application

While in the Okta admin portal, we’ll use the OpenID Connect sign-on method to create a new application. Figure 5 shows the dialog to create a new application integration.

The dialog to create a new application in the Okta admin portal. The OpenID Connect sign-on method is selected.

Figure 5: Select the option to create an OpenID Connect application.

Next, we use the Create OpenID Connect Integration dialog to create the application, as shown in Figure 6. Note that we’ll use the login redirect URI to retrieve the token later as part of the authorization flow.

Use the 'Create OpenID Connect Integration' dialog to create the Okta app.

Figure 6: Create the OpenID Connect application for Okta.

After creating the OIDC application, locate the Okta-generated client ID on the Client Credentials page, shown in Figure 7. Save this value to use when we create the corresponding client application in 3scale API Management.

The Client ID generated by Okta is located on the Client Credentials page.

Figure 7: Locate the client ID on the Okta admin Client Credentials page.

Create and assign a user to the OIDC application

The last thing we’ll do in Okta is create and assign at least one user to the application, as shown in Figure 8. This makes it possible to complete a valid login during the OpenID Connect authorization flow.

In the Okta admin console, create and assign at least one user to the application.

Figure 8: Create and assign at least one user to the OpenID Connect application in Okta.

This completes the Okta configuration. Next, we will configure a corresponding application in 3scale API Management.

Configuring the 3scale API Management client application

The API gateway can only authorize API calls from a previously registered client application. So, our last step is to create a 3scale API Management application whose credentials match the application we’ve just created in Okta. We only need to match the application_id (also called the client ID), because it is carried by the JWT appid claim.

As an admin user, navigate to the 3scale API Management API docs. You must create the client application through this API so that you can specify a user-defined application_id. Figure 9 shows the dialog used to create the 3scale API Management client application.

Create the client application using the 3scale API Management gateway API.

Figure 9: Use the 3scale API Management gateway API to create a client application.
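The call behind that dialog can be reproduced with curl along the following lines. This is a hedged sketch based on the 3scale Account Management API’s application create endpoint; verify the path and parameter names against the API docs in your Admin Portal. The admin URL, access token, account ID, plan ID, and Okta client ID are placeholders:

# Hypothetical sketch: create the 3scale client application with a user-defined
# application_id that matches the Okta-generated client ID. Verify the endpoint
# and parameter names in your Admin Portal's API docs.
curl -X POST "https://<tenant>-admin.3scale.net/admin/api/accounts/<account-id>/applications.json" \
  -d "access_token=<admin-access-token>" \
  -d "plan_id=<application-plan-id>" \
  -d "name=Okta OIDC client" \
  -d "description=Client application matching the Okta client ID" \
  -d "application_id=<okta-client-id>"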

Once it is set up with the correct parameters, you will see the new application in the applications listing, subscribed to the API product you are testing.

Testing the application

Now, you might wonder how to verify that the 3scale API Management application behaves correctly. We can use Postman to execute a request with a valid JWT access token from Okta. The screenshot in Figure 10 shows how to execute the authorization flow in Postman.

Using Postman to get a JWT access token from Okta.

Figure 10: Using Postman to get a JWT access token from Okta.

A login screen should pop up, followed by the successful retrieval of the ID and access tokens. We can then see the client ID and the access token, as shown in Figure 11. (Note that the access token is decoded using jwt.io.)

The JWT access token from Okta, decoded using jwt.io.

Figure 11: Retrieve the client ID and access tokens from Okta.

From here, we call the API endpoint with the JWT access token passed in the Authorization: Bearer HTTP request header:

$ curl "https://some-example-api.xyz.gw.apicast.io" -H "Authorization: Bearer jwt-access-token-base64"

Postman can take care of this for you. The Echo API responds when the authentication is successful.

Using Red Hat’s single sign-on technology for OIDC integration

For this demonstration, we had to create an OpenID Connect application in both Okta and 3scale API Management. Keeping these applications in sync becomes harder as you delegate application creation to other developers. The OIDC specification that addresses this problem is Dynamic Client Registration.

At the time of this writing, 3scale API Management and Okta don’t integrate automatically in this way. However, Red Hat’s single sign-on technology is an open source OpenID provider that integrates seamlessly with 3scale API Management. You can use the 3scale API Management gateway and the single sign-on developer portal to drive the authorization flow. Find out more about Red Hat’s single sign-on technology (7.4) and its upstream community project, Keycloak.

Conclusion

Thank you for taking the time to read this article and follow the demonstration. As you have seen, 3scale API Management works together with any OpenID provider in a way that is compliant with the OIDC specification. We’ve used Okta as our OpenID provider for this demonstration. I hope that breaking down the verification process and showing each party’s roles and responsibilities helped to demystify aspects of application security with JWT, OIDC, and OAuth 2.0.

New custom metrics and air gapped installation in Red Hat 3scale API Management 2.9

We continue to update the Red Hat Integration product portfolio to provide a better operational and development experience for modern cloud- and container-native applications. The Red Hat Integration 2020-Q3 release includes Red Hat 3scale API Management 2.9, which provides new features and capabilities for 3scale. Among other features, we have updated the 3scale API Management and Gateway Operators.

This article introduces the Red Hat 3scale API Management 2.9 release highlights, including air-gapped installation for 3scale on Red Hat OpenShift and new APIcast policies for custom metrics and upstream mutual Transport Layer Security (TLS).

Continue reading “New custom metrics and air gapped installation in Red Hat 3scale API Management 2.9”

HTTP-based Kafka messaging with Red Hat AMQ Streams

Apache Kafka is a rock-solid, super-fast, event streaming backbone that is not only for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change-data capture, Internet of Things (IoT) telemetry, and more.

Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. AMQ Streams’ upstream project, Strimzi, does the same thing for Kubernetes.

Setting up a Kafka cluster on a developer’s laptop is fast and easy, but in some environments, the client setup is harder. Kafka uses its own binary protocol over TCP/IP and has clients available for many different programming languages. Only the JVM client is maintained in Kafka’s main codebase, however.

Continue reading “HTTP-based Kafka messaging with Red Hat AMQ Streams”
