Deploy integration components easily with the Red Hat Integration Operator

Any developer knows that when we talk about integration, we can mean many different concepts and architecture components. Integration can start with the API gateway and extend to events, data transfer, data transformation, and so on. It is easy to lose sight of what technologies are available to help you solve various business problems. Red Hat Integration's Q1 release introduces a new feature that targets this challenge: the Red Hat Integration Operator.

The Red Hat Integration Operator helps developers easily explore and discover components available in Red Hat OpenShift. One single Operator is now the entry point for getting the latest Red Hat Integration components. The Operator declaratively manages the components we want to enable, the namespaces to which we want to deploy, and their scope in the Red Hat OpenShift cluster using a Kubernetes custom resource.

Now in general availability, the Red Hat Integration Operator lets you choose and install any of the following Operators:

  • 3scale
  • 3scale APIcast
  • AMQ Broker
  • AMQ Interconnect
  • AMQ Streams
  • API Designer
  • Camel K
  • Fuse Console
  • Fuse Online
  • Service Registry

In this article, we will review the different components that the Red Hat Integration Operator manages and see how to install the Operator in OpenShift with OperatorHub. Finally, we will install the 3scale APIcast and AMQ Streams Operators using a customized Installation custom resource.

Installing the Red Hat Integration Operator with OperatorHub

To get started using the Red Hat Integration Operator on OpenShift with OperatorHub, follow these steps:

  1. In the OpenShift Container Platform web console, log in as an administrator and navigate to the OperatorHub.
  2. Type in Red Hat Integration to filter within the available operators.
  3. Select the Red Hat Integration Operator.
  4. Check the Operator details and click Install.
  5. On the Install Operator page, accept all of the default selections and click Install.
  6. When the install status indicates the Operator is ready to use, click View Operator.
Locating Red Hat Integration in the OpenShift OperatorHub via a filter search

Figure 1: Searching for Red Hat Integration Operator in the OpenShift OperatorHub

Now you are ready to install Operators for Red Hat Integration components. For more information on installing the Operator from OperatorHub, see the OpenShift documentation.
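If you prefer the command line to the console, the console steps above are equivalent to creating an Operator Lifecycle Manager (OLM) Subscription. The following is only a sketch: the package name and channel shown here are assumptions, so confirm the values listed for the Operator in OperatorHub before applying it.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhi-operator
  namespace: openshift-operators
spec:
  channel: 1.x                    # assumption: use the channel shown in OperatorHub
  name: rhi-operator              # assumption: the package name listed in OperatorHub
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic

Apply it with oc apply -f, and OLM installs the Operator into the openshift-operators namespace.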

Installing Operators for Red Hat Integration components

On the Operator page, you will see the Operator has provided a new custom resource definition (CRD) called Installation. You can click Create Instance to produce a new custom resource (CR) to trigger the Operators’ installation. The Create Installation page displays the complete set of Red Hat Integration Operators available for installation. All Operators are enabled for installation by default.

If you prefer, you can configure the installation specification from the form or YAML views before performing the installation. Doing so lets you include or exclude Operators, change namespaces, and switch between an Operator’s supported modes.

Here is an example configuration for 3scale APIcast and AMQ Streams:

apiVersion: integration.redhat.com/v1
kind: Installation
metadata:
  name: rhi-installation
spec:
  3scale-apicast-installation:
    enabled: true
    mode: namespace
    namespace: rhi-3scale-apicast
  amq-streams-installation:
    enabled: true
    mode: namespace
    namespace: rhi-streams
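If you save this custom resource to a file, you can create it from the command line as well and then confirm that the component Operators were installed in their target namespaces. This is a minimal sketch: the file name is arbitrary, and the namespaces are the ones from the example above.

$ oc apply -f rhi-installation.yaml
$ # List the ClusterServiceVersions (installed Operators) in each target namespace
$ oc get csv -n rhi-3scale-apicast
$ oc get csv -n rhi-streams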

When you navigate to the Installed Operators view, you should see the list of installed Operators and their managed namespaces (see Figure 2).

A list of installed Operators in the OpenShift Container Platform web console

Figure 2: Viewing installed Operators in the OpenShift Container Platform web console

The Red Hat Integration Operator also upgrades the Operators for the Red Hat Integration components it has installed. It defaults to an automatic approval strategy, so the components are upgraded to use the latest available versions.
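If you want to confirm the approval strategy from the CLI, you can inspect the Subscription objects that the Operator creates for each component. This sketch uses one of the namespaces from the earlier example:

$ oc get subscriptions -n rhi-streams \
    -o custom-columns=NAME:.metadata.name,APPROVAL:.spec.installPlanApproval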

Check the complete documentation for more information about installing the Red Hat Integration Operator.

Conclusion

In the previous sections, we explored the different components that the Red Hat Integration Operator can install using the Installation custom resource on OpenShift. We also reviewed a simple way to install the Operator using OperatorHub and used a customized file to install only two components in their namespace scope. As you can see, it is now easier than ever to get started with Red Hat Integration on OpenShift.

Red Hat Integration provides an agile, distributed, and API-centric solution that organizations can use to connect and share data between applications and systems in a digital world. Get started by navigating to your OpenShift OperatorHub and installing the Red Hat Integration Operator.

The Red Hat Integration Operator is also available in Red Hat Marketplace.

If you want to know more about the Red Hat Integration Operator, watch this video from Christina Lin.


Packaging APIs for consumers with Red Hat 3scale API Management

One of an API management platform's core functionalities is defining and enforcing policies, business domain rate limits, and pricing rules for securing API endpoints. As an API provider, you sometimes need to make the same backend API available to different consumer segments under different combinations of these terms. In this article, you will learn about using Red Hat 3scale API Management to package APIs for different consumers, including internal and external developers and strategic partners. See the end of the article for a video tutorial that guides you through creating and configuring these packages in 3scale API Management.

About Red Hat 3scale API Management

3scale API Management is a scalable, hybrid cloud API management framework that is part of the Red Hat Integration product portfolio. Figure 1 is a simplified view of the 3scale API Management framework.

The flow of interactions between the API consumer, API manager, and API gateway in 3scale API Management.

Figure 1: A high-level view of 3scale API Management.

The API Manager is the framework’s control plane. It provides an interface for the API provider to create users, accounts, policies, and services. The API Gateway (APIcast) enforces the policies for APIs in the data plane.

Policies for API access

We can use 3scale API Management to create consumer segments where distinct policies are enforced for the same API. For example, let’s say we need to expose a single API endpoint to three different consumer audiences: internal developers, external developers, and strategic partners. Table 1 shows a sample scenario of the packages that we could create for each audience.

Table 1: A basic application plan for three audiences
Package             | Rate limits             | Pricing rules  | Features
Internal developers | None                    | Free           | Internal
External developers | 10 calls per minute     | $0.01 per call | Basic
Strategic partners  | 1 million calls per day | $100 per month | Premium

Note: Although the rate limit is set to “None” for internal developers, it is better to set a high rate limit to prevent distributed denial-of-service (DDoS) attacks. Additionally, while the rate limit for strategic partners is expressed per day, it would be better to set it on a per-minute basis. Doing that would prevent overloading systems with heavy loads in short bursts.

Figure 2 shows the API packages from Table 1.

A visual representation of the API packages from Table 1.

Figure 2: API packages for internal developers, external developers, and strategic partners.

The Rate limits policy, shown in Figure 2, enforces call limits on APIs. Limits are defined for each method, and the same package can enforce different limits for each API method. Pricing rules are used to enable metering and chargeback for API calls. Pricing rules are defined for each API method, and the same package can enforce different pricing rules for each API method. Finally, the Features policy lets us define multiple features for each package. 3scale API Management adds metadata tags to each package to uniquely identify and map its available features.

3scale API Management’s packaging scenario is common, and most API management platforms support something similar. In the following sections, we will look at the different types of plans available from 3scale API Management.

Application plans

Application plans establish the rules (limits, pricing, features) for using an API. Every application request to the API happens within the constraints of an application plan. Every API in 3scale API Management must have at least one defined application plan. It is also common to define multiple plans to target different audiences using the same API. Figure 3 shows the relationship of the API to application plans, consumer audiences, and policies.

A single API with three different audiences and their respective application plans and policies.

Figure 3: A single API can have multiple application plans enforcing different policies for different users.

Each consumer application is mapped uniquely to a single application plan. When the application requests an API, 3scale API Management applies the rate limits and pricing rules for that application and updates its usage statistics. Application plans are the lowest granularity of control available in 3scale API Management. Most packaging requirements can be met by using one or more application plans per API.

Beyond application plans

In some cases, we need to use specialized plans to define policies for multiple application plans for an API or developer account. A default plan is available to all API providers, but specialized plans—which define complex relationships between services, applications, and accounts—must be explicitly enabled. The decision to use one or more specialized plans should be considered during the API design phase and documented in detail to avoid unexpected outcomes. The next sections introduce service plans and account plans.

Service plans

We can use service plans to subscribe consumers to APIs. Service subscriptions are enabled by default in 3scale API Management, and only one service plan is enabled per subscription. A service plan provides service-level features and plans for all applications consuming the service under that plan.

As an example, the plan described in Table 2 adds a new feature to the application plans we developed in the previous section.

Table 2: Adding new features to the three basic application plans
Package             | Rate limits             | Pricing rules  | Features
Internal developers | None                    | Free           | Internal, developers
External developers | 10 calls per minute     | $0.01 per call | Basic, developers
Strategic partners  | 1 million calls per day | $100 per month | Premium, partners

We could set up the new features individually for each application plan. However, it would be better to define the “default” service plan features and enable corresponding features in the application plans as required.

Table 3 describes a more complex scenario, where the API provider needs to provide two or more application plans for partners.

Table 3: Multiple application plans
Package                        | Rate limits             | Pricing rules  | Features
Strategic Partners Bronze Plan | 100,000 calls per day   | $30 per month  | Premium, partners, bronze, developers
Strategic Partners Silver Plan | 500,000 calls per day   | $60 per month  | Premium, partners, silver, testing
Strategic Partners Gold Plan   | 1 million calls per day | $100 per month | Premium, partners, gold, production

In this case, the API provider could allow a single partner account to sign up for multiple plans. For example, a strategic partner could use the bronze plan for applications in development, the silver plan for quality assurance (QA), and the gold plan for production applications. To provide the partner with standard pricing across all application plans, we could use the service plan described in Table 4.

Table 4: Introducing a service plan
Service plan                    | Set up fees | Pricing rules  | Features
Strategic Partners Premium Plan | $50         | $100 per month | Premium, partners, customers

Figure 4 shows a typical scenario using service plans and application plans in tandem.

A single API with two different service plans: One for developers and one for strategic partners.

Figure 4: Combining service plans and application plans in 3scale API Management.

To recap, consider using custom service plans for these types of use cases:

  • Custom features that multiple application plans can inherit.
  • A custom trial period that is applicable across multiple application plans for the same API.
  • Set up fees or fixed fees that are applicable across multiple application plans for the same API.

Account plans

Account plans are used to apply subscription criteria to consumer accounts. Instead of managing API access, like application plans and service plans, this plan packages accounts and applies the account plan across all the APIs accessed by a given account. Account plans create “tiers” of usage within the developer portal, allowing you to distinguish between grades of support, content, and other services that partners at different levels receive.

Let’s say that an API provider wants to cater to three different partner levels, with policies for each, as shown in Table 5.

Table 5: A sample account plan for three levels of partners
Package                        | Set up cost | Monthly cost   | Features
Strategic Partners Bronze Plan | Free        | $30 per month  | Premium, partners, bronze, no support
Strategic Partners Silver Plan | $50         | $60 per month  | Premium, partners, silver, standard support
Strategic Partners Gold Plan   | $100        | $100 per month | Premium, partners, gold, 24/7 support, dedicated account

The provider chooses to charge a fixed monthly cost and setup cost instead of charging per API or application plan. In this case, it makes sense to have a plan operate at the account level so that the same policies apply to all the APIs and applications associated with that account. The API provider could also create different setup costs and support plans for different sets of customer accounts. Figure 5 illustrates the relationship between account plans and APIs.

Internal developers receive a basic plan, while external developers and strategic partners receive a standard trial plan.

Figure 5: Use an account plan to apply the same policies to all APIs and applications associated with an account.

3scale API Management provides a default account plan for all developer accounts. The default plan ensures that application access is controlled through individual service and application plans. If you needed to define features for a set of developer accounts independent of the number of applications, you might consider implementing an account plan. Account plans also work well when the setup fee, usage fee, or the length of a trial period is fixed for the account regardless of the number of APIs subscribed.

Watch the video

Watch the following video for a guide to using 3scale API Management to package and combine API plans for a variety of consumers.

Conclusion

As you have seen, it is possible to accomplish many complex API packaging scenarios using 3scale API Management and the right combination of account plans, service plans, and application plans. This article discussed strategies for packaging a single API backend endpoint. 3scale API Management also supports an API-as-a-product functionality that lets us package multiple backend APIs using the same policies and plans. My next article in this series introduces the API-as-a-product functionality and use cases.


Custom policies in Red Hat 3scale API Management, Part 1: Overview

API management platforms such as Red Hat 3scale API Management provide an API gateway as a reverse proxy between API requests and responses. At this stage, most API management platforms optimize the request-response pathway and avoid introducing complex processing or delays, providing only minimal policy enforcement such as authentication, authorization, and rate limiting. With the proliferation of API-based integrations, however, customers are demanding more fine-tuned capabilities.

Policy frameworks are key to adding new capabilities to the API request and response lifecycle. In this series, you will learn about the Red Hat 3scale API Management policy framework and how to use it to configure custom policies in the APIcast API gateway.

Policy enforcement with 3scale API Management

APIcast is 3scale API Management’s default data-plane gateway and policy enforcement point for API requests and responses. Its core functionality is to enforce rate limits, report methods and metrics, and use the mapping paths and security specified for each API defined in the 3scale API manager.

APIcast is built on NGINX. It is a custom implementation of a reverse proxy using the OpenResty framework, with modules written in Lua. Most NGINX functionality is implemented using modules, which are controlled by directives specified in a configuration file.

APIcast enforces API configuration rules that are set in the 3scale API manager. It authenticates new requests by connecting to the service API exposed by the API manager. It also allows access to the backend API and reports usage. Figure 1 presents a high-level view of APIcast in 3scale API Management’s API request and response flow.

A diagram of APIcast in the 3scale API Management request and response flow.

Figure 1: APIcast in the 3scale API Management API request and response flow.

The default APIcast policy

A default APIcast policy interprets the standard configuration and provides API gateway functionality. The default policy acts as the API’s entry point to the gateway and must be enabled for all APIs configured in 3scale API Management. The APIcast policy ensures that API requests are handled using the rules configured in the API manager. The configuration is provided to the APIcast gateway as a JSON specification, which APIcast downloads from the 3scale API Management portal.

Each HTTP request passes through a sequence of phases. A distinct type of processing is performed on the request in each phase. Module-specific handlers can be registered in most phases, and many standard NGINX modules register their phase handlers so that they will be called at a specific stage of request processing. Phases are processed successively, and phase handlers are called once the request reaches the phase.

To customize request processing, we can register additional modules at the appropriate phase. 3scale API Management provides standard policies that are pre-built as NGINX modules and can be plugged into each service’s request.

Custom APIcast policies

In addition to the default policy, 3scale API Management provides custom policies that can be configured to each API. Using modules and configurations, these policies provide custom features for handling API requests and responses. Using custom modules makes the APIcast gateway highly customizable. It is possible to add custom processing and functionality on demand without modifying API gateway code or writing any additional code.

Note: See the chapter on APIcast policies in the 3scale API Management documentation for a list of all the standard policies that are available to be configured directly with APIcast.

Policy chaining

Policies are executed in the order in which they are placed, and this ordered placement is called policy chaining. The position of each policy in the chain therefore determines the combined behavior of any set of policies. The default APIcast policy should be part of the policy chain. If a custom policy needs to be evaluated before the APIcast policy, it must be placed before that policy in the chain. Figure 2 shows an example of the policy order as defined in the policy chain.

A diagram of the custom policy chain.

Figure 2: A custom URL-rewriting policy in the policy chain.

For example, take a scenario using a URL-rewriting policy to change the URL path from /A/B to /B. Placing the URL-rewriting policy before the APIcast policy ensures that the path is changed before gateway processing. Backend rules, mapping rules, and metrics all will be evaluated using the /B URL path.

If, on the other hand, the custom policy should be evaluated after the APIcast policy, you can reverse the order. As an example, if you wanted the mapping rules to be evaluated for /A/B, with the URL rewrite to /B applied afterward, then you would place the URL rewriting policy after the APIcast policy.
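To make the ordering concrete, here is a hedged sketch of how that fragment of the gateway's JSON configuration could look, with the built-in url_rewriting policy placed before the apicast policy. The exact configuration properties for each policy are defined by its schema in the product documentation:

"policy_chain": [
  {
    "name": "url_rewriting",
    "version": "builtin",
    "configuration": {
      "commands": [
        { "op": "sub", "regex": "^/A/B", "replace": "/B" }
      ]
    }
  },
  {
    "name": "apicast",
    "version": "builtin",
    "configuration": {}
  }
]

Swapping the two entries produces the second behavior described above: mapping rules are evaluated against /A/B, and the rewrite to /B is applied afterward.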

Configuring custom policies

There are two ways to add a new policy to an API. One option is to use the Policy section for each managed API in the 3scale API Management Admin Portal, where all of the standard policies are available to add. If you prefer to use the Admin API, you can provide the policy as a JSON specification and upload it to the service configuration using the provided REST API.

You also can copy a set of policies as part of the service configuration from one running environment to another using the 3scale Toolbox. To verify the set of policies applied to a specific API, you can download the current configuration and configuration history from the administration portal or using the provided REST API.

Check the video below for a demonstration of the policy configuration in 3scale API Management.

Conclusion

3scale API Management provides multiple options for configuring custom API policies. This article introduced the 3scale API Management policy framework, custom policies, and policy chaining. I also presented a brief example of how to configure and view policies in 3scale API Management. Future articles in this series will look at the available policies, and I will introduce the developer toolset that you can use to create your own custom policies. In the meantime, you can explore 3scale API Management by signing up for a free 3scale API Management account.


Installing Red Hat OpenShift API Management

Red Hat OpenShift API Management is a new managed application service designed as an add-on to Red Hat OpenShift Dedicated. The service provides developers with a streamlined experience when developing and delivering microservices applications with an API-first approach.

OpenShift API Management was built on the success of 3scale API Management and designed to let developers easily share, secure, reuse, and discover APIs when developing applications. This article shows you how to install OpenShift API Management as an add-on to your OpenShift Dedicated cluster. As you will see, it takes less than 10 minutes to install and configure OpenShift API Management and be up and running. Check the end of the article for the included video demonstration.

Step 1: Get OpenShift Dedicated

You will need an OpenShift Dedicated subscription to provision a cluster to run Red Hat OpenShift API Management.

Start by provisioning your cluster:

  1. Log in to your account on cloud.redhat.com/openshift.
  2. Open the Red Hat OpenShift Cluster Manager.
  3. Create a cluster by selecting Red Hat OpenShift Dedicated.
  4. Choose your cloud provider.
  5. Fill in all the cluster configuration details: cluster name, region, availability zones, compute nodes, node instance type, CIDR ranges (these are optional, but cannot be modified at a later date), and so on.
  6. Submit your request and wait for the cluster to be provisioned.

Step 2: Select and configure an identity provider

Once your cluster is ready, you can configure your preferred identity provider (IDP).

To start, select your IDP from the drop-down menu. Options include GitHub, Google, OpenID, LDAP, GitLab, and other providers that support OpenID Connect flows.

After you have configured your IDP, you will need to add at least one user to the dedicated-admins group. Click Add user within the Cluster administrative users section, then enter the username of the new dedicated-admin user.

Access levels vary per organization. Any user that requires full access should be defined as an administrator. As an administrator, you can use 3scale API Management and OpenShift’s role-based access control (RBAC) features to limit other users’ access.

Step 3: Install OpenShift API Management

The process to install OpenShift API Management is straightforward. From the cluster details page, you can navigate to the Add-ons tab and find the services available to you. Here’s the setup:

  1. Go to the Add-ons tab on the cluster details page.
  2. Choose Red Hat OpenShift API Management and hit Install.
  3. Provide the email address for the account that will receive service-related alerts and notifications.
  4. Click Install again to allow the Operators to install and configure the service.

After installing the add-on, you can access the service via the Application Launcher menu located in the OpenShift Dedicated console’s top-right corner. The Application Launcher provides direct access to the OpenShift API Management and Red Hat single sign-on (SSO) technology service UIs.

Watch the video demonstration

Check out the following video to see the OpenShift API Management installation steps in action:

Using OpenShift API Management

There you go! Getting OpenShift API Management installed and running on your OpenShift Dedicated cluster is as simple as the steps described here. Now you can start using the service. Check out the following resources to get going:


5 steps to manage your first API using Red Hat OpenShift API Management

Companies are increasingly using hosted and managed services to deliver on application modernization efforts and reduce the burden of managing cloud infrastructure. The recent release of Red Hat OpenShift API Management makes it easier than ever to get your own dedicated instance of Red Hat 3scale API Management running on Red Hat OpenShift Dedicated.

This article is for developers who want to learn how to use Red Hat’s hosted and managed services to automatically import and manage exposed APIs. We’ll deploy a Quarkus application on OpenShift Dedicated, then use OpenShift API Management to add API key security. See the end of the article for a video demonstration of the workflow described.

Prerequisites

This article assumes that you already have the following:

  • Access to a cloud.redhat.com account.
  • An existing OpenShift Dedicated cluster or the ability to deploy one.
  • Entitlements to deploy the Red Hat OpenShift API Management add-on.
  • A development environment with Git, a Java Development Kit (JDK), and the OpenShift CLI (oc) installed. (The example project includes the Maven wrapper, so a separate Maven installation is optional.)

Step 1: Obtain an OpenShift Dedicated cluster

Using a hosted and managed service like OpenShift API Management makes this step straightforward. See this video guide to obtaining an OpenShift Dedicated cluster and installing the OpenShift API Management add-on. You can also find instructions in this article and in the OpenShift API Management documentation.

Once you’ve obtained your OpenShift Dedicated cluster and installed the Red Hat OpenShift API Management add-on, we can move on to the next step.

Step 2: Create a project using the OpenShift CLI

Logging into an OpenShift Dedicated cluster via the OpenShift command-line interface requires a login token and URL. You can obtain both of these by logging into the OpenShift console via a web browser and using the configured IdP. Click Copy Login Command in the dropdown menu displayed under your username in the top-right corner. Alternatively, navigate directly to https://oauth-openshift.$CLUSTER_HOSTNAME/oauth/token/request and use your web browser to obtain a valid login command.

Once you have a token, issue a login command, then create a new project:

  1. $ oc login --token=$TOKEN --server=$URL
  2. $ oc new-project my-quarkus-api

Step 3: Deploy the Quarkus application to OpenShift

The Java application you’ll deploy for this demonstration is based on the example from the Quarkus OpenAPI and Swagger UI Guide. It’s a straightforward CRUD application that supports using a REST API to modify an in-memory list of fruits. You’ll find the source code in this GitHub repository.

Our application’s codebase differs slightly from the Quarkus OpenAPI and Swagger UI Guide example. I made the following changes:

  1. Set quarkus.smallrye-openapi.store-schema-directory in application.properties.
  2. Updated .gitignore to exclude the generated openapi.json and openapi.yaml files.
  3. Added the quarkus-openshift extension.

These modifications create a local copy of the OpenAPI spec in JSON format and include tooling that simplifies the deployment process.
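For reference, the first change is a single line in src/main/resources/application.properties. The property name is standard Quarkus configuration; the directory value below is an assumption, so use whatever path the repository actually configures:

# Write the generated OpenAPI documents (openapi.json / openapi.yaml) to this directory
quarkus.smallrye-openapi.store-schema-directory=target/generated/openapi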

Build and deploy the Quarkus application

Start by cloning the repository to your local environment:

$ git clone https://github.com/evanshortiss/rhoam-quarkus-openapi

Issue the following command to start a local development server and view the Swagger UI at http://localhost:8080/swagger-ui:

$ ./mvnw quarkus:dev

Enter this command to build and deploy the application on your OpenShift Dedicated cluster:

$ ./mvnw clean package -Dquarkus.kubernetes.deploy=true -Dquarkus.openshift.expose=true

The build progress will be streamed from the OpenShift build pod to your terminal. You can also track the build logs and status in the project’s Builds section in the OpenShift console, as shown in Figure 1.

The Builds section in the OpenShift console.

Figure 1: Viewing build logs in the OpenShift console.

Once the build and deployment process is complete, the URL to access the application will be printed in your terminal. Use this URL to verify that the application's OpenAPI spec is available at the /openapi?format=json endpoint and that a JSON response is returned. You'll need this spec to import the API into 3scale API Management and automatically generate the 3scale API Management ActiveDocs. Figure 2 shows an example of the response returned by this endpoint.

The OpenAPI spec for Quarkus Fruits.

Figure 2: Viewing the OpenAPI specification in JSON format.
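If you prefer to check the endpoint from a terminal instead of a browser, you can look up the route that the quarkus-openshift extension created and request the spec directly. The route name below is an assumption based on the application name, so adjust it if your route is named differently:

$ API_HOST=$(oc get route rhoam-openapi -o jsonpath='{.spec.host}')
$ curl -s "http://${API_HOST}/openapi?format=json"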

Step 4: Apply Service Discovery annotations

Next, we’ll import the API into 3scale API Management using its Service Discovery feature. For this step, we need to apply a specific set of annotations and labels to the service associated with the Quarkus application. The Service Discovery annotations and labels are documented here.

Use the following commands to apply the necessary annotations:

$ oc annotate svc/rhoam-openapi "discovery.3scale.net/description-path=/openapi?format=json"
$ oc annotate svc/rhoam-openapi discovery.3scale.net/port="8080"
$ oc annotate svc/rhoam-openapi discovery.3scale.net/scheme=http

Add the discovery label using the following command:

$ oc label svc/rhoam-openapi discovery.3scale.net="true"

Verify the label and annotations using:

$ oc get svc/rhoam-openapi -o yaml

The output should be similar to the sample displayed in Figure 3.

The YAML file for the Quarkus Fruits API service definition.

Figure 3: The Quarkus API Service definition in YAML format with annotations and labels.

Step 5: Use Service Discovery to import the API

At this point, you can import the Quarkus Fruits API and manage it using 3scale API Management’s Service Discovery feature. Use the OpenShift Dedicated application launcher to navigate to the 3scale API Management console. Figure 4 shows the application launcher in the top-right corner of the OpenShift Dedicated console.

The OpenShift Dedicated console and application launcher with API Management selected.

Figure 4: Using the application launcher to access 3scale API Management.

Import the API

Log in to 3scale API Management using your configured IdP, and click the New Product link on the dashboard. Perform the following steps on the New Product screen:

  1. Select Import from OpenShift (authenticate if necessary).
  2. Choose the my-quarkus-api namespace from the Namespace dropdown.
  3. Choose the rhoam-openapi service from the Name dropdown.
  4. Click the Create Product button.

Figure 5 shows the new product screen in 3scale API Management.

The 3scale API Management dialog to create a new product.

Figure 5: Creating a new product in 3scale API Management.

At this point, you should be redirected back to the 3scale API Management dashboard. If your new API isn’t listed in the APIs section after a few moments, try refreshing the page. Once the API has been imported and listed on the dashboard, expand it and click the ActiveDoc link. Select rhoam-openapi on the subsequent screen to view the live documentation that was generated from the OpenAPI specification, as shown in Figure 6.

The 3scale API Management ActiveDocs page.

Figure 6: Viewing the generated ActiveDocs in 3scale API Management.

Create an Application Plan in 3scale API Management

Next, you’ll need to configure an Application Plan to interact with the API via a protected route:

  1. Choose Product: rhoam-openapi from the top navigation bar.
  2. Select Applications > Application Plans from the menu on the left.
  3. Click the Create Application Plan link.
  4. Enter “RHOAM Test Plan” in the Name field.
  5. Enter “rhoam-test-plan” in the System name field.
  6. Click the Create Application Plan button.
  7. Click the Publish link when redirected to the Application Plans screen.

Figure 7 shows the dialog to create a new application plan in 3scale API Management.

The 'Create Application Plan' screen in 3scale API Management.

Figure 7: Creating an application plan in 3scale API Management.

Configure a developer account to use the application

Now that you’ve created an Application Plan, you’ll need to sign up a developer account to use the application. Typically, an API consumer signs up using your API Developer portal. For the purpose of this demonstration, you will manually provide the default Developer account with API access:

  1. Select Audience from the top navigation bar.
  2. Select the Developer account from the Accounts list.
  3. Click the 1 Applications link from the breadcrumb links at the top of the screen.
  4. Click the Create Application link and you’ll be directed to the New Application screen.
  5. Select RHOAM Test Plan as the Application Plan.
  6. Enter “RHOAM Test Application” in the Name field.
  7. Enter a description of the API.
  8. Click Create Application.

Once the application is created, you’ll see that an API key is listed under the API Credentials section, as shown in Figure 8. Take note of the key.

The 3scale API Management Application Details screen.

Figure 8: Creating an application for a user generates an API key.

Test the application

Use the top navigation bar to navigate back to the Quarkus API’s product page, then open the Integration > Configuration section. The Staging APIcast section should include an example cURL command for testing, as shown in Figure 9. Copy this command and add /fruits to the URL, for example: https://my-quarkus-api-3scale-staging.$CLUSTER_HOSTNAME:443/fruits?user_key=$YOUR_API_KEY

The 3scale API Management API configuration screen.

Figure 9: The example cURL command now contains a valid API key.
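As a quick sanity check of the protection itself, you can call the staging route once with the key and once without it. The hostname below is the placeholder from the example above, and the exact error body may vary by gateway version:

$ # With a valid key: returns the JSON list of fruits
$ curl "https://my-quarkus-api-3scale-staging.$CLUSTER_HOSTNAME:443/fruits?user_key=$YOUR_API_KEY"

$ # Without a key: APIcast rejects the request (HTTP 403)
$ curl -i "https://my-quarkus-api-3scale-staging.$CLUSTER_HOSTNAME:443/fruits"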

Issuing the cURL command or pasting the URL into a web browser returns the list of fruits from the Quarkus API. Congratulations: You’ve deployed a Quarkus-based REST API on OpenShift and protected it using Red Hat 3scale API Management.

Video demonstration: Red Hat OpenShift API Management

If you want to go over the steps in this article again, see this video guide to using Red Hat OpenShift API Management, Quarkus​, and 3scale API Management to automatically import and manage exposed APIs.

Summary and next steps

If you’ve made it this far, you have successfully:

  • Provisioned an OpenShift Dedicated cluster.
  • Installed the Red Hat OpenShift API Management add-on.
  • Deployed a Quarkus application on your OpenShift Dedicated cluster.
  • Applied custom labels and annotations to a service using the OpenShift CLI.
  • Imported the Quarkus API into 3scale API Management and protected it using API key security.

Now that you’ve learned the basics of OpenShift Dedicated and 3scale API Management, why not explore other OpenShift Dedicated and Red Hat OpenShift API Management features? Here are some ideas:

  • Familiarize yourself with the single sign-on instance that’s included with your Red Hat OpenShift API Management add-on. You could consider using Red Hat’s single sign-on (SSO) technology instead of API key security to protect routes using OpenID Connect. (SSO is accessible from the OpenShift Dedicated application launcher.)
  • Learn more about OpenShift and your cluster by following a quickstart from the OpenShift web console’s developer perspective.
  • Delete the unprotected route to the Quarkus API using the OpenShift console or CLI. This was the route you used to view the OpenAPI in JSON format.

Happy coding!


OpenID Connect integration with Red Hat 3scale API Management and Okta

This article introduces you to using Red Hat 3scale API Management for OpenID Connect (OIDC) integration and compliance. Our goal is to secure an API in 3scale API Management using JSON Web Token (JWT), OIDC, and the OAuth2 Authorization Framework. We will set up the integration using Okta as our third-party OpenID Connect identity provider. An important part of the demonstration is establishing the 3scale API Management gateway’s connection with Okta.

Note: This article is not a deep dive into OIDC or OAuth2. I won’t cover the details of authentication and authorization flows. Toward the end of the article, you will see how to obtain an access token, which you will need to execute a request against a protected API.

Prerequisites

For demonstration purposes, we will use 3scale API Management and Okta as self-managed services. If you don’t have them already, begin by creating free service accounts using 3scale.net and okta.com.

Setting up the 3scale API Management OIDC integration

Our first step is to create the simplest possible REST API for integration. We’ll use the 3scale API Management platform and an API back end configured to the echo-api: https://echo-api.3scale.net:443.

As an alternative to this setup, you could try a different back end or a self-managed APIcast instance. This article showcases OIDC authentication. You can adapt different settings to the use case.

Figure 1 shows the OIDC settings in 3scale API Management.

The dialog screen to enter the 3scale API Management OIDC settings for Okta authentication.

Figure 1: Enter the 3scale API Management OIDC settings for Okta authentication.

Note that the settings include AUTHENTICATION, AUTHENTICATION SETTINGS, and OPENID CONNECT (OIDC) BASICS. The OpenID Connect issuer URL is set to an Okta custom authorization server named "default." Okta's built-in org authorization server cannot be customized, so we use this custom authorization server, which lets us add a custom claim later in the example.

Overview of the 3scale API Management Okta integration

So far, we have employed OpenID Connect’s .well-known/openid-configuration endpoint to connect 3scale API Management with Okta. The 3scale API Management gateway determines what it needs from the OpenID Connect issuer URL, which we’ve just defined. Before going further, let’s clarify what we want to accomplish. The diagram in Figure 2 illustrates the step-by-step process for integrating 3scale API Management and Okta.

A diagram of the 3scale API Management and Okta integration.

Figure 2: An overview of the 3scale API Management and Okta integration.

Our goal is to call a protected API resource from 3scale API Management and use Okta for the user-delegated access. Step 1 assumes that we’ve retrieved a JSON web token from the Okta authorization server, as defined in the OIDC specification. We will experiment with the OIDC authorization flow later.

After calling the API in Step 2, 3scale API Management verifies the JSON web token in Step 3. If the token is valid, 3scale API Management dispatches the request to the server back end, which you can see in Step 4.

Verifying the client application ID is paramount for the request to be successful. In the next sections, we will look closely at the mechanics of verification.

Verify and match the JWT claim

The 3scale API Management gateway secures every request by checking its associated JSON web token for the following characteristics:

  • Integrity: Has the JWT been tampered with by a malicious user (signature check)?
  • Expiration: Is this token expired?
  • Issuer: Has it been issued by an authorization server that is known to the 3scale API Management gateway?
  • Client ID: Does the token contain a claim matching a client application ID that is known to the 3scale API Management gateway?
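For illustration only, a decoded access-token payload that would pass these checks might carry claims like the following. The issuer and client ID values are placeholders, and the custom appid claim is the one we configure in the next sections:

{
  "iss": "https://dev-123456.okta.com/oauth2/default",
  "sub": "jane.doe@example.com",
  "exp": 1716239022,
  "appid": "0oa1b2c3d4EXAMPLE"
}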

The next step is to match the 3scale API Management client with the correct JWT claim.

In the 3scale API Management settings, set the ClientID Token Claim field to appid, as shown in Figure 3.

The dialog to set the ClientID Token Claim.

Figure 3: Set the ClientID Token Claim to appid.

This configuration tells 3scale API Management which claim to match against a client application in its API. For this demonstration, I decided to use appid rather than the default azp claim. The Okta authorization server requires a custom claim. I also wanted to avoid the often misunderstood and misused azp claim.

Configuring Okta

Next, let’s head over to the Okta admin portal to configure the Okta authorization server and OpenID Connect application. This configuration allows a client application to request a JSON web token on behalf of a user. Recall that we’re using a custom authorization server (named default) to add the appid JWT claim. The value assigned to this claim will be the Okta client application ID.

Configure the Okta authorization server

As shown in Figure 4, we use the Authorization Servers dialog to add a new claim to the default authorization server.

The Authorization Servers dialog in Okta admin includes the option to add the appid claim to Okta's default authorization server.

Figure 4: Add the appid claim to the default authorization server.

In OpenID Connect, two tokens are usually issued in response to the authorization code flow: The ID token and the access token. We will use the access token to access the protected resource from 3scale API Management, so we only need to add the custom claim to that token type.

Create the OIDC application

While in the Okta admin portal, we’ll use the OpenID Connect sign-on method to create a new application. Figure 5 shows the dialog to create a new application integration.

The dialog to create a new application in the Okta admin portal. The OpenID Connect sign-on method is selected.

Figure 5: Select the option to create an OpenID Connect application.

Next, we use the Create OpenID Connect Integration dialog to create the application, as shown in Figure 6. Note that we’ll use the login redirect URI to retrieve the token later as part of the authorization flow.

Use the 'Create OpenID Connect Integration' dialog to create the Okta app.

Figure 6: Create the OpenID Connect application for Okta.

After creating the OIDC application, locate the Okta-generated client ID on the Client Credentials page, shown in Figure 7. Save this value to use when we create the corresponding client application in 3scale API Management.

The Client ID generated by Okta is located on the Client Credentials page.

Figure 7: Locate the client ID on the Okta admin Client Credentials page.

Create and assign a user to the OIDC application

The last thing we’ll do in Okta is to create and assign at least one user to the application, as shown in Figure 8. This allows a valid login to execute using the OpenID Connect authorization flow.

In the Okta admin console, create and assign at least one user to the application.

Figure 8: Create and assign at least one user to the OpenID Connect application in Okta.

This completes the Okta configuration. Next, we will configure a corresponding application in 3scale API Management.

Configuring the 3scale API Management client application

The API gateway can only authorize API calls from a previously registered client application. So, our last step is to create a 3scale API Management application whose credentials match with the application we’ve just created in Okta. We only need to match the application_id (also called the client ID), because it is carried by the JWT appid claim.

As an admin user, navigate to the 3scale API Management API docs in the Admin Portal. You must create the client application through the 3scale API Management API, because it lets you specify a user-defined application_id. Figure 9 shows the dialog to create the 3scale API Management client application.

Create the client application using the 3scale API Management gateway API.

Figure 9: Use the 3scale API Management gateway API to create a client application.
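If you would rather script this step than use the API docs UI, the application can be created with a request along these lines. This is a hedged sketch of the Account Management API call: the host, IDs, and access token are placeholders, and you should confirm the exact path and parameter names in the API docs of your Admin Portal before relying on it:

$ curl -X POST "https://$TENANT-admin.3scale.net/admin/api/accounts/$ACCOUNT_ID/applications.xml" \
    -d "access_token=$ADMIN_ACCESS_TOKEN" \
    -d "plan_id=$APPLICATION_PLAN_ID" \
    -d "name=okta-client" \
    -d "description=Client application that matches the Okta OIDC app" \
    -d "application_id=$OKTA_CLIENT_ID"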

Once you have created the application with the correct parameters, it appears in the list of applications subscribed to the API product you are testing.

Testing the application

Now, you might wonder how to ensure that the 3scale API Management application performs correctly. In this case, we can use Postman to execute a request with a valid JWT access token from Okta. The screenshot in Figure 10 shows how to execute the authorization flow in Postman.

Using Postman to get a JWT access token from Okta.

Figure 10: Using Postman to get a JWT access token from Okta.

A login screen should pop up, followed by the successful retrieval of the ID and access tokens. Figure 11 shows the retrieved client ID and access token. (Note that the access token is represented using jwt.io.)

Represents the JWT access token from Okta using jwt.io

Figure 11: Retrieve the client ID and access tokens from Okta.

From here, we call the API endpoint with the JWT access token assigned to the Authorization: Bearer HTTP request header:

$ curl "https://some-example-api.xyz.gw.apicast.io" -H "Authorization: Bearer jwt-access-token-base64"

Postman can take care of the rest. The echo-api will respond when the authentication is successful.

Using Red Hat’s single sign-on technology for OIDC integration

For this demonstration, we had to create an OpenID Connect application in both Okta and 3scale API Management. The number of applications grows as you start to delegate the process of application creation to other developers. The one OIDC specification that addresses this problem is the Dynamic Client Registration specification.

At the time of this writing, 3scale API Management and Okta don’t automatically integrate. However, Red Hat’s single sign-on technology is an open-source OpenID provider that integrates seamlessly with 3scale API Management. You can use the 3scale API Management gateway and the single sign-on developer portal to drive the authorization flow. Find out more about Red Hat single sign-on tools (7.4) and its upstream community project Keycloak.

Conclusion

Thank you for taking the time to read this article and follow the demonstration. As you have seen, 3scale API Management works with any OpenID provider that complies with the OIDC specification. We’ve used Okta as our OpenID provider for this demonstration. I hope that breaking down the verification process and showing each party’s roles and responsibilities helped to demystify aspects of application security with JWT, OIDC, and OAuth2.


New custom metrics and air gapped installation in Red Hat 3scale API Management 2.9

We continue to update the Red Hat Integration product portfolio to provide a better operational and development experience for modern cloud- and container-native applications. The Red Hat Integration 2020-Q3 release includes Red Hat 3scale API Management 2.9, which provides new features and capabilities for 3scale. Among other features, we have updated the 3scale API Management and Gateway Operators.

This article introduces the Red Hat 3scale API Management 2.9 release highlights, including air-gapped installation for 3scale on Red Hat OpenShift and new APIcast policies for custom metrics and upstream mutual Transport Layer Security (TLS).

Continue reading “New custom metrics and air gapped installation in Red Hat 3scale API Management 2.9”


HTTP-based Kafka messaging with Red Hat AMQ Streams

Apache Kafka is a rock-solid, super-fast, event streaming backbone that is not only for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change-data capture, Internet of Things (IoT) telemetry, and more.

Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. AMQ Streams’ upstream project, Strimzi, does the same thing for Kubernetes.

Setting up a Kafka cluster on a developer’s laptop is fast and easy, but in some environments, the client setup is harder. Kafka uses a TCP/IP-based proprietary protocol and has clients available for many different programming languages. Only the JVM client is on Kafka’s main codebase, however.

Continue reading “HTTP-based Kafka messaging with Red Hat AMQ Streams”


Build and deploy a serverless app with Camel K and Red Hat OpenShift Serverless 1.5.0 Tech Preview

Red Hat OpenShift Serverless 1.5.0 (currently in tech preview) runs on Red Hat OpenShift Container Platform 4.3. It enables stateful, stateless, and serverless workloads to all operate on a single multi-cloud container platform. Apache Camel K is a lightweight integration platform that runs natively on Kubernetes. Camel K has serverless superpowers.

In this article, I will show you how to use OpenShift Serverless and Camel K to create a serverless Java application that you can scale up or down on demand.

Continue reading “Build and deploy a serverless app with Camel K and Red Hat OpenShift Serverless 1.5.0 Tech Preview”


First steps with the data virtualization Operator for Red Hat OpenShift

The Red Hat Integration Q4 release adds many new features and capabilities with an increasing focus on cloud-native data integration. The features I’m most excited about are the introduction of the schema registry, the advancement of change data capture capabilities based on Debezium to technical preview, and data virtualization (technical preview) capabilities.

Data integration is a topic that has not received much attention from the cloud-native community so far, and we will cover it in more detail in future posts. Here, we jump straight into demonstrating the latest release of data virtualization (DV) capabilities on Red Hat OpenShift 4. This is a step-by-step visual tutorial describing how to create a simple virtual database using Red Hat Integration’s data virtualization Operator. By the end of the tutorial, you will learn:

  • How to deploy the DV Operator.
  • How to create a virtual database.
  • How to access the virtual database.

The steps throughout this article work on any OpenShift 4.x environment with Operator support, even on time- and resource-constrained environments such as the Red Hat OpenShift Interactive Learning Portal.

Continue reading “First steps with the data virtualization Operator for Red Hat OpenShift”
