Apache Kafka is a rock-solid, super-fast event streaming backbone that is not just for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change data capture, Internet of Things (IoT) telemetry, and more.
Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. AMQ Streams’ upstream project, Strimzi, does the same thing for Kubernetes.
Setting up a Kafka cluster on a developer’s laptop is fast and easy, but in some environments, the client setup is harder. Kafka uses a custom binary protocol over TCP/IP and has clients available for many different programming languages. Only the JVM client is maintained in Kafka’s main codebase, however.
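For clients in languages without a native Kafka library, the AMQ Streams (Strimzi) HTTP bridge lets you produce records with a plain REST call. The sketch below builds such a request in Python; the bridge service hostname and topic name are assumptions for illustration, while the `/topics/{topic}` endpoint and `application/vnd.kafka.json.v2+json` content type follow the Strimzi bridge REST API:

```python
import json

# Hypothetical bridge service URL and topic name, for illustration only.
BRIDGE_URL = "http://my-bridge-bridge-service:8080"
TOPIC = "orders"

def build_produce_request(records):
    """Build the URL, headers, and JSON body for producing records
    through the AMQ Streams (Strimzi) Kafka HTTP bridge."""
    url = f"{BRIDGE_URL}/topics/{TOPIC}"
    headers = {"Content-Type": "application/vnd.kafka.json.v2+json"}
    body = json.dumps({"records": records})
    return url, headers, body

# Each record may carry an optional key alongside its value.
url, headers, body = build_produce_request(
    [{"key": "order-1", "value": {"item": "widget", "qty": 2}}]
)
```

Any HTTP client (curl, requests, a browser fetch) can then POST this body to the bridge, with no Kafka client library needed.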
Continue reading “HTTP-based Kafka messaging with Red Hat AMQ Streams”
Red Hat OpenShift Serverless 1.5.0 (currently in tech preview) runs on Red Hat OpenShift Container Platform 4.3. It enables stateful, stateless, and serverless workloads to all operate on a single multi-cloud container platform. Apache Camel K is a lightweight integration platform that runs natively on Kubernetes. Camel K has serverless superpowers.
In this article, I will show you how to use OpenShift Serverless and Camel K to create a serverless Java application that you can scale up or down on demand.
Continue reading “Build and deploy a serverless app with Camel K and Red Hat OpenShift Serverless 1.5.0 Tech Preview”
The Red Hat Integration Q4 release adds many new features and capabilities with an increasing focus on cloud-native data integration. The features I’m most excited about are the introduction of the schema registry, the advancement of the Debezium-based change data capture capabilities to technical preview, and the data virtualization (technical preview) capabilities.
Data integration is a topic that has not received much attention from the cloud-native community so far, and we will cover it in more detail in future posts. Here, we jump straight into demonstrating the latest release of data virtualization (DV) capabilities on Red Hat OpenShift 4. This is a step-by-step visual tutorial describing how to create a simple virtual database using Red Hat Integration’s data virtualization Operator. By the end of the tutorial, you will learn:
- How to deploy the DV Operator.
- How to create a virtual database.
- How to access the virtual database.
The steps throughout this article work on any OpenShift 4.x environment with Operator support, even in time- and resource-constrained environments such as the Red Hat OpenShift Interactive Learning Portal.
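Once the virtual database is exposed through a route, one common way to access it is via standard OData query options. The sketch below builds such a query URL in Python; the route host, database name, view name, and the `/odata/` path segment are all assumptions for illustration, while `$filter` and `$top` are standard OData v4 query options:

```python
from urllib.parse import urlencode

# Hypothetical route for the virtual database, for illustration only.
ROUTE = "https://customerdb-myproject.apps.example.com"

def odata_query_url(view, odata_filter=None, top=None):
    """Build an OData query URL against the virtual database's
    assumed OData endpoint, using standard OData v4 query options."""
    params = {}
    if odata_filter:
        params["$filter"] = odata_filter
    if top is not None:
        params["$top"] = str(top)
    query = f"?{urlencode(params)}" if params else ""
    return f"{ROUTE}/odata/{view}{query}"

# Fetch the first ten US customers from the assumed "customers" view.
url = odata_query_url("customers", odata_filter="country eq 'US'", top=10)
```

The resulting URL can be opened with any HTTP client, which is what makes a virtual database so convenient for consumers that cannot speak JDBC.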
Continue reading “First steps with the data virtualization Operator for Red Hat OpenShift”
In the previous article, APIs as a Product: Get the value out of your APIs, we presented a new approach called “APIs as a Product” to maximize the value of your APIs. In this article, we show how to quickly get started with APIs as a Product using the new features of Red Hat 3scale API Management 2.7.
To showcase the power of 3scale 2.7’s new features, combined with the awesomeness of the open source communities Apicurio and Microcks, we will design two APIs as a Product and show how we can compose both products in 3scale to get the resulting API as a Product.
Let’s look at the well-known Petstore example. Imagine for a moment that the first steps of the API Design Thinking process led to this rough definition of the customer’s needs:
- Petstore is a company selling cuddly toys online. They would like to open a marketplace to let partners resell the pets. Ideally, orders have to be placed through an API so that the whole process can be automated. The partners are split over what they want:
- One group of Petstore’s partners expressed the need to be able to discover the Petstore inventory to add items to their own catalog.
- Other partners want to have the checkout process managed for them, from cart creation up to checkout.
- A last group of partners wants to do both.
In the first part of the video below, we go through the API Ideation process to design two products: one to discover the inventory and one to place an order. Then, we’ll go through the API Prototyping step to expose our two products as live mocks.
Finally, we use the features of 3scale 2.7 to compose both products as a unique product, expose it through API Management, and gather feedback from our early adopters.
To get started with APIs as a Product, watch this video:
- API Ideation starts at 01:37.
- API Prototyping starts at 08:53.
- The API as a Product features of 3scale 2.7 are showcased at 18:46.
APIs continue to spread, as seen in this 2019 report from ProgrammableWeb, which shows a 30% increase over last year’s growth rate. More regulations are enforcing the use of APIs to open up companies and foster innovation. Think of the Payment Services Directive version two (PSD2), open banking, and the public sector releasing open data APIs. With such an abundance of APIs, it becomes increasingly crucial to get the value out of your APIs and differentiate yourself from the growing competition. It’s time to design and manage your APIs as a Product.
Change your organization
Designing an API as a Product implies a lot of changes in the way APIs are designed and managed. It means changing your mindset from “packaging existing services” to “fulfilling customer needs.” For teams in charge of an API, this mindset means owning a product instead, and thus:
- Focusing on customer needs rather than existing assets.
- Managing APIs continuously rather than for a project’s lifetime.
- Moving from finite resources to elastic computing.
- Evolving from a central team specialized in APIs to product cross-functional teams with a variety of competencies.
In a nutshell, designing an API as a Product means:
- Conceiving it specifically to solve a customer problem.
- Setting shared indicators of the value delivered by the API.
- Having a feedback loop to know what to improve.
Change your design process
Designing APIs as a Product also means changing the way we craft APIs (by adopting API design thinking):
- The first step of design thinking applied to APIs is focusing on the customers. Through their eyes, what is their pain? Empathizing means meeting with and listening to your potential new customers. You will discover their understanding, how they are organized, which technical ecosystem they use, etc.
- Then, you would synthesize your findings to define which problem they are trying to solve and identify the Job to be Done. Customers wanting to perform a transaction to buy a pet and customers willing to synchronize their pet inventory would lead to two very different Petstore APIs.
- Once the problem is defined, you can foster new ideas through a process known as API Ideation. During this step, you sketch out different API designs to see if one stands up as a plausible solution.
- During the API Prototyping phase, you mock your API based on meaningful examples to end up with a working API. Other team members can give their feedback on your design attempt.
- The penultimate step is to get feedback on your final design from your early adopters and thus refine the business expectations for your API.
- Finally, you implement the actual API.
Open source communities to help you design APIs as a Product
So far, we have only spoken about organizational changes. Dedicated tools and open source communities can help you succeed in designing and managing APIs as a Product:
- Microcks is the open source Kubernetes-native tool for API mocking and testing that helps you during the Ideation and Prototyping phases.
- Apicurio helps you design better API contracts in a collaborative fashion. Architects, product owners, designers, and developers can work together on the API designs.
- Red Hat 3scale API Management eases the design and management of APIs as a Product by promoting the API designs, collecting feedback, and actually managing your APIs as a Product.
In the next article, we’ll look at how the most recent version of 3scale actually helps you build APIs as a Product.
Red Hat continues to increase the features available for users looking to implement a 100% open source, event-driven architecture (EDA) through running Apache Kafka on Red Hat OpenShift and Red Hat Enterprise Linux. The Red Hat Integration Q4 release provides new features and capabilities, including ones aimed at simplifying usage and deployment of the AMQ Streams distribution of Apache Kafka.
Continue reading “Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge”
In this article, we’ll cover microservice security concepts using protocols such as OpenID Connect, with the support of Red Hat Single Sign-On and 3scale. In a microservice-based architecture, user identity and access control are distributed across services and must be carefully designed, in depth. Here, the integration of these tools is detailed step by step, from a realistic viewpoint.
This article demonstrates tools that let you secure your business without resorting to homemade solutions: an API gateway protects your services and keeps your applications off the public network, while also providing additional access control, monetization, and analytics.
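With OpenID Connect, the token that flows from the gateway to your services is a JWT whose claims identify the caller. The sketch below decodes a token’s claims segment in Python; the issuer URL, username, and client ID are illustrative assumptions, and a real gateway or adapter must verify the token’s signature rather than trusting the decoded payload as this demo does:

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the claims (payload) segment of a JWT access token.
    NOTE: this does NOT verify the signature; production code must."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWTs strip.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj):
    """Encode a dict as an unpadded base64url JSON segment."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# A hypothetical token (header.payload.signature) with illustrative claims.
claims = {
    "iss": "https://sso.example.com/auth/realms/demo",  # assumed issuer
    "preferred_username": "alice",                      # assumed user
    "azp": "my-client",                                 # assumed client ID
}
token = f"{b64url({'alg': 'RS256'})}.{b64url(claims)}.sig"

decoded = decode_jwt_claims(token)
```

Inspecting tokens this way is handy when debugging why a gateway policy accepts or rejects a request.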
Continue reading “How to secure microservices with Red Hat Single Sign-On, Fuse, and 3scale”
A couple of weeks ago, I was faced with the challenge of installing Red Hat 3scale and configuring its tenants using only the command line (no GUI allowed). This is a rather interesting use case, so I decided to write this article and show how to do it with just seven commands!
(By the way, I also decided to include Red Hat Single Sign-On (SSO) in the mix because I want my APIs to use OpenID Connect (OIDC) for authentication. But I’ll leave those commands to a future article.)
Continue reading “Install Red Hat 3scale and configure tenants with 7 simple commands”
In the previous article of this series, Deploy your API from a Jenkins Pipeline, we discovered how the 3scale toolbox can help you deploy your API from a Jenkins Pipeline on Red Hat OpenShift/Kubernetes. In this article, we will improve the pipeline from the previous article to make it more robust, less verbose, and also offer more features by using the 3scale toolbox Jenkins Shared Library.
Continue reading “Using the 3scale toolbox Jenkins Shared Library”
In a previous article, 5 principles for deploying your API from a CI/CD pipeline, we discovered the main steps required to deploy your API from a CI/CD pipeline, and saw that this can prove to be a tremendous amount of work. Fortunately, the latest release of Red Hat Integration greatly improves this situation by adding new capabilities to the 3scale CLI. In 3scale toolbox: Deploy an API from the CLI, we discovered how the 3scale toolbox strives to automate the delivery of APIs. In this article, we will discuss how the 3scale toolbox can help you deploy your API from a Jenkins pipeline on Red Hat OpenShift/Kubernetes.
Continue reading “Deploy your API from a Jenkins Pipeline”