CI/CD

3scale toolbox: Deploy an API from the CLI

Deploying your API from a CI/CD pipeline can be a tremendous amount of work. The latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. The 3scale CLI, named the 3scale toolbox, helps API administrators operate their services and automate the delivery of their APIs through continuous delivery pipelines.

Having a standard CLI is a great advantage for our customers since they can use it in the CI/CD solution of their choice (Jenkins, GitLab CI, Ansible, Tekton, etc.). It is also a means for Red Hat to capture customer needs as much as possible and offer the same feature set to all our customers.
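
As a rough illustration of how this fits into a pipeline, a build step might shell out to the toolbox to register a 3scale tenant and import an OpenAPI definition. The sketch below is only an assumption-laden example: the remote name, tenant URL, access token, target system name, and spec file are placeholders, and the exact toolbox options may vary between releases.

    import subprocess

    # Register the target 3scale tenant once (placeholder remote name, URL, and token).
    subprocess.run(
        ["3scale", "remote", "add", "my-3scale",
         "https://ACCESS_TOKEN@my-tenant-admin.3scale.net"],
        check=True,
    )

    # Import an OpenAPI definition into that tenant as a pipeline stage
    # (placeholder target system name and spec file).
    subprocess.run(
        ["3scale", "import", "openapi",
         "--destination", "my-3scale",
         "--target_system_name", "my_api",
         "openapi-spec.yaml"],
        check=True,
    )

Wrapped this way, the same two commands can run unchanged from Jenkins, GitLab CI, Tekton, or any other runner that can execute a script.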

Continue reading “3scale toolbox: Deploy an API from the CLI”

5 principles for deploying your API from a CI/CD pipeline

With companies generating more and more revenue through their APIs, those APIs have become even more critical. Quality and reliability are key goals for companies seeking large-scale use of their APIs, and those goals are usually supported by well-crafted DevOps processes. Figures from the tech giants make us dizzy: Amazon deploys code to production every 11.7 seconds, Netflix deploys thousands of times per day, and Fidelity saved $2.3 million per year with their new release framework. So, if you have APIs, you might want to deploy your API from a CI/CD pipeline.

Deploying your API from a CI/CD pipeline is a key activity of the “Full API Lifecycle Management.” Sitting between the “Implement” and “Secure” phases, the “Deploy” activity encompasses every process needed to bring the API from source code to the production environment. To be more specific, it covers Continuous Integration and Continuous Delivery.

Figure: Deploy your API from a CI/CD pipeline (high-level view)

Continue reading “5 principles for deploying your API from a CI/CD pipeline”

Enabling CI/CD for Red Hat Decision Manager on OpenShift

As part of the microservices adoption journey, there will come a time when an organization starts to implement more and more rules services as part of its total application solution landscape. There could be hundreds of rules services to manage and deploy at any one time, making the application team's job more challenging and eventually delaying the entire production rollout.

Red Hat Decision Manager (RHDM) is a solution that enables developers and application teams to implement decision services, or business rules, for their applications. In this article, I cover how to fully utilize the capabilities of Red Hat Decision Manager and Red Hat OpenShift to enable a smooth CI/CD process and rapid deployment of decision services. (This tutorial assumes you already have a good understanding of RHDM and OpenShift.)
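
To give a flavor of the automation involved, a pipeline stage might push a freshly built rules KJAR to a KIE Server by calling its REST API. Treat the sketch below as an illustration under assumptions: the server URL, credentials, container ID, and Maven coordinates are all placeholders that a real pipeline would derive from the build it just ran.

    import requests  # third-party HTTP client (pip install requests)

    # Placeholder KIE Server endpoint and credentials.
    KIE_SERVER = "http://kie-server-myproject.apps.example.com/services/rest/server"
    AUTH = ("kieserver", "password")

    # Maven coordinates of the rules KJAR produced earlier in the pipeline (placeholders).
    container = {
        "container-id": "discount-rules",
        "release-id": {
            "group-id": "com.example.rules",
            "artifact-id": "discount-rules",
            "version": "1.0.3",
        },
    }

    # Create or update the container so the new decision service goes live.
    response = requests.put(
        f"{KIE_SERVER}/containers/discount-rules",
        json=container,
        auth=AUTH,
    )
    response.raise_for_status()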

Continue reading “Enabling CI/CD for Red Hat Decision Manager on OpenShift”

An introduction to cloud-native CI/CD with Red Hat OpenShift Pipelines

Red Hat OpenShift 4.1 offers a developer preview of OpenShift Pipelines, which enable the creation of cloud-native, Kubernetes-style continuous integration and continuous delivery (CI/CD) pipelines based on the Tekton project. In a recent article on the Red Hat OpenShift blog, I provided an introduction to Tekton and pipeline concepts and described the benefits and features of OpenShift Pipelines.

Continue reading “An introduction to cloud-native CI/CD with Red Hat OpenShift Pipelines”

Application lifecycle management for container-native development

Container-native development is primarily about consistency, flexibility, and scalability. Legacy Application Lifecycle Management (ALM) tooling often is not, leading to situations where it:

  • Places artificial barriers on development speed, and therefore time to value,
  • Creates single points of failure in the infrastructure, and
  • Stifles innovation through inflexibility.

Ultimately, developers are expensive, but they are the domain experts in what they build. With development teams often being treated as product teams (who own the entire lifecycle and support of their applications), it becomes imperative that they control the end-to-end process on which they rely to deliver their applications into production. This means decentralizing both the ALM process and the tooling that supports that process. In this article, we’ll explore this approach and look at a couple of implementation scenarios.

Continue reading “Application lifecycle management for container-native development”

Get started with Jenkins CI/CD in Red Hat OpenShift 4

Automation is what we (developers) do. We automate ticket sales and automobiles and streaming music services and everything you can possibly tie into an analog-to-digital converter. But have we taken the time to automate our own processes?

In this article, I’ll show how to build an automated continuous integration and continuous delivery (CI/CD) pipeline using Jenkins and Red Hat OpenShift 4. I will not dive into a lot of details (and there are a lot of details), but we’ll get a good overview. The details will be explained later in this series of posts.
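
To set expectations for the kind of steps such a pipeline automates, here is a minimal sketch of a stage that drives OpenShift through the oc CLI. The BuildConfig and DeploymentConfig names are placeholders, and a real Jenkins pipeline would normally express these steps as Jenkinsfile stages rather than a standalone script.

    import subprocess

    def run(cmd):
        # Run an oc command and fail the stage on a non-zero exit code.
        subprocess.run(cmd, check=True)

    # Trigger a new source-to-image build and stream its logs until it finishes
    # ("my-app" is a placeholder BuildConfig name).
    run(["oc", "start-build", "my-app", "--follow", "--wait"])

    # Roll out the freshly built image and wait for the rollout to complete
    # (placeholder DeploymentConfig name).
    run(["oc", "rollout", "latest", "dc/my-app"])
    run(["oc", "rollout", "status", "dc/my-app"])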

Continue reading “Get started with Jenkins CI/CD in Red Hat OpenShift 4”

Curse you choices! Kubernetes or Application Servers? (Part 3)

This is the finale of a series on whether Kubernetes is the new application server. In this part, I discuss the choice between Kubernetes, a traditional application server, and alternatives. Such alternatives can be referred to as a “just enough application server,” like Thorntail. There are several articles on Thorntail (previously known as WildFly Swarm) on the Red Hat Developer blog. A good introduction to Thorntail is in the 2.2 product announcement.

Continue reading “Curse you choices! Kubernetes or Application Servers? (Part 3)”

Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)

This is the third of a series of three articles based on a session I held at Red Hat Tech Exchange EMEA. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup. In the second article, we looked at building an observability stack. In this third part, we will see how the execution of the performance tests can be automated and related metrics gathered.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)”

Are App Servers Dead in the Age of Kubernetes? (Part 2)

Welcome to the second in a series of posts on Kubernetes, application servers, and the future. Part 1, Kubernetes is the new application operating environment, discussed Kubernetes and its place in application development. In this part, we explore application servers and their role in relation to Kubernetes.

You may recall from Part 1 that we were exploring the views put forth in Why Kubernetes is The New Application Server and thinking about what those views mean for Java EE, Jakarta EE, Eclipse MicroProfile, and application servers. Is it a curtain call for application servers? Are we seeing the start of an imminent decline in their favor and usage?

Before answering that, we need to discuss the use case for application servers. Only then can we decide whether it’s still a valid use case.

Continue reading “Are App Servers Dead in the Age of Kubernetes? (Part 2)”

Kubernetes is the new application operating environment (Part 1)

This is the first in a series of articles that consider the role of Kubernetes and application servers. Do application servers need to exist? Where does the current situation leave developers trying to choose the right path forward for their applications?

Why Kubernetes is the new application server

By now you’ve likely read “Why Kubernetes is The New Application Server” and you might be wondering what that means for you. How does it impact Java EE or Jakarta EE and Eclipse MicroProfile? What about application servers or fat JARs? Is it the end as we’ve known it for nearly two decades?

In reality, it doesn’t impact the worldview for most. It’s in line with the efforts of a majority of vendors around Docker and Kubernetes deployments over the last few years. In addition, there’s greater interest in service mesh infrastructures, such as Istio, and how they can further assist with managing Kubernetes deployments.

Continue reading “Kubernetes is the new application operating environment (Part 1)”
