CI/CD

Application lifecycle management for container-native development

Container-native development is primarily about consistency, flexibility, and scalability. Legacy Application Lifecycle Management (ALM) tooling is often not built with those qualities in mind, leading to situations where it:

  • Places artificial barriers on development speed, and therefore time to value,
  • Creates single points of failure in the infrastructure, and
  • Stifles innovation through inflexibility.

Ultimately, developers are expensive, but they are the domain experts in what they build. With development teams often being treated as product teams (who own the entire lifecycle and support of their applications), it becomes imperative that they control the end-to-end process on which they rely to deliver their applications into production. This means decentralizing both the ALM process and the tooling that supports that process. In this article, we’ll explore this approach and look at a couple of implementation scenarios.

Continue reading “Application lifecycle management for container-native development”

Get started with Jenkins CI/CD in Red Hat OpenShift 4

Automation is what we (developers) do. We automate ticket sales and automobiles and streaming music services and everything you can possibly tie into an analog-to-digital converter. But have we taken the time to automate our own processes?

In this article, I’ll show how to build an automated integration and continuous delivery pipeline using Jenkins CI/CD and Red Hat OpenShift 4. I will not dive into a lot of details—and there are a lot of details—but we’ll get a good overview. The details will be explained later in this series of blog posts.
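To make the overview a little more concrete, here is a minimal Python sketch (not taken from the article or its series) of the kinds of steps such a pipeline automates on OpenShift. The project and application names are hypothetical, and a real setup would typically express these stages in a Jenkinsfile rather than a script; this just illustrates the build, promote, and deploy flow using `oc` commands.

```python
# A minimal sketch of the kind of steps a Jenkins pipeline automates on OpenShift.
# The project and resource names (my-project, my-app) are hypothetical; adjust them
# to your cluster. Assumes the `oc` CLI is installed and you are already logged in.
import subprocess

def run(*args):
    """Run an oc command and fail loudly if it returns a non-zero exit code."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Build: start a build from the app's BuildConfig, stream logs, and wait for it.
run("oc", "start-build", "my-app", "-n", "my-project", "--follow", "--wait")

# 2. Promote: tag the freshly built image for the staging environment.
run("oc", "tag", "my-project/my-app:latest", "my-project/my-app:staging")

# 3. Deploy: wait until the rollout of the staging deployment completes.
run("oc", "rollout", "status", "deployment/my-app-staging", "-n", "my-project")
```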

Continue reading “Get started with Jenkins CI/CD in Red Hat OpenShift 4”

Curse you choices! Kubernetes or Application Servers? (Part 3)

This is the finale of a series on whether Kubernetes is the new application server. In this part, I discuss the choice among Kubernetes, a traditional application server, and alternatives that can be referred to as a “Just enough Application Server,” such as Thorntail. There are several articles on Thorntail (previously known as WildFly Swarm) on the Red Hat Developer blog; a good introduction to Thorntail is in the 2.2 product announcement.

Continue reading “Curse you choices! Kubernetes or Application Servers? (Part 3)”

Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)

This is the third in a series of three articles based on a session I held at Red Hat Tech Exchange EMEA. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup. In the second article, we looked at building an observability stack. In this third part, we will see how to automate the execution of the performance tests and gather the related metrics.
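As a rough illustration of the metrics-gathering step (not code from the repository mentioned below), the Python sketch that follows queries the Prometheus HTTP API after a test run. The Prometheus URL and the PromQL expression are assumptions; substitute whatever metrics your observability stack actually exposes.

```python
# A hedged sketch of gathering a metric after an automated test run by querying
# the Prometheus HTTP API. The endpoint and the PromQL expression are assumed.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://prometheus.example.com"  # hypothetical endpoint
QUERY = 'histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'

def query_prometheus(expr: str):
    """Run an instant PromQL query and return the result vector."""
    url = PROMETHEUS_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    if payload.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {payload}")
    return payload["data"]["result"]

# After the performance test has finished, pull the 99th percentile latency
# and record it alongside the test run for later comparison.
for sample in query_prometheus(QUERY):
    labels, (timestamp, value) = sample["metric"], sample["value"]
    print(f"p99 latency {value}s for {labels}")
```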

An example of what is described in this article is available in my GitHub repository.

Continue reading “Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)”

Are App Servers Dead in the Age of Kubernetes? (Part 2)

Welcome to the second in a series of posts on Kubernetes, application servers, and the future. Part 1, Kubernetes is the new application operating environment, discussed Kubernetes and its place in application development. In this part, we explore application servers and their role in relation to Kubernetes.

You may recall from part 1 that we were exploring the views put forth in Why Kubernetes is The New Application Server and thinking about what those views mean for Java EE, Jakarta EE, Eclipse MicroProfile, and application servers. Is it a curtain call for application servers? Are we seeing the start of an imminent decline in their popularity and usage?

Before answering that, we need to discuss the use case for application servers. Only then can we decide whether it’s still a valid one.

Continue reading “Are App Servers Dead in the Age of Kubernetes? (Part 2)”

Kubernetes is the new application operating environment (Part 1)

This is the first in a series of articles that consider the role of Kubernetes and application servers. Do application servers need to exist? Where does the current situation leave developers trying to choose the right path forward for their applications?

Why Kubernetes is the new application server

By now you’ve likely read “Why Kubernetes is The New Application Server” and you might be wondering what that means for you. How does it impact Java EE or Jakarta EE and Eclipse MicroProfile? What about application servers or fat JARs? Is it the end as we’ve known it for nearly two decades?

In reality, it doesn’t impact the worldview for most. It’s in line with the efforts of a majority of vendors around Docker and Kubernetes deployments over the last few years. In addition, there’s greater interest in service mesh infrastructures, such as Istio, and how they can further assist with managing Kubernetes deployments.

Continue reading “Kubernetes is the new application operating environment (Part 1)”

Why Kubernetes is The New Application Server

Have you ever wondered why you are deploying your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for Why Kubernetes is the new application server.

You might have noticed that the majority of languages are interpreted and use “runtimes” to execute your source code. In theory, most Node.js, Python, and Ruby code can be easily moved from one platform (Windows, Mac, Linux) to another. Java applications go even further: compiled Java classes are turned into bytecode, capable of running anywhere that has a JVM (Java Virtual Machine).

The Java ecosystem provides a standard format to distribute all Java classes that are part of the same application. You can package these classes as a JAR (Java Archive), WAR (Web Archive), or EAR (Enterprise Archive) that contains the front end, the back end, and the embedded libraries. So I ask you: Why do you use containers to distribute your Java application? Isn’t it already supposed to be easily portable between environments?

Continue reading “Why Kubernetes is The New Application Server”

An API Journey: From Idea to Deployment the Agile Way–Part III

This is part III of a three-part series describing a proposed approach for an agile API lifecycle: from ideation to production deployment. If you missed it or need a refresher, please take some time to read part I and part II.

This series is coauthored with Nicolas Massé, also a Red Hatter, and it is based on our own real-life experiences from our work with the Red Hat customers we’ve met.

In part II, we discovered how ACME Inc. is taking an agile API journey for its new Beer Catalog API deployment. ACME set up modern techniques for continuously testing its API implementation within the continuous integration/continuous delivery (CI/CD) pipeline. Now let’s move on to securing the exposed API.
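As a small illustration of what securing the exposition means from a consumer’s point of view, here is a hedged Python sketch of calling the Beer Catalog API once it sits behind a management gateway. The gateway URL, path, and header name are assumptions for illustration; an actual gateway may expect a query parameter or an OAuth token instead.

```python
# Once the API is exposed through a management gateway, the same catalog call
# needs a credential. URL, path, and header name below are hypothetical.
import json
import urllib.request

GATEWAY_URL = "https://api.example.com/beer-catalog/beers"  # hypothetical endpoint
API_KEY = "replace-with-a-real-key"

request = urllib.request.Request(
    GATEWAY_URL,
    headers={"api-key": API_KEY},  # credential checked by the gateway, not the backend
)
with urllib.request.urlopen(request) as resp:
    beers = json.load(resp)
    print(f"Catalog returned {len(beers)} beers")
```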

Continue reading “An API Journey: From Idea to Deployment the Agile Way–Part III”

An API Journey: From Idea to Deployment the Agile Way–Part II

This is part II of a three-part series describing a proposed approach for an agile API lifecycle, from ideation to production deployment. If you missed it or need a refresher, please take some time to read part I.

This series is coauthored with Nicolas Massé, also a Red Hatter, and it is based on our own real-life experiences from our work with the Red Hat customers we’ve met.

In part I, we explored how ACME Inc. is taking an agile API journey for its new Beer Catalog API, and ACME completed the API ideation, contract design, and sampling stages. Now let’s move on to mocking.
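To show the idea behind mocking in miniature, here is a toy Python sketch that serves canned responses matching the Beer Catalog contract so that consumers can start integrating before the real implementation exists. The path and sample payload are invented for this sketch, and a real setup would typically rely on dedicated mocking tooling rather than a hand-rolled server.

```python
# A toy mock of a Beer Catalog endpoint: serve canned JSON that matches the
# agreed contract. Path and sample data are invented for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SAMPLE_BEERS = [
    {"name": "Wrench Pipe", "brewery": "ACME", "abv": 5.2},
    {"name": "Anvil Stout", "brewery": "ACME", "abv": 6.0},
]

class BeerCatalogMock(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/beers":
            body = json.dumps(SAMPLE_BEERS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Consumers point their clients at http://localhost:8080/beers while the
    # real Beer Catalog API is still being implemented.
    HTTPServer(("", 8080), BeerCatalogMock).serve_forever()
```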

Continue reading “An API Journey: From Idea to Deployment the Agile Way–Part II”

An API Journey: From Idea to Deployment the Agile Way–Part I

The goal of this series of posts is to describe a proposed approach for an agile API delivery process, covering not only development but also design, testing, delivery, and management in production. You will learn how to use mocking to speed up development and break dependencies, use a contract-first approach to define tests that harden your implementation, protect the exposed API through a management gateway, and, finally, secure deliveries using a CI/CD pipeline.
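As a taste of the contract-first testing idea, the following Python sketch checks that a (possibly mocked) Beer Catalog endpoint honours the agreed-upon response shape. The URL and field names are assumptions used for illustration, not the series’ actual test suite; the point is that the same test can run against the mock early on and against the real implementation later in the pipeline.

```python
# A hedged sketch of a contract-first test: assert the response shape agreed in
# the API contract. URL and field names are hypothetical.
import json
import unittest
import urllib.request

API_URL = "http://localhost:8080/beers"  # could point at the mock or the real service

class BeerCatalogContractTest(unittest.TestCase):
    def test_beer_list_matches_contract(self):
        with urllib.request.urlopen(API_URL) as resp:
            self.assertEqual(resp.status, 200)
            beers = json.load(resp)
        # The contract says we get a list of beers, each with a name and an abv.
        self.assertIsInstance(beers, list)
        for beer in beers:
            self.assertIn("name", beer)
            self.assertIn("abv", beer)

if __name__ == "__main__":
    unittest.main()
```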

I coauthored this series with Nicolas Massé, who is also a Red Hatter. This series is based on our own real-life experience from our work with the Red Hat customers we’ve met, as well as from my previous position as SOA architect at a large insurance company. This series is a translation of a typical use case we run during workshops or events such as APIdays.

Continue reading “An API Journey: From Idea to Deployment the Agile Way–Part I”
