CI/CD

Application lifecycle management for container-native development

Container-native development is primarily about consistency, flexibility, and scalability. Legacy Application Lifecycle Management (ALM) tooling often is not built with those qualities in mind, leading to situations where it:

  • Places artificial barriers on development speed, and therefore time to value,
  • Creates single points of failure in the infrastructure, and
  • Stifles innovation through inflexibility.

Ultimately, developers are expensive, but they are the domain experts in what they build. With development teams often being treated as product teams (who own the entire lifecycle and support of their applications), it becomes imperative that they control the end-to-end process on which they rely to deliver their applications into production. This means decentralizing both the ALM process and the tooling that supports that process. In this article, we’ll explore this approach and look at a couple of implementation scenarios.

Continue reading “Application lifecycle management for container-native development”

Self-service messaging with Red Hat AMQ Online and GitOps

This article explores the service model of Red Hat AMQ Online 1.1 and how it maps to a GitOps workflow for different teams in your organization. For more information on new features in AMQ Online 1.1, see the release notes.

AMQ Online is an operator of stateful messaging services running on Red Hat OpenShift. AMQ Online is built around the principle that the responsibility of operating the messaging service is separate from the tenants consuming it. The operations team can manage the messaging infrastructure, while development teams provision messaging in a self-service manner, just as if they were using a public cloud service.
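
To make that self-service model concrete, a development team might keep its messaging resources as manifests in Git and let a GitOps tool apply them to the team's namespace. The sketch below assumes AMQ Online's enmasse.io/v1beta1 custom resources; the names and plan values are placeholders, and the plans actually available depend on how the operations team configured the infrastructure.

```yaml
# Hypothetical team-owned manifest, committed to Git and applied by a GitOps tool.
apiVersion: enmasse.io/v1beta1
kind: AddressSpace
metadata:
  name: team-a-messaging          # placeholder name
spec:
  type: standard                  # shared, multi-tenant messaging infrastructure
  plan: standard-small            # a plan offered by the operations team
---
apiVersion: enmasse.io/v1beta1
kind: Address
metadata:
  name: team-a-messaging.orders   # prefixed with the address space name
spec:
  address: orders
  type: queue
  plan: standard-small-queue      # placeholder plan name
```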

Continue reading “Self-service messaging with Red Hat AMQ Online and GitOps”

Get started with Jenkins CI/CD in Red Hat OpenShift 4

Automation is what we (developers) do. We automate ticket sales and automobiles and streaming music services and everything you can possibly tie into an analog-to-digital converter. But, have we taken the time to automate our processes?

In this article, I’ll show how to build an automated integration and continuous delivery pipeline using Jenkins CI/CD and Red Hat OpenShift 4. I will not dive into a lot of details—and there are a lot of details—but we’ll get a good overview. The details will be explained later in this series of blog posts.
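
As a rough preview of the shape such a pipeline can take, one common approach is a pipeline BuildConfig that hands a Jenkinsfile to the OpenShift-provided Jenkins (for example, one deployed with oc new-app jenkins-ephemeral). This is only a minimal sketch, not the pipeline built in the series, and the my-app BuildConfig and DeploymentConfig it references are placeholders.

```yaml
# Hypothetical pipeline BuildConfig; resource names are placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        // Declarative pipeline executed by the OpenShift-provided Jenkins
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                // Assumes an existing BuildConfig named "my-app" in the same project
                sh 'oc start-build my-app --follow --wait'
              }
            }
            stage('Deploy') {
              steps {
                sh 'oc rollout status dc/my-app'
              }
            }
          }
        }
```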

Continue reading “Get started with Jenkins CI/CD in Red Hat OpenShift 4”

Full API lifecycle management: A primer

APIs are the cornerstone of so many recent breakthroughs, from mobile applications to the Internet of Things to cloud computing. All of those technologies expose, consume, and are built on APIs, and those APIs are a key driver for generating new revenue: Salesforce generates 50% of its revenue through APIs, Expedia 90%, and eBay 60%. With APIs becoming so central, it is essential to manage the full API lifecycle. The success of your digital transformation project depends on it!

This article describes a set of full API lifecycle management activities that can guide you from idea to realization, from the inception of an API program all the way to managing it at scale across your whole company.

Continue reading “Full API lifecycle management: A primer”

IoT edge development and deployment with containers through OpenShift: Part 2

In the first part of this series, we saw how effective a platform as a service (PaaS) such as Red Hat OpenShift is for developing IoT edge applications and distributing them to remote sites, thanks to containers and Red Hat Ansible Automation technologies.

Usually, we think about IoT applications as something specially designed for low-power devices with limited capabilities. IoT devices might also use different CPU architectures or platforms. For these reasons, we tend to use completely different technologies for IoT application development than for services that run in a data center.

In part two, we explore some techniques that allow you to build and test containers for alternate architectures, such as ARM64, on an x86_64 host. The goal we are working toward is to let you use the same languages, frameworks, and development tools for code that runs in your datacenter all the way out to IoT edge devices. In this article, I'll show how to build and run an AArch64 container image on an x86_64 host, and then how to build a Raspberry Pi 3 image and run it on physical hardware, using Fedora and Podman.
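
For a rough idea of what that workflow looks like on Fedora with Podman, the sketch below uses QEMU user-mode emulation to build and smoke-test an AArch64 image on an x86_64 host. The image name is a placeholder, and the exact flag depends on your Podman version; the article walks through the real steps.

```sh
# On Fedora, installing qemu-user-static registers binfmt handlers so the
# x86_64 kernel can transparently run AArch64 binaries under emulation.
sudo dnf install -y qemu-user-static

# Build an AArch64 image on the x86_64 host (newer Podman accepts --arch;
# older releases used --override-arch). The image name is a placeholder.
podman build --arch=arm64 -t localhost/iot-app:aarch64 .

# Smoke-test it locally under emulation before deploying to the edge device.
podman run --rm --arch=arm64 localhost/iot-app:aarch64
```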

Continue reading “IoT edge development and deployment with containers through OpenShift: Part 2”

IoT edge development and deployment with containers through OpenShift: Part 1

Usually, we think about IoT applications as something specially designed for low-power devices with limited capabilities. For this reason, we tend to use completely different technologies for IoT application development than we use for building a datacenter's services.

This article is part 1 of a two-part series. In it, we'll explore techniques for using containers as the medium for application builds, techniques that make those containers portable across different environments. With them, you may be able to bring the same languages, frameworks, and tools you use in your datacenter straight to the "edge," even across different CPU architectures!

We usually use “edge” to refer to the geographic distribution of computing nodes in a network of IoT devices that are at the “edge” of an enterprise. The “edge” could be a remote datacenter or maybe multiple geo-distributed factories, ships, oil plants, and so on.

Continue reading “IoT edge development and deployment with containers through OpenShift: Part 1”

Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)

This is the third of a series of three articles based on a session I held at Red Hat Tech Exchange EMEA. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup. In the second article, we looked at building an observability stack. In this third part, we will see how the execution of the performance tests can be automated and related metrics gathered.

An example of what is described in this article is available in my GitHub repository.

Continue reading “Automating tests and metrics gathering for Kubernetes and OpenShift (part 3)”

Building an observability stack for automated performance tests on Kubernetes and OpenShift (part 2)

This is the second of a series of three articles based on a session I held at Red Hat Tech Exchange in EMEA. In the first article, I presented the rationale and approach for leveraging Red Hat OpenShift or Kubernetes for automated performance testing, and I gave an overview of the setup.

In this article, we will look at building an observability stack. In production, the observability stack can help verify that the system is working correctly and performing well. It can also be leveraged during performance tests to provide insight into how the application performs under load.

An example of what is described in this article is available in my GitHub repository.
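
If the stack is built on the Prometheus Operator, one common choice (the article describes its own components in detail), a typical building block is a ServiceMonitor that tells Prometheus which application endpoints to scrape during a test run. The names and labels below are placeholders.

```yaml
# Hypothetical ServiceMonitor; names, labels, and port are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    team: performance            # label the Prometheus instance is configured to select
spec:
  selector:
    matchLabels:
      app: my-app                # label on the application's Service
  endpoints:
    - port: metrics              # named Service port exposing /metrics
      interval: 15s
```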

Continue reading “Building an observability stack for automated performance tests on Kubernetes and OpenShift (part 2)”

Building Java 11 and Gradle containers for OpenShift

How do YOU get your Java apps running in a cloud?

First, you grab a cloud from the sky by, for example, (1) getting started with a free account on Red Hat OpenShift Online, (2) running OpenShift locally on your laptop using Red Hat Container Development Kit (CDK) or upstream Minishift on Windows, macOS, or Linux, (3) using oc cluster up (Linux only), or (4) obtaining a login from someone running Red Hat OpenShift on a public or on-premises cloud. Then you download the oc CLI client tool (probably for Windows) and put it on your PATH, select Copy Login Command from the menu under your name in the upper-right corner of the OpenShift Console, and check your connection with, for example, the oc status command.
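
In shell terms, that first step looks roughly like this; the cluster URL, token, and project name are placeholders, and the real login command comes from Copy Login Command in the console.

```sh
# Placeholders: copy the real command from "Copy Login Command" in the console.
oc login https://api.my-cluster.example.com:6443 --token=<paste-your-token>

# Create a project to work in and confirm which cluster and project you are using.
oc new-project java-demo
oc status
```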

Great—now you just need to containerize your Java app. You could, of course, start to write your own Dockerfile, pick an appropriate container base image (and debate Red Hat Enterprise Linux versus CentOS versus Fedora versus Ubuntu versus Debian versus Alpine with your co-workers, and, especially if you're in an enterprise environment, figure out how to get that supported in production), figure out appropriate JVM startup parameters for a container, add monitoring, and so on.

But perhaps what you really wanted to do today is…well, just get your Java app running in a cloud!

Read on to find an easier way.

Continue reading “Building Java 11 and Gradle containers for OpenShift”

Building .NET Core container images using S2I

Red Hat OpenShift implements .NET Core support via a source-to-image (S2I) builder. In this article, we’ll take a closer look at how you can use that builder directly. Using S2I, you can build .NET Core application images without having to write custom build scripts or Dockerfiles. This can be useful on your development machine or as part of a CI/CD pipeline.
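
As a rough illustration of using the builder directly, the s2i CLI can turn a Git repository into a runnable image in one command. The builder image tag, sample repository, branch, and environment variable below are assumptions based on the .NET Core 2.2 builder; adjust them for your application and version.

```sh
# Build an image straight from source with the s2i CLI: no Dockerfile required.
# Builder image, sample repo, branch, and env var are assumptions; use your own app.
s2i build https://github.com/redhat-developer/s2i-dotnetcore-ex \
    registry.access.redhat.com/dotnet/dotnet-22-rhel7 \
    my-dotnet-app \
    --ref dotnetcore-2.2 \
    --env DOTNET_STARTUP_PROJECT=app

# Run the resulting image locally; the Red Hat .NET images listen on port 8080.
podman run --rm -p 8080:8080 my-dotnet-app
```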

Continue reading “Building .NET Core container images using S2I”
