Containers

IoT edge development and deployment with containers through OpenShift: Part 1

We usually think of IoT applications as something very special, built for low-power devices with limited capabilities. For this reason, we tend to use completely different technologies for IoT application development than we use to build a datacenter’s services.

This article is part 1 of a two-part series. In it, we’ll explore techniques that may give you a chance to use containers as a medium for application builds, techniques that make containers portable across different environments. With them, you may be able to bring the same language, framework, or tool you use in your datacenter straight to the “edge,” even across different CPU architectures (a quick sketch of that idea follows below)!

We usually use “edge” to refer to the geographic distribution of computing nodes in a network of IoT devices that are at the “edge” of an enterprise. The “edge” could be a remote datacenter or maybe multiple geo-distributed factories, ships, oil plants, and so on.
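As an illustration of that portability (this sketch is mine, not taken from the article), recent Podman/Buildah releases can build the same Containerfile for a different CPU architecture with the --arch flag, typically relying on qemu-user-static for emulation; the image name below is hypothetical, and the article itself focuses on doing this through OpenShift builds:

    # Build the image natively for x86_64 (hypothetical image name)
    podman build -t quay.io/example/edge-app:amd64 .

    # Build the same Containerfile for an ARM 64-bit edge device;
    # requires a recent Podman/Buildah and qemu-user-static emulation
    podman build --arch=arm64 -t quay.io/example/edge-app:arm64 .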

Continue reading “IoT edge development and deployment with containers through OpenShift: Part 1”

Curse you choices! Kubernetes or Application Servers? (Part 3)

This is the finale of a series on whether Kubernetes is the new Application Server. In this part, I discuss the choice between Kubernetes, a traditional application server, and alternatives. Such alternatives can be described as “just enough Application Server,” like Thorntail. There are several articles on Thorntail (previously known as WildFly Swarm) on the Red Hat Developer blog; a good introduction to Thorntail is the 2.2 product announcement.

Continue reading Curse you choices! Kubernetes or Application Servers? (Part 3)

Podman can now ease the transition to Kubernetes and CRI-O

Configuring Kubernetes is an exercise in defining objects in YAML files. While not required, it is nice to have an editor that at least understands YAML, and it’s even better if it knows the Kubernetes language. Kubernetes YAML is descriptive and powerful, and we love the modeling of the desired state in a declarative language. That said, if you are used to something as simple as podman run, the transition to YAML descriptions can be a bitter pill to swallow.
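To illustrate the gap (this example is mine, not from the article), a single command line such as

    podman run -d --name web -p 8080:80 nginx

corresponds roughly to a Kubernetes Pod described in YAML like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 8080   # roughly equivalent to -p 8080:80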

As the development of Podman has continued, we have had more discussions focused on developer use cases and developer workflows. These conversations are fueled by user feedback on our various principles, and it seems clear that the proliferation of container runtimes and technologies has some users scratching their heads. One of these recent conversations centered on orchestration and, specifically, local orchestration. Then Scott McCarty tossed out an idea: “What I would really like to do is help users get from Podman to orchestrating their containers with Kubernetes.” And just like that, the proverbial light bulb went on.

A recent pull request to libpod has started to deliver on that very idea. Read on to learn more.
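In current Podman releases, that work surfaces as the generate kube and play kube subcommands; a minimal sketch of the workflow (container and file names are illustrative) looks like this:

    # Taking the "web" container from the sketch above, ask Podman to
    # emit Kubernetes-compatible YAML describing it
    podman generate kube web > web-pod.yaml

    # That YAML can be applied to a Kubernetes/CRI-O cluster with
    # kubectl, or replayed locally with Podman itself
    podman play kube web-pod.yaml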

Continue reading “Podman can now ease the transition to Kubernetes and CRI-O”

Integration of storage services (part 6)

In Part 5 of this series, we looked into details that determine how your integration becomes the key to transforming your customer experience.

It started with laying out the process of how I’ve approached the use case by researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it’s time to cover various blueprint details.

This article covers the final elements in the blueprint, storage services, which are fundamental to the generic architectural overview.

Continue reading “Integration of storage services (part 6)”

Podman: Managing pods and containers in a local container runtime

People associate running pods with Kubernetes, and when they run containers in their development runtimes, they do not even think about the role pods could play, even in a localized runtime. Most people coming from the Docker world of running single containers do not envision the concept of running pods. There are several good reasons to consider using pods locally, beyond using them to naturally group your containers.

For example, suppose you have multiple containers that need to use a MariaDB container, but you would prefer not to bind that database to a routable network, whether on your bridge or beyond. Using a pod, you could bind MariaDB to the pod’s localhost address, and all containers in that pod can connect to it because they share the pod’s network namespace.
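A minimal sketch of that setup with the Podman CLI (the application image name and password below are placeholders, not values from the article):

    # Create a pod; only the application's port is published outside it
    podman pod create --name app-pod -p 8080:8080

    # Run MariaDB inside the pod; its port 3306 is not exposed beyond
    # the pod's shared network namespace
    podman run -d --pod app-pod -e MYSQL_ROOT_PASSWORD=secret mariadb

    # Any other container in the same pod reaches the database on
    # 127.0.0.1:3306 (placeholder application image)
    podman run -d --pod app-pod my-app-image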

Continue reading “Podman: Managing pods and containers in a local container runtime”

Integration of container platform essentials (Part 5)

In Part 4 of this series, we looked into details that determine how your integration becomes the key to transforming your omnichannel customer experience.

It started with laying out the process of how I’ve approached the use case by researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it’s time to cover more blueprint details.

This article discusses the core elements in the blueprint (container platform and microservices) that are crucial to the generic architectural overview.

Continue reading “Integration of container platform essentials (Part 5)”

Monitoring Node.js Applications on OpenShift with Prometheus

Observability is Key

One of the great things about Node.js is how well it performs in a container. Its fast start-up time and relatively small size make it a favorite for microservice applications on OpenShift. But this shift to containerized deployments brings some complexity, and as a result, monitoring Node.js applications can be difficult. At times it seems as though the performance and behavior of our applications become opaque to us. So what can we do to find and address issues in our services before they become a problem? We need to enhance observability by monitoring the state of our services.

Instrumentation

Instrumenting our applications is one way to increase observability. In this article, I will therefore demonstrate how to instrument a Node.js application using Prometheus.
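As a preview, here is a minimal sketch of what such instrumentation can look like with the prom-client library; the metric name and port are illustrative and not necessarily what the full article uses:

    // Minimal Prometheus instrumentation sketch for Node.js using prom-client
    // (metric name and port are illustrative)
    const http = require('http');
    const client = require('prom-client');

    // Collect default runtime metrics: event loop lag, heap usage, etc.
    client.collectDefaultMetrics();

    // A custom counter for incoming requests
    const requestCounter = new client.Counter({
      name: 'app_requests_total',
      help: 'Total number of HTTP requests received',
    });

    http.createServer(async (req, res) => {
      if (req.url === '/metrics') {
        // Prometheus scrapes this endpoint
        res.setHeader('Content-Type', client.register.contentType);
        res.end(await client.register.metrics());
        return;
      }
      requestCounter.inc();
      res.end('Hello from an instrumented Node.js service\n');
    }).listen(8080);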

Continue reading “Monitoring Node.js Applications on OpenShift with Prometheus”

Integration of API management details (Part 4)

In Part 3 of this series, we started diving into the details that determine how your integration becomes the key to transforming your customer experience.

It started with laying out the process of how I’ve approached the use case by researching successful customer portfolio solutions as the basis for a generic architectural blueprint. Now it’s time to cover various blueprint details.

This article takes you deeper into specific elements (API management and reverse proxy) of the generic architectural overview.

Continue reading “Integration of API management details (Part 4)”

Integration of external application details (Part 3)

In Part 2 of this series, we took a high-level view of the common architectural elements that determine how your integration becomes the key to transforming your customer experience.

I laid out how I’ve approached the use case and how I’ve used successful customer portfolio solutions as the basis for researching a generic architectural blueprint. The only thing left to cover was the order in which you’ll be led through the blueprint details.

This article takes you deeper to cover details pertaining to the specific elements (mobile and web application deployments) of the generic architectural overview.

Continue reading “Integration of external application details (Part 3)”
