When .NET was released to the open source world (November 12, 2014—not that I remember the date or anything), it didn’t just bring .NET to open source; it brought open source to .NET. Linux containers were one of the then-burgeoning, now-thriving technologies that became available to .NET developers. At that time, it was “docker, docker, docker” all the time. Now, it’s Podman and Buildah, and Kubernetes, and Red Hat OpenShift, and serverless, and … well, you get the idea. Things have progressed, and your .NET applications can progress, as well.
This article is part of a series introducing three ways to containerize .NET applications on Red Hat OpenShift. I’ll start with a high-level overview of Linux containers and .NET Core, then discuss a couple of ways to build and containerize .NET Core applications and deploy them on OpenShift.
Continue reading Containerize .NET for Red Hat OpenShift: Linux containers and .NET Core
The microservices pattern is pretty standard for today’s software architecture. Microservices let you break up your application into small chunks and avoid having one giant monolith. The only problem is that if one of these services fails, it could have a cascading effect on your whole architecture.
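The pattern behind a circuit breaker can be sketched in a few lines of plain Node.js. This is a toy illustration of the idea, not Opossum's actual API: the breaker counts failures, "opens" after a threshold so subsequent calls fail fast instead of piling onto a struggling service, and "half-opens" after a reset timeout to probe the service again. The class name, option names, and thresholds here are assumptions for illustration; Opossum provides a production-ready version with timeouts, fallbacks, and metrics.

```javascript
// Toy circuit breaker illustrating the pattern Opossum implements.
class CircuitBreaker {
  constructor(action, { failureThreshold = 3, resetTimeoutMs = 10000 } = {}) {
    this.action = action;              // the async call being protected
    this.failureThreshold = failureThreshold;
    this.resetTimeoutMs = resetTimeoutMs;
    this.failures = 0;
    this.state = 'CLOSED';             // CLOSED -> OPEN -> HALF_OPEN -> ...
    this.openedAt = 0;
  }

  async fire(...args) {
    if (this.state === 'OPEN') {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        // Fail fast: don't even attempt the call while the circuit is open.
        throw new Error('Circuit is open; failing fast');
      }
      this.state = 'HALF_OPEN';        // reset window elapsed: probe once
    }
    try {
      const result = await this.action(...args);
      this.failures = 0;               // success closes the circuit again
      this.state = 'CLOSED';
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.failureThreshold || this.state === 'HALF_OPEN') {
        this.state = 'OPEN';           // too many failures: open the circuit
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

The key point is that once the circuit opens, callers get an immediate error (or a fallback value) instead of waiting on a service that is already down, which is what prevents the cascading failures described above.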
Continue reading Fail fast with Opossum circuit breaker in Node.js
Red Hat CodeReady Workspaces provides teams with predefined workspaces to streamline application development. Out of the box, CodeReady Workspaces supports numerous languages and plugins. However, many organizations want to customize a workspace and make it available to developers across the organization as a standard. In this article, I show you how to use a custom devfile registry to customize a workspace for C++ development. Once that's done, I'll deploy an example application using Docker.
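For orientation, a custom workspace definition is just a devfile. Here is a minimal sketch in the devfile 1.0 format that CodeReady Workspaces 2.x consumes; the image, project layout, and build command below are hypothetical placeholders, not the article's actual registry content:

```yaml
# Hypothetical devfile sketch for a C++ workspace.
apiVersion: 1.0.0
metadata:
  name: cpp-workspace
components:
  - type: dockerimage
    alias: cpp-dev
    image: quay.io/example/cpp-toolchain:latest   # assumed toolchain image
    memoryLimit: 512Mi
    mountSources: true
commands:
  - name: build
    actions:
      - type: exec
        component: cpp-dev
        command: g++ -o hello main.cpp
        workdir: ${CHE_PROJECTS_ROOT}/my-project   # assumed project path
```

Publishing a devfile like this through a custom devfile registry is what makes the workspace appear as a standard, ready-made option for every developer in the organization.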
Continue reading Using a custom devfile registry and C++ with Red Hat CodeReady Workspaces
Continue reading Containerize and deploy Strapi applications on Kubernetes and Red Hat OpenShift
Quarkus is already fast, but what if you could make inner loop development with the supersonic, subatomic Java framework even faster? Quarkus 1.5 introduced fast-jar, a new packaging format that supports faster startup times. Starting in Quarkus 1.12, this great feature became the default packaging format for Quarkus applications. This article introduces you to the fast-jar format and how it works.
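As a quick sketch of how you opt in on the older releases (assuming a standard Maven-based Quarkus project; from 1.12 on this setting is already the default):

```properties
# In src/main/resources/application.properties.
# Needed on Quarkus 1.5-1.11; fast-jar became the default in 1.12.
quarkus.package.type=fast-jar
```

After `mvn package`, the build produces a `target/quarkus-app/` directory instead of a single runner JAR, and you start the application with `java -jar target/quarkus-app/quarkus-run.jar`.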
Continue reading Build even faster Quarkus applications with fast-jar
Java is one of the most popular programming languages in the world. It has been among the top three languages used over the past two decades, and it powers millions of applications across many verticals and platforms. Linux, meanwhile, is widely deployed in data centers, edge networks, and the cloud.
Continue reading Deploy Quarkus everywhere with Red Hat Enterprise Linux (RHEL)
Apicurio Registry is the upstream project for Red Hat Integration’s Service Registry component. Developers use Apicurio Registry to manage artifacts like API definitions and data structure schemas.
Continue reading Testing Apicurio Registry’s performance and scalability
Kamelets are a concept that Camel K introduced near the end of 2020 to simplify the design of complex system integrations. If you’re familiar with the Camel ecosystem, you know that Camel offers many components for integrating existing systems. However, an integration’s granularity is tied to those low-level components. With Kamelets, you can reason at a higher level of abstraction than with Camel alone.
A Kamelet is a document specifying an integration flow. A Kamelet uses Camel components and enterprise integration patterns (EIPs) to describe a system’s behavior. You can reuse a Kamelet abstraction in any integration on top of a Kubernetes cluster. You can use any of Camel’s domain-specific languages (DSLs) to write a Kamelet, but the most natural choice is the YAML DSL. The YAML DSL is designed to specify input and output formats, so any integration developer knows beforehand what kind of data to expect.
A Kamelet also serves as a source or sink of events, making Kamelets the building blocks of an event-driven architecture. Later in the article, I will introduce the concept of source and sink events and how to use them in an event-driven architecture with Kamelets.
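To make this concrete, here is a minimal Kamelet written in the YAML DSL, modeled on the well-known timer-source example from the Camel K documentation; treat the property names as illustrative rather than as this article's exact example:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: timer-source
  labels:
    camel.apache.org/kamelet.type: "source"   # this Kamelet produces events
spec:
  definition:
    title: Timer Source
    description: Emits a configurable message on a timer.
    properties:
      message:
        title: Message
        type: string          # consumers know the expected input up front
  template:
    from:
      uri: timer:tick
      steps:
        - setBody:
            constant: "{{message}}"
        - to: kamelet:sink    # hand the event off to whatever sink is bound
```

The `definition` block declares the Kamelet's contract (its configurable properties), while the `template` block describes the integration flow; binding a source Kamelet like this one to a sink Kamelet is what assembles an event-driven pipeline on a Kubernetes cluster.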
Continue reading Design event-driven integrations with Kamelets and Camel K
Any developer knows that when we talk about integration, we can mean many different concepts and architecture components. Integration can start with the API gateway and extend to events, data transfer, data transformation, and so on. It is easy to lose sight of what technologies are available to help you solve various business problems. Red Hat Integration’s Q1 release introduces a new feature that targets this challenge: the Red Hat Integration Operator.
Continue reading Deploy integration components easily with the Red Hat Integration Operator