Announcing .NET Core 2.1 for Red Hat Platforms

We are very pleased to announce the general availability of .NET Core 2.1 for Red Hat Enterprise Linux and OpenShift platforms!

.NET Core is the open-source, cross-platform edition of .NET for building microservices. It is designed to provide the best performance at scale for applications that use microservices and containers. Libraries can be shared with other .NET platforms, such as .NET Framework (Windows) and Xamarin (mobile applications). With .NET Core, you have the flexibility of building and deploying applications on Red Hat Enterprise Linux or in containers. Your container-based applications and microservices can easily be deployed to your choice of public or private clouds using Red Hat OpenShift, and all of the features of OpenShift and Kubernetes for cloud deployments are available to you.

.NET Core 2.1 continues to broaden its support and tools for microservice development in an open source environment. The latest version of .NET Core includes the following improvements:

Continue reading “Announcing .NET Core 2.1 for Red Hat Platforms”

Debugging Memory Issues with Open vSwitch DPDK

Introduction

This article is about debugging out-of-memory issues with Open vSwitch with the Data Plane Development Kit (OvS-DPDK). It explains the situations in which you can run out of memory when using OvS-DPDK and it shows the log entries that are produced in those circumstances. It also shows some other log entries and commands for further debugging.

When you finish reading this article, you will be able to identify an out-of-memory issue and you'll know how to fix it. Spoiler: usually giving the relevant NUMA node some more memory is the fix. The article is based on OvS 2.9.
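For context, OvS-DPDK pre-allocates hugepage memory per NUMA node through the dpdk-socket-mem option, so "more memory on the relevant NUMA node" usually means raising the right entry. A minimal sketch (the amounts and node count here are illustrative, and the service name varies by distribution):

# Reserve hugepage memory per NUMA node, in MB: here 2048 MB on node 0 and 2048 MB on node 1.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"
# In OvS 2.9 this option is read at DPDK initialization, so restart the daemon.
systemctl restart openvswitch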

Continue reading “Debugging Memory Issues with Open vSwitch DPDK”

Using the STOMP Protocol with Apache ActiveMQ Artemis Broker

In this article, we will use a Python-based messaging client to connect and subscribe to a topic with a durable subscription in the Apache ActiveMQ Artemis broker. We will use the text-based STOMP protocol to connect and subscribe to the broker. STOMP clients can communicate with any STOMP message broker to provide messaging interoperability among many languages, platforms, and brokers.
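To give a flavor of what this looks like, here is a minimal sketch using the stomp.py client library. The broker host and port, credentials, destination name, and the Artemis-specific subscription headers are all assumptions that vary with your setup:

import time
import stomp  # the stomp.py client library: pip install stomp.py

class TopicListener(stomp.ConnectionListener):
    # stomp.py 8.x signature; older releases pass (headers, message) instead
    def on_message(self, frame):
        print('received:', frame.body)

# Assumed endpoint: a local Artemis broker with STOMP on its default port, 61613.
conn = stomp.Connection(host_and_ports=[('localhost', 61613)])
conn.set_listener('', TopicListener())

# A client-id identifies this client across reconnects; it is required for a
# durable subscription. The credentials are assumptions for this sketch.
conn.connect('admin', 'admin', wait=True, headers={'client-id': 'price-client'})

# Artemis uses 'durable-subscription-name' (together with the client-id) to
# create a durable subscription; 'subscription-type' selects topic (MULTICAST) semantics.
conn.subscribe(destination='/topic/prices', id='1', ack='auto',
               headers={'subscription-type': 'MULTICAST',
                        'durable-subscription-name': 'price-watcher'})

time.sleep(60)   # stay alive long enough to receive messages
conn.disconnect()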

If you need to brush up on the difference between persistence and durability in messaging, check out Mary Cochran’s article on developers.redhat.com/blog.

A similar process can be used with Red Hat AMQ 7. The broker in Red Hat AMQ 7 is based on the Apache ActiveMQ Artemis project. See the overview on developers.redhat.com for more information.

Continue reading “Using the STOMP Protocol with Apache ActiveMQ Artemis Broker”

Remotely Debug an ASP.NET Core Container Pod on OpenShift with Visual Studio

Last year, I wrote a blog post about how to remotely debug your ASP.NET Core container on OpenShift with Visual Studio Code. Today I’ll introduce how to remotely debug a pod using Visual Studio from your Windows computer. Sometimes you encounter an issue that happens only in the production environment; remotely debugging a pod enables you to investigate such an issue.

Visual Studio and Visual Studio Code now support SSH as a transport protocol for remote debugging. If a remote host accepts an SSH connection, Visual Studio can do remote debugging using its default features. However, you need to use the oc command instead of an SSH client such as PuTTY, since Red Hat OpenShift pods don’t allow direct SSH connections. The MIEngine debugger enables you to use any command for the SSH connection.

Note:
All the steps below have been confirmed using a combination of Visual Studio 2017 (versions 15.7.2 and 15.8 preview2) on Windows 10 and OpenShift 3.9.
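For a taste of the mechanics (a sketch, not the article's exact steps): Visual Studio's Command Window accepts DebugAdapterHost.Launch /LaunchJson:&lt;file&gt;, and the launch file can route the debugger transport through oc instead of an SSH client. The pod name, vsdbg path, and target PID below are assumptions:

{
  "version": "0.2.0",
  "adapter": "oc.exe",
  "adapterArgs": "exec -i mypod-1-abcde -- /opt/app-root/vsdbg/vsdbg --interpreter=vscode",
  "configurations": [
    {
      "name": ".NET Core attach",
      "type": "coreclr",
      "request": "attach",
      "processId": 1
    }
  ]
}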

Continue reading “Remotely Debug an ASP.NET Core Container Pod on OpenShift with Visual Studio”

Building Container-Native Node.js Applications with Red Hat OpenShift Application Runtimes and Istio

For developers working in a Kubernetes-based application environment such as Red Hat OpenShift, there are a number of things that need to be considered to fully take advantage of the significant benefits provided by these technologies, including:

  • How do I communicate with the orchestration layer to indicate the application is operating correctly and is available to receive traffic? (See the probe sketch after this list.)
  • What happens if the application detects a system fault, and how does the application relay this to the orchestration layer?
  • How can I accurately trace traffic flow between my applications in order to identify potential bottlenecks?
  • What tools can I use to easily deploy my updated application as part of my standard toolchain?
  • What happens if I introduce a network fault between my services, and how do I test this scenario?
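Taking the first two questions as an example: in OpenShift/Kubernetes these are typically answered with readiness and liveness probes declared alongside the container. A minimal sketch, assuming the application exposes an HTTP health endpoint at /health on port 8080 (the names, path, and port are all assumptions):

# Fragment of a pod/deployment spec
containers:
  - name: my-node-app
    image: registry.example.com/demo/my-node-app:1.0
    ports:
      - containerPort: 8080
    readinessProbe:            # "am I ready to receive traffic?"
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:             # "am I still operating correctly?"
      httpGet:
        path: /health
        port: 8080
      periodSeconds: 10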

These questions are central to building container-native solutions. At Red Hat, we define container-native as applications that conform to the following key tenets:

  • DevOps automation
  • Single concern principle
  • Service discovery
  • High observability
  • Lifecycle conformance
  • Runtime confinement
  • Process disposability
  • Image immutability

This may seem like a lot of overhead on top of the core application logic. Red Hat OpenShift Application Runtimes (RHOAR) and Istio provide developers with tools to adhere to these principles with minimal overhead in terms of coding and implementation.

In this blog post, we’re specifically focusing on how RHOAR and Istio combine to provide tools for DevOps automation, lifecycle conformance, high observability, and runtime confinement.

Continue reading “Building Container-Native Node.js Applications with Red Hat OpenShift Application Runtimes and Istio”

Monitoring Red Hat AMQ 7 with the jmxtrans Agent

Monitoring Red Hat AMQ 7

Red Hat AMQ 7 includes some tools for monitoring the Red Hat AMQ broker. These tools let you gather metrics about the performance and behavior of the broker and its resources. Such metrics are essential for measuring performance and for identifying the issues that cause poor performance.

The following components are included for monitoring the Red Hat AMQ 7 broker:

  • Management web console that is based on Hawtio: This console includes some perspectives and dashboards for monitoring the most important components of the broker.
  • A Jolokia REST-like API: This provides full access to JMX beans through HTTP requests (see the example after this list).
  • Red Hat JBoss Operation Network: This is an enterprise, Java-based administration and management platform for developing, testing, deploying, and monitoring Red Hat JBoss Middleware applications.
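For instance, reading a broker metric through the Jolokia endpoint is a single HTTP call. A sketch, assuming the default web console port (8161), default admin credentials, and a broker named amq; all of these vary by installation:

# Read one attribute of the broker MBean over HTTP
curl -u admin:admin \
  'http://localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker="amq"/ConnectionCount'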

These tools are excellent and fully integrated with the product. However, there are cases where Red Hat AMQ 7 is deployed in environments that use other tools to monitor the broker, for example, jmxtrans.

Continue reading “Monitoring Red Hat AMQ 7 with the jmxtrans Agent”

Red Hat Summit: Building production-ready containers

Bringing excitement to the last session on the last day of the show, Scott McCarty and Ben Breard wrapped up this year’s Red Hat Summit with a discussion of best practices for production-ready containers.

In the container era, Scott pointed out, there are four building blocks you need to think about:

  • Container images
  • Container hosts
  • Container orchestration
  • Registry servers

Each of these topics is a huge rabbit hole you can go down if you want to learn all there is to know about it. As you’d expect from the session title, Scott and Ben focused on container images. Despite that, you still have to consider the other three. How will your images interact with others? How will you get data to them? How will they interact with each other? Will you embed passwords in your images? (Spoiler alert: no.) You need to take all of these things into consideration as you move into the world of containers.

A single container is about as useful as a single Lego block [1]. You need to tie lots of them together in interesting ways to get the full power of containers behind you. Scott quoted Red Hat’s Ryan Hallisey:

Using containers is as much of a business advantage as a technical one. When building and using containers, layering is crucial. You need to look at your application and think about each of the pieces and how they work together, similar to the way you can break a program up into classes and functions.

If you’re building a Lego model, you take an instruction sheet and a set of building blocks and create something from them. In the world of containers, you take instructions (YAML) and building blocks (images) and create an application from them.
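To make that analogy concrete, here is a minimal sketch of the "instructions" side: a Kubernetes Deployment that snaps three copies of an image together. All names are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # hypothetical application name
spec:
  replicas: 3                  # three copies of the same building block
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: registry.example.com/demo/hello-app:1.0  # the image "block"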

The goal of the Open Container Initiative (OCI) is to define standards for containers and runtimes. (Members include Red Hat, CoreOS, and pretty much every other player in the industry.) If you’re an architect, OCI protects your investment: you can create images once, knowing you can use them for the foreseeable future [2] and that the tooling, the distribution and logistics mechanisms, and the registry servers will still exist and still work.

As the old joke goes, the great thing about standards is that you have so many to choose from [3]. Scott mentioned five relevant standards that are being driven by vendors, communities, and standards bodies:

When you’re running a container, there’s an image, there’s a registry, and there’s a host. With the Distribution spec, you’re protected if you put your images in an Amazon registry, an Azure registry, a Red Hat-provided registry, etc. You can move that image around and run it in another environment.

Scott mentioned a number of tools that are emerging as the standards become more entrenched. The community created crictl for the CRI standard. Podman (now available as a tech preview) offers an experience similar to the Docker command line. runc is a command-line tool to run containers according to the OCI runtime spec. Project Atomic created the Buildah tool to build OCI-compliant images. What’s great about Buildah is that it will work with your Dockerfiles. Buildah lets you add packages into the image when you build it, so that your final container doesn’t have to have a package manager. And of course you can use these tools in a toolchain.
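A sketch of that package-manager-free flow using Buildah from the host (run as root or under buildah unshare; the image and package names are illustrative):

# Create a working container from a minimal base image (no package manager inside).
ctr=$(buildah from registry.access.redhat.com/rhel7-atomic)
# Mount its filesystem and install packages from the host instead.
mnt=$(buildah mount "$ctr")
yum install -y --installroot="$mnt" httpd
yum clean all --installroot="$mnt"
buildah umount "$ctr"
# Commit the result as an OCI-compliant image.
buildah commit "$ctr" my-httpd-image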

Scott made the point that the Docker commands related to images actually work with repositories, not images. For example, this command:

docker pull registry.access.redhat.com/rhel7/rhel:latest

goes to the registry at registry.access.redhat.com, then looks in the rhel7 namespace, then looks for a repository named rhel, and finally takes the version of that image with the tag latest [4]. The hierarchy here is registry server/namespace/repo:tag. The registry server is resolved by DNS, but the namespace can mean different things depending on the registry server. At registry.access.redhat.com, the namespace is the product name. At Docker Hub, the namespace is the name of the user who committed the image. In creating images in your own registry, it’s important to think through what the different components of the name mean. How will your organization use namespaces? How will you name your repos? (A bit of advice from Scott: always use the full URL of the image in your Dockerfile to make sure you’re getting exactly what you expect.)
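Scott's advice translates to a Dockerfile like this sketch, with the fully qualified image name spelled out:

# Spell out registry server/namespace/repo:tag so there is no ambiguity
# about which base image the build pulls.
FROM registry.access.redhat.com/rhel7/rhel:latest
RUN yum install -y httpd && yum clean all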

Note: If you’d like to get a better handle on registries vs. repos and develop a deeper understanding of containers, check out Scott’s article, A Practical Introduction to Container Terminology. That article, published in February 2018 on developers.redhat.com/blog, is a complete update of Scott’s popular 2016 article. The 2018 version includes a lot of what is happening in the world of containers beyond Docker.

Next, Ben took over to discuss his tenets for building images. He stressed that you should use source control for everything so that all of your artifacts are buildable from code. His five basic principles for building production-ready containers are:

  • Standardize
  • Minimize storage
  • Delegate
  • Process
  • Iterate

We’ll discuss these over the next few paragraphs.

Standardize: Your goal should be to have a standard set of images with a common lineage. Your base images will be things like application frameworks, app servers, databases, and other middleware. The obvious benefits here are that your images are easier to scale, reuse of common layers is maximized, and the differences between environments in your various containers are minimized. The size of registries can be a problem, especially with thousands of developers constantly cranking out images as toolchains and build pipelines do their magic. Standardizing on as few images as possible has huge benefits in your registry, at runtime, and whenever you need to update a base image. (Red Hat encourages you to use our base images, especially the LTS images.)

Minimize storage: The goal is to limit the content in a given image, particularly a base image, so that it contains only what you’re using. Red Hat provides an image named rhel7-atomic (not to be confused with Project Atomic). This image has glibc and just enough rpm to add packages. There is no Python, no systemd, nor any of the other things you probably don’t need. Remember, with an image, you’re building a sandbox. If your sandbox is the size of a stadium parking lot, it’s not a sandbox anymore.

Delegate: Ownership of an image should lie with the people who have the most expertise for that image, whatever it contains. Don’t be a hero; don’t be responsible for every image in your organization. Leverage your team’s skills.

Focus on process and automation: This is the most important rule. The barrier to getting started with containers is really low. It’s not much trouble to create a Dockerfile and run docker build. That means it’s simple to do something once and never think about it again. But containers are not “fire and forget.” You need process around them for everything from testing to deployment to security. Ben mentioned tools like OpenSCAP and Clair that can scan your images for vulnerabilities [5].
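As an illustration, OpenSCAP's container tooling can scan an image for packages with known CVEs. A sketch, assuming the oscap-docker utility is installed on the host; the image name is hypothetical:

# Scan a container image for known vulnerabilities and write an HTML report.
oscap-docker image-cve registry.example.com/demo/my-app:1.0 --report report.html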

Iterate: This is the final principle. Don’t repeat the mistakes of the past. Making changes is no longer a big deal. If you have testing and security scanning as part of your build chain, then you go from “known good” to “known good” every single time.

The image a developer has on her laptop should be the same as the image in the dev/test environment, which should be the same as the image in production in the cloud. (BTW, the term “developer” here really means “anybody who builds an image.” That could include sys admins or architects or others.) As you distribute an image, the YAML file that defines persistent volumes, secrets, scaling policies, and other metadata should go along with it as well.

Finally, Ben made the point that you need to be building images in a system that has a pipeline. You can build images on your local machine for smoke testing, but any significant changes should be run through the pipeline. It’s crucial to remember that you can’t just fire up Docker on a laptop and fully test your image. To test anything significant, you’ll need an environment that can start a Kubernetes cluster, pull down all the images that run together in production, and then start the pipeline. Nobody wants to run a whole orchestrated system on their laptop, but that’s the only way you can reliably test a set of services that work together.

The hallmark of a great piece of code is not just that it works, but that its architecture is elegant, intuitive, and flexible. Most experienced developers know how to do that. As we go forward, the ability to create a set of elegantly composed containers will be an essential skill. The right amount of functionality in each container, hierarchies of containers that simplify changes and upgrades, and the appropriate use of pipelines are all part of a production-ready, containerized application.

All in all, a great session with lots of best practices and good advice from two highly experienced speakers. If you’d like to hear all the details, the video of Scott and Ben’s presentation is one of the 100+ Red Hat Summit 2018 breakout sessions you can view online for free:

https://youtu.be/nizud-1IK9c

 


[1] Containers on a container ship fit together like Lego blocks, if you think about it. That makes the metaphor even stronger.

[2] Which is maybe a couple of years. Seriously, how many things do you do today that you didn’t do in 2015?

[3] I didn’t say the joke was funny; I just said it was old. Sorry about that.

[4] Keep in mind that the tag “latest” is just a convention. It may or may not point to the most recent version of the image.

[5] Ben recommended reading The Phoenix Project as a cautionary tale.

Using .NET Core in a “Disconnected” Environment

Security is a very important consideration when running your custom middleware applications. The internet can be an unfriendly place.

Sometimes middleware users have a requirement for their software to run in a “disconnected” environment, which is one where the network is not routed to addresses outside the local node’s network; in other words, no internet.
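One common ingredient of a disconnected .NET Core setup (an assumption here, not necessarily the approach this article takes) is a NuGet.Config that replaces the internet feed with a local, pre-populated package directory:

<?xml version="1.0" encoding="utf-8"?>
<!-- NuGet.Config sketch: clear the default internet feed and use a local directory feed -->
<configuration>
  <packageSources>
    <clear />
    <add key="local-mirror" value="/opt/nuget/packages" />
  </packageSources>
</configuration>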

Continue reading “Using .NET Core in a “Disconnected” Environment”

Application Modernization and Migration Tech Talk + Scotland JBug Meetup

I’m heading back to my friends in Scotland to speak at the JBoss User Group (JBug) Scotland next month. It’s a fun group of people who really seem to enjoy working with open source and JBoss software stacks.

First off, on June 6th there will be a wonderful tech talk on application modernization and migration. This is followed by a hands-on workshop hosted by JBug Scotland. Come and get hands-on experience with application development in the cloud using containers, JBoss middleware, services, business logic, and APIs.

The events are on June 6th, 2018 from 14:00 onwards and are scheduled as follows.

Continue reading “Application Modernization and Migration Tech Talk + Scotland JBug Meetup”

Customizing an OpenShift Ansible Playbook Bundle

Today I want to talk about the Ansible Service Broker and Ansible Playbook Bundles. These components are relatively new in the Red Hat OpenShift ecosystem, but they are now fully supported features available in the Service Catalog component of OpenShift 3.9.

Before getting deep into the technology, I want to give you some basic information (quoted below from the product documentation) about all the components and their features:

  • Ansible Service Broker is an implementation of the Open Service Broker API that manages applications defined in Ansible Playbook Bundles.
  • Ansible Playbook Bundles (APBs) are a method of defining applications via a collection of Ansible playbooks built into a container image with an Ansible runtime, with the playbooks corresponding to a type of request specified in the Open Service Broker API specification.
  • Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
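For a sense of scale, an APB is essentially a container image plus playbooks plus a small metadata file. A hypothetical, minimal apb.yml sketch; the field values are illustrative and the exact schema is version-dependent:

version: 1.0
name: hello-world-apb
description: A hypothetical APB that deploys a hello-world service
bindable: false
async: optional
metadata:
  displayName: Hello World (APB)
plans:
  - name: default
    description: Default deployment plan
    free: true
    metadata: {}
    parameters: []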

Continue reading “Customizing an OpenShift Ansible Playbook Bundle”
