Performance


Speeding up Open vSwitch with partial hardware offloading

Open vSwitch (OVS) can use either the kernel datapath or the userspace datapath. There are interesting developments in the kernel datapath around hardware offloading through the TC Flower packet classifier, but this article focuses on the userspace datapath, which is accelerated with the Data Plane Development Kit (DPDK), and on its new feature, partial flow hardware offloading, which speeds up the virtual switch even further.

This article explains how the virtual switch worked before versus now and why the new feature can potentially save resources while improving the packet processing rate.
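To make the idea of partial offloading a bit more concrete, here is a rough, hypothetical C sketch of the kind of DPDK rte_flow rule a virtual switch can install for a known flow: the NIC only marks matching packets with a flow ID, so the software datapath can skip its expensive flow classification step while the flow's actions still execute in software. The function name, port ID, addresses, and the fixed RX queue used as the fate action are all illustrative; OVS's real integration differs in its details.

/*
 * Hypothetical sketch: install an rte_flow rule that only MARKs packets of a
 * specific IPv4 flow.  The mark is delivered in the packet's mbuf metadata,
 * letting the software datapath map it straight to a cached flow instead of
 * re-classifying the packet; the flow's actions still run in software.
 */
#include <rte_byteorder.h>
#include <rte_flow.h>

struct rte_flow *
install_mark_rule(uint16_t port_id, uint32_t flow_mark,
                  struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Match IPv4 packets from 10.0.0.1 to 10.0.0.2 (example addresses). */
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr = {
            .src_addr = rte_cpu_to_be_32(0x0a000001),
            .dst_addr = rte_cpu_to_be_32(0x0a000002),
        },
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr = {
            .src_addr = rte_cpu_to_be_32(0xffffffff),
            .dst_addr = rte_cpu_to_be_32(0xffffffff),
        },
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* MARK tags matching packets with flow_mark; a fate action is still
     * needed, so send them to a fixed RX queue here (a real implementation
     * would typically use RSS so the normal queue distribution is kept). */
    struct rte_flow_action_mark mark = { .id = flow_mark };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}

With a rule like this in place, marked packets bypass the most CPU-intensive part of the userspace datapath, which is where the potential resource savings come from.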

Continue reading “Speeding up Open vSwitch with partial hardware offloading”


Using a local NuGet server with Red Hat OpenShift

NuGet is the .NET package manager. By default, the .NET Core SDK will use packages from the nuget.org website.

In this article, you’ll learn how to deploy a NuGet server on Red Hat OpenShift Container Platform (RHOCP). We’ll use it as a caching server and see that it speeds up our builds. Before we get to that, we’ll explore some general NuGet concepts and see why it makes sense to use a local NuGet server.

Continue reading “Using a local NuGet server with Red Hat OpenShift”


Performance improvements in OVN: Past and future

OVN (Open Virtual Network) is a subcomponent of Open vSwitch (OVS). It allows for the expression of overlay networks by connecting logical routers and logical switches. Cloud providers and cloud management systems have been using OVS for many years as a performant method for creating and managing overlay networks.

Lately, OVN has come into its own as it is used in more Red Hat products. The result has been increased scrutiny of OVN in real-world scenarios, which has driven new features and, more importantly, tremendous performance improvements.

In this article, I will discuss two game-changing performance improvements added to OVN in the past year, as well as future changes we may see in the coming year.

Continue reading “Performance improvements in OVN: Past and future”


Using a Kotlin-based gRPC API with Envoy proxy for server-side load balancing

These days, microservices-based architectures are being implemented almost everywhere. A single business function may use several microservices that generate lots of network traffic in the form of messages being passed around. If we can make the way we pass messages more efficient by using a smaller message size, we could use the same infrastructure to handle higher loads.

Protobuf (short for “protocol buffers”) provides language- and platform-neutral mechanisms for serializing structured data for use in communications protocols, data storage, and more. gRPC is a modern, open source remote procedure call (RPC) framework that can run anywhere. Together, they provide an efficient, automatically compressed message format with first-class support for complex data structures, among other benefits that JSON does not offer.

Microservices environments require lots of communication between services, and for this to happen, services need to agree on a few things: an API for exchanging data, for example, POST (or PUT) and GET to send and receive messages, and the format of the data (typically JSON). Clients calling the service also need to write lots of boilerplate code, or pull in a framework, to make the remote calls. Protobuf and gRPC provide a way to define the schema of the message, which JSON cannot, and to generate skeleton code for consuming a gRPC service, with no framework required.

Continue reading “Using a Kotlin-based gRPC API with Envoy proxy for server-side load balancing”


Using eXpress Data Path (XDP) maps in RHEL 8 Beta: Part 2

Diving into XDP

In the first part of this series on XDP, I introduced XDP and discussed the simplest possible example. Let’s now try to do something less trivial, exploring some more-advanced eBPF features—maps—and some common pitfalls.
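To give a flavor of what an eBPF map looks like from the XDP side, here is a small, self-contained C sketch (the map name, the legacy-style map definition, and the hand-rolled helper declaration are illustrative; real programs usually pull these in from libbpf's bpf_helpers.h). It counts packets in a one-slot array map that userspace can read back, and it includes the NULL check the verifier insists on, a classic stumbling block.

/*
 * Sketch of an XDP program using an eBPF map: count every packet in a
 * one-element BPF_MAP_TYPE_ARRAY that a userspace loader can read back.
 * Names are hypothetical; the map definition below follows the legacy
 * "maps" section convention rather than modern BTF-defined maps.
 */
#include <linux/bpf.h>

#ifndef SEC
#define SEC(name) __attribute__((section(name), used))
#endif

/* Declare the one helper we need, as bpf_helpers.h would. */
static void *(*bpf_map_lookup_elem)(void *map, const void *key) =
        (void *) BPF_FUNC_map_lookup_elem;

/* Legacy-style map definition placed in the "maps" ELF section. */
struct bpf_map_def {
    unsigned int type;
    unsigned int key_size;
    unsigned int value_size;
    unsigned int max_entries;
};

struct bpf_map_def SEC("maps") packet_count = {
    .type        = BPF_MAP_TYPE_ARRAY,
    .key_size    = sizeof(__u32),
    .value_size  = sizeof(__u64),
    .max_entries = 1,
};

SEC("prog")
int xdp_count(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *value = bpf_map_lookup_elem(&packet_count, &key);

    /* The verifier rejects the program without this NULL check. */
    if (value)
        __sync_fetch_and_add(value, 1);

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

The userspace side, typically built on libbpf, loads the compiled object, attaches the program to an interface, and reads the counter back through the map.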

XDP is available in Red Hat Enterprise Linux 8 Beta, which you can download and run now.

Continue reading “Using eXpress Data Path (XDP) maps in RHEL 8 Beta: Part 2”


Achieving high-performance, low-latency networking with XDP: Part I

XDP: From zero to 14 Mpps

In recent years, the kernel community has been using different approaches in the quest for ever-increasing networking performance. While improvements have been measurable in several areas, a new wave of architecture-related security issues and the corresponding countermeasures have undone most of the gains, and for some packet-processing-intensive workloads, purely in-kernel solutions still lag behind the bypass solution, the Data Plane Development Kit (DPDK), by almost an order of magnitude.

But the kernel community never sleeps (almost literally), and the holy grail of kernel-based networking performance has been found under the name of XDP: the eXpress Data Path. XDP is available in Red Hat Enterprise Linux 8 Beta, which you can download and run now.
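To make “XDP” a little more concrete: an XDP program is a small piece of eBPF code that runs at the driver's earliest receive hook and decides a packet's fate before the kernel has built any of its usual packet metadata. A minimal, hypothetical example that simply drops everything looks roughly like this:

/* Minimal XDP sketch: drop every packet at the earliest possible point.
 * The section name is conventional; it is not taken from the article. */
#include <linux/bpf.h>

#ifndef SEC
#define SEC(name) __attribute__((section(name), used))
#endif

SEC("prog")
int xdp_drop_all(struct xdp_md *ctx)
{
    /* XDP_DROP recycles the packet buffer before an sk_buff is ever
     * allocated, which is where most of the speedup comes from. */
    return XDP_DROP;
}

char _license[] SEC("license") = "GPL";

Swapping XDP_DROP for XDP_PASS, XDP_TX, or XDP_REDIRECT is what turns this skeleton into a filter, a reflector, or a forwarder.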

Continue reading “Achieving high-performance, low-latency networking with XDP: Part I”


Reducing the startup overhead of SystemTap monitoring scripts with syscall_any tapset

A number of the script examples in the newly released SystemTap 4.0, which is available in Fedora 28 and 29, use the new syscall_any tapset to reduce the amount of time required to convert the scripts into running instrumentation.

This article discusses the particular changes made in the scripts and how you might also use this new tapset to make the instrumentation that monitors system calls smaller and more efficient. (This article is a follow-on to my previous article: Analyzing and reducing SystemTap’s startup cost for scripts.)

The key observation that triggered the creation of the syscall_any tapset was that a number of scripts did not use the syscall arguments. The scripts often used syscall.* and syscall.*.return, but they were only concerned with the particular syscall name and the return value. That information is available for all system calls from the sys_entry and sys_exit kernel tracepoints. Thus, rather than creating hundreds of kprobes, one for each of the individual functions implementing the various system calls, just a couple of tracepoints are used in their place.

Continue reading “Reducing the startup overhead of SystemTap monitoring scripts with syscall_any tapset”


Analyzing and reducing SystemTap’s startup cost for scripts

SystemTap is a powerful tool for investigating system issues, but for some SystemTap instrumentation scripts, the startup times are too long. This article is Part 1 of a series and describes how to analyze and reduce SystemTap’s startup costs for scripts.

We can use SystemTap to investigate this problem and provide some hard data on the time required for each of the passes that SystemTap uses to convert a SystemTap script into instrumentation. SystemTap has a set of probe points marking the start and end of passes from 0 to 5:

  • pass0: Parsing command-line arguments
  • pass1: Parsing scripts
  • pass2: Elaboration
  • pass3: Translation to C
  • pass4: Compilation of C code into kernel module
  • pass5: Running the instrumentation

Continue reading “Analyzing and reducing SystemTap’s startup cost for scripts”


Troubleshooting FDB table wrapping in Open vSwitch

When most people deploy an Open vSwitch configuration for virtual networking using the NORMAL rule, that is, using L2 learning, they do not think about configuring the size of the Forwarding DataBase (FDB).

When hardware-based switches are used, the FDB is generally rather large, and that large FDB size is a key selling point. For Open vSwitch, however, the default FDB size is rather small: in version 2.9 and earlier it is only 2K entries, and starting with version 2.10 it was increased to 8K entries. Note that in Open vSwitch, each bridge has its own FDB table, whose size is individually configurable.

This blog post explains the effects of configuring too small an FDB table, how to identify which bridge is suffering from an undersized FDB table, and how to configure the FDB table size appropriately.

Continue reading “Troubleshooting FDB table wrapping in Open vSwitch”


Configuring the MongoDB WiredTiger memory cache for RHMAP

This article describes how to configure MongoDB’s WiredTiger memory cache in Red Hat Mobile Application Platform (RHMAP) to prevent high memory usage and the resulting Nagios alerts. If the WiredTiger cache consumes all the memory available to a container, memory issues and Nagios alerts will occur.

The WiredTiger storage engine is the default storage engine starting in MongoDB version 3.2. It uses a MultiVersion Concurrency Control (MVCC) architecture for write operations in order to allow multiple modifications to the same document at the same time.

WiredTiger also caches data and creates checkpoints, which give you the ability to recover data whenever necessary. For example, if a MongoDB image deployed in a container fails, it is useful to be able to recover the data that was not persisted. Additionally, WiredTiger can recover un-checkpointed data from its journal files. See the journal documentation and the snapshots and checkpoints documentation for more information.

Continue reading “Configuring the MongoDB WiredTiger memory cache for RHMAP”
