Java inside docker: What you must know to not FAIL

Many developers are (or should be) aware that Java processes running inside Linux containers (docker, rkt, runC, lxcfs, etc.) don't behave as expected when we let the JVM ergonomics set the default values for the garbage collector, heap size, and runtime compiler. When we execute a Java application without any tuning parameters, such as "java -jar myapplication-fat.jar", the JVM adjusts several parameters on its own to get the best performance out of the execution environment.

This blog post takes a straightforward approach to show developers what they should know when packaging their Java applications inside Linux containers.
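To make the problem concrete, here is a minimal sketch (an illustrative example, not code from the post) of what the JVM ergonomics actually "sees". It prints the processor count and the default maximum heap the JVM has chosen; on older JVMs running in a memory- and CPU-limited container, these values typically reflect the host rather than the container limits, which is why flags such as -Xmx are often set explicitly.

    public class JvmErgonomics {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();

            // CPUs the JVM believes it can use; this drives GC and JIT thread counts.
            System.out.println("Available processors: " + rt.availableProcessors());

            // Default maximum heap chosen by ergonomics (roughly 1/4 of visible RAM).
            System.out.println("Default max heap (MiB): " + rt.maxMemory() / (1024 * 1024));
        }
    }

Running the same class with an explicit limit, for example "java -Xmx256m JvmErgonomics", shows the heap capped regardless of what the host reports.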

Continue reading “Java inside docker: What you must know to not FAIL”


Malloc Internals and You

Introduction

In my last blog, I mentioned that I had been asked to look at a malloc performance issue, but spent the post discussing methods for measuring performance. In this blog, I'll talk about the malloc issue itself and some measures I took to address it. I'll also talk a bit about how malloc's internals work, and how that affects your performance.

Continue reading “Malloc Internals and You”


Towards Faster Ruby Hash Tables

Hash tables are an important part of dynamic programming languages. They are widely used because of their flexibility, and their performance matters for the overall performance of numerous programs. Ruby is no exception. In brief, Ruby hash tables provide the following API (a rough Java analogue is sketched after the list):

  • insert an element with a given key if it is not yet in the table, or update the element's value if it is
  • delete an element with a given key from the table
  • get the value of an element with a given key if it is in the table
  • the shift operation (remove the earliest element inserted into the table)
  • traverse elements in their inclusion order, calling a given function and, depending on its return value, either stopping the traversal or deleting the current element and continuing
  • get the first N, or all, keys or values of elements in the table as an array
  • copy the table
  • clear the table
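The semantics above (insertion-ordered iteration, shift, deletion during traversal) map fairly directly onto an insertion-ordered map. Purely as an illustration of the API, not of Ruby's actual implementation, here is a rough Java analogue using LinkedHashMap:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class RubyHashLikeDemo {
        public static void main(String[] args) {
            // Iteration follows insertion order, as with Ruby's Hash.
            Map<String, Integer> h = new LinkedHashMap<>();
            h.put("a", 1);              // insert
            h.put("b", 2);
            h.put("a", 10);             // update: value changes, position is kept
            h.remove("b");              // delete
            Integer value = h.get("a"); // lookup
            h.put("c", 3);
            h.put("d", 4);

            // "shift": remove the earliest-inserted element.
            Iterator<Map.Entry<String, Integer>> it = h.entrySet().iterator();
            if (it.hasNext()) {
                Map.Entry<String, Integer> oldest = it.next();
                it.remove();
                System.out.println("shifted: " + oldest + ", looked up: " + value);
            }

            // Traverse in insertion order; the "callback" may delete the current
            // element (via the iterator) or break out to stop traversing.
            for (Iterator<Map.Entry<String, Integer>> i = h.entrySet().iterator(); i.hasNext(); ) {
                Map.Entry<String, Integer> e = i.next();
                if (e.getValue() % 2 == 0) {
                    i.remove();         // delete current element, keep going
                }
            }

            // First N keys (here N = 1) in insertion order.
            List<String> firstKeys =
                    new ArrayList<>(h.keySet()).subList(0, Math.min(1, h.size()));

            Map<String, Integer> copy = new LinkedHashMap<>(h); // copy the table
            h.clear();                                          // clear the table
            System.out.println("first keys: " + firstKeys + ", copy: " + copy);
        }
    }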

Continue reading “Towards Faster Ruby Hash Tables”


Unlock your Red Hat JBoss Data Grid data with Red Hat JBoss Data Virtualization

Welcome to another episode of the series: “Unlock your Red Hat JBoss Data Grid (JDG) data with Red Hat JBoss Data Virtualization (JDV).”

This post will guide you through an example of connecting to a Red Hat JBoss Data Grid data source using Teiid Designer. In this example, we will demonstrate connecting to a local JDG data source. We're using JDG 6.6.1, but you can connect to any local or remote JDG source (version 6.6.1) using the same steps.

Continue reading “Unlock your Red Hat JBoss Data Grid data with Red Hat JBoss Data Virtualization”


Programmatic Debugging: Part 1 the challenge

As every developer knows, debugging an application can be difficult, and often enough you spend as much time debugging an application as you did writing it, or more. Every programmer develops their own collection of tools and techniques. Traditionally these have included full-fledged debuggers, instrumentation of the code, and tracing and logging. Each of these has its particular strengths and weaknesses.

Continue reading “Programmatic Debugging: Part 1 the challenge”


Docker project: Can you have overlay2 speed and density with devicemapper? Yep.

It's been a while since our last deep-dive into the Docker project graph driver performance. Over two years, in fact! In that time, Red Hat engineers have made major strides in improving container storage.

All of that, in the name of providing enterprise-class stability, security and supportability to our valued customers.

As discussed in our previous blog, there is a particular set of behaviors and attributes to take into account when choosing a graph driver. Included among those are page cache sharing, POSIX compliance, and SELinux support.

Reviewing the technical differences between a union filesystem and the devicemapper graph driver as they relate to performance, standards compliance, and density, a union filesystem such as overlay2 is fast because:

  • It traverses less kernel and devicemapper code on container creation (devicemapper-backed containers get a unique kernel device allocated at startup).
  • Containers sharing the same base image start up faster because of a warm page cache.
  • For the speed/density benefits, you trade POSIX compliance and SELinux support (well, not for long!).

There was no single graph driver that could give you all these attributes at the same time — until now.

How we can make devicemapper as fast as overlay2

With the industry move towards microservices, 12-factor guidelines, and dense multi-tenant platforms, many folks both inside Red Hat and in the community have been discussing read-only containers. In fact, there has been a --read-only option in both the Docker project and Kubernetes for a long time. What this does is create the container's mount point as usual, but mount it read-only instead of read-write. Read-only containers are an important security improvement as well, since they reduce the container's attack surface. More details on this can be found in a blog post from Dan Walsh last year.

When a container is launched in this mode, it can no longer write to locations it may expect to (e.g., /var/log) and may throw errors because of this. As discussed in the Processes section of 12factor.net, re-architected applications should store stateful information (such as logs or web assets) in a stateful backing service. Attaching a persistent volume that is read-write fulfills this design aspect: the container can be restarted anywhere in the cluster, and its persistent volume can follow it.

In other words, for applications that are not completely stateless, an ideal deployment couples read-only containers with read-write persistent volumes. This gets us to a place in the container world that the HPC (high-performance/scientific computing) world has been in for decades: thousands of diskless, read-only, NFS-root-booted nodes that mount their necessary applications and storage over the network at boot time. If a node dies, boot another. If a container dies, start another.
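As a toy illustration of what this means for application code (not an example from the post), the sketch below keeps the container image read-only and writes all state to a directory supplied by the platform. DATA_DIR is a hypothetical environment variable assumed to point at the mounted read-write volume:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.Instant;

    public class StatefulWriter {
        public static void main(String[] args) throws IOException {
            // DATA_DIR is a hypothetical env var naming the mounted read-write volume;
            // everything else in the container filesystem stays read-only.
            String dataDir = System.getenv().getOrDefault("DATA_DIR", "/data");

            Path stateFile = Paths.get(dataDir, "app-state.txt");
            Files.createDirectories(stateFile.getParent());
            Files.write(stateFile, ("last-start=" + Instant.now() + "\n").getBytes());

            // Logs go to stdout/stderr so the platform collects them,
            // rather than to /var/log inside the read-only image.
            System.out.println("state written to " + stateFile);
        }
    }

The container can then be launched with the Docker --read-only flag and a persistent volume mounted at the DATA_DIR path, so the same read-only image can be restarted on any node while the volume carries the state.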

Continue reading “Docker project: Can you have overlay2 speed and density with devicemapper? Yep.”


JBoss EAP 7 Domain deployments – Part 2: Domain deployments through the EAP 7.0 Management Console

In this blog series we will present several ways to deploy an application on an EAP Domain. The series consists of 5 parts. Each one will be a standalone article, but the series as a whole will present a range of useful topics for working with JBoss EAP.

  • Part 1: Set up a simple EAP 7.0 domain
  • Part 2: Domain deployments through the new EAP 7.0 Management Console (this article)
  • Part 3: Introduction to DMR (Dynamic Model Representation) and domain deployments from the Command Line Interface (CLI)
  • Part 4: Domain deployments from the REST Management API
  • Part 5: Manage EAP 6 hosts from an EAP 7.0 domain

In part 1 of this series on JBoss EAP 7 Domain deployments, we set up a simple EAP 7.0 domain with three hosts: the domain controller host0 and two slave hosts running several EAP 7.0 instances.

Review the Domain Configuration

JBoss EAP Simple Domain

In the following tutorial, we are going to see how to deploy an application on a JBoss EAP domain using the new EAP 7.0 Management Console.

Continue reading “JBoss EAP 7 Domain deployments – Part 2: Domain deployments through the EAP 7.0 Management Console”


The Benefits of Red Hat Enterprise Linux for Real Time

To deliver the best predictability for real-time workloads, Red Hat Enterprise Linux for Real Time provides state-of-the-art determinism on the bullet-proof RHEL (Red Hat Enterprise Linux) platform. The availability of this product raises some questions, such as: Do I need a real-time operating system? What are the benefits and drawbacks of running RHEL for Real Time? This article aims to clarify how a real-time operating system can help your business, and what kinds of workloads or industries can benefit from RT.

Introduction

Linux is the best choice for High-Performance Computing (HPC) due to years of Linux kernel optimization focused on delivering high average throughput for a vast number of different workloads. Being optimized for throughput means that the algorithms for processing data are geared towards processing the most data in the least amount of time. Examples of throughput-oriented operations are transferring megabytes per second over a network connection or the amount of data read from or written to a storage medium. These optimizations are the basis for the success of RHEL on servers and in HPC environments.

Nevertheless, these optimizations for high throughput can cause drawbacks for other, more specific workloads. For example, the RHEL kernel uses a busy-wait loop approach to avoid the scheduling overhead of some mutual exclusion methods, like spin locks and read/write locks. While busy-waiting to enter a critical section of code, the waiting task delays the scheduling of other, potentially higher-priority tasks on the same CPU. As a result, the higher-priority task cannot be scheduled and executed until the waiting task has completed, delaying the high-priority task's response.

Although this delay is acceptable for the majority of common workloads, it is not acceptable for the class of tasks whose correctness depends on meeting timing deadlines. This class of tasks, often classified as real-time tasks, has strict timing constraints: an answer must be delivered within a certain time period, and a late answer is wrong, or a failure.

For example, processing a 30-frames-per-second video requires the ability to deliver one frame every 33 milliseconds. If the system fails to deliver a frame every 33 milliseconds, the video processing will not only be late, but also wrong. It is natural to think that real-time therefore means delivering a quick response to an event, and to assume that real-time can be achieved only by making the system run faster. This assumption is a misconception, however. For instance, if the above-mentioned system ran fast enough to deliver a frame every 16.6 ms (60 frames per second), the video would be reproduced twice as fast. A faster response is not the expected behavior for processing this video, so the system would deliver not only early results but also wrong results. Hence, real-time systems are those that deliver predictable timing behavior instead of just trying to deliver faster results.
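To make the period/deadline idea concrete, here is a toy sketch (an ordinary Java loop on a stock JVM, not a real-time runtime, and not code from the article). Each frame has a deadline one period after the previous one; finishing too late, not running fast on average, is what counts as a failure, and finishing the whole job early would be wrong too.

    public class FrameDeadlineDemo {
        public static void main(String[] args) throws InterruptedException {
            final long periodNanos = 33_333_333L;   // ~33.3 ms per frame at 30 fps
            long nextDeadline = System.nanoTime() + periodNanos;

            for (int frame = 0; frame < 300; frame++) {
                processFrame(frame);                // stand-in for the real work

                long lateness = System.nanoTime() - nextDeadline;
                if (lateness > 0) {
                    // Deadline miss: the frame is late, and therefore "wrong"
                    // in the real-time sense, however correct its contents.
                    System.err.println("frame " + frame + " missed its deadline by "
                            + lateness / 1_000_000 + " ms");
                } else {
                    // Wait out the rest of the period: delivering frames
                    // faster than the period is not the desired behavior either.
                    Thread.sleep(-lateness / 1_000_000, (int) (-lateness % 1_000_000));
                }
                nextDeadline += periodNanos;
            }
        }

        private static void processFrame(int frame) {
            // Simulated frame processing; a real decoder/renderer goes here.
        }
    }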

To provide an enterprise environment for real-time workloads on Linux, Red Hat offers the RHEL for Real Time product. RHEL for Real Time is composed of the RHEL kernel optimized for determinism, along with a set of integrated tuning tools, to provide state-of-the-art determinism on Linux. The deterministic timing behavior depends both on the application's own algorithms being deterministic and on the Linux kernel's determinism in managing the system's shared resources.

Continue reading “The Benefits of Red Hat Enterprise Linux for Real Time”


DevNation Live Blog: Analyzing Java applications using Thermostat

Omair Majid, a Red Hat Senior Software Engineer, addressed the perennial issue of performance on the Java Virtual Machine. Performance issues originating in the OS, CPU, memory, and I/O plague modern systems and present a complex problem to developers, so the Thermostat tool focuses on easing serviceability while enhancing monitoring of the JVM.
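For a sense of the kind of JVM-level data such tooling surfaces, here is a minimal sketch using only the standard java.lang.management API. Thermostat ships its own agent and UI; this is just an illustration of the underlying heap, thread, and GC metrics, not Thermostat code:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class JvmMetricsSketch {
        public static void main(String[] args) {
            // Heap usage as seen by the running JVM.
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("heap used: %d MiB of %d MiB committed%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20);

            // Live thread count.
            System.out.println("live threads: "
                    + ManagementFactory.getThreadMXBean().getThreadCount());

            // Cumulative garbage-collection activity per collector.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("GC %s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }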

Continue reading "DevNation Live Blog: Analyzing Java applications using Thermostat"