A Puppet Module for tuned-adm

Some months ago, I developed and released a small Puppet module for tuned-adm. As this tool is a nice feature of RHEL, I think it is only fair to highlight it here on the Red Hat developer blog.

Quick overview of ‘tuned-adm’


In brief, this command tunes the operating system for you based on how you intend to use it. For instance, if the system is a regular server, you’ll use the ‘throughput-performance’ profile, while if you are running your Linux kernel on a laptop, you might prefer the ‘powersave’ profile to protect your battery and make it last longer.
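With the Puppet module, you can pin a profile declaratively instead of running the command by hand on each machine. A minimal sketch, assuming the module is named ‘tuned’ and exposes a profile parameter (check the module’s README for its actual interface):

```puppet
# Hypothetical usage: apply the throughput-performance profile on a server
class { 'tuned':
  profile => 'throughput-performance',
}
```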

To have a better idea of what options are available on your system, you can simply run the following command:

$ tuned-adm list
Available profiles:
- virtual-guest
- latency-performance
- powersave
- balanced
- throughput-performance
- virtual-host
Current active profile: /usr/lib/tuned/powersave/tuned.conf

And with the ‘active’ subcommand, you can quickly check which profile has been activated:

Continue reading “A Puppet Module for tuned-adm”


Red Hat Developer Exchange day – slides available

The day before Red Hat Summit in Boston, we hosted a Red Hat Developer Exchange day – a one day technical event which covered some great topics across four tracks:  DevOps, Programming on OpenShift, Languages and Tools, and Get more out of Red Hat Tools.  The feedback on these was consistently great, so we hope you find them useful too!

Track: DevOps

Track: Programming on OpenShift

Continue reading “Red Hat Developer Exchange day – slides available”


Changing objects in flight, without a Debugger


Do you ever wish you could change the behavior of some object in the middle of a bunch of method calls?  I sometimes do.

Maybe you want to set some properties on an object to replicate a fault.  Maybe you’d like to force a specific return value from a method for some reason.  These are both very possible, without altering the source code for your application.

I like to do this through Byteman.  Byteman is a JBoss project that deals with bytecode injection.  This means that you can inject code you want to run, specifying where you want to run it.  It’s sort of like AOP, except I think it allows more freedom for the user.  It’s also pretty easy to use.

Here’s how:

1)  Make a helper class to do the work you want done
2)  Put the helper class in a .jar
3)  Write the Byteman rule that tells the helper class to do its thing

That’s it!  So here’s the example.

Part 1, the helper class
**********************************

public void addMarker(Message msg) {
    System.out.println("Changing Message contents.");
    Body msgBody = msg.getBody();
    msgBody.add("Hey, I've got changed content");
    System.out.println("Done changing Message contents");
}
***********************************
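Step 3 is the Byteman rule that binds the helper to a location in your application. A minimal sketch, where the target class, method, and package names are hypothetical placeholders for your own code:

```
RULE add marker to message
CLASS com.example.MessageProcessor
METHOD process(Message)
HELPER com.example.MessageHelper
AT ENTRY
IF true
DO addMarker($1)
ENDRULE
```

The rule fires at entry to process(), and $1 refers to the method’s first argument, which gets handed to the helper’s addMarker() method.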

Continue reading “Changing objects in flight, without a Debugger”


Why PaaS and how does it work (just enough for Devs & not linux gurus)

Here’s a video of my OpenShift PaaS session from Developer Exchange day in Boston.  It’s something I’ve delivered a bunch of times – I hope you enjoy it!

Session Abstract:

Tired of waiting weeks to get a server, or of spinning up and configuring your virtual servers? Come learn how PaaS can make you a happier and more productive coder. We will start by explaining the meaning behind some acronyms you see thrown around: IaaS, PaaS, and SaaS. Next we turn to PaaS and talk about why it is a developer’s dream come true. From there we will use Red Hat’s open source PaaS, OpenShift, to demonstrate how easy it is to get started with a PHP web application. This will be about 1/3 slides and 2/3 demo and code in action. Bring your laptop if you want to follow along – it is that easy. If time permits we may also show a little bit of working with MySQL and PHP on OpenShift. Come in skeptical, leave with a smile and some working code.

Continue reading “Why PaaS and how does it work (just enough for Devs & not linux gurus)”


NUMA – Verifying it’s not hurting your application performance [video]

As I mentioned here, Joe Mario and I delivered this session at Red Hat’s Developer Exchange session in Boston.  There were a lot of great questions and we hope you’ll find this video-recorded session useful.

Abstract:

Now that you followed all the steps to make your application NUMA-aware, how do you know if you got it right, or if you shifted your performance problem elsewhere?

In this session, Don and Joe will:

  • discuss initial high-level steps to verify correct memory and CPU/process placement, including:
    • showing how performance can easily suffer with incorrect placement.
    • describing available options to correct placement.
  • discuss the open source tools, both available now and in development, which use the hardware’s performance counters to more accurately pinpoint:
    • where your program is making costly remote NUMA memory accesses,
    • whether and where other programs are inflicting NUMA-related performance penalties on your program,
    • how much those remote accesses are hurting your performance.
  • discuss various approaches for resolving these low-level issues.
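As a starting point for the placement checks above, the stock NUMA utilities are often enough. A sketch, assuming numactl and numastat are installed (the pid and binary name are placeholders):

```
# Show the NUMA topology of the machine
numactl --hardware

# Per-NUMA-node memory usage of a running process
numastat -p 1234

# Bind a process's CPUs and memory to node 0 to test placement effects
numactl --cpunodebind=0 --membind=0 ./myapp
```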

Continue reading “NUMA – Verifying it’s not hurting your application performance “


C/C++ Programming Abstractions for Parallelism and Concurrency – Part 2

Welcome to part 2 of this two-part article on C/C++ Programming Abstractions for Parallelism and Concurrency.  If you missed Part 1, view it here.

Supporting task-based parallelism

Let us now switch from concurrency to parallelism. I already mentioned that C++11 and C11 provide support for creating threads that execute additional work in parallel or concurrently. However, these facilities are resource abstractions (i.e., for operating system threads) rather than abstractions aimed purely at parallelism. One target for the latter is often task-based parallelism, which allows programmers to split part of a program into tasks (i.e., units of work). These tasks will run in parallel, but they can also depend on other tasks, in which case a dependent task will not start executing until all its dependencies are fulfilled (e.g., until a prior task has finished generating output that constitutes input for the current task). This essentially creates a directed acyclic graph (DAG) of tasks; tasks that are not ordered in the DAG with respect to each other can execute in parallel.

So, how can programmers express that they want to run a parallel task? When managing threads explicitly using the thread abstractions (explicit threading for short), this may look like this:

try {
  auto task = std::thread(work); // Execute the work() function
  // ... Do something else ...
  task.join();
  // ... Use the task's result ...
}
catch (const std::system_error& e) { error_fallback(); }

We explicitly create a new thread and join the thread (i.e., wait for it to complete its work) at task dependencies. We need error handling and a fallback in case we cannot create another thread for some reason.

Continue reading “C/C++ Programming Abstractions for Parallelism and Concurrency – Part 2”


C/C++ Programming Abstractions for Parallelism and Concurrency – Part 1

When writing parallel or multi-threaded programs, programmers have to deal with parallelism and concurrency. Both are related concepts but are not the same. In this article, we will review the differences between them and outline a few programming abstractions for both (in particular, atomic data types, Transactional Memory, and task-based parallelism). Red Hat Developer Toolset 1.1 ships with GCC 4.7, which provides (experimental) support for these particular features. Finally, we give a short outlook on future features proposed for inclusion in the C/C++ standards and considered for upstream GCC.

Concurrent execution refers to situations in which several (logical) processes or threads execute at the same time and are not guaranteed to be independent; for example, they could communicate with each other, wait for other threads to make progress, or have to execute operations that are mutually exclusive with the operations of other threads. In contrast, parallel execution refers to several processes performing independent operations that, informally, do not have to consider what the other parallel parts are doing.

Nonetheless, parallelism is related to concurrency in that a typical parallel program will also contain concurrent pieces of code (e.g., to merge the results of parallel computations into a single output value). Also, to benefit from parallelism in hardware, concurrent code often tries to execute as much as possible in parallel (see Amdahl’s Law).

Continue reading “C/C++ Programming Abstractions for Parallelism and Concurrency – Part 1”


Sys Admins: Developers Asking for Unsupported ToolChains?

If you have ever worked as a system administrator, you are familiar with developers who constantly want to use the latest toolchains, even to the point of rolling their own packages. Of course, the challenge is that if you are running a production environment, introducing change is always risky. If the change being introduced is from an unknown source, the risk is even higher. As a result, many admins rely on companies like Red Hat to provide some assurance regarding the quality of the components underpinning their applications. However, a company like Red Hat also has an interest in only supporting tools that are known to be stable and fault-free (as much as anything can be). Sometimes this doesn’t meet the developers’ needs. As a result, Red Hat has introduced (currently in beta) the Red Hat Software Collections bundle to try to find a happy medium.

All that being said, and arguably said before in press releases and the like :), there is another interesting use case for the software collections concept. Specifically, what about when the developers and the business have essentially abandoned an application? In other words, the application already delivers on its promises to customers, and, at this point, there is no desire to invest further resources in it. What does that mean for you system admins? Generally, it means you get caught holding the bag, maintaining an older toolchain just to support the application.
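Once a collection covering the needed toolchain is installed, running the legacy application against it is a one-liner. A sketch, assuming the Software Collections beta is installed and using the ‘python27’ collection as an example (collection names vary by release):

```
# List the collections installed on the system
scl --list

# Run a command inside the python27 collection's environment
scl enable python27 'python --version'
```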

Continue reading “Sys Admins: Developers Asking for Unsupported ToolChains?”
