Developer Tools

Released! Red Hat Software Collections now GA!

[Editor’s note: This article introduces v1.0 of Red Hat Software Collections, but we are now at 2.0. Read about the latest here.]

We’ve had so much interest in Red Hat Software Collections during beta testing, and now it’s here!

Red Hat has announced the general availability of Red Hat Software Collections 1.0!

“Available via select Red Hat Enterprise Linux subscriptions, Red Hat Software Collections delivers the newest, most stable versions of open source runtime components to subscribers on a lifecycle that is separate from Red Hat Enterprise Linux. By providing a more frequent release cadence of these developer oriented technologies, Red Hat has responded to the need for access to rapid language and database innovation while also continuing to deliver the stability of the Red Hat Enterprise Linux platform.”

So you now have two choices for selecting tools and/or databases that align with your applications’ lifecycle: 1) those in “base” Red Hat Enterprise Linux, with a 10-year lifecycle, and now 2) those in Red Hat Software Collections, with major releases approximately every 18 months (minor releases roughly every 9 months) and a 3-year lifecycle.

Continue reading “Released! Red Hat Software Collections now GA!”


Red Hat Developer Toolset 2.0 is now Generally Available!

gcc developers – this is what you’ve been asking for!

Red Hat Developer Toolset V2.0 is now generally available.

Red Hat has “announced the general availability of Red Hat Developer Toolset 2.0.  Available to all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription, Red Hat Developer Toolset provides access to the latest stable versions of open source development tools on a separate, accelerated life cycle.”

Continue reading “Red Hat Developer Toolset 2.0 is now Generally Available!”


Using Git with SVN


(Photo credit: Crystalline Radical)

Nowadays, while most people in our industry know about DVCS tools such as Git and Mercurial and what they can do, not everyone is aware that Git can be used with SVN. That is a shame, because on top of being the best way to learn Git, this feature lets you use all the nice tricks of a DVCS (offline commits, local history rewriting, commit searches, bisecting…) while still being stuck with a remote SVN server.

Some years ago, I wrote a quick overview of this feature on my personal blog, so I decided to update the content and move it here, especially since I have recently run into fellow Red Hatters who did not know about it either!

This HowTo was, and still is, driven by use cases, which is a good thing because most of those use cases match what any developer does on a daily basis with SVN. Hopefully, this will help readers relate to the tool, and it will also make this blog entry a nifty reference page for later on. (In this regard, the DZone Refcardz on Git will also be excellent reference material.)


As with SVN, the very first step with git-svn is to get the sources from the remote server. Whereas SVN checks out only the latest version of the source code (hence the name checkout), a DVCS retrieves the entire project history. Therefore, this step is no longer called a checkout but a clone.
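As a quick sketch, the clone step might look like this. Note that the server URL below is hypothetical, and the `-s` flag assumes the standard trunk/branches/tags layout; the command is echoed rather than run so the sketch works without a real server:

```shell
# Hypothetical SVN repository URL -- substitute your own server.
SVN_URL="http://svn.example.com/project"

# With plain SVN you would fetch only the latest revision:
#   svn checkout "$SVN_URL" project
# With git-svn you clone the entire history instead; -s maps the
# standard trunk/branches/tags layout onto git branches and tags.
echo git svn clone -s "$SVN_URL" project
```

Depending on the repository size, the clone can take a while, since every revision is fetched and converted locally.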

Continue reading “Using Git with SVN”


C/C++ Programming Abstractions for Parallelism and Concurrency – Part 2

Welcome to part 2 of this two-part article on C/C++ Programming Abstractions for Parallelism and Concurrency.  If you missed Part 1, view it here.

Supporting task-based parallelism

Let us now switch from concurrency to parallelism. I already mentioned that C++11 and C11 provide support for creating threads that execute additional work in parallel or concurrently. However, these facilities are resource abstractions (i.e., for operating system threads) rather than abstractions aimed purely at parallelism. One target for the latter is often task-based parallelism, which allows programmers to split a part of a program into tasks (i.e., units of work). These tasks will run in parallel, but they can also depend on other tasks, in which case a dependent task will not start executing until all its dependencies are fulfilled (e.g., until a prior task has finished generating output that constitutes input for the current task). This essentially creates a directed acyclic graph (DAG) of tasks; tasks that are not ordered in the DAG with respect to each other can execute in parallel.

So, how can programmers express that they want to run a parallel task? When managing threads explicitly using the thread abstractions (explicit threading for short), this may look like this:

try {
  auto task = std::thread(work); // Execute the work() function
  // ... Do something else ...
  task.join();                   // Wait for the task to finish
  // ... Use the task's result ...
} catch (const std::system_error &e) { error_fallback(); }

We explicitly create a new thread and join the thread (i.e., wait for it to complete its work) at task dependencies. We need error handling and a fallback in case we cannot create another thread for some reason.

Continue reading “C/C++ Programming Abstractions for Parallelism and Concurrency – Part 2”


C/C++ Programming Abstractions for Parallelism and Concurrency – Part 1

When writing parallel or multi-threaded programs, programmers have to deal with parallelism and concurrency. Both are related concepts but are not the same. In this article, we will review the differences between them and outline a few programming abstractions for both (in particular, atomic data types, Transactional Memory, and task-based parallelism). Red Hat Developer Toolset 1.1 ships with GCC-4.7, which provides (experimental) support for these particular features. Finally, we give a short outlook on future features proposed for inclusion in the C/C++ standards and considered for upstream GCC.

Concurrent execution refers to situations in which several (logical) processes or threads execute at the same time and are not guaranteed to be independent; for example, they could communicate with each other, wait for other threads to make progress, or could have to execute operations mutually exclusive with the execution of other threads’ operations. In contrast, parallel execution refers to several processes performing independent operations that, informally, do not have to consider what the other parallel parts are doing.

Nonetheless, parallelism is related to concurrency in that a typical parallel program will also contain concurrent pieces of code (e.g., to merge the results of parallel computations into a single output value). Also, to benefit from parallelism in hardware, concurrent code often tries to execute as much as possible in parallel (see Amdahl’s Law).

Continue reading “C/C++ Programming Abstractions for Parallelism and Concurrency – Part 1”


RHEL Developer Toolset 2.0 now in beta

Just under a year ago, we introduced Red Hat Enterprise Linux Developer Toolset 1.0, which provides the latest stable open source developer tool versions at an accelerated cadence compared to that of Red Hat Enterprise Linux. That version started with gcc 4.7 and gdb 7.4. Since then, we’ve added V1.1 with some additional components, and today we are announcing the V2.0 beta, which adds Eclipse and more:

Continue reading RHEL Developer Toolset 2.0 now in beta


How Long Does It Take to …

One common question in performance monitoring is how long it took for a program to do something. For example, you may want to know the time taken by database queries in PostgreSQL or by just-in-time translations in a Java Virtual Machine. SystemTap and user-space markers in Linux packages make it much easier to determine the duration of those operations.

The user-space markers compiled into Linux packages mark key points in the code where particular actions occur. The user-space markers also provide arguments with additional information about the action. For example, the markers and their available arguments in PostgreSQL can be listed using the SystemTap command:

$ stap -L 'process("postgres").mark("*")'
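As a hedged sketch of what a duration measurement might then look like (the marker names `query__start` and `query__done` are assumptions here; check the output of the listing command above for the actual names on your system), a SystemTap script can timestamp the start marker and report the elapsed time at the completion marker:

```systemtap
global start_time

probe process("postgres").mark("query__start") {
  start_time[tid()] = gettimeofday_us()   # remember when this thread began
}

probe process("postgres").mark("query__done") {
  if (tid() in start_time) {
    printf("query took %d us\n", gettimeofday_us() - start_time[tid()])
    delete start_time[tid()]
  }
}
```

Keying the array by `tid()` keeps measurements from concurrent backends from interfering with each other.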

The two user-space markers related to the start and completion of a query are:

Continue reading “How Long Does It Take to …”


Red Hat at the ISO C++ Standards Meeting, Bristol, UK

Red Hat has actively participated in the ISO group defining the C++ standard for many years, and continues to make a significant contribution. The Red Hat toolchain team was well-represented at the spring meeting of the standardization committee (technically JTC1/SC22/WG21) in Bristol, UK, last month: we had three people there for the full week, with one other visiting a couple of times during the week. In this article, Jason Merrill summarizes the main highlights and developments of interest to Red Hat’s customers and partners:

Continue reading Red Hat at the ISO C++ Standards Meeting, Bristol, UK


Starting with SystemTap

As I stare at this blank screen to start writing my first blog entry I have that same feeling that so many developers have when starting with an unfamiliar programming language or application.  The developers in our group realize that it is not easy starting from nothing and we strive to make it easier to productively use SystemTap to investigate performance problems.

Continue reading Starting with SystemTap


Array allocation in C++

This technical article covers a subtlety in C++ array allocation and how we changed the GNU C++ compiler to deal with it properly. When a programmer writes

T *p = new T[3];

the C++ compiler allocates room for at least three copies of objects of type T on the heap. These objects require 3 * sizeof(T) bytes. For this example, assume sizeof(T) is 12; then it is straightforward to allocate 36 bytes (for example, using malloc). But what happens if the array length is 3937053355 (or 16909515400900422315 on a 64-bit architecture)? Then 47244640260 bytes are required. This number cannot be expressed in 32 bits, so if 32-bit arithmetic is used to perform the multiplication, the result is a mere 4. Unless special care is taken, a C++ implementation will provide a pointer to a heap area that is much too small to hold the requested number of objects (4 bytes instead of 47244640260 bytes).

Continue reading “Array allocation in C++”