Containers

Have your own Microservices playground

Microservices are standing at the "Peak of Inflated Expectations". Countless developers and companies want to bring in this new development paradigm without knowing which challenges they will face. Of course, the challenges and the reality of an enterprise company that has been producing software for the last 10 or 20 years are totally different from those of a start-up that released its first software only a few months ago.

[Figure: the Gartner hype cycle]

Before adopting microservices as an architectural pattern, there are several questions that need to be addressed:

  • Which languages and technologies should I adopt?
  • Where and how do I deploy my microservices?
  • How do I perform service-discovery in this environment?
  • How do I manage my data?
  • How do I design my application to handle failure? (Yes! It will fail!) 
  • How do I address authentication, monitoring and tracing?

Continue reading “Have your own Microservices playground”

Debugging Java Applications using the Red Hat Container Development Kit

Containerization technology is fundamentally changing the way applications are packaged and deployed. The ability to create a uniform runtime that can be deployed and scaled is revolutionizing how many organizations develop applications. Platforms such as OpenShift also provide additional benefits such as service orchestration through Kubernetes and a suite of tools for achieving continuous integration and continuous delivery of applications. However, even with all of these benefits, developers still need to be able to utilize the same patterns they have used for years in order for them to be productive. For Java developers, this includes developing in an environment that mimics production and the ability to utilize common development tasks, such as testing and debugging running applications. To bridge the gap developers may face when creating containerized applications, the Red Hat Container Development Kit (CDK) can be utilized to develop, build, test and debug running applications.

Red Hat’s Container Development Kit is a pre-built container development environment that enables developers to create containerized applications targeting OpenShift Enterprise and Red Hat Enterprise Linux. Once the prerequisite tooling is installed and configured, starting the CDK is as easy as running the “vagrant up” command. Developers immediately have a fully containerized environment at their fingertips.
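For readers who have not tried it yet, a minimal sketch of that workflow looks like the following (the directory shown is only an example; use the location where your CDK installation placed its OpenShift Vagrantfile):

    # Change into the directory containing the CDK's OpenShift Vagrantfile
    # (path is an example; adjust to your installation)
    cd cdk/components/rhel/rhel-ose

    # Boot the RHEL-based VM and start the containerized OpenShift environment inside it
    vagrant up

    # Optionally open a shell inside the running VM to poke around
    vagrant ssh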

More information on the Red Hat Container Development Kit can be found on Red Hat Developers and on the Red Hat Customer Portal.

One of the many ways to utilize the CDK is to build, run, and test containerized applications on OpenShift. Java is one of the languages supported on OpenShift, and Java applications can be run in a traditional application server, such as JBoss, as well as in a standalone fashion. Even as runtime methodologies change, being able to debug running applications to validate functionality remains an important component of the software development process. Debugging a remote application in Java is made possible through the use of the Java Debug Wire Protocol (JDWP). By adding a few startup arguments, the application can be configured to accept remote connections, for example, from an Integrated Development Environment (IDE) such as Eclipse. In the following sections, we will discuss how to remotely debug an application deployed to OpenShift on the CDK from an IDE.
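As a rough sketch of what those startup arguments look like (the port number, pod name, and jar name below are placeholders), the JVM is started with the JDWP agent enabled and the debug port is then forwarded from the CDK's OpenShift cluster to the local machine so the IDE can attach to localhost:

    # Enable JDWP on the JVM running inside the container
    # (8787 is a commonly used debug port; suspend=n lets the app start without waiting for a debugger)
    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 -jar app.jar

    # From the host, forward the pod's debug port to localhost,
    # then point the IDE's remote-debug configuration at localhost:8787
    oc port-forward <pod-name> 8787:8787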

Continue reading “Debugging Java Applications using the Red Hat Container Development Kit”

From Fragile to Antifragile Software

One of my favourite books is Antifragile by Nassim Taleb, where the author talks about things that gain from disorder. Taleb introduces the concept of antifragility, which is similar to hormesis in biology or creative destruction in economics, and analyses its characteristics in great detail. If you find this topic interesting, there are also other authors who have examined the same phenomenon in different industries, such as Gary Hamel, C. S. Holling, and Jan Husdal. The concept of antifragility is the opposite of fragility. A fragile thing, such as a package of wine glasses, is easily broken when dropped, but an antifragile object would benefit from such stress. So rather than marking such a box with “Handle with Care”, it would be labelled “Please Mishandle” and the wine would get better with each drop (which would be awesome, wouldn’t it?).

[Figure: a shipping box labelled “Please Mishandle”]

It didn’t take long for the concept of antifragility to also be used to describe some software development principles and architectural styles. Some would say that the SOLID principles are antifragile, some would say that microservices are antifragile, and some would say software systems can never be antifragile. This article is my take on the subject.

According to Taleb, fragility, robustness, resilience and antifragility are all very different. Fragility involves loss and penalisation from disorder. Robustness is enduring stress with neither harm nor gain. Resilience involves adapting to stress and staying the same. And antifragility involves gain and benefit from disorder. If we try to relate these concepts and their characteristics to software systems, one way to define them would be as follows.

Continue reading “From Fragile to Antifragile Software”

JBoss EAP 7 on OpenShift

JBoss EAP 7 was recently released, and brings with it a whole host of new features and support, such as support for Java EE 7, reduced port usage, graceful shutdown, improved GUI and CLI management, optimizations for cloud and containers, and much more. EAP 7’s small footprint, fast startup time and support for modern Java and non-Java frameworks make it uniquely suitable for deployment onto PaaS cloud environments, and Red Hat happens to have a leading one: OpenShift.
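As a quick, hedged illustration of that suitability — assuming the EAP 7 S2I builder image stream has already been imported into the cluster (commonly named jboss-eap70-openshift) and using a placeholder Git repository — deploying an application can be as simple as:

    # Build and deploy from source using the EAP 7 S2I builder image
    # (image stream name and Git URL are assumptions; substitute your own)
    oc new-app jboss-eap70-openshift~https://github.com/<your-org>/<your-app>.git

    # Expose the generated service through a route so the application is reachable externally
    oc expose service <your-app>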

Continue reading JBoss EAP 7 on OpenShift

Carving the Java EE Monolith Into Microservices: Prefer Verticals Not Layers

Following my introduction blog about why microservices should be event-driven, I’d like to take a few more steps down that path. (Hopefully I saw you at jBCNconf and Red Hat Summit in San Francisco, where I spoke about some of these topics.) Follow me on Twitter @christianposta for updates on this project. In this article we discuss the first steps in carving up a monolith.

The monolith I’m exploring in depth for these articles is from the Ticket Monster tutorial, which has long been the canonical example of how to build an awesome application with Java EE and Red Hat technologies. We are using Ticket Monster because it’s a well-written app that straddles the “non-trivial” and “too complex for an example” line pretty well. It is perfect for illustrative purposes: we can point to it concretely and discuss the pros and cons of certain approaches with real example code. Please take a closer look at the domain and current architecture before the discussion that follows.

[Figure: Ticket Monster architecture]

Looking at the current architecture above, we can see things are nicely broken out already. We have the UI components, the business services, and the long-term persistence storage nicely separated and decoupled from each other, yet packaged as a single deployable (a WAR file in this case). If we examine the source code, we see it has a similar structure. If we were to deploy this, any change to any of the components would dictate a build, test, and release of the entire deployable. One of the prerequisites for doing microservices is autonomy of components, so they can be developed, tested, and deployed in isolation without disrupting the rest of the system. So what if we just carve out the different layers and deploy those independently? Would that give us some of that autonomy?

Continue reading “Carving the Java EE Monolith Into Microservices: Prefer Verticals Not Layers”

DevNation Live Blog: fabric8-ing Continuous Improvement with Kubernetes and Jenkins Pipeline

I’m sure you have heard and read a lot about microservices in the recent past and how they are here to defend our end users from the horrible monolith. Breaking an application up into many components is a great start, but taking your organization to the next level requires a platform focused on integrating microservices into your continuous improvement process. Red Hat’s James Rawlings & James Strachan led us through achieving our new goal of continuous delivery with containerized microservices. The way to go fast while developing is ensuring that each microservice has its own release cycle. Splitting your team up to align with your microservices will allow faster changes, the ultimate goal. In order to take advantage of many rapid releases, your deployment and testing processes must be automated. Automating your build process and creating continuous feedback loops is the way to go.

Continue reading DevNation Live Blog: fabric8-ing Continuous Improvement with Kubernetes and Jenkins Pipeline

Four different approaches to run WildFly Swarm in OpenShift

WildFly Swarm 1.0.0.Final was released this week at DevNation. It allows developers to package their application and a Java EE runtime in a single “fat-jar” file. To execute the application, a developer only needs a Java SE runtime installed and access to the “fat-jar”. No other downloads or configuration are needed.
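As a minimal sketch of what that means in practice (the artifact name below is hypothetical, and this assumes the project uses the WildFly Swarm Maven plugin, which by convention produces a *-swarm.jar), building and running the application looks like this:

    # Build the project; the wildfly-swarm-plugin attaches the runnable -swarm.jar artifact
    mvn clean package

    # Run it anywhere a Java SE runtime is available -- no application server installation required
    java -jar target/myapp-swarm.jar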

Besides being a well-known (and consolidated) Java EE runtime, WildFly Swarm is also an excellent choice for cloud-native Java apps thanks to its built-in support for third-party apps and frameworks like Logstash and Netflix OSS projects like Hystrix and Ribbon.

OpenShift v3 is Red Hat‘s PaaS based on Linux containers and Kubernetes. It’s an amazing cloud platform that is capable of managing containers based on pre-existing Docker images or of creating images from the source code of your application in a process called S2I (Source-to-Image).
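To give a feel for both modes (the image names and repository URL below are placeholders, not recommendations), deploying on OpenShift v3 boils down to a single oc new-app invocation:

    # Deploy from a pre-existing Docker image
    oc new-app <docker-image>

    # Or build from source with S2I: OpenShift clones the repository, builds it
    # inside the builder image, and produces a runnable application image
    oc new-app <builder-image>~https://github.com/<your-org>/<your-app>.git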

This post will show four different approaches to deploying a WildFly Swarm application in OpenShift v3.

Continue reading “Four different approaches to run WildFly Swarm in OpenShift”

Push it Real Good: Continuous Delivery for the people at the push of a button and repo

The Problem

Several months back, our emerging Developer Programs engineering team assembled during the last breaths of Brno’s Czech winter and dedicated a full day towards a deceptively complex task:

Be a user.  Assemble in groups and, using a technology stack of your choosing, conceive of and create an application to be presented to the full team in 6 hours.

Keep in mind that I hold my colleagues in extremely high regard; they’re capable, creative, and experienced.  Surely churning out a greenfield demo application would be a laughable exercise, done by lunch, affording us the rest of the afternoon to take in local culture (read: Czech beer).

So we started to break down the tasks and assign people to ’em:

  • Bootstrap the application codebase
  • Provision a CI environment to build and test
  • Stand up a deployment environment
  • Hook everything together so we’re all looking at the same thing through the dev cycle

We wanted the same conceptual infrastructure we use in delivering Red Hat products and our open source projects – authoritative systems and Continuous Delivery.

And therein lies the problem.  Of the 6 hours spent on this exercise, I noted that every team spent over four and a half hours getting set up, and only hacked furiously on their real job – the application – in the final sprints.

But that’s not the real problem.

The real problem is that users, all across the globe, have the problem.

And in this moment, it crystallized that it was now our mission to fix this.

Our industry has given developers wonderful tooling, frameworks, and runtimes. With containers, we even have standardized deployment.  And by the way, we require that you load your own containers onto the boat.

We’re missing a unified, cohesive story which brings applications out of the development environment and into service.

Continue reading “Push it Real Good: Continuous Delivery for the people at the push of a button and repo”

DevNation Live Blog: Make applications great again: OpenShift Enterprise 3 walk-through with Docker and Kubernetes

OpenShift 3 is all about Docker containers.  More importantly, it is about the management and orchestration of containerized applications.  Red Hat IT was a big consumer of OpenShift 2, and likewise we are moving as many applications as possible to containers.  OpenShift 3 is a big part of this strategy.  On a personal note, OpenShift 3 is an incredible product.  I even have it installed at home for various services 🙂

Continue reading DevNation Live Blog: Make applications great again: OpenShift Enterprise 3 walk-through with Docker and Kubernetes

DevNation Live Blog: Developing with OpenShift without the build waits

Red Hat’s Peter Larsen, the OpenShift Domain Architect, gave a talk at DevNation, “Developing on OpenShift without the build waits”. Developing with the OpenShift Platform-as-a-Service can be very compelling: developing and deploying software without having to worry about the infrastructure. When you first try OpenShift, it’s quite impressive to see how easy it is to develop and deploy software using the built-in templates that include preconfigured components such as databases and application servers. This allows developers to start coding right away.
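As a rough sketch of what those built-in templates look like from the command line (the template name and parameter below are placeholders, since the available templates vary by installation):

    # List the templates shipped in the shared 'openshift' namespace
    oc get templates -n openshift

    # Instantiate one of them, overriding a parameter if needed
    oc new-app --template=<template-name> -p <PARAMETER>=<value>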

Continue reading DevNation Live Blog: Developing with OpenShift without the build waits
