Why Kubernetes is The New Application Server

Have you ever wondered why you deploy your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for why Kubernetes is the new application server.

You might have noticed that many popular languages are interpreted and use “runtimes” to execute your source code. In theory, most Node.js, Python, and Ruby code can easily be moved from one platform (Windows, Mac, Linux) to another. Java applications go even further: the compiled Java classes are turned into bytecode, capable of running anywhere that has a JVM (Java Virtual Machine).

The Java ecosystem provides a standard format to distribute all the Java classes that are part of the same application. You can package these classes as a JAR (Java Archive), a WAR (Web Archive), or an EAR (Enterprise Archive) that contains the front end, back end, and embedded libraries. So I ask you: why do you use containers to distribute your Java application? Isn’t it already supposed to be easily portable between environments?

Continue reading “Why Kubernetes is The New Application Server”

An API Journey: From Idea to Deployment the Agile Way–Part III

This is part III of a three-part series describing a proposed approach for an agile API lifecycle: from ideation to production deployment. If you missed it or need a refresher, please take some time to read part I and part II.

This series is coauthored with Nicolas Massé, also a Red Hatter, and it is based on our own real-life experiences from our work with the Red Hat customers we’ve met.

In part II, we discovered how ACME Inc. is taking an agile API journey for its new Beer Catalog API deployment. ACME set up modern techniques for continuously testing its API implementation within the continuous integration/continuous delivery (CI/CD) pipeline. Now let’s move on to securing the exposed API.

Continue reading “An API Journey: From Idea to Deployment the Agile Way–Part III”

An API Journey: From Idea to Deployment the Agile Way–Part II

This is part II of a three-part series describing a proposed approach for an agile API lifecycle, from ideation to production deployment. If you missed it or need a refresher, please take some time to read part I.

This series is coauthored with Nicolas Massé, also a Red Hatter, and it is based on our own real-life experiences from our work with the Red Hat customers we’ve met.

In part I, we explored how ACME Inc. is taking an agile API journey for its new Beer Catalog API, and ACME completed the API ideation, contract design, and sampling stages. Now let’s move on to mocking.
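To make the idea concrete, here is a minimal sketch of what mocking an API can look like. The tooling (WireMock), the port, the endpoint, and the payload are all assumptions for illustration; the series walks through its own mocking setup.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class BeerCatalogMock {

    public static void main(String[] args) {
        // Start a standalone mock server on port 8080 (an arbitrary choice for this sketch).
        WireMockServer server = new WireMockServer(8080);
        server.start();

        // Stub GET /beers with a canned, contract-shaped response so API consumers
        // can start developing before the real implementation exists.
        server.stubFor(get(urlEqualTo("/beers"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("[{\"name\":\"Weissbier\",\"abv\":5.2}]")));

        System.out.println("Beer Catalog mock listening on http://localhost:8080/beers");
    }
}
```

With a stub like this in place, consumer teams can code against /beers long before the real Beer Catalog implementation is ready.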

Continue reading “An API Journey: From Idea to Deployment the Agile Way–Part II”

An API Journey: From Idea to Deployment the Agile Way–Part I

The goal of this series of posts is to describe a proposed approach for an agile API delivery process. It covers not only the development but also the design, the tests, the delivery, and the management in production. You will learn how to use mocking to speed up development and break dependencies, use a contract-first approach to define tests that harden your implementation, protect the exposed API through a management gateway, and, finally, secure deliveries using a CI/CD pipeline.
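As a rough illustration of the contract-first testing idea, the sketch below checks an implementation against expectations that would come from the API contract. The endpoint, fields, port, and the use of REST Assured with JUnit are assumptions for the example, not the series’ actual stack.

```java
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

public class BeerCatalogContractTest {

    // Hypothetical base URI of the implementation under test.
    private static final String BASE_URI = "http://localhost:8080";

    @Test
    public void listingBeersShouldHonorTheContract() {
        given()
            .baseUri(BASE_URI)
            .accept("application/json")
        .when()
            .get("/beers")
        .then()
            // The contract is assumed to promise a 200 response with a JSON array
            // whose entries carry at least a name and an abv field.
            .statusCode(200)
            .contentType("application/json")
            .body("[0].name", notNullValue())
            .body("[0].abv", notNullValue());
    }
}
```

Because the same expectations can run against a mock or the real implementation, a test like this fits naturally into the CI/CD pipeline described later in the series.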

I coauthored this series with Nicolas Massé, who is also a Red Hatter. It is based on our own real-life experiences from our work with the Red Hat customers we’ve met, as well as on my previous position as an SOA architect at a large insurance company. The series is adapted from a typical use case we run during workshops or at events such as APIdays.

Continue reading “An API Journey: From Idea to Deployment the Agile Way–Part I”

Automate integration CI/CD process

The Red Hat Fuse Integration Service 2.0 tech preview was released a few weeks ago. Because it is based on Red Hat OpenShift 3.3, which adds pipeline capability on top (also a tech preview on OpenShift), it brings us one step closer to more automated and agile continuous integration, as well as a one-stop deployment platform for us integration developers.

Continue reading “Automate integration CI/CD process”

The fast-moving monolith: how we sped up delivery from every three months to every week

Editor’s note: Raffaele Spazzoli is an Architect with Red Hat Consulting’s PaaS and DevOps Practice. This blog post reflects his experience working for KeyBank prior to joining Red Hat.

A recounting of the journey from a three-month to a one-week release cycle time.

This is the journey of KeyBank, a super-regional bank, from quarterly production deployments to weekly production deployments. In the process, we moved to all open source software, migrating from WebSphere to Tomcat and adopting OpenShift as our private Linux container cloud platform. We did this in the context of the digital channel modernization project, arguably the most important project for the bank at the time.

The scope of the digital channel modernization project was to migrate a 15-year-old Java web app (servlet-based, built on a homegrown MVC framework, and running on Java 1.6 and WebSphere 7.x) to a more modern web experience, and to create a new mobile web app.

This web app had grown more expensive to maintain and was struggling to meet our SLAs. It was the quintessential monolithic app. Our architectural objective was to create an API layer that separates the presentation logic (web or mobile) from the business logic; what lay ahead was an effort to completely modernize the continuous integration and deployment process.
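As a rough sketch of what that API layer boundary can look like (the resource, paths, and types below are hypothetical, not KeyBank’s actual code), a thin JAX-RS endpoint simply translates HTTP calls into business-logic calls so that web and mobile front ends share the same API:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical business-logic facade; in a real system this would delegate to
// the monolith's existing services rather than return a hard-coded value.
class AccountService {
    Balance balanceFor(String accountId) {
        return new Balance(accountId, 1234.56);
    }
}

// Simple representation shared by the web and mobile front ends.
class Balance {
    public final String accountId;
    public final double amount;

    Balance(String accountId, double amount) {
        this.accountId = accountId;
        this.amount = amount;
    }
}

// The API layer itself: a thin JAX-RS resource that only translates HTTP
// requests into business calls, keeping presentation and business logic apart.
@Path("/accounts")
public class AccountResource {

    private final AccountService accountService = new AccountService();

    @GET
    @Path("/{id}/balance")
    @Produces(MediaType.APPLICATION_JSON)
    public Balance getBalance(@PathParam("id") String accountId) {
        return accountService.balanceFor(accountId);
    }
}
```

Keeping the resource this thin is the design point: once every channel talks to the API layer, the business logic behind it can be modernized and redeployed independently.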

Continue reading “The fast-moving monolith: how we sped up delivery from every three months to every week”

Microservices CI/CD Pipelines in OpenShift

One of the greatest advantages of using Docker containers is that you can move them between environments. A promotion from a development environment to a production environment shouldn’t take more than a few seconds. This is one aspect of “Continuous Delivery”.

Because microservices architectures are “independently replaceable and upgradeable”, they are the ideal scenario for demonstrating a “Deployment Pipeline”.

Red Hat Developers has produced a free sample application called “Red Hat Helloworlds MSA” that demonstrates different aspects of microservices. (You can read more about this application in the post Have your own Microservices playground.) This application shows how you can independently deploy microservices built with different technologies (JAX-RS with WildFly Swarm, Spring Boot, Vert.x, Node.js, etc.) and how you can use different invocation patterns to integrate them. It also uses Netflix OSS, integrated via Kubeflix, and Zipkin for tracing.
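To give a flavor of how small each independently deployable service can be, here is a generic JAX-RS “hello” sketch; it is not code from the Red Hat Helloworlds MSA project itself, and the paths and class names are made up for illustration:

```java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.MediaType;

// Activates JAX-RS under /api; a runtime such as WildFly Swarm discovers the
// resource classes automatically.
@ApplicationPath("/api")
class HelloApplication extends Application {
}

// The entire "hello" microservice: one endpoint, independently buildable,
// packageable into its own container image, and deployable on OpenShift.
@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from the hello-service!";
    }
}
```

Because each service is this self-contained, it can be built into its own image and promoted through the pipeline without touching the other services.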

Continue reading “Microservices CI/CD Pipelines in OpenShift”

A simple guide to provisioning Vagrant boxes with Ansible

Over the last couple of weeks, I’ve been working on some Red Hat JBoss BPM Suite workshop material. One part of the workshop is a four-hour lab that guides attendees through the development of JBoss BPM Suite 6.x processes, rules, and applications. (Note that this workshop is available to our customers; please contact your Red Hat account manager or local Red Hat sales representative for more information.) I’m going to share part of that story with you today.

The lab sessions require a pre-installed and pre-provisioned virtual machine, which contains all the material required to run the lab. This means that I need to create, install, provision, and deploy a virtual machine that contains all the required tools and code to successfully run the lab sessions. Obviously this can be done by hand, but what’s the fun in that?!

Manual work is error-prone, hard to reproduce, and definitely hard to version control. Hence, we want an automated solution in which we can (declaratively) define the layout and configuration of our virtual machine. Say hello to Vagrant and Ansible!

Continue reading “A simple guide to provisioning Vagrant boxes with Ansible”
