Microservices Deployments Evolution

Microservices Are Here to Stay

A few years back, most software systems had a monolithic architecture and a slow release cycle. In recent years, there has been a clear move toward a microservices architecture, which is optimized for scalability, elasticity, failure tolerance, and speed of change. This trend has been further reinforced by the adoption of cloud and containers, which have also enabled practices such as DevOps.

Trends in the IT Industry

All these changes have resulted in a growing number of services to develop and an even bigger number of deployments to perform. It soon became clear that the explosion in the number of deployments could not be controlled using pre-microservices tools and techniques, and new approaches were born. In this article, we will see how Cloud Native platforms such as Kubernetes allow deployment of microservices at high scale with minimal human intervention.

Continue reading “Microservices Deployments Evolution”

Running Spark Jobs On OpenShift

Introduction:

A feature of OpenShift is jobs, and today I will explain how you can use jobs to run your Spark machine learning and data science applications against Spark running on OpenShift. You can run jobs as a batch or on a schedule, which provides cron-like functionality. If a job fails, by default OpenShift will retry the job creation. At the end of this article, I have a video demonstration of running Spark jobs from OpenShift templates against Spark running on OpenShift v3.
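For illustration, a minimal batch Job definition might look something like the sketch below; the image name and spark-submit arguments are hypothetical placeholders rather than anything from the original demo.

apiVersion: batch/v1
kind: Job
metadata:
  name: spark-pi
spec:
  template:
    metadata:
      name: spark-pi
    spec:
      containers:
      - name: spark-pi
        image: my-spark-driver-image   # hypothetical driver image
        command: ["/opt/spark/bin/spark-submit",
                  "--master", "spark://spark-master:7077",
                  "/opt/spark/examples/src/main/python/pi.py"]
      restartPolicy: OnFailure   # OpenShift retries failed jobs by default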

Continue reading “Running Spark Jobs On OpenShift”

Data Encapsulation vs. Immutability in JavaScript

A while ago, I wrote a fairly long post attempting to shed some light on a few things you can do in your JavaScript classes to enforce the concept of data encapsulation – or data “hiding”. But as soon as I posted it, I got some flak from a friend who is a Clojure programmer. His first comment about the article was this.

Mutability and data encapsulation are fundamentally at odds.

Eventually, he walked that back – but only just a little bit. His point, though, was intriguing. I asked him to explain what he meant.

Why is it so wrong to return the id in your example? I’m guessing it’s not. It might be darn useful to fetch it. In fact, it might greatly enhance the data model for it to be there. But you feel you must “hide” it. Why? Because it’s mutable or because you must go to great lengths to make it immutable. Because JavaScript. But if you were returning an immutable data structure, you wouldn’t even think about it. All that stress just falls away; you no longer care about hiding your data or encapsulating it. You only care that it’s correct and that it properly conveys the essential complexity of your system.

We’ll ignore his little dig on the language itself, for now. But maybe what he’s saying has some value. I do like the idea of a bunch of “stress just falling away”. Let’s look at where we ended up in that last post about data encapsulation.

const ID = Symbol('id');
class Product {
  constructor (name) {
    this.name = name;
    this[ID] = 2340847;
  }
  related () {
    return lookupRelatedStuff( this[ID] );
  }
}

So, here we’ve done our best to hide the id property using a Symbol as a property key. It’s not accessible within userland, and it’s barely visible unless you know about Reflect.ownKeys() or Object.getOwnPropertySymbols(). And of course, I never mentioned the name property in the last article. But the truth is, it suffers from the same issues that plague the id property. It really shouldn’t change. But to accomplish that, I have to replace every this.name with this[NAME] using a Symbol for the property key. And as my friend said, these properties are arguably useful in userland. I just don’t want them changed. I want immutability. How can I do this using JavaScript?
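To see just how "barely visible" that Symbol-keyed property is, here's a quick check (assuming the Product class above):

const widget = new Product('Acme Widget');
Object.keys(widget);                        // => ['name'] – no id in sight
const [sym] = Object.getOwnPropertySymbols(widget);
widget[sym];                                // => 2340847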

Is it cold in here, or is it just me?

Object.freeze() is nothing new. It’s been around forever. Let’s take a look at how we’d use it to make our Product instances immutable.

class Product {
  constructor (name) {
    this.name = name;
    this.id = 2340847;
    // make this instance immutable
    Object.freeze(this);
  }
}
const widget = new Product('a-widget');
// Setting the name to something else has no effect.
widget.name = 'something-else';
widget.name; // => 'a-widget'

There now. That wasn’t so hard, was it? We give a Product instance the deep freeze and return it. But what about those situations where you really need to mutate your application state? What if, for example, there’s a price that could change over time? Normally, we’d do something super simple, like just updating the price.

this.price = getUpdatedPrice(this);

But of course, if we’re going for immutability and the safety that comes along with that, then this is clearly not the correct approach. We are mutating the Product instance when we do this.price = someValue(). What can we do about it? One strategy might be to use Object.assign() to copy properties from one object to another, always generating a new object for every data mutation. Perhaps something like this.

class Product {
  updatePrice () {
    // check DB to see if price has changed
    return Object.assign(new Product(), this, { price: getNewPrice(this) } );
  }
}

Now we are getting somewhere. We can use Object.freeze() to make our objects immutable, and then Object.assign() to generate a new object using existing properties whenever something needs to be mutated. Let’s see how well this works.

acmeWidget.updatePrice();
TypeError: Cannot assign to read only property 'price' of object
    at repl:1:23
    at sigintHandlersWrap (vm.js:22:35)
    at sigintHandlersWrap (vm.js:96:12)
    at ContextifyScript.Script.runInThisContext (vm.js:21:12)
    at REPLServer.defaultEval (repl.js:313:29)
    at bound (domain.js:280:14)
    at REPLServer.runBound [as eval] (domain.js:293:12)
    at REPLServer.<anonymous> (repl.js:513:10)
    at emitOne (events.js:101:20)
    at REPLServer.emit (events.js:188:7)

Ugh! This is happening because I’ve got new Product() as the first parameter to the Object.assign() call, and once a Product is constructed, it’s frozen. I need to defer freezing the object until after it’s constructed. I could use a factory function to return frozen instances of Product. But really, why do I need the Product data type at all? Wouldn’t a simple Object be fine? For the sake of simplification and experimentation, let’s give it a shot.

// Use a factory function to return plain old JS objects
const productFactory = (name, price) => Object.freeze({ name, price });

// Always bump the price by 4%! 🙂
const updatePrice = (product) => Object.freeze(
      Object.assign({}, product, { price: product.price * 1.04 }));

const widget = productFactory('Acme Widget', 1.00);
// => { name: 'Acme Widget', price: 1 }

const updatedWidget = updatePrice(widget);
// => { name: 'Acme Widget', price: 1.04 }

widget;
// => { name: 'Acme Widget', price: 1 }

Lingering doubts

I still have doubts, though. For one thing, making a new instance for every change seems pretty inefficient, doesn’t it? And for another, what happens when my data model has nested objects as properties? Do I have to freeze those as well? It turns out, yes I do. All of the properties on my product object are immutable. But properties of nested objects can be changed. That freeze doesn’t go very deep. Maybe I can fix that by just freezing the nested objects.

const productFactory = (name, price) =>
  Object.freeze({
    name,
    price,
    metadata: Object.freeze({
      manufacturer: name.split(' ')[0]
    })
  });

Well, that’s OK, perhaps. But there is still a problem here. Can you tell what it is? What if my data model is nested several layers deep? That’s not very uncommon, and now my factory ends up looking something like this.

const productFactory = (name, price) =>
  Object.freeze({
    name,
    price,
    metadata: Object.freeze({
      manufacturer: name.split(' ')[0],
      region: Object.freeze({
        country: 'Denmark',
        address: Object.freeze({
          street: 'HCA Way',
          city: 'Copenhagen'
        })
      })
    })
  });

Ugh! This can start to get ugly real fast. And we haven’t even started to discuss collections of objects, like Arrays. Maybe my friend was right. Maybe this is a language issue.
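To be fair, you can tame some of this with a small recursive helper. Here's a minimal deepFreeze sketch – no cycle detection, and it only walks own enumerable properties – just to show the idea:

function deepFreeze (obj) {
  // freeze children first, then the object itself
  Object.keys(obj)
    .map(key => obj[key])
    .filter(val => typeof val === 'object' && val !== null && !Object.isFrozen(val))
    .forEach(deepFreeze);
  return Object.freeze(obj);
}

const product = deepFreeze({
  name: 'Acme Widget',
  metadata: { region: { country: 'Denmark' } }
});

product.metadata.region.country = 'Sweden'; // silently ignored (non-strict mode)
product.metadata.region.country;            // => 'Denmark'

Workable, but it's still boilerplate the language obliges you to write, which brings us back to my friend's point.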

You feel you must “hide” it. Why? Because it’s mutable or because you must go to great lengths to make it immutable. Because JavaScript.

OK, so is this it? Should I just throw in the towel and give up on immutability in my JavaScript applications? After all, I’ve gone this far without it. And I didn’t have that many bugs. Really… I promise! Well, one way to embrace this style fully is to write your application in Clojure or Scala or a similarly designed language where data is immutable. This is a fundamental part of the Clojure language. Instead of spending all of your time reading blog posts about fitting a square peg into a round hole, with Clojure you can just focus on writing your application and be done with it. But maybe that’s not an option. Maybe you’ve got to follow company language standards. And anyway, some of us kind of do like writing code in JavaScript, so let’s, for the sake of argument, take a look at some options. But first, let’s just review why we’re going to all of this trouble.

The case for immutability

So much of what makes software development hard (other than cache invalidation and naming things) has to do with state maintenance. Did an object change state? Does that mean that other objects need to know about it? How do we propagate that state across our system? If, instead of objects, we shift our thinking about data so that everything is simply a value, then there is no state maintenance to worry about. Don’t think of references to these values as variables. It’s just a reference to a single, unchanging value. But this shift in thinking must also affect how we structure and think about our code. Really, we need to start thinking more like a functional programmer. Any function that mutates data should receive an input value and return a new output value – without changing the input. When you think about it, this constraint pretty much eliminates the need for class and this. Or at least it eliminates the use of any data type that can modify itself in the traditional sense, for example with an instance method. In this worldview, the only use for class is namespacing your functions by making them static. But to me, that seems a little weird. Wouldn’t it just be easier to stick to native data types? Especially since the module system effectively provides namespacing for us. Exports are namespaced by whatever name we choose to bind them to when we require() the file.

product.js

const factory = (name, price) => Object.freeze({ name, price });

const updatePrice = (product) => Object.freeze(
  Object.assign({}, product, { price: product.price * 1.04 }));

module.exports = exports = { factory, updatePrice };

app.js

const Product = require('./product.js');
Product.factory; // => [Function: factory]
Product.updatePrice; // => [Function: updatePrice]

For now, just keep these few things in mind.

  • Think of variables (or preferably consts) as values, not objects. A value cannot be changed, while an object can be.
  • Avoid the use of class and this. Use only native data types, and if you must use a class, don’t ever modify its internal properties in place.
  • Never mutate native type data in place; functions that alter the application state should always return a copy with new values.

That seems like a lot of extra work

Yeah, it is a lot of extra work, and as I noted earlier, it sure seems inefficient to make a full copy of your objects every time you need to change a value. Truthfully, to do this properly, you need to be using shared persistent data structures which employ techniques such as hash map tries and vector tries to efficiently avoid deep copying. This stuff is hard, and you probably don’t want to roll your own. I know I don’t.
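To get a rough feel for the idea, note that even our naive Object.assign approach already shares the pieces it doesn't touch; persistent data structures take this sharing much further, applying it inside the structure itself. A toy illustration:

const base = Object.freeze({
  name: 'Acme Widget',
  metadata: Object.freeze({ manufacturer: 'Acme' })
});

// "Update" by copying the top level only; the untouched
// nested branch is shared by reference, not deep-copied.
const updated = Object.freeze(
  Object.assign({}, base, { price: 1.04 }));

console.log(updated.metadata === base.metadata); // => true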

Someone else has already done it

Facebook has released a popular NPM module called, strangely enough, immutable. By employing the techniques above, immutable takes care of the hard stuff for you, and provides an efficient implementation of:

A mutative API, which does not update the data in-place, but instead always yields new updated data.

Rather than turning this post into an immutable module tutorial, I will just show you how it might apply to our example data model. The immutable module has a number of different data types. Since we’ve already seen our Product model as a plain old JavaScript Object, it probably makes the most sense to use the Map data type from immutable.

product.js

const Immutable = require('immutable');
const factory = (name, price) => Immutable.Map({name, price});
module.exports = exports = { factory };

That’s it. Pretty simple, right? We don’t need an updatePrice function, since we can just use set(), and Immutable.Map handles the creation of a new reference. Check out some example usage.

app.js

const Product = require('./product.js');

const widget = Product.factory('Acme widget', 1.00);
const priceyWidget = widget.set('price', 1.04);
const clonedWidget = priceyWidget;
const anotherWidget = clonedWidget.set('price', 1.04);

console.log(widget); // => Map { "name": "Acme widget", "price": 1 }
console.log(priceyWidget); // => Map { "name": "Acme widget", "price": 1.04 }
console.log(clonedWidget); // => Map { "name": "Acme widget", "price": 1.04 }
console.log(anotherWidget); // => Map { "name": "Acme widget", "price": 1.04 }

Things to take note of here: first, take a look at how we are creating the priceyWidget reference. We use the return value from widget.set(), which oddly enough, doesn’t actually change the widget reference. Also, I’ve cloned priceyWidget. To create a clone we just need to assign one reference to another. And then, finally, an equivalent value for price is set on clonedWidget to create yet another value.

Value comparisons

Let’s see how equality works with these values.

// everything but widget has a price of 1.04,
// so widget is not equivalent to any of them
assert(widget !== priceyWidget);
assert(widget !== clonedWidget);
assert(!widget.equals(priceyWidget));
assert(!widget.equals(clonedWidget));
assert(!widget.equals(anotherWidget));

This makes intuitive sense. We create a widget, and when we change a property, the return value of the mutative function provides us with a new value that is not equivalent as either a reference or a value. Additional references to the new value instance priceyWidget are also not equivalent. But what about comparisons between priceyWidget and its clone? Or priceyWidget and a mutated version of the clone that actually contains all of the same property values? Whether we are comparing references with === or using the deep Map.equals, we find that equivalence holds. How cool is that?

// priceyWidget is equivalent to its clone
assert(priceyWidget === clonedWidget);
assert(priceyWidget.equals(clonedWidget));

// It's also equivalent to another, modified value
// because setting an equivalent value for 'price'
// to create this modification didn't
// actually change the value.
assert(priceyWidget === anotherWidget);
assert(priceyWidget.equals(anotherWidget));

This is just the beginning

When I started writing this post, it was primarily as a learning experience for me. My friend’s friendly jab got me interested in learning about immutable data in JavaScript and how to apply these techniques to my own code. What I really learned is that, while immutable systems have benefits, there are many hurdles to jump through when writing code this way in JavaScript. Using a high-quality package like immutable.js is a good way to address these complexities. I don’t think I will immediately change all of my existing packages to use these techniques, but now I have a new tool in my toolbox, and this exploration has opened my eyes to the benefits of thinking about data in new ways. If any of this has piqued your interest, I encourage you to read further. Topics such as nested data structures, merging data from multiple values, and collections are all worth exploring. Below, you’ll find links for additional reading.

  • immutable.js documentation: http://facebook.github.io/immutable-js/docs/#/
  • Persistent data structures: http://en.wikipedia.org/wiki/Persistent_data_structure
  • Hash map tries: http://en.wikipedia.org/wiki/Hash_array_mapped_trie
  • Vector tries: http://hypirion.com/musings/understanding-persistent-vector-pt-1

Getting Started with Fuse Integration Service 2.0 Tech Preview

For people who are just getting to know the technology, here is how I interpret FIS 2.0. Basically, it’s divided into two aspects.

1. Integration development: FIS uses Apache Camel as its core technology, which creates, orchestrates, and composes microservices into a super-lightweight, thin integration layer and becomes the API provider and service orchestrator by exposing RESTful or messaging service endpoints. You can choose to package and run it with either Spring-Boot or Karaf.

2. Application Deployment and Management: FIS takes advantage of the OpenShift platform and allows you to deploy your micro-integration services separately across a distributed environment, while it takes care of failover, high availability, load balancing, and service lookup for you.

So, now that we know what is in FIS 2.0, it’s time to take a closer look at how it is achieved. As a developer, you first need to decide whether to go with the Karaf (OSGi) or the Spring-Boot framework. I personally prefer Spring-Boot, because it matches most closely the microservice concept of a lighter deployment package. But it is up to you; after all, you are the god and creator of your application. After the framework is chosen, the developer starts developing the micro-integration services, composing between microservices or even using them to create a microservice. (With the two frameworks, the development experience of the route itself is basically the same: configuring Camel components.)

Once the developer is ready to deploy the integration service, we can then decide how to deploy it onto the OpenShift platform. In FIS 2.0 there are two options: Binary S2I builds the entire application locally and pushes it onto OpenShift, so the OpenShift platform uses it to create and build the container image it will run on; with Source S2I, everything is built on top of OpenShift, so the developer needs to set the location of the source code in order for OpenShift to retrieve it to build the application and the container image.

And that is all. FIS is actually much more powerful than that; it’s hard to describe in one go. Dive into it, and you will soon find out how fascinating the technology is and how it can help you resolve your current problems.

Here is a quick video that shows you how to get your first FIS 2.0 running.

The steps are as follows,

  • Install and start up OpenShift on your local machine 
  • Install FIS image stream definition into OpenShift (raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.0.redhat-000026/fis-image-streams.json) 
  • In JBoss Developer Studio, create a Camel Spring-Boot project using the new archetype in FIS 2.0
  • (Archetype catalog URL: https://maven.repository.redhat.com/earlyaccess/all/io/fabric8/archetypes/archetypes-catalog/2.2.180.redhat-000004/archetypes-catalog-2.2.180.redhat-000004-archetype-catalog.xml)
  • Deploy to OpenShift using the Binary S2I tool (maven plugin) – a rough CLI sketch follows below
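For orientation, the CLI side of these steps might look roughly like this; the project name is a placeholder, and the exact fabric8 plugin goal can vary by FIS version:

# start a local cluster and install the FIS image streams
oc cluster up
oc create -n openshift -f https://raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.0.redhat-000026/fis-image-streams.json

# from the project generated by the FIS archetype, build and
# deploy via Binary S2I using the fabric8 maven plugin
oc new-project fis-demo
mvn fabric8:deploy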

 

Automate integration CI/CD process

Red Hat Fuse Integration Service 2.0 tech preview was released a few weeks ago. As it’s based on Red Hat OpenShift 3.3, which has pipeline capability on top of it (tech preview on OpenShift as well), you are able to get one step closer to a more automated and agile continuous integration, as well as a one-stop deployment platform for us, the integration developers.

For the pipeline to work on OpenShift, you need Jenkins installed and running; OpenShift uses it to build, process, and handle all the workflows. If you are familiar with developing in OpenShift, building the pipeline is pretty simple and straightforward. The pipeline is defined as a build configuration in OpenShift: just create a build config and import it into the namespace you want it to be in. And that is it.
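For example (the file name is a placeholder):

# import the pipeline build config into the target namespace
oc create -f my-pipeline-bc.yaml -n myproject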

This is what the build config looks like; note the strategy type is called JenkinsPipeline. This will trigger the interaction with Jenkins and push the defined Jenkinsfile onto the server itself. The Jenkins server will then interact with OpenShift and start the automated CI/CD process.

kind: BuildConfig
apiVersion: v1
metadata:
  name: pipelinename
  labels:
    name: pipelinename
spec:
  triggers:
  - type: GitHub
    github:
      secret: secret101
  - type: Generic
    generic:
      secret: secret101
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage('build') {
            print 'build'
            openshiftBuild(buildConfig: 'buildconfigname', showBuildLogs: 'true')
          }
          stage('staging') {
            print 'stage'
            openshiftDeploy(deploymentConfig: 'deploymentconfigname')
          }
        }
As you can see in the above Jenkinsfile in the build configuration, it’s interacting with OpenShift itself through the OpenShift Pipeline Jenkins plugin. For instance, you could trigger an image build, deploy the application by calling the deployment config, tag an image, or even scale the number of containers up and down.
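As a rough sketch, extra stages for tagging and scaling might look like this (step names follow the OpenShift Pipeline Jenkins plugin; the stream and config names are placeholders):

stage('promote') {
  openshiftTag(sourceStream: 'myapp', sourceTag: 'latest',
               destinationStream: 'myapp', destinationTag: 'staging')
}
stage('scale') {
  openshiftScale(deploymentConfig: 'myapp', replicaCount: '3')
}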

This upper part of the blog is pretty generic to most of the applications running on OpenShift, and Fuse Integration Service is just another application running on top of it. But this application happens to contain pattern-based integration technology with 160+ built-in components, so we don’t have to waste time and energy on repetitive stuff. No big deal. 🙂

No matter what version you are using, this pipeline capability can help you automate your integration microservice.

Here is a quick demo video that takes you through the entire process.

Using Pipelines in OpenShift 3.3+ for CI/CD

It’s been a while since Red Hat released version 3.3 of OpenShift Container Platform, and this version is full of features.

One of my favorites is the support for Pipelines (Tech Preview for now) that lets you easily integrate Jenkins builds on your OpenShift (Origin) Platform.

OpenShift Pipelines

OpenShift Pipelines are based on the Jenkins Pipeline plugin. (https://jenkins.io/solutions/pipeline/)

Integrating Jenkins Pipelines into OpenShift unlocks all the features of the CI/CD world, enabling its users to easily manage repeatable tasks.

As you can imagine, OpenShift lets you run a containerized version of Jenkins in one of your projects and then, after setting the right permissions for the Jenkins ServiceAccount, it’ll do the job for you.

Pipelines are nothing more than a BuildConfig with type ‘JenkinsPipeline’.

But let’s take a more in-depth look using this simple scenario below:

  1. Jenkins OpenShift project: The base project, handling the Jenkins container and all the pipelines.
  2. Development OpenShift project: The project used for the development environment, it will handle the BuildConfig for building the app from source.
  3. Testing OpenShift project: The project used for the testing environment, it will not use any BuildConfig and it’ll expect ImageStream to be the only source for new deployments.

We’ll create two Pipelines that will simulate a Continuous Integration scenario:

  • Development Pipeline: It will trigger the BuildConfig for the development project and handle its deployment.
  • Testing Pipeline: It will handle the tagging/pulling/pushing operations to let the image flow from development project to testing project and then it will schedule a new deployment.

OpenShift start

First of all, I’ll start my OpenShift cluster; you can skip to the next section in case you’re already up & running.

For running OpenShift on my laptop, the easiest and fastest method I found is “oc cluster up”. All you need to do is to have a working Linux container daemon and an updated origin-clients package. On Fedora 25 I’ve successfully installed “origin-clients-1.3.1” from the default repos.

So that’s all, let’s “oc cluster up” my OpenShift platform:

[alex@freddy ~]$ oc cluster up --host-data-dir=/var/lib/origin/openshift.local.data --use-existing-config --version=v1.3.1 --public-hostname=192.168.123.1
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... Deleted existing OpenShift container
-- Checking for openshift/origin:v1.3.1 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
-- Checking type of volume mount ... Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
Using public hostname IP 192.168.123.1 as the host IP
Using 192.168.123.1 as the server IP
-- Starting OpenShift container ...
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ...
Now using project "myproject" on server "https://192.168.123.1:8443".
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://192.168.123.1:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin

Please note: I’ve manually created the “host-data” folder, the other options used are self-explanatory.

The Jenkins project

We should now be ready to sign into our OpenShift platform.

Now, let’s create our first project, the Jenkins project:

Select the “Jenkins ephemeral” template.

Leave all the parameters set to default and press create. At the end, you should see a notice like the following: Make a note of the generated password. You may need this in the future. (Anyway you can easily recover it should you need it).


Enabling Pipelines feature (currently in Tech Preview)

As you can see by clicking on the Builds tab menu, there is no trace of Pipelines support. As specified in the title, this feature is a tech preview, so we need to activate it.

To activate the Pipelines feature, we need to create a JS config file enabling it:

# echo "window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.pipelines = true;" >> /var/lib/origin/openshift.local.config/master/tech-preview.js

Please note: You can create the file in a location you prefer. Then we need to inject the file through the master-config.yaml file; in my case, using “oc cluster up”, it’s located in “/var/lib/origin/openshift.local.config/master/”. Place the following lines in your config file:

assetConfig:
  ...
  extensionScripts:
    - /var/lib/origin/openshift.local.config/master/tech-preview.js

Then restart your OpenShift master. You should then be able to find the Pipelines section under the Builds tab.

We’re almost ready to start working on our pipelines.

The development project

We can now create the development project, which we’ll use as a root for source building:

$ oc new-project development --display-name="Development" --description="Development project"
Now using project "development" on server "https://192.168.123.1:8443".

We can now use the template I just prepared for our development environment. In this demo, we’ll use the nodejs-example application available in the standard set of OpenShift templates. Let’s populate the newly created development project:

$ oc new-app https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-dev.json
--> Deploying template nodejs-example for "https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-dev.json"

Node.js
———
This is an example of a Node.js application with no database. For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

The following service(s) have been created in your project: nodejs-example.

For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

* With parameters:
* Name=nodejs-example
* Namespace=openshift
* Memory Limit=512Mi
* Git Repository URL=https://github.com/alezzandro/nodejs-ex.git
* Git Reference=
* Context Directory=
* Application Hostname=
* GitHub Webhook Secret=cR48n2GX67ADfxwi63uGomiXjxgMUCEykekbNR0G # generated
* Generic Webhook Secret=Hvx3stEhQuAmKPnjaujQHvYFV1cl1cvmh4IjXnri # generated
* Database Service Name=
* MongoDB Username=
* MongoDB Password=
* Database Name=
* Database Administrator Password=
* Custom NPM Mirror URL=

--> Creating resources with label app=nodejs-example ...
service "nodejs-example" created
route "nodejs-example" created
imagestream "nodejs-example" created
buildconfig "nodejs-example" created
deploymentconfig "nodejs-example" created
--> Success
Use 'oc start-build nodejs-example' to start a build.
Run 'oc status' to view your app.

As you can see by running “oc get pods”, no deployment has started, so no pods will be seen. This is the desired behavior: we want to manage the build process and the deployment through a Jenkins Pipeline. To achieve this, I’ve just edited the original nodejs-ex template and removed all the triggers from the DeploymentConfig (a sketch of the relevant fragment follows the listing below). Looking at our development project, we’ll have created the following elements at the end: a BuildConfig, an ImageStream, a DeploymentConfig, a Route, and a Service.

$ oc get all
NAME
bc/nodejs-example
NAME
is/nodejs-example
NAME
dc/nodejs-example
NAME
routes/nodejs-example
NAME
svc/nodejs-example
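For reference, this is roughly what the edited DeploymentConfig fragment looks like (a sketch trimmed to the relevant part; the full template is in the repository linked above):

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: nodejs-example
spec:
  replicas: 1
  triggers: []    # ImageChange/ConfigChange triggers removed;
                  # only the pipeline starts deployments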

The testing project

We can now set up the testing project. As with the development project, I’ve already prepared a template, this time removing the BuildConfig section. We’ll promote the container built in the development project to testing using a Jenkins Pipeline. Let’s create and populate the environment:

$ oc new-project testing --display-name="Testing" --description="Testing project"
Now using project "testing" on server "https://192.168.123.1:8443".

You can add applications to this project with the ‘new-app’ command. For example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

$ oc new-app https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-test.json
--> Deploying template nodejs-example for "https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-test.json"

Node.js
———
This is an example of a Node.js application with no database. For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

The following service(s) have been created in your project: nodejs-example.

For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

* With parameters:
* Name=nodejs-example
* Namespace=openshift
* Memory Limit=512Mi
* Git Repository URL=https://github.com/alezzandro/nodejs-ex.git
* Git Reference=
* Context Directory=
* Application Hostname=
* GitHub Webhook Secret=XFlNUpDsLBotlrcyAnRQdLkKyq65iKE6xOMxqQr5 # generated
* Generic Webhook Secret=LX3PdBcU4dTKPyvTi8aw02VeXBjCxuJpyA7kgV8c # generated
* Database Service Name=
* MongoDB Username=
* MongoDB Password=
* Database Name=
* Database Administrator Password=
* Custom NPM Mirror URL=

--> Creating resources with label app=nodejs-example ...
service "nodejs-example" created
route "nodejs-example" created
imagestream "nodejs-example" created
deploymentconfig "nodejs-example" created
--> Success
Run 'oc status' to view your app.

As you can see by running “oc get pods”, no deployment has started, so no pods will be seen. This is the desired behavior: we want to manage the deployment through a Jenkins Pipeline. To achieve this, I’ve just edited the original nodejs-ex template and removed all the triggers from the DeploymentConfig. Looking at our testing project, we’ll have the following elements created at the end:

$ oc get all
NAME
is/nodejs-example
NAME
dc/nodejs-example
NAME
routes/nodejs-example
NAME
svc/nodejs-example

Please note: As I said before, there is no BuildConfig; we’ll promote the container built in the development project to testing, using a Jenkins Pipeline.
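The promotion the pipeline performs is roughly equivalent to this manual tag operation:

oc tag development/nodejs-example:latest testing/nodejs-example:latest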

Pipelines definition and import

Ok, we’re now ready to define our Pipelines. I’ve prepared two Jenkins pipelines: one for the development project and one for the testing project. Return to the Jenkins project and import the two BuildConfigs containing the pre-configured pipelines:

$ oc project jenkins
Now using project "jenkins" on server "https://192.168.123.1:8443".

$ oc create -f https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/pipeline/development-pipeline.yaml
buildconfig "development-pipeline" created

$ oc create -f https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/pipeline/promote2testing-pipeline.yaml
buildconfig "testing-pipeline" created

$ oc get bc
NAME                   TYPE              FROM      LATEST
development-pipeline   JenkinsPipeline             0
testing-pipeline       JenkinsPipeline             0

We can now take a look at what the two pipelines will be able to do.

Jenkins development pipeline

apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    pipeline.alpha.openshift.io/uses: '[{"name": "nodejs-example", "namespace": "development",
      "kind": "DeploymentConfig"}]'
  creationTimestamp: 2016-12-22T13:54:23Z
  labels:
    app: jenkins-pipeline-development
    name: development-pipeline
    template: application-template-development-pipeline
  name: development-pipeline
  namespace: jenkins
  resourceVersion: "5781"
  selfLink: /oapi/v1/namespaces/jenkins/buildconfigs/development-pipeline
  uid: 24c166c2-c84e-11e6-b4f7-68f7286606f4
spec:
  output: {}
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    type: None
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage 'build'
          openshiftBuild(buildConfig: 'nodejs-example', showBuildLogs: 'true', namespace: 'development')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'nodejs-example', namespace: 'development')
        }
    type: JenkinsPipeline
...

As you can see, this BuildConfig’s type is “JenkinsPipeline”, with a well-defined “jenkinsPipelineStrategy” defined through a “jenkinsfile”. The pipeline itself is composed of two stages:

  1. Build: we start the build process in the project/namespace “development” through the “BuildConfig” named “nodejs-example”.
  2. Deploy: after the build, we can then start a new deployment in the project/namespace “development” through the “DeploymentConfig” named “nodejs-example”.

 

Jenkins testing pipeline

$ oc get bc/testing-pipeline -o yaml
apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    pipeline.alpha.openshift.io/uses: '[{"name": "nodejs-example", "namespace": "testing",
      "kind": "DeploymentConfig"}]'
  creationTimestamp: 2016-12-22T13:54:30Z
  labels:
    app: jenkins-pipeline-testing
    name: testing-pipeline
    template: application-template-testing-pipeline
  name: testing-pipeline
  namespace: jenkins
  resourceVersion: "5994"
  selfLink: /oapi/v1/namespaces/jenkins/buildconfigs/testing-pipeline
  uid: 292fa5e5-c84e-11e6-b4f7-68f7286606f4
spec:
  output: {}
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    type: None
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage 'tag'
          openshiftTag(namespace: 'development', sourceStream: 'nodejs-example', sourceTag: 'latest', destinationNamespace: 'testing', destinationStream: 'nodejs-example', destinationTag: 'latest')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'nodejs-example', namespace: 'testing')
        }
    type: JenkinsPipeline
...

As in the previous BuildConfig, you can see this BuildConfig’s type is “JenkinsPipeline”, with a well-defined “jenkinsPipelineStrategy” defined through a “jenkinsfile”. The pipeline itself is composed of two stages:

  1. Tag: we tag the latest ImageStream built in the “development” project, setting the destination to the “testing” project. Through this action, we’re promoting the image from the dev to the test environment.
  2. Deploy: after the image promotion, we can then deploy the new image in the “testing” project through the “DeploymentConfig” named “nodejs-example”.

 

Jenkins Service Account

Now, we need to enable the Jenkins service account (sa) to access and edit resources in the “development” and “testing” projects:

$ oc policy add-role-to-user edit system:serviceaccount:jenkins:jenkins -n testing
$ oc policy add-role-to-user edit system:serviceaccount:jenkins:jenkins -n development

Run the pipelines!

We’re now ready to see the pipelines in action! You can access the Pipelines page through Builds->Pipelines. 

We’re almost ready, just click on the “Start Pipeline” button for the “development-pipeline”. You’ll see the Build starting and moving forward:

Clicking on the “View Log” link will redirect you to the Jenkins login page. You can gain access with the user “admin” and the generated password. The password is in the environment variables of the Jenkins pod.
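If you need to recover it, something like this should work (assuming the standard Jenkins ephemeral template variable name):

oc env dc/jenkins --list -n jenkins | grep JENKINS_PASSWORD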

At end of the process, you’ll see all the steps completed and marked in green:

We now have at least one image ready for the promotion process. We can start the testing-pipeline:

Finally, we can check the result by querying OpenShift using the web interface: Development project:

Testing project:

Or by console:

$ oc project development
Now using project "development" on server "https://192.168.123.1:8443".

$ oc get pods
NAME                     READY     STATUS      RESTARTS   AGE
nodejs-example-1-build   0/1       Completed   0          23m
nodejs-example-1-trurc   1/1       Running     0          22m

$ oc project testing
Now using project "testing" on server "https://192.168.123.1:8443".

$ oc get pods
NAME                     READY     STATUS    RESTARTS   AGE
nodejs-example-1-b1kcf   1/1       Running   0          19m

That’s all! Should you have any doubts, please comment!

About Alessandro

Alessandro Arrichiello is a Platform Consultant for Red Hat Inc. He has a passion for GNU/Linux systems that began at age 14 and continues today. He has worked with tools for automating enterprise IT: configuration management and continuous integration through virtual platforms. He’s now working on distributed cloud environments involving PaaS (OpenShift), IaaS (OpenStack), and process management (CloudForms), building containers, creating instances, managing HA services, and building workflows.

Unlock your PostgreSQL data with Red Hat JBoss Data Virtualization

And here we go for another episode of the series: “Unlock your [….] data with Red Hat JBoss Data Virtualization.” Through this blog series, we will look at how to connect Red Hat JBoss Data Virtualization (JDV) to different and heterogeneous data sources.

JDV is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. It makes data spread across physically diverse systems — such as multiple databases, XML files, and Hadoop systems — appear as a set of tables in a local database. By providing the following functionality, JDV enables agile data use:

  1. Connect: Access data from multiple, heterogeneous data sources.
  2. Compose: Easily combine and transform data into reusable, business-friendly virtual data models and views.
  3. Consume: Make unified data easily consumable through open standards interfaces.

It hides complexities, like the true locations of data or the mechanisms required to access or merge it. Data becomes easier for developers and users to work with. This post will guide you step-by-step on how to connect JDV to a PostgreSQL database using Teiid Designer. We will connect to a PostgreSQL database using the PostgreSQL JDBC driver.

Continue reading “Unlock your PostgreSQL data with Red Hat JBoss Data Virtualization”