Getting started with OpenShift Java S2I

Introduction

The OpenShift Java S2I image, which allows you to automatically build and deploy your Java microservices, has just been released and is now publicly available. This article describes how to get started with the Java S2I container image, but first let’s discuss why having a Java S2I image is so important.

Continue reading “Getting started with OpenShift Java S2I”



Announcing Fuse for agile integration on the cloud – FIS 2.0 release

Today, I am very pleased to announce the GA of Fuse Integration Service 2.0. This release makes integration applications more portable and flexible, and allows agile developers to react faster to business needs by supporting microservice architectures. Developers can now realize the benefits of microservices within integration projects, leveraging integration patterns while breaking up monolithic applications and reducing the size of services pushed onto older ESB technology.

With FIS 2.0, developers can choose the technology best suited to composing and integrating microservices: a more lightweight runtime provides faster deployment, packaging is simplified, the path from development to production is smoother, and the platform manages the distributed application and takes care of fault tolerance, all at the same time.

The list goes on. The best thing about Fuse Integration Service 2.0 is that it can serve as a best-practice foundation, letting developers focus on building business value in microservices without having to solve every problem on that list themselves. And here is why…

Superior pattern-based solution for building and composing microservices – FIS 2.0 comes with Apache Camel 2.18. With 150+ built-in components and data transformation, it fits perfectly with the microservice principle of building smart endpoints: developers simply configure connectors to the various systems and services they need.
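For instance, a smart endpoint can be as small as a single Camel route. The sketch below is illustrative only; the endpoint URIs are placeholders, and the JMS endpoint assumes a configured JMS component:

import org.apache.camel.builder.RouteBuilder;

// Illustrative smart endpoint: poll a directory and forward each file to a queue.
public class OrderIntakeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file:orders/in?noop=true")                 // consume files, leave the originals in place
            .log("Received order ${header.CamelFileName}")
            .to("jms:queue:orders");                     // hand the payload to the messaging system
    }
}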

Enterprise Integration Patterns – a proven best practice at the heart of agile integration; developers can compose microservices with ease (for example, a simple pipeline) and reuse a pattern rather than reinventing it each time.
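As a hypothetical example, the Content-Based Router pattern looks like this in Camel's Java DSL; the endpoint names and the XPath predicate are placeholders:

import org.apache.camel.builder.RouteBuilder;

// Sketch of a Content-Based Router: send high-priority orders to a dedicated channel.
public class OrderRouter extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:incomingOrders")
            .choice()
                .when(xpath("/order/@priority = 'high'"))
                    .to("direct:expressHandling")
                .otherwise()
                    .to("direct:standardHandling")
            .end();
    }
}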

Excellent developer experience with Fuse Tooling – From getting started to real-world production deployments, Fuse provides a comprehensive set of tools to support developers through the complete application life cycle. Developers can choose between traditional Java programming styles or the drag-and-drop features of the tooling. Debugging and unit testing can also be done in the IDE with the testing suite libraries, and Maven is included for dependency and build management. To get started, Fuse also provides a set of quickstart examples that flatten the learning curve and are equally useful to experienced developers who want to rapidly prototype new projects.
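A unit test for a route can stay entirely inside the IDE. Here is a minimal sketch using camel-test's CamelTestSupport; the route and endpoints are invented for illustration:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

// Sketch of an in-IDE unit test for a Camel route.
public class GreetingRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:start")
                    .transform(simple("Hello ${body}"))
                    .to("mock:result");
            }
        };
    }

    @Test
    public void greetsTheCaller() throws Exception {
        getMockEndpoint("mock:result").expectedBodiesReceived("Hello Fuse");
        template.sendBody("direct:start", "Fuse");
        assertMockEndpointsSatisfied();
    }
}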


Containerized applications – FIS 2.0 offers a repeatable and declarative environment, allowing developers to quickly package an integration application into a container and use the same image across development, QA, and production environments. Fuse provides a pre-defined base image for Docker-formatted containers; developers layer their application logic on top of it and generate images using the provided tooling.


Support for Spring Boot and Karaf runtimes – FIS now officially supports Spring Boot, a widely adopted environment for microservices. Spring Boot's "autowire" capability and its ability to create lightweight, stand-alone applications make it a natural fit as a microservice runtime. Karaf, as an OSGi runtime, is also supported for existing Fuse developers.
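A Camel route packaged as a Spring Boot application can be as small as the following sketch, which assumes the Camel Spring Boot support is on the classpath; class and endpoint names are illustrative:

import org.apache.camel.builder.RouteBuilder;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// Minimal sketch of a Camel route running on the Spring Boot runtime.
@SpringBootApplication
public class IntegrationApplication {

    public static void main(String[] args) {
        SpringApplication.run(IntegrationApplication.class, args);
    }

    @Bean
    public RouteBuilder heartbeatRoute() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                from("timer:heartbeat?period=5000")
                    .setBody(constant("FIS 2.0 route on Spring Boot"))
                    .log("${body}");
            }
        };
    }
}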

 

API Support and Service Resiliency – With the REST DSL, developers can define a REST endpoint within minutes and automatically export its API documentation (Swagger). When connecting APIs, it is important to maintain service resiliency. Fuse Integration Service adopts Kubernetes as its orchestration layer for containers; Kubernetes detects any failure of a service and recovers by spinning up another running instance. By supporting Hystrix in Camel, Fuse ensures a failure is isolated without affecting other instances.
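As an illustration (a sketch, not code shipped with FIS), a REST endpoint with a Hystrix-protected downstream call might look like this in the Camel Java DSL; the servlet REST component, the http4 endpoint, and the Swagger context path are assumptions about the project setup:

import org.apache.camel.builder.RouteBuilder;

// Sketch: REST DSL endpoint plus the Hystrix circuit-breaker EIP from Camel 2.18.
public class OrderApiRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        restConfiguration()
            .component("servlet")            // assumes camel-servlet is available
            .apiContextPath("/api-doc");     // where the generated Swagger document is served

        rest("/orders")
            .get("/{id}").to("direct:getOrder");

        from("direct:getOrder")
            .hystrix()                                                   // isolate the downstream call
                .toD("http4://inventory-service/orders/${header.id}")    // assumes camel-http4
            .onFallback()
                .transform().constant("{\"status\":\"inventory service unavailable\"}")
            .end();
    }
}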

Complete CI/CD cycle – FIS 2.0 provides a great continuous-integration experience, with features spanning the IDE, Maven, and source control with Git and other SCM applications; the source-to-image plugin helps developers build images either for local testing or for deployment onto an actual cloud platform. Out-of-the-box pipeline support helps developers create a complete continuous-delivery cycle.

Container Orchestration – Last but not least, as the number of microservices grows, developers need a way to manage and orchestrate all the containers and services. Kubernetes in FIS 2.0 helps to automatically discover services, load-balance incoming requests, handle clustering, and dynamically configure services as they come online. Operations can scale services up and down on demand and manage resources as needed.

Of course, don’t worry if your application cannot immediately jump to microservices; Fuse also supports traditional ESB approaches. TRY IT TODAY and get started with the agile integration experience!

 

 



An Incremental Path to Microservices

As a consultant for Red Hat, I have the privilege of seeing many customers. Some of them are working to find ways to split their applications into smaller chunks to implement the microservices architecture. I’m sure this trend is widespread even beyond my own set of customers.

There is undoubtedly hype around microservices. Some organizations are moving toward microservices because it’s a trend, rather than to achieve a clear and measurable objective.

In the process, these organizations are missing a few key points, and when we all wake up from this microservices “hype”, some of these organizations will discover that they now have to take care of ten projects where before they had one, without any tangible gain.

To understand what it takes to reap real benefits from microservices, let’s look at how this neologism came into being.

Continue reading “An Incremental Path to Microservices”

How to build a containerized IoT solution with OpenShift

For businesses looking to build scalable Internet of Things (IoT) solutions using containers, here is a sample project built on the Red Hat OpenShift Container Platform. This project implements an intelligent IoT gateway on the OpenShift Container Platform. The IoT gateway is critical for enterprise IoT because it brings intelligence, and enables key services, at the edge. In this project, the gateway application is deployed as a set of microservices inside containers on OpenShift.

Continue reading “How to build a containerized IoT solution with OpenShift”



Microservices: Zero Downtime Deployment; Hot reconfiguration on OpenShift

2017: time for a new resolution, and the most important resolution for this year should be to adopt microservices, so you can spend less effort on development and improve your time to market (TTM). Nowadays, there are plenty of tools and frameworks at the disposal of the discerning developer for rapidly building microservices. A few examples include Spring Boot, Vert.x, etc.

Continue reading “Microservices: Zero Downtime Deployment; Hot reconfiguration on OpenShift”



Jenkins Pipeline Builds and A/B Deployments in CDK

CDK 2.3 adds the newest OpenShift Container Platform 3.3, allowing us to make use of Jenkins Pipeline builds as well as a special route configuration that enables A/B deployments. In this post, I will show you how to achieve that configuration using a microservice application.

Continue reading “Jenkins Pipeline Builds and A/B Deployments in CDK”



Getting Started with Fuse Integration Service 2.0 Tech preview

To get started with FIS 2.0, here is how I interpret it for people who are just getting to know the technology. Basically, it is divided into two aspects.

1. Integration development: FIS uses Apache Camel as the core technology for creating, orchestrating, and composing microservices into a super-lightweight, thin integration layer that acts as the API provider and service orchestrator by exposing RESTful or messaging endpoints. You can choose to package and run it with either Spring Boot or Karaf.

2. Application deployment and management: FIS takes advantage of the OpenShift platform and lets you deploy each micro-integration service separately across a distributed environment, while it takes care of failover, high availability, load balancing, and service lookup for you.

So, now that we know what is in FIS 2.0, it’s time to take a closer look at how it is achieved. As a developer, you first need to decide whether to go with the Karaf (OSGi) or the Spring Boot framework. I personally prefer Spring Boot, because it matches the microservice concept of a lighter deployment package most closely; but it is up to you, the developer. After all, you are the god and creator of your application. Once the framework is chosen, you start developing the micro-integration services, composing between microservices or even using Camel to create a microservice itself. (With either framework, the development experience of the route itself is basically the same: configuring Camel components.)
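For example, a micro-integration route composing two services might look like the following sketch; the AMQP and HTTP endpoints are placeholders and assume the corresponding Camel components are configured:

import org.apache.camel.builder.RouteBuilder;

// Hypothetical micro-integration route: consume an order from a queue,
// call a pricing service, and publish the enriched result.
public class OrderCompositionRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("amqp:queue:incoming.orders")
            .to("http4://pricing-service/quote")
            .to("amqp:queue:priced.orders");
    }
}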

Once the integration service is ready to deploy, we can decide how to deploy it onto the OpenShift platform. In FIS 2.0 there are two options: with Binary S2I, the entire application is built locally and then pushed onto OpenShift, which uses it to create and build the container image it will run; with Source S2I, everything is built on OpenShift, so the developer needs to set the location of the source code so that OpenShift can retrieve it to build the application and the container image.

And that is all. It is actually much more powerful than I can describe in one go; dive in, and you will soon find how fascinating the technology is and how it can help you solve your current problems.

Here is a quick video that shows you how to get your first FIS 2.0 running.

The steps are as follows:

  • Install and start up OpenShift on your local machine
  • Install the FIS image stream definition into OpenShift (raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.0.redhat-000026/fis-image-streams.json)
  • In JBoss Developer Studio, create a Camel Spring Boot project using the new archetype in FIS 2.0
  • (Archetype catalog URL: https://maven.repository.redhat.com/earlyaccess/all/io/fabric8/archetypes/archetypes-catalog/2.2.180.redhat-000004/archetypes-catalog-2.2.180.redhat-000004-archetype-catalog.xml)
  • Deploy to OpenShift using the Binary S2I tool (Maven plugin)

 



Automate integration CI/CD process

The Red Hat Fuse Integration Service 2.0 tech preview was released a few weeks ago. Because it is based on Red Hat OpenShift 3.3, which adds pipeline capability (also a tech preview on OpenShift), you can get one step closer to more automated and agile continuous integration, as well as a one-stop deployment platform for us, the integration developers.

For the pipeline to work on OpenShift, you need Jenkins installed and running; OpenShift uses it to build, process, and handle all the workflows. If you are familiar with developing on OpenShift, building the pipeline is simple and straightforward: the pipeline is defined as a build configuration in OpenShift, so just create a build config and import it into the namespace you want it in. And that is it.

This is what the build config looks like; note that the strategy type is JenkinsPipeline. This triggers the interaction with Jenkins and pushes the defined Jenkinsfile onto the Jenkins server, which then interacts with OpenShift and starts the automated CI/CD process.

kind: BuildConfig
apiVersion: v1
metadata:
  name: pipelinename
  labels:
    name: pipelinename
spec:
  triggers:
    - type: GitHub
      github:
        secret: secret101
    - type: Generic
      generic:
        secret: secret101
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage('build') {
            print 'build'
            openshiftBuild(buildConfig: 'buildconfigname', showBuildLogs: 'true')
          }
          stage('staging') {
            print 'stage'
            openshiftDeploy(deploymentConfig: 'deploymentconfigname')
          }
        }
As you can see in the Jenkinsfile above, the build configuration interacts with OpenShift itself through the OpenShift Jenkins plugin. For instance, you can trigger an image build, deploy the application by calling its deployment config, tag an image, or even scale the number of containers up and down.

This upper part of the blog applies to most applications running on OpenShift, and Fuse Integration Service is just another application running on top of it. But this particular application happens to contain pattern-based integration technology with 160+ built-in components, so we don’t have to waste time and energy on repetitive stuff. No big deal. 🙂

No matter what version you are using, this pipeline capability can help you automate your integration microservice.

Here is a quick demo video that takes you through the entire process.



Spring Boot and OAuth2 with Keycloak

The tutorial Spring Boot and OAuth2 showed how to enable OAuth2 with Spring Boot using Facebook as the auth provider; this blog extends it by showing how to use Keycloak as the auth provider instead of Facebook. I intend to keep this example as close to the original Spring Boot and OAuth2 tutorial as possible and will explain the configuration changes needed to make the same application work with Keycloak. The source code for the examples is available in the GitHub repositories listed below.

This project deploys and gets Keycloak running in your environment. Refer to the README on the repo for more information on how to set it up and get started.

This project is the same application used in Spring Boot and OAuth2, with some modifications for this specific demo. The deployment environment can be minikube, Minishift, or the RHEL CDK; as a developer you don’t need to worry about how the application is deployed there, because it makes use of fabric8, which handles seamless deployment across different Kubernetes-based environments. From the projects, all you have to do is issue the Maven commands; the README of each project will guide you through the required commands. OK, I hope we have said enough about how to set things up and where to find the source; what was really done to make this work is what we will see now. Before we go further, let’s take a look at the application.yml that was used in the original Spring Boot and OAuth2 tutorial.

The important changes from this one with respect to Keycloak are:

  • accessTokenUri

The required REST URI to get the “access_token” from Keycloak is “http://keycloakHost:keycloakPort/auth/realms/{realm}/protocol/openid-connect/token”.

  • userAuthorizationUri

The required REST URI to authorize a user with Keycloak is “http://keycloakHost:keycloakPort/auth/realms/{realm}/protocol/openid-connect/auth”.

  • tokenName

We don’t need to set this explicitly; by default, spring-security uses “access_token”, which can be retrieved from the Keycloak OAuth2 response.

  • authenticationScheme

This property defines how the credentials are sent to the auth provider. Keycloak expects it to be “header”; we can ignore this property because spring-security-oauth sets the “header” authentication scheme by default.

  • clientAuthenticationScheme

This property determines how the “token” is transmitted to the auth provider. As Keycloak uses the “header” based authentication scheme, we can either set this property to “header” or skip it, since clientAuthenticationScheme is set to “header” by default in spring-security-oauth.

I preferred to set these values explicitly for better readability and understanding of the application.yml. For this demo application we will be using a realm called “springboot” with a clientId of “spring-boot-demos”; the new application.yml with the updates for Keycloak looks as follows:
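In outline, the security section of that application.yml looks roughly like this (a sketch assembled from the properties above; the userInfoUri entry is an assumption):

security:
  oauth2:
    client:
      clientId: ${CLIENT_ID}
      clientSecret: ${CLIENT_SECRET}
      accessTokenUri: ${KEYCLOAK_URL}/auth/realms/springboot/protocol/openid-connect/token
      userAuthorizationUri: ${KEYCLOAK_URL}/auth/realms/springboot/protocol/openid-connect/auth
      tokenName: access_token
      authenticationScheme: header
      clientAuthenticationScheme: header
    resource:
      # assumption: the standard userinfo endpoint for the same realm
      userInfoUri: ${KEYCLOAK_URL}/auth/realms/springboot/protocol/openid-connect/userinfo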

The environment variables ${CLIENT_ID} and ${CLIENT_SECRET} are made available via a Kubernetes secret holding the base64-encoded values of the clientId “spring-boot-demos” and its corresponding client secret, which is available here. ${KEYCLOAK_URL} is computed dynamically via fabric8 annotations and set as an environment variable in the springboot-keycloak-demo deployment; this is done using the exposecontroller pod, which is available from fabric8 and is deployed to the minikube, Minishift, or RHEL CDK environment. Please refer to the fabric8 documentation on how to set it up for the environment of your choice. Now we are all set to deploy the application and check its integration with Keycloak. To get the application deployed you need to do the following:

  • Set up Keycloak with the demo realm
    • Clone the keycloak-demo-server setup from GitHub; let’s call the project directory $KEYCLOAK_SERVER_HOME.
    • From $KEYCLOAK_SERVER_HOME, run the command “mvn clean install fabric8:deploy”; this deploys Keycloak with the demo realm and users pre-loaded.
    • To access the Keycloak URL, run “gofabric8 service keycloak-demo-server --url” and use the URL printed on the console.
    • The default admin credentials are “admin/admin”.
    • For demo users and more detailed deployment configuration, refer to the README.
  • Deploy the Spring Boot demo application
    • Clone the keycloak-demo-server setup from GitHub; let’s call the project directory $DEMO_APP_HOME.
    • From $DEMO_APP_HOME, run the command “mvn -Pfabric8 clean install fabric8:deploy”; this deploys the Spring Boot application.
    • To access the application URL, run “gofabric8 service springboot-keycloak-demo --url” and use the URL printed on the console.
    • The demo users found here can be used to log in to the demo application.
    • For any additional details on deployment, refer to the README.

You need to wait some time for the pods to be available and running before you can use the application. Last but not least, the Keycloak setup described above has a mock URL for the client “spring-boot-demos” pointing to localhost:8080; you need to update this via the Keycloak admin console, setting the client URLs to the application URL retrieved with “gofabric8 service springboot-keycloak-demo --url”. For example, assuming your application URL from that command is “http://192.168.64.14:30219”, the following screenshots show the updates made via the Keycloak console.

[Screenshots: updating the client URLs for “spring-boot-demos” in the Keycloak admin console]

There you go: you now have a Spring Boot demo application configured to work with Keycloak. With Keycloak as the single OAuth2 provider, your application can then be configured to provide:

  • New user registrations
  • Integration with LDAP
  • Integration with third-party identity providers like GitHub, Google, etc.
  • And much more…


Eclipse Vert.x Core Cheat Sheet

Eclipse Vert.x is a toolkit for building reactive and distributed systems on the Java Virtual Machine. Vert.x supports a variety of languages, letting you choose the one you prefer. The Vert.x Core cheat sheet covers the creation of a project using Apache Maven, Gradle, or the Vert.x CLI, and references the most common Vert.x Core APIs in three different languages (Java, JavaScript, and Groovy). Forgot how to create an HTTP server, use the HTTP client, or implement a request-response on the event bus? Just check the cheat sheet. Together with the Red Hat Developer team, I’ve put together this handy cheat sheet – hopefully, you’ll find it useful too!
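As a taste of what the cheat sheet covers, here is a minimal Java sketch: an HTTP server that performs a request-reply over the event bus (the port and address are arbitrary):

import io.vertx.core.Vertx;

// Minimal sketch of Vert.x Core: an HTTP server plus a request-reply over the event bus.
public class HelloVertx {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Event bus consumer replying to greeting requests
        vertx.eventBus().consumer("greetings", message ->
            message.reply("Hello " + message.body()));

        // HTTP server: ask the consumer for a greeting and return it to the caller
        vertx.createHttpServer()
            .requestHandler(request ->
                vertx.eventBus().send("greetings", "Vert.x", reply -> {
                    if (reply.succeeded()) {
                        request.response().end(reply.result().body().toString());
                    } else {
                        request.response().setStatusCode(500).end("event bus request failed");
                    }
                }))
            .listen(8080);
    }
}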

Download the Vert.x cheat sheet.

 

