Applying API Best Practices in Fuse

APIs play a huge part in modern integration architecture design. A good design will allow your application to thrive; a bad design will end up on the cold stone bench and eventually vanish. 🙁

Continue reading “Applying API Best Practices in Fuse”


Join Red Hat Developers, a developer program for you to learn, share, and code faster – and get access to Red Hat software for your development. The developer program and software are both free!

 


Download and learn more about Red Hat JBoss Fuse, an innovative modular, cloud-ready architecture with powerful management and automation, and world-class developer productivity. It is Java™ EE 7 certified and offers powerful, enterprise-grade features such as high availability clustering, distributed caching, messaging, transactions, and a full web services stack.

Data Encapsulation vs. Immutability in Javascript

A while ago, I wrote a fairly long post attempting to shed some light on a few things you can do in your JavaScript classes to enforce the concept of data encapsulation – or data “hiding”. But as soon as I posted it, I got some flak from a friend who is a Clojure programmer. His first comment about the article was this.

Mutability and data encapsulation are fundamentally at odds.

Eventually, he walked that back – but only just a little bit. His point, though, was intriguing. I asked him to explain what he meant.

Why is it so wrong to return the id in your example? I’m guessing it’s not. It might be darn useful to fetch it. In fact, it might greatly enhance the data model for it to be there. But you feel you must “hide” it. Why? Because it’s mutable or because you must go to great lengths to make it immutable. Because JavaScript. But if you were returning an immutable data structure, you wouldn’t even think about it. All that stress just falls away; you no longer care about hiding your data or encapsulating it. You only care that it’s correct and that it properly conveys the essential complexity of your system.

We’ll ignore his little dig on the language itself, for now. But maybe what he’s saying has some value. I do like the idea of a bunch of “stress just falling away”. Let’s look at where we ended up in that last post about data encapsulation.

const ID = Symbol('id');
class Product {
  constructor (name) {
    this.name = name;
    this[ID] = 2340847;
  }
  related () {
    return lookupRelatedStuff( this[ID] );
  }
}

So, here we’ve done our best to hide the id property using a Symbol as a property key. It’s not accessible within userland, and it’s barely visible unless you know about Reflect.ownKeys() or Object.getOwnPropertySymbols(). And of course, I never mentioned the name property in the last article. But the truth is, it suffers from the same issues that plague the id property. It really shouldn’t change. But to accomplish that, I have to replace every this.name with this[NAME] using a Symbol for the property key. And as my friend said, these properties are arguably useful in userland. I just don’t want them changed. I want immutability. How can I do this using JavaScript?
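
To make "barely visible" concrete, here is a quick sketch (my own, not from the original article) showing how a determined caller can still reach the Symbol-keyed property on the Product class above:

const widget = new Product('a-widget');

// Symbol-keyed properties don't show up in normal enumeration...
Object.keys(widget);      // => ['name']
JSON.stringify(widget);   // => '{"name":"a-widget"}'

// ...but they are still reachable if you go looking for them.
const [idKey] = Object.getOwnPropertySymbols(widget);
widget[idKey];            // => 2340847

So the Symbol keeps the id out of casual reach, but it's encapsulation by convention rather than true privacy.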

Is it cold in here, or is it just me?

Object.freeze() is nothing new. It’s been around forever. Let’s take a look at how we’d use it to make our Product instances immutable.

class Product {
  constructor (name) {
    this.name = name;
    this.id = 2340847;
    // make this instance immutable
    Object.freeze(this);
  }
}
const widget = new Product('a-widget');
// Setting the name to something else has no effect.
widget.name = 'something-else';
widget.name; // <- 'a-widget'

There now. That wasn’t so hard, was it? We give a Product instance the deep freeze and return it. But what about those situations where you really need to mutate your application state? What if, for example, there’s a price that could change over time? Normally, we’d do something super simple, like just update the price.

this.price = getUpdatedPrice(this);

But of course, if we’re going for immutability and the safety that comes along with that, then this is clearly not the correct approach. We are mutating the Product instance when we do this.price = someValue(). What can we do about it? One strategy might be to use Object.assign() to copy properties from one object to another, always generating a new object for every data mutation. Perhaps something like this.

class Product {
  updatePrice () {
    // check DB to see if price has changed
    return Object.assign(new Product(), this, { price: getNewPrice(this) } );
  }
}

Now we are getting somewhere. We can use Object.freeze() to make our objects immutable, and then Object.assign() to generate a new object using existing properties whenever something needs to be mutated. Let’s see how well this works.

acmeWidget.updatePrice();
TypeError: Cannot assign to read only property 'price' of object '#<Product>'
    at repl:1:23
    at sigintHandlersWrap (vm.js:22:35)
    at sigintHandlersWrap (vm.js:96:12)
    at ContextifyScript.Script.runInThisContext (vm.js:21:12)
    at REPLServer.defaultEval (repl.js:313:29)
    at bound (domain.js:280:14)
    at REPLServer.runBound [as eval] (domain.js:293:12)
    at REPLServer.<anonymous> (repl.js:513:10)
    at emitOne (events.js:101:20)
    at REPLServer.emit (events.js:188:7)

Ugh! This is happening because I’ve got new Product() as the first parameter to the Object.assign() call, and once a Product is constructed, it’s frozen. I need to defer freezing the object until after it’s constructed. I could use a factory function to return frozen instances of Product. But really, why do I need the Product data type at all? Wouldn’t a simple Object be fine? For the sake of simplification and experimentation, let’s give it a shot.
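
Before dropping the class entirely, it's worth sketching what that factory-of-frozen-instances approach might look like (my own sketch, not code from the original post; it reuses the getNewPrice() helper referenced earlier):

// Keep the Product class, but defer the freeze to a factory
// so that Object.assign() can copy properties first.
class Product {
  constructor (name, price) {
    this.name = name;
    this.price = price;
  }
}

const createProduct = (name, price) =>
  Object.freeze(new Product(name, price));

const updatePrice = (product) =>
  Object.freeze(Object.assign(new Product(), product, { price: getNewPrice(product) }));

That would work, but at this point the class isn't buying us much, so let's try plain objects instead.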

// Use a factory function to return plain old JS objects
const productFactory = (name, price) => Object.freeze({ name, price });

// Always bump the price by 4%! 🙂
const updatePrice = (product) => Object.freeze(
      Object.assign({}, product, { price: product.price * 1.04 }));

const widget = productFactory('Acme Widget', 1.00);
// => { name: 'Acme Widget', price: 1 }

const updatedWidget = updatePrice(widget);
// => { name: 'Acme Widget', price: 1.04 }

widget;
// => { name: 'Acme Widget', price: 1 }

Lingering doubts

I still have doubts, though. For one thing, making a new instance for every change seems pretty inefficient, doesn’t it? And for another, what happens when my data model has nested objects as properties? Do I have to freeze those as well? It turns out, yes I do. All of the properties on my product object are immutable. But properties of nested objects can be changed. That freeze doesn’t go very deep. Maybe I can fix that by just freezing the nested objects.

const productFactory = (name, price) =>
  Object.freeze({
    name,
    price,
    metadata: Object.freeze({
      manufacturer: name.split(' ')[0]
    })
  });

Well, that’s OK, perhaps. But there is still a problem here. Can you tell what it is? What if my data model is nested several layers deep? That’s not very uncommon, and now my factory ends up looking something like this.

const productFactory = (name, price) =>
  Object.freeze({
    name,
    price,
    metadata: Object.freeze({
      manufacturer: name.split(' ')[0],
      region: Object.freeze({
        country: 'Denmark',
        address: Object.freeze({
          street: 'HCA Way',
          city: 'Copenhagen'
        })
      })
    })
  });

Ugh! This can start to get ugly real fast. And we haven’t even started to discuss collections of objects, like Arrays. Maybe my friend was right. Maybe this is a language issue.
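
One way to tame the manual nesting is a small recursive helper. This is a sketch of my own (it assumes plain, non-cyclic objects and arrays), and it still says nothing about collections or efficiency:

// Recursively freeze child objects first, then the object itself.
const deepFreeze = (obj) => {
  Object.keys(obj)
    .map(key => obj[key])
    .filter(value => typeof value === 'object' && value !== null && !Object.isFrozen(value))
    .forEach(deepFreeze);
  return Object.freeze(obj);
};

const widget = deepFreeze({
  name: 'Acme Widget',
  price: 1.00,
  metadata: { region: { country: 'Denmark' } }
});

widget.metadata.region.country = 'Sweden'; // silently ignored (throws in strict mode)

Even with a helper like this, though, it's work the language isn't doing for me, which was exactly my friend's point.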

You feel you must “hide” it. Why? Because it’s mutable or because you must go to great lengths to make it immutable. Because JavaScript.

OK, so is this it? Should I just throw in the towel and give up on immutability in my JavaScript applications? After all, I’ve gone this far without it. And I didn’t have that many bugs. Really… I promise! Well, one way to embrace this style fully is to write your application in Clojure or Scala or a similarly designed language where data is immutable. This is a fundamental part of the Clojure language. Instead of spending all of your time reading blog posts about fitting a square peg into a round hole, with Clojure you can just focus on writing your application and be done with it. But maybe that’s not an option. Maybe you’ve got to follow company language standards. And anyway, some of us kind of do like writing code in JavaScript, so let’s, for the sake of argument, take a look at some options. But first, let’s just review why we’re going to all of this trouble.

The case for immutability

So much of what makes software development hard (other than cache invalidation, and naming things) has to do with state maintenance. Did an object change state? Does that mean that other objects need to know about it? How do we propagate that state across our system? If we shift our thinking about data so that everything is simply a value, then there is no state maintenance to worry about. Don’t think of references to these values as variables; each is just a reference to a single, unchanging value. But this shift in thinking must also affect how we structure and think about our code. Really, we need to start thinking more like a functional programmer. Any function that mutates data should receive an input value and return a new output value – without changing the input. When you think about it, this constraint pretty much eliminates the need for class and this. Or at least it eliminates the use of any data type that can modify itself in the traditional sense, for example with an instance method. In this worldview, the only use for class is namespacing your functions by making them static. But to me, that seems a little weird. Wouldn’t it just be easier to stick to native data types? Especially since the module system effectively provides namespacing for us. Exports are namespaced by whatever name we choose to bind them to when we require() the file.

product.js

const factory = (name, price) => Object.freeze({ name, price });

const updatePrice = (product) => Object.freeze(
  Object.assign({}, product, { price: product.price * 1.04 }));

module.exports = exports = { factory, updatePrice };

app.js

const Product = require('./product.js');
Product.factory; // => [Function: factory]
Product.updatePrice; // => [Function: updatePrice]

For now, just keep these few things in mind.

  • Think of variables (or preferably consts) as values, not objects. A value cannot be changed, while an object can be.
  • Avoid the use of class and this. Use only native data types, and if you must use a class, don’t ever modify its internal properties in place.
  • Never mutate native type data in place; functions that alter the application state should always return a copy with new values, as in the sketch below.
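
Here is a tiny sketch of that third rule in practice (addTag is a hypothetical helper of my own, not from the original post):

// Return a frozen copy with the new value; never touch the input.
const addTag = (product, tag) =>
  Object.freeze(Object.assign({}, product, {
    tags: (product.tags || []).concat(tag)
  }));

const widget = Object.freeze({ name: 'Acme Widget', price: 1.00, tags: [] });
const taggedWidget = addTag(widget, 'hardware');

widget.tags;       // => []
taggedWidget.tags; // => ['hardware']

Nothing about widget changed; the new state lives entirely in taggedWidget.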

That seems like a lot of extra work

Yeah, it is a lot of extra work, and as I noted earlier, it sure seems inefficient to make a full copy of your objects every time you need to change a value. Truthfully, to do this properly, you need to be using shared persistent data structures which employ techniques such as hash map tries and vector tries to efficiently avoid deep copying. This stuff is hard, and you probably don’t want to roll your own. I know I don’t.

Someone else has already done it

Facebook has released a popular NPM module called, strangely enough, immutable. By employing the techniques above, immutable takes care of the hard stuff for you and provides an efficient implementation of

A mutative API, which does not update the data in-place, but instead always yields new updated data.

Rather than turning this post into an immutable module tutorial, I will just show you how it might apply to our example data model. The immutable module has a number of different data types. Since we’ve already seen our Product model as a plain old JavaScript Object, it probably makes the most sense to use the Map data type from immutable.

product.js

const Immutable = require('immutable');
const factory = (name, price) => Immutable.Map({name, price});
module.exports = exports = { factory };

That’s it. Pretty simple, right? We don’t need an updatePrice function, since we can just use set(), and Immutable.Map handles the creation of a new reference. Check out some example usage.

app.js

const Product = require('./product.js');

const widget = Product.factory('Acme widget', 1.00);
const priceyWidget = widget.set('price', 1.04);
const clonedWidget = priceyWidget;
const anotherWidget = clonedWidget.set('price', 1.04);

console.log(widget); // => Map { "name": "Acme widget", "price": 1 }
console.log(priceyWidget); // => Map { "name": "Acme widget", "price": 1.04 }
console.log(clonedWidget); // => Map { "name": "Acme widget", "price": 1.04 }
console.log(anotherWidget); // => Map { "name": "Acme widget", "price": 1.04 }

Things to take note of here: first, take a look at how we are creating the priceyWidget reference. We use the return value from widget.set(), which, oddly enough, doesn’t actually change the widget reference. Also, I’ve cloned priceyWidget. To create a clone we just need to assign one reference to another. And then, finally, an equivalent value for price is set on clonedWidget to create yet another value.

Value comparisons

Let’s see how equality works with these values.

// Everything but widget has a price of 1.04,
// so widget is not equivalent to any of them.
assert(widget !== priceyWidget);
assert(widget !== clonedWidget);
assert(!widget.equals(priceyWidget));
assert(!widget.equals(clonedWidget));
assert(!widget.equals(anotherWidget));

This makes intuitive sense. We create a widget, and when we change a property, the return value of the mutative function provides us with a new value that is not equivalent as either a reference or a value. Additional references to the new value instance priceyWidget are also not equivalent. But what about comparisons between priceyWidget and its clone? Or between priceyWidget and a mutated version of the clone that actually contains all of the same property values? Whether we are comparing references with === or using the deep Map.equals, we find that equivalence holds. How cool is that?

// priceyWidget is equivalent to its clone
assert(priceyWidget === clonedWidget);
assert(priceyWidget.equals(clonedWidget));

// It's also equivalent to another, modified value
// because setting an equivalent price
// to create this modification didn't
// actually change the value.
assert(priceyWidget === anotherWidget);
assert(priceyWidget.equals(anotherWidget));

This is just the beginning

When I started writing this post, it was primarily as a learning experience for me. My friend’s friendly jab got me interested in learning about immutable data in JavaScript and how to apply these techniques to my own code. What I really learned is that, while immutable systems have benefits, there are many hurdles to jump through when writing code this way in JavaScript. Using a high-quality package like immutable.js is a good way to address these complexities. I don’t think I will immediately change all of my existing packages to use these techniques, but now I have a new tool in my toolbox, and this exploration has opened my eyes to the benefits of thinking about data in new ways. If any of this has piqued your interest, I encourage you to read further. Topics such as nested data structures, merging data from multiple values, and collections are all worth exploring. Below, you’ll find links for additional reading.
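
As a quick taste of those topics before you dive into the links, here is a sketch of my own using standard Immutable.js APIs (not code from the original post):

const Immutable = require('immutable');

// Nested data structures: update a deep value without touching the original.
const product = Immutable.fromJS({
  name: 'Acme Widget',
  metadata: { region: { country: 'Denmark' } }
});
const moved = product.setIn(['metadata', 'region', 'country'], 'Sweden');
moved.getIn(['metadata', 'region', 'country']);   // => 'Sweden'
product.getIn(['metadata', 'region', 'country']); // => 'Denmark'

// Merging data from multiple values.
const priced = product.merge({ price: 1.04 });

// Collections.
const catalog = Immutable.List([product, priced]);
catalog.size; // => 2

All of these return new values and leave the originals untouched.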

  • immutable.js documentation: http://facebook.github.io/immutable-js/docs/#/
  • Persistent data structures: http://en.wikipedia.org/wiki/Persistent_data_structure
  • Hash map tries: http://en.wikipedia.org/wiki/Hash_array_mapped_trie
  • Vector tries: http://hypirion.com/musings/understanding-persistent-vector-pt-1


Getting Started with Fuse Integration Service 2.0 Tech preview

For people who are just getting to know the technology, here is how I interpret FIS 2.0. Basically, it’s divided into two aspects.

1. Integration development: FIS uses Apache Camel as the core technology to create, orchestrate, and compose microservices into a super-lightweight, thin integration layer that becomes the API provider and service orchestrator by exposing RESTful or messaging service endpoints. You can choose to package and run it with either Spring Boot or Karaf.

2. Application deployment and management: FIS takes advantage of the OpenShift platform and allows you to deploy the micro-integration services separately across a distributed environment, while it takes care of failover, high availability, load balancing, and looking up available services for you.

So, now that we know what is in FIS 2.0, it’s time to take a closer look at how it is achieved. As a developer, you first need to decide whether to go with the Karaf (OSGi) or the Spring Boot framework. I personally prefer Spring Boot, because it matches the microservice concept of a lighter deployment package most closely. But it is up to you, the developer; after all, you are the god and creator of your application. After the framework is chosen, the developer starts developing the micro-integration services, composing between microservices or even using them to create a microservice. (With both frameworks, the development experience of the route itself is basically the same: configuring Camel components.)

Once the developer is ready to deploy the integration service, it’s time to decide how to deploy it onto the OpenShift platform. In FIS 2.0 there are two options. Binary S2I builds the entire application locally and pushes it onto OpenShift, so the OpenShift platform uses it to create and build the container image it will run on. With Source S2I, everything is built on top of OpenShift, so the developer needs to set the location of the source code in order for OpenShift to retrieve it and build the application and the container image.

And that is all. It is actually much more powerful than can be described in one go; dive into it, and you will soon find how fascinating the technology is and how it can help you resolve your current problems.

Here is a quick video that shows you how to get your first FIS 2.0 running.

The steps are as follows:

  • Install and start up OpenShift on your local machine
  • Install the FIS image stream definition into OpenShift (raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.0.redhat-000026/fis-image-streams.json)
  • In JBoss Developer Studio, create a Camel Spring Boot project using the new archetype in FIS 2.0
  • (Archetype catalog URL: https://maven.repository.redhat.com/earlyaccess/all/io/fabric8/archetypes/archetypes-catalog/2.2.180.redhat-000004/archetypes-catalog-2.2.180.redhat-000004-archetype-catalog.xml)
  • Deploy to OpenShift using the Binary S2I tool (Maven plugin)

 




For more information about Red Hat OpenShift and other related topics, visit: OpenShift, OpenShift Online.


Spring Boot and OAuth2 with Keycloak

The tutorial Spring Boot and OAuth2 showed how to enable OAuth2 with Spring Boot using Facebook as the auth provider; this blog post is an extension showing how to use Keycloak as the auth provider instead of Facebook. I intend to keep this example as close to the original Spring Boot and OAuth2 as possible and will explain the changes to the configuration that make the same application work with Keycloak. The source code for the examples is available in the GitHub repositories listed below.

This project deploys and gets Keycloak running in your environment. Refer to the README in the repo for more information on how to set it up and get started.

This project is the same application used in Spring Boot and OAuth2, with some modifications made for this specific demo. The application deployment environment can be minikube, Minishift, or the RHEL CDK; as a developer you don’t need to worry about how it’s deployed there, as the application makes use of fabric8, which handles seamless deployment across different Kubernetes-based environments. From the projects, all you have to do is issue the Maven commands; the README of each project will guide you through the required Maven commands. OK, I hope we have said enough about how to set things up and where to find the source; what was really done to make this work is what we will see now. Before we go further, let’s take a look at the application.yml that was used in the original Spring Boot and OAuth2.

The important changes from this one with respect to Keycloak are:

  • accessTokenUri

The required REST URI to get the "access_token" from Keycloak is "http://keycloakHost:keycloakPort/auth/realms/{realm}/protocol/openid-connect/token". (A hypothetical request against this endpoint is sketched after these property descriptions.)

  • userAuthorizationUri

The required REST URI to authorize a user with Keycloak is "http://keycloakHost:keycloakPort/auth/realms/{realm}/protocol/openid-connect/auth".

  • tokenName

We don’t need to set this explicitly; by default spring-security uses the "access_token", which can be retrieved from the Keycloak OAuth2 response.

  • authenticationScheme

This property defines how the credentials are sent to the auth provider. Keycloak expects it to be "header"; we can ignore this property, as spring-security-oauth sets the "header" authentication scheme by default.

  • clientAuthenticationScheme

This property determines how the "token" is transmitted to the auth provider. As Keycloak uses the "header"-based authentication scheme, we can either set this property to "header" or skip it, since clientAuthenticationScheme is set to "header" by default by spring-security-oauth.
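
To make the token endpoint above concrete, here is a hypothetical Node.js sketch of the request shape. It is not part of the Spring Boot demo; the demo user credentials are placeholders, and the password grant assumes Direct Access Grants is enabled for the client.

const http = require('http');
const querystring = require('querystring');

// Build a standard OpenID Connect password-grant form body.
const body = querystring.stringify({
  grant_type: 'password',
  client_id: 'spring-boot-demos',
  client_secret: process.env.CLIENT_SECRET,
  username: 'demo-user',      // placeholder credentials
  password: 'demo-password'
});

const req = http.request({
  host: 'keycloakHost',       // replace with your Keycloak host
  port: 8080,                 // replace with your Keycloak port
  path: '/auth/realms/springboot/protocol/openid-connect/token',
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(body)
  }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => (data += chunk));
  res.on('end', () => console.log(JSON.parse(data).access_token));
});

req.end(body);

The Spring Boot demo never makes this call by hand, of course; spring-security-oauth drives the same endpoints using the accessTokenUri and userAuthorizationUri configured above.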

I preferred to set these values explicitly for better readability and understanding of the application.yml. For this demo application we will be using a realm called "springboot" with a clientId of "spring-boot-demos"; the new application.yml with the updates for Keycloak looks as follows:

The environment variables ${CLIENT_ID} and ${CLIENT_SECRET} are made available via a Kubernetes secret; they are the base64-encoded values of the clientId "spring-boot-demos" and its corresponding client secret, which is available here. The ${KEYCLOAK_URL} will be dynamically computed via fabric8 annotations and set as an environment variable in the springboot-keycloak-demo deployment; this is done using the exposecontroller pod, which is available from fabric8 and will be deployed to the minikube, Minishift, or RHEL CDK environment. Please refer to the fabric8 documentation on how to set it up for the environment of your choice. Now we are all set to deploy the application and check its integration with Keycloak. To get the application deployed you need to do the following:

  • Setup Keycloak with the demo realm
    • Clone the keycloak-demo-server setup from GitHub; let's call the project directory $KEYCLOAK_SERVER_HOME.
    • From $KEYCLOAK_SERVER_HOME, run the command "mvn clean install fabric8:deploy"; this command will deploy Keycloak with the demo realm and users pre-loaded.
    • To access the Keycloak URL, use the command "gofabric8 service keycloak-demo-server --url" to obtain the URL of the deployed Keycloak, and use the output URL on the console to access Keycloak.
    • The default admin credentials are "admin/admin".
    • For demo users and more detailed deployment configuration, refer to the README.
  • Deploy the Spring Boot demo application
    • Clone the keycloak-demo-server setup from GitHub; let's call the project directory $DEMO_APP_HOME.
    • From $DEMO_APP_HOME, run the command "mvn -Pfabric8 clean install fabric8:deploy"; this command will deploy the Spring Boot application.
    • To access the application URL, use the command "gofabric8 service springboot-keycloak-demo --url" to obtain the URL of the deployed application, and use the output URL on the console to access the application.
    • The demo users found here can be used to log in to the demo application.
    • For any additional details on deployment, refer to the README.

You need to wait for some time for the pods to be available and running before you can use the application. Last but not least, the Keycloak setup using the steps described above has a mock URL set for the client "spring-boot-demos" pointing to localhost:8080. You need to update this using the Keycloak admin console and set the client URLs to the application URL retrieved using the command "gofabric8 service springboot-keycloak-demo --url". For example, assuming that your application URL from that command is "http://192.168.64.14:30219", the following screenshots show the updates done via the Keycloak console.


There you go: now you have a Spring Boot demo application configured to work with Keycloak. Your application can now be configured with the single OAuth2 provider Keycloak, which can then be configured to provide:

  • New user registrations
  • Integration with LDAP
  • Integration with third-party identity providers like GitHub, Google, etc.
  • And much more…


Eclipse Vert.x Core Cheat Sheet

Eclipse Vert.x is a toolkit used to build reactive and distributed systems on the Java Virtual Machine. Vert.x supports a variety of languages, letting you choose which one you’d prefer. The Vert.x Core cheat sheet covers the creation of a project using Apache Maven, Gradle, or the Vert.x CLI, and references the most common Vert.x Core APIs in three different languages (Java, JavaScript, and Groovy). Forgot how to create an HTTP server, use the HTTP client, or implement a request-response on the event bus? Just check the cheat sheet. Together with the Red Hat Developer Team, I’ve put together this handy cheat sheet – hopefully, you’ll find it useful too!
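
Since Vert.x speaks JavaScript too, here is a minimal sketch of the kind of snippet the cheat sheet covers. This is my example using the Vert.x 3 JavaScript API; the cheat sheet itself also shows the Java and Groovy variants.

// server.js - run with: vertx run server.js
// In a JavaScript verticle, Vert.x provides the global 'vertx' object.
vertx.createHttpServer()
  .requestHandler(function (request) {
    request.response()
      .putHeader("content-type", "text/plain")
      .end("Hello from Vert.x!");
  })
  .listen(8080);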

Download the Vert.x cheat sheet.

 



New vscode-java 0.0.8 release

Version 0.0.8 of the Java extension for Visual Studio Code (a.k.a. vscode-java) has been unleashed onto the world. It’s available in the Visual Studio Code Marketplace and can be found and installed directly from within Code.

Highlights of this release can be seen in this screencast:

Gradle Support

vscode-java finally provides basic Gradle support for Java projects. Basically, you just need to open a folder containing a build.gradle file in its root and wait for the Java support to kick in. Code completion, navigation, references, and all the other existing features of the Java support will work as long as the Gradle project can be successfully imported.

However, please be aware that Gradle-based Android projects are not supported, because of a limitation in Buildship, the upstream project on which this Gradle support is based.

Update project configuration

Whenever a build configuration file is modified and saved, a project re-configuration (e.g. a Java compilation level update) or classpath change (dependencies or source paths) might be required, so that the VS Code internal model stays in sync with the build descriptor.

So, a warning will pop up whenever a Maven pom.xml or Gradle (*.gradle) file is saved:

Choosing Never will discard the message so it won’t show up on subsequent build file changes. Clicking Now will trigger a project update command, but the message will show up again on the next change. Selecting Always will trigger configuration updates every time the build file is saved.

Please be aware that a project update can be a long-running, CPU-intensive operation. For large projects, it might be preferable to keep the option turned off and instead call the Update project configuration command manually (via Ctrl+Alt+U, or Cmd+Alt+U on macOS) when the editor is focused on a Maven pom.xml or a Gradle file.

Whenever you need to change the behavior, you can open the workspace settings and look for the java.configuration.updateBuildConfiguration key. It specifies how modifications on build files update the Java classpath/configuration. Supported values are:

  • disabled: never updates automatically on save and doesn’t show a warning.
  • interactive: asks about updating on every build file save.
  • automatic: updating is automatically triggered on build file save.

Mute “Incomplete classpath” warnings

Whenever a Java file that does not belong to a project is opened (what we call a standalone Java file), vscode-java is unable to compute a proper classpath. That makes it useless to report compilation errors, as the UI would be filled with distracting red errors all over the file. But vscode-java is still able to provide content assist for base JDK classes and report syntax errors, so the following warning is displayed:

It’s now possible to discard the message permanently, by clicking the Don’t show again option.

Should you change your mind, it’s possible to modify that choice in VS Code’s user settings: the java.errors.incompleteClasspath.severity key specifies the severity of the message when the classpath is incomplete for a Java file. Supported values are ignore, info, warning, and error.

Conclusion

This release represents an important milestone for this small project, as it finally provides basic Gradle support for Java projects, by far the most requested feature from the community, along with some important usability improvements.

A complete changelog for this version is available there.

This Java extension is powered by two components: a front-end part, the VS Code client, and a back-end part, the headless Java language server, which is based on Eclipse JDT, the M2E project (for Maven support), and now Buildship (for Gradle support). Both components are developed under the open source Eclipse Public License. All contributions are welcome, whether it’s code, feedback, or bug reports. Please do so under any of these GitHub repositories:

 



How to containerize your Camel route on Karaf within OpenShift

The Red Hat JBoss Fuse solution offers a new approach to ESB, both lightweight and modular. It is perfectly suited for implementing light integrations.

JBoss Fuse is fully supported and based on the power of Apache Karaf. Karaf allows for the easy deployment of your ActiveMQ broker, your CXF web services, or your own Apache Camel routes.

Most of us are familiar with the OSGi environment and what it offers: things like control of classloader behavior, module isolation, and APIs within a single app/JVM process.

For this post, we are going to set up a simple Camel route using FIS (Fuse Integration Service) based on a Karaf image (jboss-fuse-6/fis-karaf-openshift), with which we will containerize your Camel route on Karaf within OpenShift!



Spring Cloud for Microservices Compared to Kubernetes

Spring Cloud and Kubernetes both claim to be the best environment for developing and running microservices, but they are very different in nature and address different concerns. In this article we will look at how each platform helps in delivering microservice-based architectures (MSA), which areas each is good at, and how to take the best of both worlds in order to succeed in the microservices journey.

Background Story

Recently I read a great article about building microservice architectures with Spring Cloud and Docker by A. Lukyanchikov. If you haven’t read it, you should, as it gives a comprehensive overview of what it takes to create a simple microservices-based system using Spring Cloud. To build a scalable and resilient microservices system that could grow to tens or hundreds of services, the system must be centrally managed and governed with the help of a tool set that has extensive build-time and run-time capabilities. With Spring Cloud, that involves implementing both functional services (such as a statistics service, account service, and notification service) and supporting infrastructure services (such as log analysis, a configuration server, service discovery, and an auth service). A diagram describing such an MSA using Spring Cloud is below:

MSA with Spring Cloud (by A. Lukyanchikov)

Continue reading “Spring Cloud for Microservices Compared to Kubernetes”



What's New in Red Hat JBoss BRMS and BPM Suite 6.4

Red Hat has just released new versions of its popular business automation products: Red Hat JBoss BRMS and Red Hat JBoss BPM Suite 6.4. In this post we will highlight the improvements and new features these releases bring. Apart from stability and performance improvements, version 6.4 brings new, highly requested features that improve the platform experience in larger enterprises.

The new versions of the platforms are available both from the Red Hat Customer Portal (BPM Suite and BRMS) and the Red Hat Developers website. Installation instructions can be found in the “Getting Started Guide” for BPM Suite and BRMS and on the Red Hat Developers “Get Started” pages for BPM Suite and BRMS. Finally, the installation demos have been updated to target the latest versions:

  • https://github.com/jbossdemocentral/bpms-install-demo
  • https://github.com/jbossdemocentral/brms-install-demo

Continue reading “What's New in Red Hat JBoss BRMS and BPM Suite 6.4”



How To Setup Integration & SOA Tooling For JBoss Developer Studio 10

The release of the latest JBoss Developer Studio (JBDS) brings with it questions about how to get started with the various JBoss integration and BPM product tool sets that are not installed out of the box.

In this series of articles, we will outline how to install each set of tools and explain which products they support. This should help you make an informed decision about what tooling you might want to install before embarking on your next JBoss integration project.

There are four different software packs that offer tooling for various JBoss integration products:

  1. JBoss Integration and SOA Development
  2. JBoss Data Virtualization Development
  3. JBoss Business Process and Rules Development

    Tooling is available under software updates with early access enabled.
  4. JBoss Fuse Development

This article will outline how to get started with the JBoss Integration and SOA Development tooling on any of the JBDS 10 series of releases.

Continue reading “How To Setup Integration & SOA Tooling For JBoss Developer Studio 10”

