Unlock your Microsoft Excel data with Red Hat JBoss Data Virtualization

After the Unlock your MariaDB/MySQL data, Unlock your PostgreSQL data, and Unlock your Hadoop data with Hortonworks episodes, let’s continue the journey with this new episode of the series “Unlock your […] data with Red Hat JBoss Data Virtualization.” Through this blog series, we will look at how to connect Red Hat JBoss Data Virtualization (JDV) to different and heterogeneous data sources.

JDV is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. It makes data spread across physically diverse systems — such as multiple databases, XML files, and Hadoop systems — appear as a set of tables in a local database. By providing the following functionality, JDV enables agile data use:

  1. Connect: Access data from multiple, heterogeneous data sources.
  2. Compose: Easily combine and transform data into reusable, business-friendly virtual data models and views.
  3. Consume: Make unified data easily consumable through open, standards-based interfaces.

It hides complexities, like the true locations of data or the mechanisms required to access or merge it. Data becomes easier for developers and users to work with.

This post will guide you step-by-step on how to connect JDV to a Microsoft Excel spreadsheet using Teiid Designer and the Microsoft Excel translator. A translator acts as the bridge between JBoss Data Virtualization and an external system. The Microsoft Excel translator provides a quick and easy way to read a Microsoft Excel spreadsheet and expose its contents in tabular form so they can be integrated with other sources.
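To make this concrete, here is a minimal sketch of the kind of query you could run once the virtual database is deployed; the model, sheet, and column names are hypothetical and would come from your own Teiid Designer model:

-- Hypothetical model/sheet/column names; JDV exposes the spreadsheet as a table
SELECT s.region, SUM(s.amount) AS total_sales
FROM ExcelModel.SalesSheet s
GROUP BY s.region
ORDER BY total_sales DESC;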

Continue reading “Unlock your Microsoft Excel data with Red Hat JBoss Data Virtualization”


Join Red Hat Developers, a developer program for you to learn, share, and code faster – and get access to Red Hat software for your development.  The developer program and software are both free!

 

PowerShell on RHEL in One Minute

While not specifically related to .NET on Linux, PowerShell on Linux is available and — let’s face it — if you’re a Windows developer you’re using PowerShell.

If you’re not using PowerShell, now is the time to start. While bash is the traditional Linux shell, PowerShell gives you the advantage of objects. In PowerShell, everything is an object, with properties you can directly access. It’s also a very powerful object-oriented scripting language, with classes and methods, much like any OOP language.
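As a minimal sketch of what that looks like in practice (nothing here is specific to RHEL, just standard PowerShell):

# Pipeline output is objects, not text: filter and project on properties directly
Get-Process | Where-Object { $_.WorkingSet64 -gt 100MB } | Select-Object Name, Id

# PowerShell classes, much like any OOP language
class Greeter {
    [string] $Name
    Greeter([string] $name) { $this.Name = $name }
    [string] Greet() { return "Hello, $($this.Name)!" }
}
[Greeter]::new("RHEL").Greet()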

Add to that the fact that you now have one scripting language for any platform, and PowerShell may (should in my not-so-humble opinion) become your shell and scripting language of choice.

(Hint: If you aren’t using PowerShell, here is your opportunity to turn your coding skills up to 11.)

Continue reading “PowerShell on RHEL in One Minute”



Programmatic Debugging: Part 1 the challenge

As every developer knows, debugging an application can be difficult, and often enough you spend as much or more time debugging an application as you did originally writing it. Every programmer develops a collection of tools and techniques. Traditionally, these have included full-fledged debuggers, instrumentation of the code, and tracing and logging. Each of these has its particular strengths and weaknesses.

Continue reading “Programmatic Debugging: Part 1 the challenge”



The Camel Rest DSL

Camel

Apache Camel is a piece of JBoss Fuse. It is an open source integration framework with a variety of components to fit your integration needs. Camel is a Java-based implementation of the Enterprise Integration Patterns described in the book by Gregor Hohpe and Bobby Woolf. Camel includes components for HTTP, files, FTP, JMS, JDBC, AWS, and much more. While Camel can be used for many different purposes, this post will focus specifically on the REST DSL.
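As a taste of what the post covers, here is a minimal, hypothetical sketch of a REST DSL route; the component choice, paths, and names are illustrative, not taken from the post:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

public class HelloRestRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Configure the REST engine; 'servlet' is one possible component choice
        restConfiguration().component("servlet").bindingMode(RestBindingMode.json);

        // Declare a REST endpoint and hand its requests to a Camel route
        rest("/hello")
            .get("/{name}")
            .to("direct:greet");

        from("direct:greet")
            .transform(simple("Hello ${header.name}!"));
    }
}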

Continue reading “The Camel Rest DSL”


Download and learn more about Red Hat JBoss Fuse, an innovative modular, cloud-ready architecture, powerful management and automation, and world class developer productivity. It is Java™ EE 7 certified and features powerful, enterprise-grade features such as high availability clustering, distributed caching, messaging, transactions, and a full web services stack.

Coala, setting it up and auto patching

Coala is a free, open-source, language-independent analysis toolkit written in Python. The primary goal of coala is to make it easy for developers to create rules that a project’s code should conform to. It supports more than 40 programming languages and is ideal for people who want their code to look good and follow good coding practices. It’s developed at https://github.com/coala/coala
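Configuration lives in a .coafile at the project root; a minimal, illustrative sketch (the section name and bear choice are examples) might look like this:

# Minimal .coafile sketch: run the SpaceConsistencyBear on all Python files
[all]
files = **/*.py
bears = SpaceConsistencyBear
use_spaces = True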

Continue reading “Coala, setting it up and auto patching”



Getting Started with Fuse Integration Service 2.0 Tech preview

For people who are just getting to know the technology, here is how I interpret FIS 2.0. Basically, it’s divided into two aspects.

1. Integration development: FIS uses Apache Camel as the core technology that creates, orchestrates, and composes microservices into a super-lightweight, thin integration layer that becomes the API provider and service orchestrator by exposing RESTful or messaging service endpoints. You can choose to package and run it with either Spring Boot or Karaf.

2. Application deployment and management: FIS takes advantage of the OpenShift platform and allows you to deploy micro-integration services separately across a distributed environment; at the same time, it takes care of failover, high availability, load balancing, and service lookup for you.

So, now that we know what is in FIS 2.0, it’s time to take a closer look at how it is achieved. As a developer, you first need to decide whether to go with the Karaf (OSGi) or the Spring Boot framework. I personally prefer Spring Boot, because it matches the microservice concept of a lighter deployment package most closely. But it is up to you; after all, you are the god and creator of your application. Once the framework is chosen, you can start developing the micro-integration services, composing between microservices, or even using FIS to create a microservice itself. (With either framework, the development experience of the route itself is basically the same: configuring Camel components.)

Once you are ready to deploy the integration service, you can decide how to deploy it onto the OpenShift platform. In FIS 2.0 there are two options. With Binary S2I, you build the entire application locally and push it onto OpenShift, which uses it to create and build the container image it will run on. With Source S2I, everything is built on top of OpenShift: you need to set the location of the source code so that OpenShift can retrieve it to build both the application and the container image.
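As a rough sketch of the difference (the fabric8 Maven plugin goal is real, but the builder image and repository names below are placeholders):

# Binary S2I: build locally with Maven, then push the result to OpenShift,
# which assembles the container image from it (fabric8-maven-plugin goal)
mvn clean install fabric8:deploy

# Source S2I: point OpenShift at the source repository and let it build
# both the application and the image (image stream and repo URL are placeholders)
oc new-app fis-java-openshift~https://github.com/yourorg/your-camel-app.git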

And that is all. FIS is actually much more powerful than can be described in one go; dive into it, and you will soon find how fascinating the technology is and how it can help you resolve your current problems.

Here is a quick video that shows you how to get your first FIS 2.0 running.

The steps are as follows:

  • Install and start up OpenShift on your local machine
  • Install the FIS image stream definition into OpenShift (raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.0.redhat-000026/fis-image-streams.json)
  • In JBoss Developer Studio, create a Camel Spring Boot project using the new archetype in FIS 2.0 (archetype catalog URL: https://maven.repository.redhat.com/earlyaccess/all/io/fabric8/archetypes/archetypes-catalog/2.2.180.redhat-000004/archetypes-catalog-2.2.180.redhat-000004-archetype-catalog.xml; see the sketch after this list)
  • Deploy to OpenShift using the Binary S2I tool (Maven plugin)
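For the archetype step, here is a hypothetical sketch of generating the project from the command line instead; the archetype groupId/artifactId are illustrative, so pick the real coordinates from the catalog above:

mvn archetype:generate \
  -DarchetypeCatalog=https://maven.repository.redhat.com/earlyaccess/all/io/fabric8/archetypes/archetypes-catalog/2.2.180.redhat-000004/archetypes-catalog-2.2.180.redhat-000004-archetype-catalog.xml \
  -DarchetypeGroupId=io.fabric8.archetypes \
  -DarchetypeArtifactId=spring-boot-camel-archetype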

 




For more information about Red Hat OpenShift and other related topics, visit: OpenShift, OpenShift Online.

Using Pipelines in OpenShift 3.3+ for CI/CD

It’s been a while since Red Hat released version 3.3 of OpenShift Container Platform, and this version is full of features.

One of my favorites is the support for Pipelines (Tech Preview for now) that lets you easily integrate Jenkins builds on your OpenShift (Origin) Platform.

OpenShift Pipelines

OpenShift Pipelines are based on the Jenkins Pipeline plugin. (https://jenkins.io/solutions/pipeline/)

Integrating Jenkins Pipelines into OpenShift unlocks all the features of the CI/CD world, enabling users to manage repeatable tasks in the easiest way.

As you can imagine, OpenShift lets you run a containerized version of Jenkins in one of your projects and then, after setting the right permissions for the Jenkins ServiceAccount, it’ll do the job for you.

Pipelines are nothing more than a BuildConfig with type ‘JenkinsPipeline’.

But let’s take a more in-depth look using this simple scenario below:

  1. Jenkins OpenShift project: The base project, handling the Jenkins container and all the pipelines.
  2. Development OpenShift project: The project used for the development environment; it will handle the BuildConfig for building the app from source.
  3. Testing OpenShift project: The project used for the testing environment; it will not use any BuildConfig and will expect an ImageStream to be the only source for new deployments.

We’ll create two Pipelines that will simulate a Continuous Integration scenario:

  • Development Pipeline: It will trigger the BuildConfig for the development project and handle its deployment.
  • Testing Pipeline: It will handle the tagging/pulling/pushing operations to let the image flow from the development project to the testing project, and then it will schedule a new deployment.

OpenShift start

First of all, I’ll start my OpenShift cluster; you can skip to the next section in case you’re already up and running.

For running OpenShift on my laptop, the easiest and fastest method I found is “oc cluster up”. All you need is a working Linux container daemon and an updated origin-clients package. On Fedora 25, I successfully installed “origin-clients-1.3.1” from the default repos.

So that’s all, let’s “oc cluster up” my OpenShift platform:

[alex@freddy ~]$ oc cluster up --host-data-dir=/var/lib/origin/openshift.local.data --use-existing-config --version=v1.3.1 --public-hostname=192.168.123.1
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... Deleted existing OpenShift container
-- Checking for openshift/origin:v1.3.1 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
-- Checking type of volume mount ... Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
   Using public hostname IP 192.168.123.1 as the host IP
   Using 192.168.123.1 as the server IP
-- Starting OpenShift container ...
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ...
Now using project "myproject" on server "https://192.168.123.1:8443".
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://192.168.123.1:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin

Please note: I’ve manually created the “host-data” folder; the other options used are self-explanatory.

The Jenkins project

We should now be ready to sign in to our OpenShift platform.

Now, let’s create our first project, the Jenkins project:

Select the “Jenkins ephemeral” template.

Leave all the parameters set to their defaults and press Create. At the end, you should see a notice like the following: “Make a note of the generated password. You may need this in the future.” (Anyway, you can easily recover it should you need it.)


Enabling Pipelines feature (currently in Tech Preview)

As you can see by clicking on the Builds tab, there is no trace of Pipelines support. As noted in the title, this feature is a Tech Preview, so we need to activate it.

To activate the Pipelines feature, we need to create a JavaScript config file that enables it:

# echo "window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.pipelines = true;" >> /var/lib/origin/openshift.local.config/master/tech-preview.js

Please note: You can create the file in any location you prefer. We then need to inject the file through the master-config.yaml file; in my case, using “oc cluster up”, it’s located in “/var/lib/origin/openshift.local.config/master/”. Place the following lines in your config file:

assetConfig:
  ...
  extensionScripts:
    - /var/lib/origin/openshift.local.config/master/tech-preview.js

Then restart your OpenShift master. You should then be able to find the Pipelines section under the Builds tab.

We’re almost ready to start working on our pipelines.

The development project

We can now create the development project, which we’ll use as a root for source building:

$ oc new-project development --display-name="Development" --description="Development project"
Now using project "development" on server "https://192.168.123.1:8443".

We can now use the template I prepared for our development environment. In this demo, we’ll use the nodejs-example application available in the standard set of OpenShift templates. Let’s populate the newly created development project:

$ oc new-app https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-dev.json
--> Deploying template nodejs-example for "https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-dev.json"

Node.js
———
This is an example of a Node.js application with no database. For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

The following service(s) have been created in your project: nodejs-example.

For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

* With parameters:
* Name=nodejs-example
* Namespace=openshift
* Memory Limit=512Mi
* Git Repository URL=https://github.com/alezzandro/nodejs-ex.git
* Git Reference=
* Context Directory=
* Application Hostname=
* GitHub Webhook Secret=cR48n2GX67ADfxwi63uGomiXjxgMUCEykekbNR0G # generated
* Generic Webhook Secret=Hvx3stEhQuAmKPnjaujQHvYFV1cl1cvmh4IjXnri # generated
* Database Service Name=
* MongoDB Username=
* MongoDB Password=
* Database Name=
* Database Administrator Password=
* Custom NPM Mirror URL=

--> Creating resources with label app=nodejs-example ...
service "nodejs-example" created
route "nodejs-example" created
imagestream "nodejs-example" created
buildconfig "nodejs-example" created
deploymentconfig "nodejs-example" created
--> Success
Use 'oc start-build nodejs-example' to start a build.
Run 'oc status' to view your app.

As you can see by running “oc get pods”, no deployment has started, so no pods will be seen. This is intended behavior: we want to manage the build process and the deployment through a Jenkins Pipeline. To achieve this, I edited the original nodejs-ex template and removed all the triggers from the DeploymentConfig (a sketch of the change follows the listing below). Looking at our development project, we’ll end up with the following elements: a BuildConfig, an ImageStream, a DeploymentConfig, a Route, and a Service.

$ oc get all
NAME
bc/nodejs-example
NAME
is/nodejs-example
NAME
dc/nodejs-example
NAME
routes/nodejs-example
NAME
svc/nodejs-example
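For reference, the relevant change in the template is the DeploymentConfig’s trigger list. A simplified sketch of the idea (a field subset, not the full resource):

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: nodejs-example
spec:
  replicas: 1
  triggers: []    # no ConfigChange/ImageChange triggers: only the pipeline deploys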

The testing project

We can now set up the testing project. As with the development project, I’ve already prepared a template, this time removing the BuildConfig section: we’ll promote the container image built in the development project to testing using a Jenkins Pipeline. Let’s create and populate the environment:

$ oc new-project testing --display-name="Testing" --description="Testing project"
Now using project "testing" on server "https://192.168.123.1:8443".

You can add applications to this project with the ‘new-app’ command. For example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

$ oc new-app https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-test.json
--> Deploying template nodejs-example for "https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/templates/nodejs-test.json"

Node.js
———
This is an example of a Node.js application with no database. For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

The following service(s) have been created in your project: nodejs-example.

For more information about using this template, including OpenShift considerations, see https://github.com/openshift/nodejs-ex/blob/master/README.md.

* With parameters:
* Name=nodejs-example
* Namespace=openshift
* Memory Limit=512Mi
* Git Repository URL=https://github.com/alezzandro/nodejs-ex.git
* Git Reference=
* Context Directory=
* Application Hostname=
* GitHub Webhook Secret=XFlNUpDsLBotlrcyAnRQdLkKyq65iKE6xOMxqQr5 # generated
* Generic Webhook Secret=LX3PdBcU4dTKPyvTi8aw02VeXBjCxuJpyA7kgV8c # generated
* Database Service Name=
* MongoDB Username=
* MongoDB Password=
* Database Name=
* Database Administrator Password=
* Custom NPM Mirror URL=

--> Creating resources with label app=nodejs-example ...
service "nodejs-example" created
route "nodejs-example" created
imagestream "nodejs-example" created
deploymentconfig "nodejs-example" created
--> Success
Run 'oc status' to view your app.

As you can see by running “oc get pods”, no deployment has started, so no pods will be seen. This is intended behavior: we want to manage the deployment through a Jenkins Pipeline. To achieve this, I edited the original nodejs-ex template and removed all the triggers from the DeploymentConfig. Looking at our testing project, we’ll end up with the following elements:

$ oc get all
NAME
is/nodejs-example
NAME
dc/nodejs-example
NAME
routes/nodejs-example
NAME
svc/nodejs-example

Please note: As I said before, there is no BuildConfig; we’ll promote the container image built in the development project to testing using a Jenkins Pipeline.

Pipelines definition and import

OK, we’re now ready to define our Pipelines. I’ve prepared two Jenkins pipelines, one for the development project and one for the testing project. Return to the Jenkins project and import the two BuildConfigs containing the pre-configured pipelines:

$ oc project jenkins
Now using project "jenkins" on server "https://192.168.123.1:8443".

$ oc create -f https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/pipeline/development-pipeline.yaml
buildconfig "development-pipeline" created

$ oc create -f https://raw.githubusercontent.com/alezzandro/nodejs-ex/master/openshift/pipeline/promote2testing-pipeline.yaml
buildconfig "testing-pipeline" created

$ oc get bc
NAME                   TYPE              FROM      LATEST
development-pipeline   JenkinsPipeline             0
testing-pipeline       JenkinsPipeline             0

We can now take a look at what the two pipelines will be able to do.

Jenkins development pipeline

apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    pipeline.alpha.openshift.io/uses: '[{"name": "nodejs-example", "namespace": "development",
      "kind": "DeploymentConfig"}]'
  creationTimestamp: 2016-12-22T13:54:23Z
  labels:
    app: jenkins-pipeline-development
    name: development-pipeline
    template: application-template-development-pipeline
  name: development-pipeline
  namespace: jenkins
  resourceVersion: "5781"
  selfLink: /oapi/v1/namespaces/jenkins/buildconfigs/development-pipeline
  uid: 24c166c2-c84e-11e6-b4f7-68f7286606f4
spec:
  output: {}
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    type: None
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage 'build'
          openshiftBuild(buildConfig: 'nodejs-example', showBuildLogs: 'true', namespace: 'development')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'nodejs-example', namespace: 'development')
        }
    type: JenkinsPipeline
...

As you can see, this BuildConfig’s type is “JenkinsPipeline”, with a “jenkinsPipelineStrategy” defined through a “jenkinsfile”. The pipeline itself is composed of two stages:

  1. Build: we start the build process in the project/namespace “development” through the BuildConfig named “nodejs-example”.
  2. Deploy: after the build, we can then start a new deployment in the project/namespace “development” through the DeploymentConfig named “nodejs-example”.

 

Jenkins testing pipeline

$ oc get bc/testing-pipeline -o yaml
apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    pipeline.alpha.openshift.io/uses: '[{"name": "nodejs-example", "namespace": "testing",
      "kind": "DeploymentConfig"}]'
  creationTimestamp: 2016-12-22T13:54:30Z
  labels:
    app: jenkins-pipeline-testing
    name: testing-pipeline
    template: application-template-testing-pipeline
  name: testing-pipeline
  namespace: jenkins
  resourceVersion: "5994"
  selfLink: /oapi/v1/namespaces/jenkins/buildconfigs/testing-pipeline
  uid: 292fa5e5-c84e-11e6-b4f7-68f7286606f4
spec:
  output: {}
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    type: None
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage 'tag'
          openshiftTag(namespace: 'development', sourceStream: 'nodejs-example', sourceTag: 'latest', destinationNamespace: 'testing', destinationStream: 'nodejs-example', destinationTag: 'latest')
          stage 'deploy'
          openshiftDeploy(deploymentConfig: 'nodejs-example', namespace: 'testing')
        }
    type: JenkinsPipeline
...

As in the previous BuildConfig, the type is “JenkinsPipeline”, with a “jenkinsPipelineStrategy” defined through a “jenkinsfile”. The pipeline itself is composed of two stages:

  1. Tag: we tag the latest ImageStream built in the “development” project, setting the destination to the “testing” project. Through this action, we’re promoting the image from the dev to the test environment.
  2. Deploy: after the image promotion, we can then deploy the new image in the “testing” project through the DeploymentConfig named “nodejs-example”.

 

Jenkins Service Account

Now, we need to enable the Jenkins service account (sa) to access and edit resources in the “development” and “testing” projects:

$ oc policy add-role-to-user edit system:serviceaccount:jenkins:jenkins -n testing
$ oc policy add-role-to-user edit system:serviceaccount:jenkins:jenkins -n development

Run the pipelines!

We’re now ready to see the pipelines in action! You can access the Pipelines page through Builds->Pipelines. 

We’re almost ready; just click the “Start Pipeline” button for the “development-pipeline”. You’ll see the build starting and moving forward:

Clicking on the “View Log” link will redirect you to the Jenkins login page. You can gain access with the user “admin” and the generated password. The password is in the environment variables of the Jenkins pod.
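For instance, a quick way to read it from the command line could look like this sketch (the pod name is a placeholder you must replace with your actual Jenkins pod, and it assumes the template stores the password in a JENKINS_PASSWORD variable):

$ oc env pod/<jenkins-pod-name> -n jenkins --list | grep JENKINS_PASSWORD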

At the end of the process, you’ll see all the steps completed and marked in green:

We now have at least one image ready for the promotion process. We can start the testing-pipeline:

Finally, we can check the result by querying OpenShift through the web interface, looking first at the development project and then at the testing project, or via the console:

$ oc project development
Now using project "development" on server "https://192.168.123.1:8443".

$ oc get pods
NAME                     READY     STATUS      RESTARTS   AGE
nodejs-example-1-build   0/1       Completed   0          23m
nodejs-example-1-trurc   1/1       Running     0          22m

$ oc project testing
Now using project "testing" on server "https://192.168.123.1:8443".

$ oc get pods
NAME                     READY     STATUS    RESTARTS   AGE
nodejs-example-1-b1kcf   1/1       Running   0          19m

That’s all! Should you have any doubts, please comment!

About Alessandro

Alessandro Arrichiello is a Platform Consultant for Red Hat Inc. He has a passion for GNU/Linux systems that began at age 14 and continues today. He has worked with tools for automating enterprise IT: configuration management and continuous integration through virtual platforms. He’s now working on distributed cloud environments involving PaaS (OpenShift), IaaS (OpenStack), and process management (CloudForms), covering container building, instance creation, HA services management, and workflow building.





Unlock your PostgreSQL data with Red Hat JBoss Data Virtualization

And here we go for another episode of the series “Unlock your […] data with Red Hat JBoss Data Virtualization.” Through this blog series, we will look at how to connect Red Hat JBoss Data Virtualization (JDV) to different and heterogeneous data sources.

JDV is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. It makes data spread across physically diverse systems — such as multiple databases, XML files, and Hadoop systems — appear as a set of tables in a local database. By providing the following functionality, JDV enables agile data use:

  1. Connect: Access data from multiple, heterogeneous data sources.
  2. Compose: Easily combine and transform data into reusable, business-friendly virtual data models and views.
  3. Consume: Make unified data easily consumable through open, standards-based interfaces.

It hides complexities, like the true locations of data or the mechanisms required to access or merge it. Data becomes easier for developers and users to work with. This post will guide you step-by-step through connecting JDV to a PostgreSQL database using Teiid Designer and the PostgreSQL JDBC driver.
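As a preview of the kind of configuration involved, here is a hypothetical JBoss CLI sketch of registering the PostgreSQL JDBC driver on the JDV server; the jar name and module details are placeholders, not steps from the post itself:

# From jboss-cli.sh, connected to the JDV server
module add --name=org.postgresql --resources=postgresql-9.4.1212.jar --dependencies=javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql, driver-class-name=org.postgresql.Driver)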

Continue reading “Unlock your PostgreSQL data with Red Hat JBoss Data Virtualization”



Installing Red Hat Developer Studio 10.2.0.GA through RPM

With the release of Red Hat JBoss Developer Studio 10.2, it is now possible to install Red Hat JBoss Developer Studio as an RPM, available as a tech preview. The purpose of this article is to describe the steps you should follow to install it.

Red Hat Software Collections

The JBoss Developer Studio RPM relies on Red Hat Software Collections. You don’t need to install Red Hat Software Collections, but you do need to enable the Red Hat Software Collections repositories before you start the installation of Red Hat JBoss Developer Studio.

Enabling the Red Hat Software Collections base repository

The identifier for the repository is rhel-server-rhscl-7-rpms on Red Hat Enterprise Linux Server and rhel-workstation-rhscl-7-rpms on Red Hat Enterprise Linux Workstation.

The command to enable the repository on Red Hat Enterprise Linux Server is:

sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms

The command to enable the repository on Red Hat Enterprise Linux Workstation is:

sudo subscription-manager repos --enable rhel-workstation-rhscl-7-rpms

For more information, please refer to the Red Hat Software Collections documentation.

JBoss Developer Studio repository

As this is a tech preview, you need to manually configure the JBoss Developer Studio repository.

Create a file /etc/yum.repos.d/rh-eclipse46-devstudio.repo with the following content:

[rh-eclipse46-devstudio-stable-10.x]
name=rh-eclipse46-devstudio-stable-10.x
baseurl=https://devstudio.redhat.com/static/10.0/stable/rpms/x86_64/
enabled=1
gpgcheck=1
upgrade_requirements_on_install=1
metadata_expire=24h

Red Hat developer signing key

As this is a tech preview, you need to accept the Red Hat developer signing key that has been used for producing the JBoss Developer Studio RPM.

Execute the following command:

sudo rpm --import "https://www.redhat.com/security/a5787476.txt"

Install Red Hat JBoss Developer Studio

You’re now ready to install Red Hat JBoss Developer Studio through RPM.

Enter the following command:

sudo yum install rh-eclipse46-devstudio

Answer ‘y’ when asked, and after all required dependencies have been downloaded and installed, Red Hat JBoss Developer Studio will be available on your system through the standard update channel!

You should see messages confirming a successful installation.

Launch Red Hat JBoss Developer Studio

From the system menu, mouse over the Programming menu, and the Red Hat Eclipse menu item will appear.

Select this menu item, and the Red Hat JBoss Developer Studio user interface will appear.

Enjoy!

Jeff Maury 



Take advantage of your Red Hat Developers membership and download RHEL today at no cost.

New vscode-java 0.0.8 release

Version 0.0.8 of the Java extension for Visual Studio Code (a.k.a. vscode-java) has been unleashed onto the world. It’s available in the Visual Studio Code Marketplace and can be found and installed directly from within Code.

Highlights of this release can be seen in this screencast.

Gradle Support

vscode-java finally provides basic Gradle support for Java projects. Basically, you just need to open a folder containing a build.gradle file in its root and wait for the Java support to kick in. Code completion, navigation, references, and all the other existing features of the Java support will work, as long as the Gradle project can be successfully imported.
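For instance, a folder whose root contains a minimal build.gradle like this sketch should be picked up (the dependency is illustrative):

// Minimal build.gradle sketch: enough for the Gradle import to kick in
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.12'
}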

However, please be aware that Gradle-based Android projects are not supported, because of a limitation in BuildShip, the upstream project on which this Gradle support is based.

Update project configuration

Whenever a build configuration file is modified and saved, a project re-configuration (e.g., a Java compilation level update) or classpath change (dependencies or source paths) might be required so that the VS Code internal model stays in sync with the build descriptor.

So, a warning will pop up whenever a Maven pom.xml or Gradle (*.gradle) file is saved:

Choosing Never will discard the message, so it won’t show up on subsequent build file changes. Clicking Now will trigger a project update command, but the message will show up again on the next change. Selecting Always will trigger configuration updates every time the build file is saved.

Please be aware that a project update can be a long-running, CPU-intensive operation. For large projects, it might be preferable to keep the option turned off and instead call the Update project configuration command manually (via Ctrl+Alt+U, or Cmd+Alt+U on macOS) when the editor is focused on a Maven pom.xml or a Gradle file.

Whenever you need to change the behavior, you can open the workspace settings and look for the java.configuration.updateBuildConfiguration key. It specifies how modifications on build files update the Java classpath/configuration. Supported values are:

  • disabled: never update automatically on save; don’t show a warning.
  • interactive: ask about updating on every build file save.
  • automatic: trigger an update automatically on build file save.
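For instance, in the workspace settings file this looks like the following sketch:

// .vscode/settings.json (sketch): ask before updating the project configuration
{
    "java.configuration.updateBuildConfiguration": "interactive"
}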

Mute “Incomplete classpath” warnings

Whenever a Java file that does not belong to a project is opened (what we call a standalone Java file), vscode-java is unable to compute a proper classpath. That makes it useless to report compilation errors, as the UI would be filled with distracting red errors all over the file. But vscode-java is still able to provide content assist for base JDK classes and report syntax errors, so the following warning is displayed:

It’s now possible to discard the message permanently, by clicking the Don’t show again option.

Should you change your mind, it’s possible to modify that choice in VS Code’s user settings: the java.errors.incompleteClasspath.severity key specifies the severity of the message when the classpath is incomplete for a Java file. Supported values are ignore, info, warning, and error.
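For example, to silence the message entirely, a user settings sketch:

// VS Code user settings (sketch): never report the incomplete classpath warning
{
    "java.errors.incompleteClasspath.severity": "ignore"
}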

Conclusion

This release represents an important milestone for this small project, as it finally provides basic Gradle support for Java projects, by far the most requested feature from the community, along with some important usability improvements.

A complete changelog for this version is available here.

This Java extension is powered by two components: a front-end part, the VS Code client, and a back-end part, the headless Java Language Server, which is based on Eclipse JDT, the M2E project (for Maven support), and now BuildShip (for Gradle support). Both components are developed under the open source Eclipse Public License. All contributions are welcome, whether code, feedback, or bug reports. Please do so on the project’s GitHub repositories.

 

