This article is the final in a series taking readers on a journey to peek inside life in a Red Hat Open Innovation Labs residency.
This is the top-tier experience for any customer*, exposing them to open collaboration, open technologies, and fast agile application delivery methods.
This experience often escapes organizations attempting digital transformation, so, through immersion in an Open Innovation Labs residency, Red Hat shares its experience in managing, developing, and delivering solutions with communities, open technologies, and open collaboration.
Join me as I share experiences from inside a real-life residency, watching Red Hat work closely with a customer and open up new ways of working: open technologies, fast and agile application delivery methods, and open collaboration.
In the first part, I shared what’s in a Red Hat Open Innovation Labs residency. Then in part two, I looked at what I encountered as the residency progressed towards delivery. All that’s left now is to share the delivery week, known as Demo Day.
Continue reading “Inside a Red Hat Open Innovation Labs Residency (Part 3)”
This series (see Part 1) takes the reader on a journey inside a Red Hat Open Innovation Labs residency. A top-tier experience for any customer*, a residency exposes them to open collaboration, open technologies, and fast agile application delivery methods.
This experience often escapes organizations attempting digital transformation. Through immersion in an Open Innovation Labs residency, Red Hat shares its experience in managing, developing, and delivering solutions. This is about successfully achieving organizational goals using open communities, open technologies, and open collaboration.
Join me as I share experiences from inside a real-life residency. Watch how Red Hat engages closely with a customer, exposing them to new ways of working: leveraging open technologies, fast and agile application delivery methods, and open collaboration.
Continue reading “Inside a Red Hat Open Innovation Labs Residency (Part 2)”
Ansible is a simple, agentless automation tool that has changed the world for the better. It has many use cases and wide adoption: it is used by many upstream projects, such as Kubernetes, and thousands of roles have been submitted to Ansible Galaxy. In this article, we are going to demonstrate Ansible. The intention is not to teach you the basics of Ansible, but to motivate you to learn it.
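For a taste of what that simplicity looks like, here is a minimal playbook sketch; the inventory group and package name are illustrative, not from the article:

```yaml
# Illustrative playbook: ensure the Apache HTTP server is installed,
# started, and enabled on every host in a hypothetical "webservers" group.
- name: Ensure httpd is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install httpd
      yum:
        name: httpd
        state: present

    - name: Start and enable httpd
      service:
        name: httpd
        state: started
        enabled: true
```

Because Ansible is agentless, running this requires nothing on the target hosts beyond SSH and Python.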
Continue reading “New level of automation with Ansible”
One of the common requirements for Java-based applications on OpenShift is to have these workloads connect back to an enterprise database that resides outside of the OpenShift infrastructure. While OpenShift natively supports a variety of relational databases (including Postgres and MySQL) as Docker-based deployments within the platform, connecting to an existing enterprise database infrastructure is preferred in many large organizations for a variety of reasons, including:
- Inherent confidence in traditional databases due to in-house experience around developing and managing these databases
- Ability to leverage existing backup/recovery procedures around these databases
- Technical limitations with these databases in being able to be deployed in a containerized model
One of the strengths of the OpenShift platform is its ability to accommodate these “traditional” workloads. Middleware operations can take advantage of the benefits and efficiencies of Dockerized applications, while development teams get a platform for designing and architecting applications that fit a more microservice-based pattern, leveraging a datastore such as MongoDB or MySQL that OpenShift supports.
Another common workflow in many organizations, from a deployment point of view, is to externalize the database connection information so that the application can be migrated from environment to environment (for example, Dev to QA to Prod) with the appropriate database connection information for each environment. These teams also typically promote the application binary (.war, .ear, or .jar) between environments, as opposed to Docker-based images.
In this article, I will walk through an example implementation for achieving this. A sensitive aspect of this migration process is the database credentials, since storing credentials in clear text is frowned upon; I will cover a variety of strategies for dealing with this in a follow-on article. For this example, I will be using the following project, which contains the source code covered in this article.
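As a sketch of how that externalization can look (the resource and variable names here are illustrative, not taken from the project), a Tomcat `context.xml` can define a JNDI datasource whose connection details come from per-environment values, assuming those values are surfaced to Tomcat as system properties (for example via `JAVA_OPTS="-DDB_URL=$DB_URL ..."`):

```xml
<!-- Illustrative context.xml fragment: the datasource reads its
     connection details from externally supplied properties, so the
     same .war can be promoted unchanged from Dev to QA to Prod. -->
<Context>
  <Resource name="jdbc/ExampleDS"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="org.postgresql.Driver"
            url="${DB_URL}"
            username="${DB_USERNAME}"
            password="${DB_PASSWORD}"/>
</Context>
```

On OpenShift, the per-environment values can then be set on the deployment itself (for example with `oc set env`), so nothing environment-specific is baked into the artifact.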
Let’s get started!
Continue reading “Connecting to a Remote database from a JWS/Tomcat application on OpenShift”
JBoss EAP 7 was recently released, and brings with it a whole host of new features and support, such as support for Java EE 7, reduced port usage, graceful shutdown, improved GUI and CLI management, optimizations for cloud and containers, and much more. EAP 7’s small footprint, fast startup time and support for modern Java and non-Java frameworks make it uniquely suitable for deployment onto PaaS cloud environments, and Red Hat happens to have a leading one: OpenShift.
Continue reading “JBoss EAP 7 on OpenShift”
Over the years, I’ve come across many command line interfaces (CLI) to larger applications, each with varying levels of access and power. Having a CLI at all is a great first step for an application, as it opens up a much wider range of possibilities: administration, extension, and trust.
CLIs also promote scriptability – the ability to create and maintain repeatable scripts – and the easier it is to develop those scripts, the better. Sometimes scripts can solve issues that the developers of the app never thought of. (Pro tip: find good user experience designers who know the product and are comfortable on the command line, then put them in charge of the CLI user experience. Your users will love you.)
Continue reading “Offline CLI with JBoss EAP 7”
Red Hat JBoss Core Services Collection is a group of common services that are critical for application developers. The services included change as new services and projects are added over time, but the idea is to include common, developer-friendly projects under a single subscription. The collection makes it much easier for developers to access these services.
The launch of the Core Services Collection includes services that focus on three areas: web servers, security, and monitoring.
There are six components available in the launch of Core Services Collection:
- JBoss Operations Network, which is based on the former RHQ project (now Hawkular). At a high level, this is a monitoring and management server, but the key is that it is developed in parallel with other JBoss products, so there is tight integration with them. This centralizes all management for JBoss middleware products and also for Java applications running on JBoss EAP.
- An integrated single sign-on server based on the Keycloak project. This SSO server supports SAML 2.0, OAuth, and OpenID Connect, and it can work with LDAP servers and Active Directory for user identity management. Keycloak SSO makes it a lot easier to define user domains, federated identities, and client applications because it has a very simple graphical UI, as well as REST APIs.
- The Apache Commons Jsvc daemon provides a way to manage Java virtual machines on Unix/Linux; in general, this is used as a wrapper for Java applications so that those applications can be managed by native system tools.
- The Apache HTTP Server is the most widely used web server in the world. Web servers are used to route traffic and load-balance requests to JBoss EAP and other middleware servers.
- Web connectors provide a connection with third-party web servers which need to interact with JBoss middleware products and may not have a native connection. For this release, there are two connectors available:
- Microsoft IIS
- Oracle iPlanet
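To illustrate the web-server role described above, here is a minimal Apache HTTP Server snippet that load-balances requests across two JBoss EAP backends using mod_proxy; the hostnames, ports, and context path are assumptions for the sketch:

```apache
# Illustrative httpd fragment: balance requests for /app across
# two hypothetical EAP instances (requires mod_proxy and
# mod_proxy_balancer to be loaded).
<Proxy "balancer://eap-cluster">
    BalancerMember "http://eap1.example.com:8080"
    BalancerMember "http://eap2.example.com:8080"
</Proxy>
ProxyPass        "/app" "balancer://eap-cluster/app"
ProxyPassReverse "/app" "balancer://eap-cluster/app"
```

The web connectors listed above serve the same purpose for IIS and iPlanet deployments that lack this kind of native integration.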
Continue reading “An Announcement for JBoss Core Services Collection”
I’m guessing that if you’ve done enough repeated builds on OpenShift using Maven, you’re aware of the “download the internet” phenomenon that plagues build times. You start a build, expecting all the Maven dependencies you downloaded for your last build to be reused, but quickly see your network traffic ramp up while the same 100MB of jars is downloaded again and again. Even builds of a few minutes grind on me and frustrate me as a developer when I’m trying to test, deploy, and fix quickly.
Thankfully, Maven has a nice feature that allows you to set up local mirrors that cache dependencies and make them available to future builds, only updating from the upstream repo as needed on a regular (and configurable) schedule.
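A minimal sketch of that mirror configuration in Maven's `settings.xml`; the repository-manager URL is illustrative (for example, a Nexus service reachable from inside the cluster), not a real endpoint:

```xml
<!-- Illustrative ~/.m2/settings.xml fragment: route all repository
     requests through a caching mirror so later builds reuse the
     already-downloaded dependencies. -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>Caching mirror of external Maven repositories</name>
      <url>http://nexus.mynamespace.svc:8081/repository/maven-public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

With `<mirrorOf>*</mirrorOf>`, every repository request goes through the mirror, which fetches from upstream only when its cache is missing or stale.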
Continue reading “Maven mirrors on OpenShift with and without Source to Image (S2I)”