Red Hat JBoss Data Virtualization (JDV) is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. JDV makes data spread across physically diverse systems such as multiple databases, XML files, and Hadoop systems appear as a set of tables in a local database.
When deployed on OpenShift, JDV enables:
- Service enabling your data
- Bringing data from outside to inside the PaaS
- Breaking up monolithic data sources virtually for a microservices architecture
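To make the idea concrete: JDV (built on Teiid) describes each underlying source in a virtual database (VDB) definition, and the combined models are then queried as ordinary tables. The sketch below is a minimal, hypothetical dynamic VDB — the VDB, model, translator, and JNDI names are invented for illustration, not taken from any shipped template:

```xml
<vdb name="unified" version="1">
    <!-- Each model maps one physical source into the virtual database -->
    <model name="orders">
        <source name="oracle-src" translator-name="oracle"
                connection-jndi-name="java:/ordersDS"/>
    </model>
    <model name="customers">
        <source name="file-src" translator-name="file"
                connection-jndi-name="java:/customersFileDS"/>
    </model>
</vdb>
```

A client can then join `orders` and `customers` in a single SQL statement, even though they live in different systems.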
Alongside the JDV for OpenShift image, we have made OpenShift templates available that let you test and bootstrap JDV.
This article will demonstrate how to get started with JDV running on OpenShift. JDV is available as a containerized xPaaS image that is designed for use with OpenShift Enterprise 3.2 and later. We’ll be using the Red Hat Container Development Kit (CDK) to get started quickly.
The CDK provides a pre-built container development environment based on Red Hat Enterprise Linux to help you develop container-based (sometimes called Docker) applications quickly. The containers you build can be easily deployed on any Red Hat container host or platform, including: Red Hat Enterprise Linux, Red Hat Enterprise Linux Atomic Host, and our platform-as-a-service solution, OpenShift Enterprise 3.
Continue reading “Red Hat JBoss Data Virtualization on OpenShift: Part 1 – Getting started”
We are happy to announce the availability of Red Hat JBoss Data Virtualization (JDV) 6.3 image running on OpenShift.
JDV is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. JDV makes data spread across physically diverse systems such as multiple databases, XML files, and Hadoop systems appear as a set of tables in a local database.
When deployed on OpenShift, JDV enables:
Continue reading “Announcement: Red Hat JBoss Data Virtualization on OpenShift now available”
Welcome to the first episode of the series: “Unlock your [….] data with Red Hat JBoss Data Virtualization (JDV).”
This post will guide you through an example of connecting to a Hadoop source via the Hive2 driver, using Teiid Designer. In this example we will demonstrate a connection to a local Hadoop source. We’re using the Hortonworks 2.5 Sandbox running in VirtualBox for our source, but you can connect to another Hortonworks source using the same steps.
Hortonworks provides Hive JDBC and ODBC drivers that let you connect popular tools to query, analyze and visualize data stored within the Hortonworks Data Platform (HDP).
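When configuring the connection, the key piece is the HiveServer2 JDBC URL. As a minimal sketch of its format — the host is an assumption (shown here as the local sandbox), and 10000 is the default HiveServer2 port:

```python
def hive2_jdbc_url(host, port=10000, database="default"):
    """Build a HiveServer2 JDBC URL; 10000 is the default HiveServer2 port.

    The commonly used driver class for this URL is
    org.apache.hive.jdbc.HiveDriver (supplied by the Hive JDBC jar).
    """
    return "jdbc:hive2://%s:%d/%s" % (host, port, database)

# e.g. the Hortonworks sandbox forwarded to localhost (an assumption):
url = hive2_jdbc_url("127.0.0.1")
```

The same URL works from any JDBC-capable tool, which is what lets Teiid Designer treat Hive tables like any other relational source.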
Note: we support HBase as well; stay tuned for the episode “Unlock your HBase data with Hortonworks and JDV.”
Continue reading “Unlock your Hadoop data with Hortonworks and Red Hat JBoss Data Virtualization”
There’s a whole host of GUI tools for connecting to and browsing MongoDB databases. However, despite its steeper learning curve, I’ve always found myself more productive using the command-line interface (CLI).
Continue reading “A Mongo Shell Cheat Sheet”
At DevNation, Red Hat’s Galder Zamarreño gave a talk with a live demo, “Building Reactive Applications with Node.js and Red Hat JBoss Data Grid.” The demo consisted of building an event-based three-tier web application using JBoss Data Grid (JDG) as the data layer, an event manager running on Node.js, and a web client. Recently, support for Node.js clients was added to JDG, opening up the performance of a horizontally scalable in-memory data grid to reactive web and mobile applications.
Continue reading “DevNation Live Blog: Building Reactive Applications with Node.js and Red Hat JBoss Data Grid”
A few days ago I had a rant about the misuse and misunderstanding of REST (typically HTTP) for microservices.
To summarize, a few people/groups have been suggesting that you cannot do asynchronous interactions with HTTP, and that as a result of using HTTP you cannot break down a monolithic application into more agile microservices. The fact that most people say REST when they really mean HTTP is also a source of personal frustration; by this stage, experienced people in our industry really should know the difference. If you’re unsure of the difference, check out the restcookbook or even Roy Fielding’s PhD dissertation (it’s quite a good read!).
However, I digress, so back to the rant: my goal is to point people in the right direction and make some recommendations, hence this follow-up post.
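One concrete counter-example to the “HTTP can’t be asynchronous” claim is the long-standing 202 Accepted pattern: the server acknowledges the request immediately and hands back a status URL that the client polls. Below is a minimal, self-contained sketch of that pattern; the `/jobs` endpoint and the job “finishing” after a single poll are invented purely for illustration:

```python
import http.server
import threading
import urllib.request

class AsyncJobHandler(http.server.BaseHTTPRequestHandler):
    done = False

    def do_POST(self):
        # Accept the work without blocking: 202 plus a status URL to poll
        self.send_response(202)
        self.send_header("Location", "/jobs/1")
        self.end_headers()

    def do_GET(self):
        if AsyncJobHandler.done:
            self.send_response(200)       # job finished: return the result
            self.end_headers()
            self.wfile.write(b"result")
        else:
            AsyncJobHandler.done = True   # pretend the job completes after one poll
            self.send_response(202)       # still in progress
            self.end_headers()

    def log_message(self, *args):         # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), AsyncJobHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

# Client side: submit the job, then poll the Location URL until it is ready
resp = urllib.request.urlopen(urllib.request.Request(base + "/jobs", method="POST"))
status_url = base + resp.headers["Location"]
while True:
    poll = urllib.request.urlopen(status_url)
    if poll.status == 200:
        result = poll.read()
        break
server.shutdown()
```

The interaction is plain HTTP end to end, yet the caller is never blocked on the work itself — which is exactly the decoupling people claim HTTP cannot provide.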
Continue reading “REST and microservices – breaking down the monolith step by asynchronous step”
Containerizing things is particularly popular these days. Today we’ll talk about the idioms we can use for containerization, and specifically play with Apache Spark and Cassandra in a use case for creating easily deployed, immutable microservices.
Note: This post uses CentOS 7 as the base for the containers, but the same recipes apply with RHEL and Fedora base images.
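To sketch the immutability idiom: everything the service needs is baked into the image at build time, so a running container is never patched in place — it is replaced. The Dockerfile below is illustrative only; the Spark version, paths, and package names are assumptions, not the exact recipe from the demo:

```dockerfile
FROM centos:7
# Bake the Java runtime Spark needs into the image at build time
RUN yum install -y java-1.8.0-openjdk-headless && yum clean all
# Add a pre-downloaded Spark distribution (version and path are illustrative)
COPY spark-1.6.0-bin-hadoop2.6 /opt/spark
ENV SPARK_HOME=/opt/spark
# The image always starts the same process with the same configuration
CMD ["/opt/spark/bin/spark-class", "org.apache.spark.deploy.master.Master"]
```

Because nothing is installed or configured at run time, every container started from this image is identical — the property that makes rolling replacements and horizontal scaling straightforward.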
Continue reading “Microservice principles and Immutability – demonstrated with Apache Spark and Cassandra”
Agility is key to benefiting from Big Data for operational excellence and improved profitability. Ovum Research finds that organizations that take an iterative approach to refining analytic models, consolidating data sources, and transitioning to the cloud tend to find more success with Big Data.
Attend this webinar to learn how to:
- Consolidate your data sources
- Build open, flexible Big Data ecosystems
- Find success with Big Data
Continue reading “Webinar: How to Stay Agile with Big Data: A Roadmap – 10 September”
Abstract: Historically, the term “Hadoop” has been considered synonymous with its core technologies: MapReduce and the Hadoop Distributed File System (HDFS). But today the definition of Hadoop is rapidly evolving.
The Hadoop community is generalizing the application runtime model beyond MapReduce. On the storage front, we’re seeing the emergence of many alternative Hadoop-compatible file systems. Red Hat has built an interface layer for its Red Hat Storage Server product. This complete implementation of the Hadoop file system interface lets Hadoop-related projects run transparently, directly on a Red Hat Storage Server cluster.
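Mechanically, an alternative file system is wired in through Hadoop’s configuration: a `fs.<scheme>.impl` property maps a URI scheme to a class implementing the Hadoop `FileSystem` interface. The fragment below is a hedged sketch — the property and class names follow the glusterfs-hadoop plugin used with Red Hat Storage, but should be treated as illustrative rather than an exact shipped configuration:

```xml
<configuration>
  <!-- Map the glusterfs:// scheme to the alternative FileSystem implementation -->
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <!-- Make it the default file system so MapReduce jobs use it transparently -->
  <property>
    <name>fs.defaultFS</name>
    <value>glusterfs:///</value>
  </property>
</configuration>
```

Because jobs talk only to the `FileSystem` interface, this swap is what lets Hadoop-related projects run unmodified on a Red Hat Storage Server cluster.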
Continue reading “DevNation 2014: Scott McClellan – Hadoop and Beyond”