
HTTP-based Kafka messaging with Red Hat AMQ Streams

August 4, 2020

Apache Kafka is a rock-solid, super-fast, event streaming backbone that is not only for microservices. It’s an enabler for many use cases, including activity tracking, log aggregation, stream processing, change-data capture, Internet of Things (IoT) telemetry, and more. Red Hat AMQ Streams makes it easy to run and manage Kafka natively on Red Hat OpenShift. […]
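For a feel of what HTTP-based messaging looks like in practice, here is a minimal sketch that produces a JSON record to a topic through the AMQ Streams (Strimzi) Kafka Bridge. The bridge URL and topic name are hypothetical placeholders; adjust them to your own deployment.

```python
# Minimal sketch: produce a JSON message to Kafka over HTTP via the
# AMQ Streams (Strimzi) Kafka Bridge. The bridge URL and topic name
# below are placeholders for illustration only.
import requests

BRIDGE_URL = "http://my-bridge-service:8080"   # hypothetical bridge endpoint
TOPIC = "iot-telemetry"                        # hypothetical topic

payload = {
    "records": [
        {"key": "sensor-42", "value": {"temperature": 21.5, "unit": "C"}}
    ]
}

resp = requests.post(
    f"{BRIDGE_URL}/topics/{TOPIC}",
    json=payload,
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
)
resp.raise_for_status()
print(resp.json())  # offsets assigned by the bridge
```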

Authorizing multi-language microservices with Louketo Proxy

August 3, 2020

What if you needed to provide authentication to several microservices that were written in different languages? You could use Red Hat Single Sign-On (SSO) to handle the authentication, but then you would still need to integrate each microservice with Keycloak. Wouldn’t it be great if a service could just handle the authentication flow and pass […]
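As a rough illustration of the pattern, the sketch below shows a small Flask service sitting behind an auth proxy such as Louketo Proxy, which completes the OpenID Connect flow against Red Hat SSO and forwards the request upstream. The X-Auth-* header names are assumptions for illustration only; the headers your proxy actually injects depend on its configuration.

```python
# Minimal sketch of a microservice running behind an auth proxy
# (e.g., Louketo Proxy). The proxy handles the OpenID Connect flow
# and forwards the authenticated request to this upstream service.
# NOTE: the X-Auth-* header names used here are assumptions for
# illustration; check your proxy configuration for the exact headers.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/profile")
def profile():
    # Identity information injected by the auth proxy (assumed header names).
    username = request.headers.get("X-Auth-Username", "anonymous")
    email = request.headers.get("X-Auth-Email", "")
    return jsonify({"username": username, "email": email})

if __name__ == "__main__":
    # Listen only on localhost; the proxy is the public entry point.
    app.run(host="127.0.0.1", port=8081)
```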

Choosing the right asynchronous-messaging infrastructure for the job

July 31, 2020

The term asynchronous means “not occurring at the same time.” In the context of distributed systems and messaging, this term implies that request processing will occur at an arbitrary point in time. Asynchronous interactions hold many advantages over synchronous ones, but they also introduce new challenges. In this article, we will focus on specific considerations […]
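To make the definition concrete, the following sketch shows the essence of asynchronous processing: the producer enqueues a message and returns immediately, while a worker handles it at some later point. The in-memory queue is only a stand-in for a real broker such as Kafka or AMQ.

```python
# Minimal sketch of asynchronous processing: the producer enqueues a
# request and returns immediately; a worker processes it at some later,
# arbitrary point in time. The in-memory queue stands in for a broker.
import queue
import threading
import time

work_queue: "queue.Queue[dict]" = queue.Queue()

def worker() -> None:
    while True:
        message = work_queue.get()
        time.sleep(0.5)               # simulate slow downstream processing
        print(f"processed: {message}")
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# The "request" returns as soon as the message is enqueued.
work_queue.put({"order_id": 123, "action": "ship"})
print("request accepted, processing will happen later")

work_queue.join()                     # wait for the demo to finish
```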

Developer joy for distributed teams with CodeReady Workspaces | DevNation Tech Talk

July 30, 2020

Enabling teams on projects has often been challenging due to hardware configurations, software dependencies, and a lack of documentation. In this session, we'll show you how admins can easily provide CodeReady Workspaces, a multi-tenant, in-browser IDE system that runs on top of OpenShift. CodeReady Workspaces gets developers comfortably started with coding and testing their changes in Kubernetes-containerized environments (workspaces), and with deploying their apps to the platform.

From notebooks to pipelines: Using Open Data Hub and Kubeflow on OpenShift

July 29, 2020

Data scientists often use notebooks to explore data and create and experiment with models. At the end of this exploratory phase is the product-delivery phase, which is basically getting the final model to production. Serving a model in production is not a one-step final process, however. It is a continuous phase of training, development, and […]
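As a taste of what the pipeline side can look like, here is a minimal sketch that wraps two notebook-style functions as components and compiles them into a pipeline, assuming the Kubeflow Pipelines SDK v1 (kfp). The component and pipeline names are placeholders.

```python
# Minimal sketch of turning notebook steps into a Kubeflow pipeline,
# assuming the Kubeflow Pipelines SDK v1 (pip install kfp). Names are
# placeholders; upload the compiled file through the Pipelines UI or client.
from kfp import dsl, compiler
from kfp.components import create_component_from_func

def preprocess(message: str) -> str:
    """Stand-in for the data-preparation cell of a notebook."""
    return message.upper()

def train(data: str) -> None:
    """Stand-in for the model-training cell of a notebook."""
    print(f"training on: {data}")

preprocess_op = create_component_from_func(preprocess)
train_op = create_component_from_func(train)

@dsl.pipeline(name="notebook-to-pipeline", description="Toy two-step pipeline")
def notebook_pipeline(message: str = "hello"):
    step1 = preprocess_op(message)
    train_op(step1.output)

if __name__ == "__main__":
    compiler.Compiler().compile(notebook_pipeline, "pipeline.yaml")
```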
