Hugo Guerrero

Hugo Guerrero (@hguerreroo) is an information technology professional with 15+ years of experience in software development. He has worked as a developer, consultant, architect, and software development manager with major clients in the private and federal public sectors. He is a Red Hatter and currently an open source integration technology evangelist.

Areas of Expertise

Java, APIs, Messaging, Middleware

Recent Posts

Replacing Confluent Schema Registry with Red Hat integration service registry

With the latest release of Red Hat Integration now available, we’ve introduced some exciting new capabilities. Alongside the enhancements for Apache Kafka-based environments, Red Hat announced the Technical Preview of the Red Hat Integration service registry to help teams govern their service schemas. Developers can now use the registry to query for the schemas and artifacts required by each service endpoint, or to register and store new structures for future use.
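
As a rough sketch of that query workflow, the following Java snippet fetches the latest version of a registered schema over plain HTTP. The registry URL, base path, and artifact ID are illustrative assumptions, not documented endpoints, so check the service registry REST API reference for your release.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchSchema {
    public static void main(String[] args) throws Exception {
        // Hypothetical registry URL and artifact ID; adjust to your environment.
        String registryUrl = "http://my-registry.example.com/api";
        String artifactId = "orders-value";

        HttpClient client = HttpClient.newHttpClient();
        // GET the latest version of the artifact (an Avro schema, for example).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(registryUrl + "/artifacts/" + artifactId))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println("Schema: " + response.body());
    }
}
```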

Continue reading “Replacing Confluent Schema Registry with Red Hat integration service registry”

Getting started with Red Hat Integration service registry

New projects require some help. Imagine you are getting ready to start work on that new feature your business has been asking about for the last couple of months. Your team is ready to start coding the awesome new capability that will change your business.

To deliver it, the team will need to interact with your organization's existing software components. Your developers will need to work with the API services and event endpoints already available in your architecture. Before they can send and process information, developers need to know the structure, or schema, expected by those services.

Red Hat announced the Technical Preview of the Red Hat Integration service registry to help teams govern their service schemas. The service registry is a store for schema (and API design) artifacts that provides a REST API and a set of optional rules for enforcing content validity and evolution. Teams can now use the service registry to query for the schemas required by each service endpoint or to register and store new structures for future use.
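
To make the idea concrete, here is a minimal, hypothetical sketch of registering an Avro schema and attaching a compatibility rule through the REST API. The base URL, header names, and rule payload are assumptions for illustration; the product documentation has the authoritative endpoint details.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterSchema {
    public static void main(String[] args) throws Exception {
        String registryUrl = "http://my-registry.example.com/api"; // assumed base URL
        String artifactId = "orders-value";                        // assumed artifact ID

        // A tiny Avro schema describing an order event.
        String avroSchema = "{\"type\":\"record\",\"name\":\"Order\",\"fields\":"
                + "[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"total\",\"type\":\"double\"}]}";

        HttpClient client = HttpClient.newHttpClient();

        // Register the schema as a new artifact (header names are assumptions).
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create(registryUrl + "/artifacts"))
                .header("Content-Type", "application/json")
                .header("X-Registry-ArtifactId", artifactId)
                .header("X-Registry-ArtifactType", "AVRO")
                .POST(HttpRequest.BodyPublishers.ofString(avroSchema))
                .build();
        System.out.println(client.send(create, HttpResponse.BodyHandlers.ofString()).body());

        // Attach an optional compatibility rule so future versions must evolve safely.
        HttpRequest rule = HttpRequest.newBuilder()
                .uri(URI.create(registryUrl + "/artifacts/" + artifactId + "/rules"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"type\":\"COMPATIBILITY\",\"config\":\"BACKWARD\"}"))
                .build();
        System.out.println(client.send(rule, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```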

Continue reading “Getting started with Red Hat Integration service registry”

Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge

Red Hat continues to expand the features available to users looking to implement a 100% open source, event-driven architecture (EDA) by running Apache Kafka on Red Hat OpenShift and Red Hat Enterprise Linux. The Red Hat Integration Q4 release provides new features and capabilities, including several aimed at simplifying usage and deployment of the AMQ streams distribution of Apache Kafka.
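
As an illustration of how the new HTTP bridge lowers the barrier for clients that cannot speak the Kafka protocol, the sketch below posts a JSON record to a topic over HTTP. The bridge hostname, port, topic, and content type are assumptions modeled on the upstream Strimzi Kafka Bridge, so verify them against the AMQ streams documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BridgeProducer {
    public static void main(String[] args) throws Exception {
        // Assumed bridge URL and topic name; adjust to your deployment.
        String bridgeUrl = "http://my-bridge.example.com:8080";
        String topic = "orders";

        // One JSON-encoded record; the bridge forwards it to the Kafka topic.
        String payload = "{\"records\":[{\"key\":\"order-1\",\"value\":{\"total\":42.0}}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(bridgeUrl + "/topics/" + topic))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```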

Continue reading “Red Hat simplifies transition to open source Kafka with new service registry and HTTP bridge”

Red Hat advances Debezium CDC connectors for Apache Kafka support to Technical Preview

After a couple of months in Developer Preview, the Debezium Apache Kafka connectors for change data capture (CDC) are now available as a Technical Preview as part of the Q4 release of Red Hat Integration. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process.
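
As a hedged sketch of how such a connector is typically registered, the snippet below posts a Debezium MySQL connector configuration to the Kafka Connect REST API. The hostnames, credentials, and database names are placeholders, and the exact set of supported properties is defined by the Debezium documentation for your version.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterDebeziumConnector {
    public static void main(String[] args) throws Exception {
        // Assumed Kafka Connect REST endpoint.
        String connectUrl = "http://my-connect-cluster.example.com:8083/connectors";

        // Connector definition: capture changes from the (placeholder) inventory database.
        String connector = "{"
                + "\"name\":\"inventory-connector\","
                + "\"config\":{"
                + "\"connector.class\":\"io.debezium.connector.mysql.MySqlConnector\","
                + "\"tasks.max\":\"1\","
                + "\"database.hostname\":\"mysql\","
                + "\"database.port\":\"3306\","
                + "\"database.user\":\"debezium\","
                + "\"database.password\":\"dbz\","
                + "\"database.server.id\":\"184054\","
                + "\"database.server.name\":\"dbserver1\","
                + "\"database.whitelist\":\"inventory\","
                + "\"database.history.kafka.bootstrap.servers\":\"my-cluster-kafka-bootstrap:9092\","
                + "\"database.history.kafka.topic\":\"schema-changes.inventory\""
                + "}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(connectUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connector))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```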

Continue reading “Red Hat advances Debezium CDC connectors for Apache Kafka support to Technical Preview”

Developer preview of Debezium Apache Kafka connectors for Change Data Capture (CDC)

With the release of Red Hat AMQ Streams 1.2, Red Hat Integration now includes a developer preview of Change Data Capture (CDC) capabilities to enable data integration for modern cloud-native microservices-based applications. CDC features are based on the upstream project Debezium and are natively integrated with Apache Kafka and Strimzi to run on top of Red Hat OpenShift Container Platform, the enterprise Kubernetes, as part of the AMQ Streams release.
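
Because the change events land in ordinary Kafka topics, any standard consumer can read them. The following minimal sketch subscribes to one per-table topic and prints each change event; the bootstrap address and topic name are placeholders assumed for this example.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ChangeEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap address is a placeholder for this sketch.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cdc-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Debezium publishes one topic per captured table, e.g. dbserver1.inventory.customers.
            consumer.subscribe(Collections.singletonList("dbserver1.inventory.customers"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a JSON change event describing the before/after row state.
                    System.out.println(record.value());
                }
            }
        }
    }
}
```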

Continue reading “Developer preview of Debezium Apache Kafka connectors for Change Data Capture (CDC)”

Announcing Red Hat AMQ streams 1.2 with Apache Kafka 2.2 support

We are thrilled to announce an updated release of the data streaming component of our messaging suite, Red Hat AMQ streams 1.2, which is part of Red Hat Integration.

Red Hat AMQ streams, based on the Apache Kafka project, offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput and extremely low latency. AMQ streams makes running and managing Apache Kafka a Kubernetes-native experience by also delivering Red Hat OpenShift Operators, which provide a simplified and automated way to deploy, manage, upgrade, and configure a Kafka ecosystem installation on Kubernetes.

Continue reading “Announcing Red Hat AMQ streams 1.2 with Apache Kafka 2.2 support”

Data as a microservice: Distributed data-focused integration

Microservices is the architectural design favored in new software projects; however, getting the most from this type of approach requires meeting several prerequisites first. As the evolution from a monolithic to a distributed system takes place not only in the application space but also in the data store, managing your data becomes one of the hardest challenges. This article examines some of the considerations for implementing data as a service.

Continue reading “Data as a microservice: Distributed data-focused integration”

Distributed microservices architecture: Istio, managed API gateways, and enterprise integration

The rise of microservices architectures drastically changed the software development landscape. In the past few years, we have seen a shift from centralized monoliths to distributed computing that benefits from cloud infrastructure. With distributed deployments, the adoption of microservices, and systems scaling to cloud levels, new problems emerged, along with new components that try to solve them.

By now, you have most likely heard that a service mesh, such as Istio, is here to save the day. However, you might be wondering how it fits with your current enterprise integration investments and API management initiatives. That is what I discuss in this article.

Continue reading “Distributed microservices architecture: Istio, managed API gateways, and enterprise integration”

Announcing Kubernetes-native self-service messaging with Red Hat AMQ Online

Microservices architecture is taking over software development discussions everywhere. More and more companies are adopting microservices as the core of their new systems. However, once you go beyond the googled “microservices 101” tutorial, the communication required between services becomes more and more complex. Scalable, distributed systems, container-native microservices, and serverless functions all benefit from decoupled communication when accessing other, dependent services. Asynchronous (non-blocking) direct or brokered interaction is usually referred to as messaging.
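
As a small, hypothetical example of that kind of decoupled, brokered interaction, the sketch below sends a message to a queue-style address over AMQP using the Apache Qpid JMS client. The endpoint, credentials, and address name are placeholders; AMQ Online itself provisions the actual messaging infrastructure behind them.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.qpid.jms.JmsConnectionFactory;

public class SendOrder {
    public static void main(String[] args) throws Exception {
        // Messaging endpoint, credentials, and address are placeholders for this sketch.
        ConnectionFactory factory =
                new JmsConnectionFactory("amqps://messaging.example.com:443");
        Connection connection = factory.createConnection("user", "password");
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");        // a queue-type address
            MessageProducer producer = session.createProducer(queue);
            // Fire-and-forget: the broker decouples sender and receiver.
            producer.send(session.createTextMessage("order-1 created"));
        } finally {
            connection.close();
        }
    }
}
```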

Continue reading “Announcing Kubernetes-native self-service messaging with Red Hat AMQ Online”

How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams

On October 25th, Red Hat announced the general availability of its AMQ Streams Kubernetes Operator for Apache Kafka. Red Hat AMQ Streams focuses on running Apache Kafka on OpenShift, providing a massively scalable, distributed, and high-performance data streaming platform. AMQ Streams, based on the Apache Kafka and Strimzi projects, offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput. This backbone enables the following (see the sketch after this list):

  • Publish and subscribe: Many-to-many dissemination in a fault-tolerant, durable manner.
  • Replayable events: Serves as a repository for microservices to build in-memory copies of source data, up to any point in time.
  • Long-term data retention: Efficiently stores data for immediate access, limited only by disk space.
  • Partitioned messages for horizontal scalability: Allows messages to be organized for maximum concurrent access.
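
To ground the publish-and-subscribe idea, here is a minimal Java producer sketch: it publishes a keyed record that any number of consumer groups can later read or replay. The bootstrap address and topic name are placeholders for whatever your AMQ Streams deployment exposes.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TelemetryPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap address and topic are placeholders for this sketch.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key determines the partition, so records with the same key stay ordered,
            // while any number of consumer groups can subscribe and replay them later.
            producer.send(new ProducerRecord<>("telemetry", "sensor-42", "temperature=21.5"));
            producer.flush();
        }
    }
}
```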

One of the most frequent requests from developers and architects is for a simple deployment option to get started with for testing purposes. In this guide, we will use the Red Hat Container Development Kit, based on minishift, to start an Apache Kafka cluster on Kubernetes.

Continue reading “How to run Kafka on OpenShift, the enterprise Kubernetes, with AMQ Streams”
