Red Hat build of OpenTelemetry and OpenShift distributed tracing 3.3: New features for developers

September 5, 2024
Jose Gomez-Selles
Related topics: Developer Productivity, Hybrid Cloud, Kubernetes, Operators
Related products: Red Hat OpenShift, Red Hat OpenShift Container Platform


    This article covers new features in the latest release of both the Red Hat build of OpenTelemetry and Red Hat OpenShift distributed tracing.

    Before diving into the individual features below, let’s start with an improvement that both releases share: support for certificate rotation. Tempo and OpenTelemetry pods automatically reload renewed certificates, so certificate renewal is transparent to users.

    What’s new in OpenTelemetry?

    The Red Hat build of OpenTelemetry 3.3 is based on the open source OpenTelemetry operator release 0.107.0.

    As usual, there’s a lot going on in the OpenTelemetry space, and this release continues to expand the capabilities of this observability component. Let’s start with something we believe will help users with their application observability and integration efforts.

    OpenTelemetry collector dashboards

    Observability is designed to help debug and troubleshoot complicated issues, yet configuring it and verifying that data is flowing correctly through the pipelines can quickly become tedious. That’s why this release adds the OpenTelemetry collector dashboard to the OpenShift console.

    With this release, once the Red Hat build of OpenTelemetry is installed, users can navigate directly into the Observe section and find the OpenTelemetry Collector dashboard, as shown in Figure 1.

    Figure 1: OpenTelemetry collector dashboard.

    This dashboard is embedded directly in the Red Hat OpenShift console, which means users don’t need any third-party tools beyond the Red Hat build of OpenTelemetry and User Workload Monitoring to obtain observability insights such as:

    • Amount of data processed.
    • Ratio between ingested and rejected or failed data.
    • Per-signal information.
    • Resources consumed by the OpenTelemetry collector.

    Figure 2 depicts these metrics.

    Figure 2: Traces data pipelines ratios.
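    For reference, these dashboards are fed by the collector’s own metrics, scraped through User Workload Monitoring. The sketch below shows the two pieces involved, assuming a v1beta1 OpenTelemetryCollector resource; the collector name, namespace, and pipeline are placeholders, and the exact fields should be checked against your operator and OpenShift versions.

        # 1. Enable User Workload Monitoring (standard OpenShift monitoring ConfigMap).
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            enableUserWorkload: true
        ---
        # 2. Expose the collector's own metrics so the dashboard has data to show.
        apiVersion: opentelemetry.io/v1beta1
        kind: OpenTelemetryCollector
        metadata:
          name: otel             # placeholder name
          namespace: my-project  # placeholder namespace
        spec:
          observability:
            metrics:
              enableMetrics: true  # creates the monitors that feed the console dashboard
          config:
            receivers:
              otlp:
                protocols:
                  grpc: {}
            exporters:
              debug: {}
            service:
              pipelines:
                traces:
                  receivers: [otlp]
                  exporters: [debug]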

    Users can also filter the incoming data by many parameters, from the namespace or OpenTelemetry collector instance down to a specific exporter or receiver, and view the results at a granular or high level, depending on their needs. See Figure 3.

    Figure 3: Filtering OpenTelemetry collector dashboards.

    Don’t hesitate to try these dashboards and let us know your feedback! We believe this is just the beginning of a journey to give users in-platform information that helps with the vast integration landscape of observability platforms and components.

    New OpenTelemetry components

    This time, the Red Hat build of OpenTelemetry adds the following components to the collector in Technology Preview. Based on user feedback, these features will help with transforming, connecting, and forwarding observability data (upstream GitHub links are provided for context, but the relevant configuration details are in the official documentation, e.g., processors). A configuration sketch follows the list:

    • Metrics transform processor, to rename metrics and add, transform, or delete labels.
    • Group by attributes processor, to reassociate telemetry data with a different resource.
    • Routing connector, to establish routes between OpenTelemetry collector pipelines, based on resource attributes. 
    • Prometheus Remote Write exporter, in Developer Preview and very soon in Technology Preview. It is one of the most widely used exporters, right after OTLP (check the OpenTelemetry collector survey for more info!), and allows users to send OpenTelemetry metrics to Prometheus remote write-compatible backends such as Cortex, Mimir, and Thanos.
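    Here is a minimal sketch of how these components can be wired together in an OpenTelemetryCollector resource. The metric names, attribute keys, tenant value, and remote write endpoint are placeholders for illustration only; refer to the official documentation for the supported options of each component.

        apiVersion: opentelemetry.io/v1beta1
        kind: OpenTelemetryCollector
        metadata:
          name: otel-transform   # placeholder name
        spec:
          config:
            receivers:
              otlp:
                protocols:
                  grpc: {}
            processors:
              # Rename a metric and add a label (names here are illustrative).
              metricstransform:
                transforms:
                  - include: http.server.duration
                    action: update
                    new_name: http.server.request.duration
                    operations:
                      - action: add_label
                        new_label: deployment.environment
                        new_value: staging
              # Promote chosen attributes to the resource to regroup telemetry.
              groupbyattrs:
                keys:
                  - service.namespace
            connectors:
              # Route traces to different pipelines based on a resource attribute.
              routing:
                default_pipelines: [traces/default]
                table:
                  - statement: 'route() where attributes["tenant"] == "team-a"'
                    pipelines: [traces/team-a]
            exporters:
              # Developer Preview: send metrics to a Prometheus remote write backend.
              prometheusremotewrite:
                endpoint: https://mimir.example.com/api/v1/push   # placeholder endpoint
              debug: {}
            service:
              pipelines:
                metrics:
                  receivers: [otlp]
                  processors: [metricstransform, groupbyattrs]
                  exporters: [prometheusremotewrite]
                traces:
                  receivers: [otlp]
                  exporters: [routing]
                traces/default:
                  receivers: [routing]
                  exporters: [debug]
                traces/team-a:
                  receivers: [routing]
                  exporters: [debug]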

    OTLP Logs are now native to OpenShift

    It’s been a while since this analysis of OpenTelemetry protocol (OTLP) adoption across OpenShift components was published. It outlined that we still needed to enable sending logs to the OpenShift-supported log store (Loki), which is why we added a developer preview of the Loki exporter in our previous release.

    Thanks to the awesome work of the community behind Loki and the OpenShift logging team, we can announce that everything needed to connect the OpenTelemetry collector to Loki storage is in place and will be generally available as part of the Logging 6 major release.

    Now that this milestone has been achieved, we will remove the Loki exporter in our next release.
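    As a rough illustration of what native OTLP log forwarding can look like from the collector side, the fragment below uses the standard otlphttp exporter. The endpoint is an assumption only; the actual LokiStack gateway route, tenant path, and TLS/authentication settings depend entirely on your Logging 6 deployment, so follow the Logging documentation for the supported setup.

        # Fragment of an OpenTelemetryCollector config section (assumed values).
        receivers:
          otlp:
            protocols:
              grpc: {}
        exporters:
          otlphttp:
            # Placeholder endpoint; replace with your LokiStack gateway OTLP route.
            endpoint: https://lokistack-gateway.openshift-logging.svc:8080/api/logs/v1/application/otlp
        service:
          pipelines:
            logs:
              receivers: [otlp]
              exporters: [otlphttp]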

    What’s new in distributed tracing?

    Tempo operator users can now configure temporary access to AWS S3 with AWS STS, enabling a more secure method of access that avoids storing the secret locally on the cluster.
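    A minimal sketch of that setup is shown below, assuming an AWS IAM role already configured for web identity federation. The secret key names follow the Tempo operator documentation, and the bucket, region, role ARN, and resource names are placeholders; verify them against your operator version.

        apiVersion: v1
        kind: Secret
        metadata:
          name: tempo-s3-sts  # placeholder name
        stringData:
          bucket: my-tempo-bucket                                   # placeholder bucket
          region: us-east-1                                         # placeholder region
          role_arn: arn:aws:iam::123456789012:role/tempo-sts-role   # placeholder role ARN
        ---
        apiVersion: tempo.grafana.com/v1alpha1
        kind: TempoStack
        metadata:
          name: sample
        spec:
          storage:
            secret:
              name: tempo-s3-sts
              type: s3
          storageSize: 10Gi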

    Also, when the gateway/multitenancy is disabled, TLS configuration in OpenShift is streamlined via a service annotation.

    As previously announced, we made the decision to deprecate Jaeger, and our support will reach end of life by November 2025. We are gathering feedback from users to help simplify the migration process. Don’t miss the latest blog post by Andreas Gerstmayr about the Tempo Monolithic deployment to learn how users can leverage this binary to get all the power of distributed tracing for small deployments, demos, and test setups. This is the recommended migration path from the Red Hat OpenShift distributed tracing platform (Jaeger) all-in-one deployment.
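    For a taste of how small that migration target is, a TempoMonolithic instance can be as simple as the sketch below. Names are placeholders; in-memory storage is fine for demos and tests, while pv and s3 backends are also available.

        apiVersion: tempo.grafana.com/v1alpha1
        kind: TempoMonolithic
        metadata:
          name: sample  # placeholder name
        spec:
          storage:
            traces:
              backend: memory  # demo/test setting; use pv or s3 for durable storage
          jaegerui:
            enabled: true      # serves the familiar query UI from the single binary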

    The Red Hat OpenShift distributed tracing platform 3.3 is based on the open source Grafana Tempo 2.5.0 version via the Tempo operator release v0.12.0.

    What’s next?

    Looking ahead, we are already preparing for the last release of the year, and it looks very promising. Now that we have some tracing capabilities in the OpenShift console, we will soon be adding correlation and the long-awaited Gantt chart view to expand our distributed tracing features.

    We are also working on new components for the OpenTelemetry collector, such as an AMQP/RabbitMQ receiver and exporter. 

    We’ve also been adding many Technology Preview features, and now it’s time to bring many of them to General Availability, especially the instrumentation CR that our customers are asking for.
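    For readers who haven’t tried it yet, the instrumentation CR lets the operator inject auto-instrumentation into workloads. A minimal sketch, with placeholder name and collector endpoint:

        apiVersion: opentelemetry.io/v1alpha1
        kind: Instrumentation
        metadata:
          name: my-instrumentation  # placeholder name
        spec:
          exporter:
            endpoint: http://otel-collector:4317  # placeholder collector endpoint
          propagators:
            - tracecontext
            - baggage
          sampler:
            type: parentbased_traceidratio
            argument: "0.25"

    Workloads then opt in with a pod annotation such as instrumentation.opentelemetry.io/inject-java: "true".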

    We value your feedback, which is crucial for enhancing our products. Share your questions and recommendations with us using the Red Hat OpenShift feedback form.

    Related Posts

    • Introducing Tempo Monolithic mode

    • How to deploy the new Grafana Tempo operator on OpenShift

    • Introducing the new Traces UI in the Red Hat OpenShift Web Console

    • Distributed tracing with OpenTelemetry, Knative, and Quarkus

    • How to enable OpenTelemetry traces in React applications

    • OpenTelemetry: A Quarkus Superheroes demo of observability
