
Best practices for migration from Jaeger to Tempo

April 9, 2025
Pavol Loffay, Jamie Parker
Related topics: Observability
Related products: Red Hat OpenShift

    In this article, we will cover best practices for migrating from Jaeger, the deprecated OpenShift distributed tracing platform, to the OpenShift distributed tracing platform (Tempo). Support for the former product will be dropped toward the end of 2025.

    These two distributed tracing backends are fundamentally different in how they store data. However, they support the same ingestion protocols, such as the OpenTelemetry protocol (OTLP), and both can expose the Jaeger user interface (UI) for visualization. These capabilities allow us to provide a smooth migration path.

    Tempo

    The OpenShift distributed tracing platform (Tempo) is based on the upstream Grafana Tempo project. For persistence, Tempo can use either an object store (e.g., S3) or local storage; for production deployments, we recommend an object store. The platform can be deployed with two custom resource definitions (CRDs): TempoStack and TempoMonolithic. Local storage is supported only with TempoMonolithic, which deploys a single pod in which all services access the same local filesystem. TempoStack, on the other hand, deploys the Tempo services in individual pods.
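
    As a minimal sketch (not a production configuration), a TempoMonolithic instance backed by a persistent volume can look like the following; the name and storage size are illustrative. A TempoStack, by contrast, requires a secret with object store credentials, as shown in the multi-tenant example later in this article.

    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoMonolithic
    metadata:
      name: tempo-dev
    spec:
      storage:
        traces:
          # Local storage backed by a PersistentVolumeClaim; use an object store
          # backend (e.g., s3) plus a credentials secret for production deployments.
          backend: pv
          size: 10Gi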

    For visualization, TempoStack and TempoMonolithic can be configured to expose the Jaeger UI or the new OpenShift distributed tracing UI plug-in provided by the cluster observability operator. The UI plug-in is based on the upstream CNCF Perses project and is the preferred distributed tracing UI on OpenShift. In the future, the plug-in will support fine-grained query role-based access control (RBAC) to allow users to retrieve spans only from the services and namespaces they are allowed to access.
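
    As a hedged sketch, enabling the Jaeger UI on the TempoMonolithic instance above and installing the distributed tracing UI plug-in through the cluster observability operator can look roughly like this (the route setting and the plug-in name are assumptions; check the operator documentation for your versions):

    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoMonolithic
    metadata:
      name: tempo-dev
    spec:
      # Expose the Jaeger UI through an OpenShift route; storage is omitted here
      # and defaults to in-memory, so combine this with the storage settings above.
      jaegerui:
        enabled: true
        route:
          enabled: true
    ---
    # UIPlugin from the cluster observability operator enables the new
    # distributed tracing view in the OpenShift console.
    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: distributed-tracing
    spec:
      type: DistributedTracing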

    The query API is another innovation. Tempo supports the TraceQL query language, which allows users to construct more complex queries. For instance, you can run range queries over HTTP response codes or use structural operators to define relationships between spans.
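
    For illustration, here are two hedged TraceQL examples; the attribute names follow OpenTelemetry semantic conventions and depend on how your services are instrumented. The first selects spans whose HTTP status code falls in the client error range; the second uses the descendant operator to find traces where a frontend span contains a backend span somewhere below it.

    { span.http.status_code >= 400 && span.http.status_code < 500 }

    { resource.service.name = "frontend" } >> { resource.service.name = "backend" }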

    Tempo versus Jaeger

    The OpenShift distributed tracing platform (Jaeger) uses Elasticsearch 6 as its backend persistent store. Compared to Tempo, it does not require an object store; however, it relies on fast persistent volumes. The Jaeger backend implementation also does not support multi-tenancy, which is natively implemented in Tempo.

    Migrate from Jaeger to Tempo

    The foundational idea when migrating from Jaeger to Tempo is to run both systems simultaneously for a period of time, until the data in Jaeger is no longer needed or has been removed by retention. Therefore, the migration configuration has to ensure that trace data is sent to both systems simultaneously for some time, or only to Tempo.

    Before we look into the migration, let's first deploy Tempo into a tempo-observability namespace by applying the manifests from migrate-from-jaeger-to-tempo/tempo-observability. This deploys a multi-tenant Tempo instance and an OpenTelemetry collector that pushes data to the dev tenant. In this new setup, applications should send data to the OpenTelemetry collector and not directly to the Tempo backend. This flexible architecture enables users to make important changes to their setup in the OpenTelemetry collector. For instance, users can send all data or a subset of it to another backend, filter out sensitive data, or perform additional downsampling.
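
    The repository manifests are the source of truth for this setup. As a rough, hedged sketch of what it can look like, the following pairs a multi-tenant TempoStack (the object store secret name, tenant ID, and storage size are illustrative assumptions) with an OpenTelemetry collector named dev, which produces the dev-collector service referenced later in this article. The gateway endpoint, port, and token-based authentication follow the common pattern for the OpenShift tenancy mode; verify them against the repository manifests, and note that the gateway also requires RBAC that allows the collector's service account to write traces for the dev tenant.

    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoStack
    metadata:
      name: tempo
      namespace: tempo-observability
    spec:
      # Secret holding object store (e.g., S3) credentials; name and size are illustrative.
      storage:
        secret:
          name: tempo-object-store
          type: s3
      storageSize: 10Gi
      # OpenShift tenancy mode with a single "dev" tenant; the tenant ID is illustrative.
      tenants:
        mode: openshift
        authentication:
          - tenantName: dev
            tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
      template:
        gateway:
          enabled: true
        queryFrontend:
          jaegerQuery:
            enabled: true
    ---
    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: dev                     # results in a service named dev-collector
      namespace: tempo-observability
    spec:
      mode: deployment
      config:
        extensions:
          # Authenticate to the multi-tenant Tempo gateway with the collector's
          # service account token.
          bearertokenauth:
            filename: /var/run/secrets/kubernetes.io/serviceaccount/token
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318
          jaeger:
            protocols:
              thrift_compact:
                endpoint: 0.0.0.0:6831
              thrift_binary:
                endpoint: 0.0.0.0:6832
              thrift_http:
                endpoint: 0.0.0.0:14268
              grpc:
                endpoint: 0.0.0.0:14250
          zipkin:
            endpoint: 0.0.0.0:9411
        exporters:
          otlp/dev:
            # Gateway service and port follow the tempo-<name>-gateway convention;
            # verify them against the repository manifests.
            endpoint: tempo-tempo-gateway.tempo-observability.svc.cluster.local:8090
            tls:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
            auth:
              authenticator: bearertokenauth
            headers:
              X-Scope-OrgID: dev
        service:
          extensions: [bearertokenauth]
          pipelines:
            traces:
              receivers: [otlp, jaeger, zipkin]
              exporters: [otlp/dev]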

    As mentioned above, the data in Tempo can be visualized in the Jaeger UI deployed alongside Tempo or via the new OpenShift distributed tracing UI plug-in. You can access both from the Observe menu in the OpenShift admin console.

    Reconfiguring applications

    The first approach we will explore is changing the instrumentation to report data to the newly deployed Tempo instance. There are two ways an application can send data to the tracing backend: it either configures the exporter in the software development kit (SDK) to send data directly to the backend, or it sends data to a sidecar.

    Reconfigure SDK exporter

    Configuring the SDK exporter depends on how the application is built and configured. Some applications might hardcode the exporter in the source code or define it as an environment variable.

    In this case, the workloads should be configured to send data to the OpenTelemetry collector in the tempo-observability namespace. The dev-collector service exposes all of the protocols that the Jaeger agent and collector support.
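
    For example, for applications that read the standard OpenTelemetry SDK environment variables, a hedged deployment fragment might look like the following; the Jaeger variable applies only to legacy Jaeger clients, and the ports assume the receivers configured on the collector.

    # Fragment of an application Deployment: container environment variables.
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://dev-collector.tempo-observability.svc.cluster.local:4318
      # Legacy applications still instrumented with Jaeger clients can usually be
      # pointed at the same collector service instead:
      - name: JAEGER_ENDPOINT
        value: http://dev-collector.tempo-observability.svc.cluster.local:14268/api/traces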

    Switch from the Jaeger sidecar to OpenTelemetry

    You can configure the application to send data to the Jaeger sidecar, which runs in an additional container in the application pod. In this scenario, an OpenTelemetry collector sidecar can replace the Jaeger sidecar. The following OpenTelemetry collector CR configures a sidecar that can receive data in all of the protocols Jaeger supports; the sidecar then has to be injected into the pod by adding the annotation sidecar.opentelemetry.io/inject: otel-sidecar to the pod template in the deployment (a sketch of that annotation appears after the CR):

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel-sidecar
    spec:
      mode: sidecar
      config:
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318
          jaeger:
            protocols:
              thrift_compact:
                endpoint: 0.0.0.0:6831
              thrift_binary:
                endpoint: 0.0.0.0:6832
              thrift_http:
                endpoint: 0.0.0.0:14268
              grpc:
                endpoint: 0.0.0.0:14250
          zipkin:
            endpoint: 0.0.0.0:9411
        processors:
          resourcedetection/env:
            detectors: [ env ]
            timeout: 2s
            override: false
        exporters:
          otlphttp/tempo:
            endpoint: http://dev-collector.tempo-observability.svc.cluster.local:4318
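          # The old Jaeger instance; adjust the namespace to where your Jaeger
          # collector runs. This exporter is not referenced in the traces pipeline
          # below; add it there if you want to keep forwarding data to Jaeger.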
          otlphttp/jaeger:
            endpoint: http://jaeger-collector.ploffay.svc.cluster.local:4318
            tls:
              ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
          debug: {}
        service:
          pipelines:
            traces:
              receivers: [otlp,jaeger,zipkin]
              processors: [resourcedetection/env]
              exporters: [debug, otlphttp/tempo]

    If you wish to keep forwarding data to the old Jaeger instance, adjust the otlphttp/jaeger exporter endpoint and add the exporter to the traces pipeline.
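
    As a sketch, the injection annotation goes on the pod template of a hypothetical application deployment (the deployment name and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            # Instructs the OpenTelemetry operator to inject the otel-sidecar
            # collector defined above into this pod.
            sidecar.opentelemetry.io/inject: "otel-sidecar"
        spec:
          containers:
            - name: my-app
              image: quay.io/example/my-app:latest  # placeholder image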

    Change Jaeger to Tempo with OpenTelemetry collector

    You can configure Red Hat OpenShift Service Mesh to send trace data directly to the trace backend (Jaeger), and optionally provision it, or to send data to an OpenTelemetry collector. For migration, the latter case is easy to solve, because we can simply reconfigure the collector by adding an exporter that sends data to our newly provisioned Tempo backend.

    If OpenShift Service Mesh (OSSM) was configured to provision the Jaeger backend (as shown next), we can reconfigure it to send trace data to an OpenTelemetry collector first, where we can easily add an exporter to the newly deployed Tempo backend.

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      version: v2.6
      mode: ClusterWide
      tracing:
        type: Jaeger
        sampling: 10000
      addons:
        kiali:
          enabled: false
          name: kiali
        grafana:
          enabled: false
        jaeger:
          name: jaeger
          install:
            storage:
              type: Memory
            ingress:
              enabled: true

    The following manifests deploy an OpenTelemetry collector in the OpenShift Service Mesh namespace that forwards trace data to both Tempo and the old Jaeger instance, and they reconfigure the OSSM to send its data to the collector:

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: istio-system
    spec:
      mode: deployment
      config:
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318
        exporters:
          debug:
            verbosity: detailed
          otlphttp/tempo:
            endpoint: http://dev-collector.tempo-observability.svc.cluster.local:4318
          otlphttp/jaeger:
            endpoint: http://jaeger-collector.istio-system.svc.cluster.local:4318
            tls:
              ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: []
              exporters: [debug, otlphttp/tempo, otlphttp/jaeger]
    ---
    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      version: v2.6
      mode: ClusterWide
      tracing:
        type: Jaeger
        sampling: 10000
      addons:
        kiali:
          enabled: false
          name: kiali
        grafana:
          enabled: false
        jaeger:
          name: jaeger
          install:
            storage:
              type: Memory
            ingress:
              enabled: true
      meshConfig:
        extensionProviders:
          - name: otel
            opentelemetry:
              port: 4317
              service: otel-collector.istio-system.svc.cluster.local
    ---
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system
    spec:
      tracing:
        - providers:
            - name: otel
          randomSamplingPercentage: 100

    Next steps

    In this article, we covered possible approaches for migrating from the Jaeger trace backend to Tempo. The right approach ultimately depends on how the tracing infrastructure and instrumentation are set up. However, given the flexibility of the OpenTelemetry collector, which supports all Jaeger and Zipkin ingestion protocols, the migration is straightforward.

    You can find all the manifests from this article hosted on GitHub:

    • OpenShift distributed tracing UI plug-in
    • Perses project
    • Kubernetes manifests for this article
    • OpenShift Service Mesh docs
