
Best practices for migration from Jaeger to Tempo

April 9, 2025
Pavol Loffay, Jamie Parker
Related topics:
Observability
Related products:
Red Hat OpenShift

    In this article, we will cover best practices for migrating from Jaeger, the deprecated OpenShift distributed tracing platform, to the OpenShift distributed tracing platform (Tempo). Support for the former product will be dropped toward the end of 2025.

    These two distributed tracing backends are fundamentally different in how they store data. However, they support similar ingestion protocols, such as the OpenTelemetry protocol (OTLP), and both can be visualized with the Jaeger user interface (UI). These capabilities allow us to provide a smooth migration.

    Tempo

    The OpenShift distributed tracing platform Tempo is based on the upstream Grafana Tempo project. For persistence, Tempo can use either an object store (e.g., S3) or local storage. For production deployments, we recommend an object store. The OpenShift distributed tracing platform Tempo can be deployed with two custom resource definitions (CRDs): TempoStack and TempoMonolithic. Local storage is supported only with TempoMonolithic, which deploys a single pod with all services accessing the same local filesystem. TempoStack, on the other hand, deploys the Tempo services in individual pods.
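    As a minimal sketch (field names follow the Tempo Operator CRDs; the resource name and storage size are placeholders to adjust for your environment), a TempoMonolithic instance backed by a persistent volume could look like this:

    ```yaml
    # Sketch: TempoMonolithic deploys a single pod with local
    # persistent-volume storage. Name and size are placeholders.
    apiVersion: tempo.grafana.com/v1alpha1
    kind: TempoMonolithic
    metadata:
      name: sample
    spec:
      storage:
        traces:
          backend: pv   # local storage; for object storage use s3 with a secret
          size: 10Gi
    ```

    A TempoStack instance instead references a secret describing the object store in its storage section.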

    For visualization, TempoStack and TempoMonolithic can be configured to expose the Jaeger UI or the new OpenShift distributed tracing UI plug-in provided by the cluster observability operator. The UI plug-in is based on the upstream CNCF Perses project and is the preferred distributed tracing UI on OpenShift. In the future, the plug-in will support fine-grained query role-based access control (RBAC) to allow users to retrieve spans only from the services and namespaces they are allowed to access.
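    For reference, enabling the tracing UI plug-in is a single custom resource; the following is a sketch based on the cluster observability operator's UIPlugin CRD:

    ```yaml
    # Sketch: enable the distributed tracing UI plug-in via the
    # cluster observability operator's UIPlugin custom resource.
    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: distributed-tracing
    spec:
      type: DistributedTracing
    ```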

    The query API is another innovation. Tempo supports the TraceQL query language, which allows users to construct more complex queries: for instance, range queries over HTTP response codes, or structural operators that define relationships between spans.
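    As an illustration (the service names are hypothetical), the two kinds of TraceQL queries mentioned above could look like this:

    ```
    # Range query: spans whose HTTP status code is a server error
    { span.http.status_code >= 500 && span.http.status_code < 600 }

    # Structural operator: spans from "checkout" that are descendants
    # of spans from the "frontend" service
    { resource.service.name = "frontend" } >> { resource.service.name = "checkout" }
    ```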

    Tempo versus Jaeger

    The OpenShift distributed tracing platform Jaeger uses Elasticsearch 6 as its persistent store. Unlike Tempo, it does not require an object store; however, it relies on fast persistent volumes. The Jaeger backend also does not support multi-tenancy, which is natively implemented in Tempo.

    Migrate from Jaeger to Tempo

    The foundational idea when migrating from Jaeger to Tempo is to run both systems simultaneously until the data in Jaeger is no longer needed or has been removed due to retention. The migration configuration therefore has to ensure that trace data is sent to both systems simultaneously for some time, or only to Tempo.

    Before we look into the migration, let's first deploy Tempo into a tempo-observability namespace by applying the manifests from migrate-from-jaeger-to-tempo/tempo-observability. This deploys a multi-tenant Tempo instance and an OpenTelemetry collector that pushes data to the dev tenant. In this new setup, applications should send data to the OpenTelemetry collector and not directly to the Tempo backend. This flexible architecture lets users make important changes to their setup in the OpenTelemetry collector. For instance, users can send all data or a subset of it to another backend, filter out sensitive data, or apply additional sampling.

    As mentioned above, the data in Tempo can be visualized in the Jaeger UI deployed alongside Tempo or via the new OpenShift distributed tracing UI plug-in. You can access both the Jaeger UI and the OpenShift UI plug-in from the Observe menu in the OpenShift admin console.

    Reconfiguring applications

    The first approach we will explore is changing the instrumentation to report data to the newly deployed Tempo instance. There are two ways an application can send data to the tracing backend: it either configures the exporter in the software development kit (SDK) to send data directly to the backend, or it sends data to a sidecar.

    Reconfigure SDK exporter

    Configuring the SDK exporter depends on how the application is built and configured. Some applications might hardcode the exporter in the source code or define it as an environment variable.

    In this case, the workloads should be configured to send data to the OpenTelemetry collector in the tempo-observability namespace. The dev-collector service exposes all protocols that the Jaeger agent and collector support.
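    For SDKs configured through the standard OpenTelemetry environment variables, pointing a workload at the collector can be as simple as the following container snippet (a sketch; the service name my-app is hypothetical, and the collector address follows the setup above):

    ```yaml
    # Sketch: configure the OpenTelemetry SDK exporter via standard
    # environment variables on the application container.
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://dev-collector.tempo-observability.svc.cluster.local:4318
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: http/protobuf
      - name: OTEL_SERVICE_NAME
        value: my-app   # hypothetical service name
    ```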

    Switch from the Jaeger sidecar to OpenTelemetry

    You can configure the application to send data to the Jaeger sidecar, which runs as an additional container in the application pod. In this scenario, an OpenTelemetry collector sidecar can replace the Jaeger sidecar. The following OpenTelemetry collector CR configures a sidecar that can receive data in all Jaeger-supported protocols. The sidecar then has to be injected into the pod by adding the annotation sidecar.opentelemetry.io/inject=otel-sidecar to the pod annotations in the deployment, as follows:

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel-sidecar
    spec:
      mode: sidecar
      config:
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318
          jaeger:
            protocols:
              thrift_compact:
                endpoint: 0.0.0.0:6831
              thrift_binary:
                endpoint: 0.0.0.0:6832
              thrift_http:
                endpoint: 0.0.0.0:14268
              grpc:
                endpoint: 0.0.0.0:14250
          zipkin:
            endpoint: 0.0.0.0:9411
        processors:
          resourcedetection/env:
            detectors: [ env ]
            timeout: 2s
            override: false
        exporters:
          otlphttp/tempo:
            endpoint: http://dev-collector.tempo-observability.svc.cluster.local:4318
          otlphttp/jaeger:
            # old Jaeger instance; adjust the namespace to match your deployment
            endpoint: http://jaeger-collector.ploffay.svc.cluster.local:4318
            tls:
              ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
          debug: {}
        service:
          pipelines:
            traces:
              receivers: [otlp,jaeger,zipkin]
              processors: [resourcedetection/env]
              exporters: [debug, otlphttp/tempo]

    If you wish to also forward data to Jaeger, configure the endpoint of the otlphttp/jaeger exporter and add the exporter to the traces pipeline.
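    Injection is then requested per workload. The following is a sketch of the relevant part of a Deployment (the application name my-app and its image are placeholders):

    ```yaml
    # Sketch: request OpenTelemetry collector sidecar injection for a pod
    # via the sidecar.opentelemetry.io/inject annotation.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app   # hypothetical application
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            sidecar.opentelemetry.io/inject: "otel-sidecar"
        spec:
          containers:
            - name: my-app
              image: quay.io/example/my-app:latest   # placeholder image
    ```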

    Change Jaeger to Tempo with OpenTelemetry collector

    You can configure Red Hat OpenShift Service Mesh to send trace data directly to the tracing backend (Jaeger), and optionally provision it, or to send data to an OpenTelemetry collector. The latter case makes migration easy: we can simply reconfigure the collector by adding an exporter that sends data to the newly provisioned Tempo backend.

    If OpenShift Service Mesh (OSSM) was configured to provision a Jaeger backend (as shown next), we can reconfigure it to send trace data to an OpenTelemetry collector first, where we can easily add an exporter to the newly deployed Tempo backend.

    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      version: v2.6
      mode: ClusterWide
      tracing:
        type: Jaeger
        sampling: 10000
      addons:
        kiali:
          enabled: false
          name: kiali
        grafana:
          enabled: false
        jaeger:
          name: jaeger
          install:
            storage:
              type: Memory
            ingress:
              enabled: true

    The following manifests deploy an OpenTelemetry collector in the OpenShift Service Mesh namespace that forwards trace data to both Tempo and the old Jaeger instance. They also reconfigure the OSSM to send data to the collector, as follows:

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: istio-system
    spec:
      mode: deployment
      config:
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318
        exporters:
          debug:
            verbosity: detailed
          otlphttp/tempo:
            endpoint: http://dev-collector.tempo-observability.svc.cluster.local:4318
          otlphttp/jaeger:
            endpoint: http://jaeger-collector.istio-system.svc.cluster.local:4318
            tls:
              ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: []
              exporters: [debug, otlphttp/tempo, otlphttp/jaeger]
    ---
    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
      namespace: istio-system
    spec:
      version: v2.6
      mode: ClusterWide
      tracing:
        type: Jaeger
        sampling: 10000
      addons:
        kiali:
          enabled: false
          name: kiali
        grafana:
          enabled: false
        jaeger:
          name: jaeger
          install:
            storage:
              type: Memory
            ingress:
              enabled: true
      meshConfig:
        extensionProviders:
          - name: otel
            opentelemetry:
              port: 4317
              service: otel-collector.istio-system.svc.cluster.local
    ---
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system
    spec:
      tracing:
        - providers:
            - name: otel
          randomSamplingPercentage: 100

    Next steps

    In this article, we covered possible approaches for migrating from the Jaeger to the Tempo trace backend. The right approach ultimately depends on how the tracing infrastructure and instrumentation are set up. However, given the flexibility of the OpenTelemetry collector, which supports all Jaeger and Zipkin ingestion protocols, the migration is straightforward.

    You can find all the manifests from this article on GitHub.

    • OpenShift distributed tracing UI plug-in
    • Perses project
    • Kubernetes manifests for this article
    • OpenShift Service Mesh docs
