Migrate your OpenShift logging stack from Elasticsearch to Loki

September 1, 2025
Oscar Casal Sanchez, Jamie Parker
Related topics: Databases, Kubernetes, Observability
Related products: Red Hat OpenShift


    To use the latest features of logging for Red Hat OpenShift 6.0, you must migrate from Elasticsearch to Loki. This article guides you through testing the migration in your development and test environments so you can develop a plan for implementing it in production.

    Loki is a horizontally scalable, highly available, multitenant log aggregation system offered as a general availability (GA) log store for logging for Red Hat OpenShift. It can be visualized with the Red Hat OpenShift observability UI. The Loki configuration provided by OpenShift logging is a short-term log store designed to help users perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage and is optimized for queries over very recent data. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.

    Why migrate to Loki?

    Our experience is that Loki is highly performant, which we attribute to Loki indexing log labels rather than full log lines. We also prefer how Loki allows multiple tenants to share a single Loki instance, which reduces complexity and uses fewer compute resources. In addition, Elasticsearch and Kibana are deprecated in logging 5.x versions.

    Migrate the default log store to Loki

    The following describes how to migrate the OpenShift logging storage service from Elasticsearch to LokiStack. This article includes steps to switch log forwarding from Elasticsearch to LokiStack; it does not cover migrating data between the two. The aim is to run both log storage stacks in parallel until you can confidently shut down Elasticsearch.

    In summary, after applying the steps:

    • The old logs will still be served by Elasticsearch and visible only through Kibana.
    • The new logs will be served by LokiStack and visible through the OpenShift console logs pages (for example, Admin → Observe → Logs).

    Prerequisites

    • Installed logging for Red Hat OpenShift operator (current stable at the time of writing: v5.5.5).
    • Installed OpenShift Elasticsearch operator (current stable at the time of writing: v5.5.5).
    • Installed Loki operator provided by Red Hat (current stable at the time of writing: v5.5.5).
    • Ensure sufficient resources on the target nodes for running Elasticsearch and LokiStack (consider the LokiStack deployment sizing table).

    Current stack

    Note: If Fluentd is the collector type, consider reading the Red Hat Knowledgebase article Migrating the log collector from Fluentd to Vector reducing the number of logs duplicated in RHOCP 4.

    Assume your current stack looks like the following block, which represents a fully managed OpenShift logging stack with logStore: elasticsearch and Kibana, including collection, forwarding, storage, and visualization. Note: Your stack might vary in the resources, nodes, tolerations, selectors, collector type, and back-end storage used.

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: gp2
            size: 80Gi
          resources:
            requests:
              memory: 16Gi
            limits:
              memory: 16Gi
          redundancyPolicy: "SingleRedundancy"
        retentionPolicy:
          application:
            maxAge: 24h
          audit:
            maxAge: 24h
          infra:
            maxAge: 24h
      visualization:
        type: "kibana"
        kibana:
          replicas: 1
      collection: [...]

    Bonus: Using ClusterLogForwarder to forward audit logs

    If you used the guide Forwarding audit logs to the log store to forward audit logs to the default store, you do not need to change anything in the ClusterLogForwarder resource. The collector pods will be configured to forward new audit logs to LokiStack, too.
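    For reference, a ClusterLogForwarder of this shape looks roughly like the following sketch (the pipeline name is illustrative; your pipelines may differ):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default    # illustrative name
    inputRefs:
    - application
    - infrastructure
    - audit                 # audit logs follow the same default output
    outputRefs:
    - default               # "default" is the internal log store
```

    Because the pipeline targets the default output rather than a named Elasticsearch output, the collector follows wherever the default store points.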

    Install LokiStack only

    Following the guide Deploying LokiStack, perform only these two steps:

    • Installing the Loki Operator by using the OpenShift Container Platform web console
    • Creating a secret for Loki object storage by using the web console

    An example of the procedure documented in the guide Deploying LokiStack follows. For more details and options, review the documentation.

    Step 1: Create the S3 secret

    For this example, the secret is for AWS S3; review the fields needed for other kinds of object storage in the documentation section Loki object storage:

    cat << EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: logging-loki-s3
      namespace: openshift-logging
    data:
      access_key_id: $(echo -n "PUT_S3_ACCESS_KEY_ID_HERE" | base64 -w0)
      access_key_secret: $(echo -n "PUT_S3_ACCESS_KEY_SECRET_HERE" | base64 -w0)
      bucketnames: $(echo -n "s3-bucket-name" | base64 -w0)
      endpoint: $(echo -n "https://s3.eu-central-1.amazonaws.com" | base64 -w0)
      region: $(echo -n "eu-central-1" | base64 -w0)
    EOF
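    One subtlety with the base64-encoded data fields: a plain echo appends a trailing newline, which ends up inside the decoded credential and can break S3 authentication, while echo -n encodes the value exactly. A quick local check:

```shell
# echo appends a trailing newline, which ends up inside the decoded secret
# value; echo -n encodes the credential exactly.
with_newline=$(echo "eu-central-1" | base64 -w0)
without_newline=$(echo -n "eu-central-1" | base64 -w0)
echo "$with_newline"    # encodes "eu-central-1" plus a newline
echo "$without_newline" # encodes only "eu-central-1"
```

    Decoding each value with base64 -d shows the difference: only the second round-trips to the exact region string.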

    Step 2: Deploy LokiStack CR

    Deploy the LokiStack Custom Resource (CR), changing the spec.size as needed:

    cat << EOF | oc create -f -
    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      size: 1x.small
      storage:
        schemas:
        - version: v12
          effectiveDate: '2022-06-01'
        secret:
          name: logging-loki-s3
          type: s3
      storageClassName: gp2
      tenants:
        mode: openshift-logging
    EOF

    Disconnect Elasticsearch and Kibana CRs from ClusterLogging

    To ensure Elasticsearch and Kibana continue to run on the cluster while you switch ClusterLogging from them to LokiStack/OpenShift console, you need to disconnect the custom resources from being owned by ClusterLogging.

    Step 1: Temporarily set ClusterLogging to the Unmanaged state

    Enter:

    oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge

    Step 2: Remove ClusterLogging ownerReferences from the Elasticsearch resource

    The following command removes the ClusterLogging owner reference from the Elasticsearch resource. As a result, updates to the ClusterLogging resource's logStore field will no longer be applied to the Elasticsearch resource.

    oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge

    Step 3: Remove ClusterLogging ownerReferences from the Kibana resource

    The following command removes the ClusterLogging owner reference from the Kibana resource. As a result, updates to the ClusterLogging resource's visualization field will no longer be applied to the Kibana resource.

    oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge

    Step 4: Back up Elasticsearch and Kibana resources

    To protect the previous storage and visualization components, namely Elasticsearch and Kibana, from accidental deletion, back up their resources as follows. (This requires the small utility yq.)

    Elasticsearch:

    oc -n openshift-logging get elasticsearch elasticsearch -o yaml \
      | yq -r 'del(.status,.metadata | .resourceVersion,.uid,.generation,.creationTimestamp,.selfLink)' > /tmp/cr-elasticsearch.yaml

    Kibana:

    oc -n openshift-logging get kibana kibana -o yaml \
      | yq -r 'del(.status,.metadata | .resourceVersion,.uid,.generation,.creationTimestamp,.selfLink)' > /tmp/cr-kibana.yaml
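    If yq is not available, the single-line metadata fields can be stripped with a rough grep fallback. This is only a sketch: it handles the flat fields named above but not the nested status block, so yq remains the better tool. The file names below are illustrative stand-ins:

```shell
# Synthetic stand-in for `oc get kibana kibana -o yaml` output.
cat << EOF > /tmp/kibana-raw.yaml
apiVersion: logging.openshift.io/v1
kind: Kibana
metadata:
  name: kibana
  namespace: openshift-logging
  resourceVersion: "12345"
  uid: 0000-aaaa
  generation: 2
spec:
  replicas: 1
EOF
# Drop the flat cluster-managed metadata fields so the file re-applies cleanly.
grep -vE '^ *(resourceVersion|uid|generation|creationTimestamp|selfLink):' \
  /tmp/kibana-raw.yaml > /tmp/cr-kibana-sample.yaml
```

    The resulting file keeps the name, namespace, and spec while losing the fields the API server manages.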

    Switch ClusterLogging to LokiStack

    Now that you've disconnected the Elasticsearch and Kibana custom resources, you can update the ClusterLogging resource to point to LokiStack.

    Step 1: Switch log storage to LokiStack

    The following manifest applies several changes to the ClusterLogging resource:

    • It sets the management state back to Managed.
    • It switches the logStore spec from elasticsearch to lokistack. In turn, this restarts the collector pods, which start forwarding logs to LokiStack from then on.
    • It removes the visualization spec. In turn, the cluster-logging-operator installs the logging-view-plugin, which enables observing LokiStack logs in the OpenShift console.
    • Replace the current spec.collection section with the one available in the running cluster.
    cat << EOF | oc replace -f -
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"
      logStore:
        type: "lokistack"
        lokistack:
          name: logging-loki
      collection: # <--- replace with the current collection configuration
        [...]
      visualization: # Keep this section as long as you need to keep Kibana.
        type: kibana
        kibana:
          replicas: 1
    EOF
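    Because oc replace overwrites the entire spec, it can help to render the new manifest to a file and review it before applying. A minimal local sketch, where the file path and the collection stanza are illustrative placeholders:

```shell
# Render the replacement manifest to a file for review before piping it to
# `oc replace -f -`. The collection stanza is a placeholder: copy the real
# collection section from your running cluster.
cat << EOF > /tmp/clusterlogging-loki.yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "lokistack"
    lokistack:
      name: logging-loki
  collection:
    type: "vector"   # placeholder collection configuration
EOF
# Sanity-check the two fields this migration step changes.
grep -q 'type: "lokistack"' /tmp/clusterlogging-loki.yaml
grep -q 'managementState: "Managed"' /tmp/clusterlogging-loki.yaml && echo "manifest looks right"
```

    Once the file matches your cluster's collection configuration, pipe it to oc replace as in the manifest above.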

    Step 2: Re-instantiate Kibana resource

    Because we removed the visualization field entirely in the previous step so that the operator installs the OpenShift console integration, the same operator also removes the Kibana resource. This is unfortunate, but non-critical as long as you have a backup of the Kibana resource.

    The reason is that the operator automatically removes the Kibana resource named kibana from openshift-logging without checking any owner references. This behavior was correct when Kibana was the only supported visualization component in logging for Red Hat OpenShift. Restore Kibana from the backup:

    oc -n openshift-logging apply -f /tmp/cr-kibana.yaml

    Step 3: Enable the console view plug-in

    If it isn't already enabled, enable the console view plug-in so you can view logs in the OpenShift Container Platform console (Observe → Logs). Note that this merge patch replaces the spec.plugins list, so if other console plug-ins are already enabled, include them in the list as well. Enter:

    oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ "spec": { "plugins": ["logging-view-plugin"] } }'

    Delete the Elasticsearch stack

    Once the retention period for the logs stored in the Elasticsearch log store has expired and no more logs are visible in the Kibana instance, you can remove the old stack to release resources.

    Step 1: Delete the Elasticsearch and Kibana resources:

    oc -n openshift-logging delete kibana/kibana elasticsearch/elasticsearch

    Step 2: Delete the PVCs used by the Elasticsearch instances:

    oc delete -n openshift-logging pvc -l logging-cluster=elasticsearch

    Summary

    Migrating to Loki is necessary to use the latest logging features in logging for Red Hat OpenShift 6.0, as Elasticsearch and Kibana are deprecated in logging for Red Hat OpenShift 5.x versions. This article described how to test these changes in your development and test environments and how to create a plan for production implementation.
