Consume Pino logs from Node.js applications

October 28, 2021
Ash Cripps
Related topics:
Kubernetes, Node.js, Operators
Related products:
Red Hat OpenShift

    Node.js offers a vast array of options to developers. This is why Red Hat and IBM teamed up to produce the Node.js reference architecture, a series of recommendations to help you build Node.js applications in the cloud. One of our recommendations is that you use Pino, an object logger for Node.js; the reference architecture's GitHub repository gives an overview of how and why to use it. This article demonstrates how to create and consume Pino logs with the Red Hat OpenShift Logging service.
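
    If you have not used Pino before, the following minimal sketch (not taken from the sample application) shows what its output looks like: each logging call writes a single line of JSON to standard output, which is exactly what OpenShift Logging will later parse.

    const pino = require('pino');

    // Create a logger; each call below emits one JSON object per line on stdout.
    const logger = pino({ level: 'info' });

    logger.info({ orderId: 1234 }, 'order received');
    // Produces something like:
    // {"level":30,"time":1635417600000,"pid":1,"hostname":"my-pod","orderId":1234,"msg":"order received"}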

    Prerequisites

    To follow along, you need a Red Hat OpenShift cluster and a Node.js application you can deploy to OpenShift. For our example, we'll use the nodejs-circuit-breaker from NodeShift, a collection of tools maintained by Red Hat for Node.js developers.

    Installing OpenShift Logging

    To deploy OpenShift Logging, we'll install two operators: the OpenShift Elasticsearch Operator and the OpenShift Logging Operator.

    To install the OpenShift Elasticsearch Operator:

    1. In the OpenShift web console, open OperatorHub under the Operators submenu.
    2. Select OpenShift Elasticsearch Operator and click Install.
    3. Double-check that the All namespaces on the cluster option is selected.
    4. For the installed namespace, select openshift-operators-redhat.
    5. Select the option to enable recommended monitoring on this namespace.
    6. Click Install.
    7. Wait for the operator to install.

    This operator installs both the Elasticsearch text data store and its Kibana visualization tool, which serve as the backbone of the OpenShift Logging system.

    After the Elasticsearch Operator is installed, install the OpenShift Logging Operator as follows:

    1. Navigate back to the OperatorHub and select the OpenShift Logging Operator.
    2. Choose to install the operator to a specific namespace on the cluster, then select openshift-logging.
    3. Select the option to enable recommended monitoring on this namespace.
    4. Click Install.
    5. Wait for the operator to install.

    The key component installed with this operator is the OpenShift Log Forwarder, which sends logs to the Elasticsearch instance. The Log Forwarder takes the container logs from every pod in every namespace and forwards them to the namespace and containers running Elasticsearch. This communication allows the logs to flow where you can analyze them without requiring each container to have a certificate and route set up to access the separate namespace containing Elasticsearch.
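
    If you prefer to script these installations rather than click through the console, both operators can also be installed through OLM Subscription resources. The sketch below is only illustrative: the channel values are assumptions, so check which channel OperatorHub offers on your cluster, and note that the target namespaces (and an OperatorGroup in each) must already exist, as described in the OpenShift Logging documentation.

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: elasticsearch-operator
      namespace: openshift-operators-redhat
    spec:
      channel: "stable"            # assumption: use the channel OperatorHub shows for your cluster
      name: elasticsearch-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging
      namespace: openshift-logging
    spec:
      channel: "stable"            # assumption: use the channel OperatorHub shows for your cluster
      name: cluster-logging
      source: redhat-operators
      sourceNamespace: openshift-marketplace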

    Deploying OpenShift Logging

    Now that you have the building blocks installed via operators, you will deploy the pods containing the logging system. To do this, you create a ClusterLogging custom resource, an instance of a custom resource definition (CRD) provided by the operator.

    This resource defines which pods you need and how many of them, where to install them, and key settings for the Elasticsearch instance, such as the size of the disk and the retention policy. The following YAML code is an example resource for deploying the logging infrastructure:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"  
      logStore:
        type: "elasticsearch"  
        retentionPolicy:
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName:
            size: 200G
          resources:
            requests:
              memory: "8Gi"
          proxy:
            resources:
              limits:
                memory: 256Mi
              requests:
                memory: 256Mi
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"  
        kibana:
          replicas: 1
      curation:
        type: "curator"
        curator:
          schedule: "30 3 * * *"
      collection:
        logs:
          type: "fluentd"  
          fluentd: {}

    Note: OpenShift Logging is not designed to be a long-term storage solution. This example retains application logs for just one day, and infrastructure and audit logs for seven days, before deleting them. For long-lived logs, you need to change the retentionPolicy values and the storage class set under storageClassName. For more information on how to set up suitable storage for long-lived logs, please refer to the documentation.
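
    As a rough sketch of those changes, the relevant part of the ClusterLogging resource might look like the following; gp2 is only a placeholder, so substitute a storage class that exists on your cluster and pick retention periods that fit your needs:

    logStore:
      retentionPolicy:
        application:
          maxAge: 30d
        infra:
          maxAge: 30d
        audit:
          maxAge: 30d
      elasticsearch:
        storage:
          storageClassName: "gp2"   # placeholder: use a storage class available on your cluster
          size: 200G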

    To create the resource:

    1. Navigate to Custom Resource Definitions under the Administration tab in the sidebar. Search for "ClusterLogging" and click on the result.
    2. On this page, click on Actions and then View Instances (the page might need a refresh to load). Then click Create.
    3. Replace the YAML code there with the YAML from the preceding example and click Create again.

    To check the installation's progress, navigate to the pods page. The page should show three Elasticsearch pods spinning up, along with the Kibana pod and some Fluentd pods that support logging. These pods will take a few minutes to spin up.
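
    You can watch the same rollout from a terminal as well; this assumes you are still logged in with cluster-admin rights. The Elasticsearch, Kibana, and Fluentd pods should all eventually reach the Running state:

    $ oc get pods -n openshift-logging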

    Enabling JSON parsing

    As explained at the beginning of this article, we use Pino for logging in our sample Node.js application. To make the most of the log data Pino generates, you need to ensure that the OpenShift Logging Operator can parse the JSON data correctly. JSON parsing is possible as of version 5.1 of this operator; you only need to deploy a custom ClusterLogForwarder resource. Deploying it reconfigures the Fluentd collector pods with the settings needed to parse JSON logs. The configuration is:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputDefaults:
        elasticsearch:
          structuredTypeKey: kubernetes.pod_name
      pipelines:
      - inputRefs:
          - application
          - infrastructure
          - audit
        name: all-to-default
        outputRefs:
          - default
        parse: json

    The structuredTypeKey property determines how the new indexes are split up. In this example, the forwarder creates a new index for each pod that has its logs forwarded to Elasticsearch.
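
    To make that concrete, here is an illustrative Pino line (not real output from the sample application) as it arrives from a pod. Once the forwarder parses it, each JSON field becomes a queryable field under the structured prefix in Kibana, such as structured.msg and structured.level:

    {"level":30,"time":1635417600000,"pid":1,"hostname":"nodejs-circuit-breaker-1-abcde","msg":"circuit breaker opened"}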

    Generating the Node.js logs

    Next, you'll deploy the application from the NodeShift starter repository so it can generate some logs.

    In a terminal, clone the repository and change into its directory:

    $ git clone git@github.com:nodeshift-starters/nodejs-circuit-breaker.git
    
    $ cd nodejs-circuit-breaker

    Before deploying your application, log in to your OpenShift cluster. Logging in requires a token, which you can retrieve from the OpenShift user interface (UI) by clicking on Copy login command from the user drop-down menu in the top right corner. This gives you a command similar to:

    oc login --token=$TOKEN --server=$SERVER:6443

    After logging in, run the deployment script to deploy the application to OpenShift:

    $ ./start-openshift.sh

    Deployment takes a few minutes. You can check progress from the Topology overview in the Developer console. Once the services are deployed, you can start viewing your logs.
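
    If you would rather check from the terminal, list the pods in the project the deployment script targeted and wait for them to reach the Running state:

    $ oc get pods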

    Viewing the Node.js logs

    To view your logs, first open the Kibana instance as follows:

    1. Inside the OpenShift UI, click the application launcher (the grid of nine squares) at the top right, then select Logging.
    2. Accept the permissions required by the service account.

    This takes you to your Kibana page, where you have to do a few things before viewing data.

    The first task is to set up an index pattern so you can view the data. Enter "app-nodejs*" for the pattern. Thanks to the trailing asterisk, the pattern matches all logs from any application that uses "nodejs" in the naming convention for its pods. The "app" prefix comes from the ClusterLogForwarder and indicates that the index was created from an application pod.

    Select Timestamp as the time filter field.

    That's all you need to retrieve the logs.

    Now, select Discover at the top left, which displays all the logs inside your Elasticsearch instance. Here, you can filter through all the logs and look for specific logs from certain pods.

    Because the index pattern I've suggested here matches only indexes belonging to my "nodejs" apps, I have just three logs, as shown in Figure 1. If I go down the left-hand side and select all the fields under the "structured." prefix, the display shows only the parsed JSON in my Kibana results. These are the fields you can search on, making the most of your JSON logging.

    Figure 1. Kibana output showing only the logs from three Node.js applications.

    Conclusion

    This article was an introduction to using OpenShift's built-in cluster logging to consume Pino logs from your Node.js applications. We installed both the Elasticsearch Operator and the OpenShift Logging Operator, then deployed the OpenShift default Elasticsearch service and a custom ClusterLogForwarder, all of which enabled us to collate all of our application logs.

    If you want to learn more about what Red Hat is up to on the Node.js front, check out our Node.js landing page.

    Last updated: September 20, 2023
