
Consume Pino logs from Node.js applications

October 28, 2021
Ash Cripps
Related topics: Kubernetes, Node.js, Operators
Related products: Red Hat OpenShift


    Node.js offers a vast array of options to developers. This is why Red Hat and IBM teamed up to produce the Node.js reference architecture, a series of recommendations to help you build Node.js applications in the cloud. One of our recommendations is that you use Pino, an object logger for Node.js. You can visit this GitHub page for an overview of how and why to use Pino. This article demonstrates how to create and consume Pino logs with the Red Hat OpenShift Logging service.
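
    If you haven't used Pino before, the following minimal sketch shows the kind of single-line JSON output it produces. The logger name, fields, and sample output in the comments are placeholders for illustration, not code or output taken from the sample application:

    // Minimal Pino sketch; "my-service" and the fields below are placeholders.
    const pino = require('pino');
    const logger = pino({ name: 'my-service' });
    // Each call writes one JSON object per line to stdout, roughly like:
    // {"level":30,"time":1635417600000,"pid":1,"hostname":"my-pod","name":"my-service","msg":"server started"}
    logger.info('server started');
    // Properties of a merged object become top-level JSON fields in the log line.
    logger.info({ orderId: 1234 }, 'order processed');

    Because every line is already structured JSON, the OpenShift Logging stack can parse and index these fields, which is what the rest of this article sets up.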

    Prerequisites

    To follow along, you need a Red Hat OpenShift cluster and a Node.js application you can deploy to OpenShift. For our example, we'll use the nodejs-circuit-breaker from NodeShift, a collection of tools maintained by Red Hat for Node.js developers.

    Installing OpenShift Logging

    To deploy OpenShift Logging, we'll install two operators: the OpenShift Elasticsearch Operator and the OpenShift Logging Operator.

    To install the OpenShift Elasticsearch Operator:

    1. In the OpenShift web console, open OperatorHub under the Operators submenu.
    2. Select OpenShift Elasticsearch Operator and click Install.
    3. Double-check that the All namespaces on the cluster option is selected.
    4. For the installed namespace, select openshift-operators-redhat.
    5. Select the option to enable recommended monitoring on this namespace.
    6. Click Install.
    7. Wait for the operator to install.

    This operator installs both the Elasticsearch text data store and its Kibana visualization tool, which serve as the backbone of the OpenShift Logging system.

    After the Elasticsearch Operator is installed, install the OpenShift Logging Operator as follows:

    1. Navigate back to the OperatorHub and select the OpenShift Logging Operator.
    2. Select A specific namespace on the cluster as the installation mode, then choose the openshift-logging namespace.
    3. Select the option to enable recommended monitoring on this namespace.
    4. Click Install.
    5. Wait for the operator to install.

    The key component installed with this operator is the OpenShift Log Forwarder, which sends logs to the Elasticsearch instance. The Log Forwarder takes the container logs from every pod in every namespace and forwards them to the namespace and containers running Elasticsearch. This communication allows the logs to flow where you can analyze them without requiring each container to have a certificate and route set up to access the separate namespace containing Elasticsearch.

    Deploying OpenShift Logging

    Now that you have the building blocks installed via the operators, you will deploy the pods containing the logging system. To do this, you create a ClusterLogging custom resource, an instance of the custom resource definition (CRD) that the Logging Operator installed.

    This resource defines what pods you need and how many, where to install them, and key setup features for the Elasticsearch instance, such as the size of the disk and the retention policy. The following YAML code is an example resource for deploying the logging infrastructure:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"  
      logStore:
        type: "elasticsearch"  
        retentionPolicy:
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName:
            size: 200G
          resources:
            requests:
              memory: "8Gi"
          proxy:
            resources:
              limits:
                memory: 256Mi
              requests:
                memory: 256Mi
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"  
        kibana:
          replicas: 1
      curation:
        type: "curator"
        curator:
          schedule: "30 3 * * *"
      collection:
        logs:
          type: "fluentd"  
          fluentd: {}

    Note: OpenShift Logging is not designed to be a long-term storage solution. This example retains application logs for only one day, and infrastructure and audit logs for seven days, before deletion. For long-lived logs, you need to change the retentionPolicy values and configure suitable storage through the storage and storageClassName properties. For more information on how to set up suitable storage for long-lived logs, please refer to the documentation.

    To create the ClusterLogging resource:

    1. Navigate to Custom Resource Definitions under the Administration tab in the sidebar. Search for "ClusterLogging" and click on the result.
    2. On this page, click on Actions and then View Instances (the page might need a refresh to load). Then click Create.
    3. Replace the YAML code there with the YAML from the preceding example and click Create again.

    To check the installation's progress, navigate to the pods page. The page should show three Elasticsearch pods starting up, along with the Kibana pod and several Fluentd pods that collect the logs. These pods take a few minutes to start.

    Enabling JSON parsing

    As explained at the beginning of this article, we use Pino for logging in our sample Node.js application. To make the most of the log data generated by Pino, you need to ensure that the OpenShift Logging Operator can parse the JSON data correctly. JSON parsing is available as of version 5.1 of this operator; you only need to deploy a custom ClusterLogForwarder resource. Deploying it overwrites the Fluentd configuration and provides the settings needed to parse JSON logs. The configuration is:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputDefaults:
        elasticsearch:
          structuredTypeKey: kubernetes.pod_name
      pipelines:
      - inputRefs:
          - application
          - infrastructure
          - audit
        name: all-to-default
        outputRefs:
          - default
        parse: json

    The structuredTypeKey property determines how the new indexes are split up. In this example, the forwarder creates a new index for each pod that has its logs forwarded to Elasticsearch. Because the sample application's pods have "nodejs" in their names, its logs land in indexes that include that string, which is why the Kibana index pattern used later in this article is "app-nodejs*".

    Generating the Node.js logs

    Next, you'll deploy the application from the NodeShift starter repository so that it can start generating logs.

    In a terminal, clone the repository and change into its directory:

    $ git clone git@github.com:nodeshift-starters/nodejs-circuit-breaker.git
    
    $ cd nodejs-circuit-breaker

    Before deploying your application, log in to your OpenShift cluster. Logging in requires a token, which you can retrieve from the OpenShift user interface (UI) by clicking on Copy login command from the user drop-down menu in the top right corner. This gives you a command similar to:

    oc login --token=$TOKEN --server=$SERVER:6443

    After logging in, run the deployment script to deploy the application to OpenShift:

    $ ./start-openshift.sh

    Deployment takes a few minutes. You can check progress from the Topology overview in the Developer console. Once the services are deployed, you can start viewing your logs.

    Viewing the Node.js logs

    To view your logs, first open the Kibana instance as follows:

    1. Inside the OpenShift UI, click the nine squares at the top right and then select Logging.
    2. Accept the permissions required by the service account.

    This takes you to your Kibana page, where you have to do a few things before viewing data.

    The first task is to set up an index pattern so you can view the data. Enter "app-nodejs*" for the pattern. Thanks to the trailing asterisk, the pattern allows you to view all logs from any application that uses "nodejs" in its naming convention for its pods. The prepended string "app" is from the ClusterLogForwarder, to indicate that this index came from an application pod.

    Select Timestamp as the time filter field.

    That's all you need to retrieve the logs.

    Now, select Discover at the top left, which displays all the logs inside your Elasticsearch instance. Here, you can filter through all the logs and look for specific logs from certain pods.

    Because the index pattern I've suggested here matches logs from indexes belonging to my "nodejs" apps, I only have three logs, as shown in Figure 1. If I go down the left-hand side and select all the "structured." fields, the display shows only the parsed JSON in my Kibana results. These are the fields you can search on, making the most of your JSON logging.

    Figure 1. Kibana output, showing the logs selected by filtering for Node.js applications.
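
    As a rough illustration (a hypothetical snippet, not code from the sample application), the properties you merge into a Pino log call are exactly what surfaces in those parsed, searchable fields:

    // Hypothetical example: "endpoint" and "durationMs" are made-up field names.
    const pino = require('pino');
    const logger = pino();
    // Once JSON parsing is enabled, merged properties appear in Kibana under the "structured." prefix
    // (for example, structured.msg and structured.durationMs), so you can search and filter on them.
    logger.info({ endpoint: '/api/orders', durationMs: 87 }, 'request handled');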

    Conclusion

    This article was an introduction to using OpenShift's built-in cluster logging to consume Pino logs from your Node.js applications. We installed the OpenShift Elasticsearch Operator and the OpenShift Logging Operator, then deployed OpenShift's default Elasticsearch-based logging stack and a custom ClusterLogForwarder, all of which enabled us to collect and search all of our application logs.

    If you want to learn more about what Red Hat is up to on the Node.js front, check out our Node.js landing page.

    Last updated: September 20, 2023

