Set up OpenTelemetry
Now that we have our Jaeger instance running and the Tempo subscription enabled, it’s time to deploy OpenTelemetry (OTEL). OpenTelemetry makes the migration transparent: the collector can duplicate signals and adapt them to whichever storage backend you select.
Prerequisites:
- Jaeger instance
- Tempo subscription enabled
- Red Hat Hybrid Cloud Console access
In this lesson, you will:
- Deploy the OTEL collector.
Deploy the collector
When facilitating a migration, it’s important to make sure that your existing logs, metrics, traces, and events can be stored in the new storage destination. The following steps walk through deploying the OTEL collector with the Red Hat build of OpenTelemetry Operator, which handles the transition process.
First, create the namespace for the Red Hat build of OpenTelemetry Operator by executing the following command:
cat <<'EOF' | oc create -f-
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-opentelemetry-operator
EOF
Then create the OperatorGroup:

cat <<'EOF' | oc create -f-
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    olm.providedAPIs: Instrumentation.v1alpha1.opentelemetry.io,OpAMPBridge.v1alpha1.opentelemetry.io,OpenTelemetryCollector.v1alpha1.opentelemetry.io,OpenTelemetryCollector.v1beta1.opentelemetry.io,TargetAllocator.v1alpha1.opentelemetry.io
  name: openshift-opentelemetry-operator
  namespace: openshift-opentelemetry-operator
spec:
  upgradeStrategy: Default
EOF
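Before moving on, you can optionally confirm that the namespace and OperatorGroup were created. These verification commands are not part of the original steps, just standard oc queries:

oc get namespace openshift-opentelemetry-operator
oc get operatorgroup -n openshift-opentelemetry-operator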
Next, we’ll create the subscription for the operator:

cat <<'EOF' | oc create -f-
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/opentelemetry-product.openshift-opentelemetry-operator: ''
  name: opentelemetry-product
  namespace: openshift-opentelemetry-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-product
  source: konflux-catalog-otel
  sourceNamespace: openshift-marketplace
  startingCSV: opentelemetry-operator.v0.135.0-1
EOF
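Because the install plan approval is set to Automatic, the operator installs on its own. A quick way to check progress (a suggested check, not part of the documented procedure) is to watch the ClusterServiceVersion and the operator pod; the CSV should eventually report the Succeeded phase:

oc get csv -n openshift-opentelemetry-operator
oc get pods -n openshift-opentelemetry-operator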
If the Service Mesh namespace doesn’t already exist, use the following command to create one:

cat <<'EOF' | oc create -f-
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
EOF
Next, let's create the ClusterRole and ClusterRoleBinding for the ServiceAccount to read Kubernetes attributes:

cat <<'EOF' | oc create -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "10"
  name: user-collector-read-k8sattributes
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - apps
  resources:
  - replicasets
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "10"
  name: user-collector-read-k8sattributes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: user-collector-read-k8sattributes
subjects:
- kind: ServiceAccount
  name: user-collector
  namespace: istio-system
EOF
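To sanity-check the RBAC grant, you can impersonate the collector's ServiceAccount with oc auth can-i. This is an optional check; it assumes the ServiceAccount will be named user-collector in istio-system, matching the subject in the ClusterRoleBinding above. Both commands should answer yes:

oc auth can-i list pods --as=system:serviceaccount:istio-system:user-collector
oc auth can-i list replicasets.apps --as=system:serviceaccount:istio-system:user-collector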
Referencing the namespace we created earlier, create the OpenTelemetryCollector instance for Service Mesh with the following configuration. The routing connector fans incoming OTLP traces out to both the Jaeger and Tempo exporters, while traces arriving over the legacy Jaeger and Zipkin protocols are forwarded to Jaeger, so both backends stay populated during the migration:

cat <<'EOF' | oc create -f-
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: user
  namespace: istio-system
spec:
  config:
    connectors:
      routing:
        table:
        - pipelines:
          - traces/tempo
          - traces/jaeger-tempo
          statement: route()
    exporters:
      otlp/tempo:
        auth:
          authenticator: bearertokenauth
        endpoint: tempo-tempo-gateway.tempo.svc.cluster.local:8090
        headers:
          X-Scope-OrgID: user
        sending_queue:
          queue_size: 150000
        tls:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
      otlp/jaeger:
        endpoint: http://jaeger-all-in-one-inmemory-collector.distributed-tracing.svc.cluster.local:4317
        tls:
          insecure: true
    extensions:
      bearertokenauth:
        filename: /var/run/secrets/kubernetes.io/serviceaccount/token
    processors:
      k8sattributes: {}
      probabilistic_sampler/jaeger:
        sampling_percentage: 100
      probabilistic_sampler/tempo:
        sampling_percentage: 100
    receivers:
      jaeger:
        protocols:
          thrift_compact:
            endpoint: 0.0.0.0:6831
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      zipkin:
        endpoint: 0.0.0.0:9411
    service:
      extensions:
      - bearertokenauth
      pipelines:
        traces:
          exporters:
          - routing
          receivers:
          - otlp
        traces/jaeger:
          exporters:
          - otlp/jaeger
          processors:
          - k8sattributes
          - probabilistic_sampler/jaeger
          receivers:
          - jaeger
          - zipkin
        traces/jaeger-tempo:
          exporters:
          - otlp/jaeger
          processors:
          - k8sattributes
          - probabilistic_sampler/jaeger
          receivers:
          - routing
        traces/tempo:
          exporters:
          - otlp/tempo
          processors:
          - k8sattributes
          - probabilistic_sampler/tempo
          receivers:
          - routing
      telemetry:
        metrics:
          readers:
          - pull:
              exporter:
                prometheus:
                  host: 0.0.0.0
                  port: 8888
  configVersions: 3
  daemonSetUpdateStrategy: {}
  deploymentUpdateStrategy: {}
  ingress:
    route: {}
  ipFamilyPolicy: SingleStack
  managementState: managed
  mode: statefulset
  networkPolicy:
    enabled: true
  observability:
    metrics:
      enableMetrics: true
  podDnsConfig: {}
  replicas: 1
  resources: {}
  targetAllocator:
    allocationStrategy: consistent-hashing
    collectorNotReadyGracePeriod: 30s
    collectorTargetReloadInterval: 30s
    filterStrategy: relabel-config
    observability:
      metrics: {}
    prometheusCR:
      scrapeInterval: 30s
    resources: {}
  upgradeStrategy: automatic
EOF
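Once the custom resource is created, the operator reconciles it into a StatefulSet. A quick verification (an optional check rather than part of the documented flow; the service name user-collector assumes the operator's default <name>-collector naming) looks like this:

oc get opentelemetrycollector -n istio-system
oc get statefulset,svc,pods -n istio-system

If you want to push a test span through the OTLP/HTTP receiver, you can port-forward the collector service and send a minimal OTLP JSON payload with curl; the trace and span IDs below are placeholder hex values, and with the routing configuration above the span should show up in both Jaeger and Tempo:

oc port-forward -n istio-system svc/user-collector 4318:4318 &
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "otel-smoke-test"}}]},
      "scopeSpans": [{
        "spans": [{
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "eee19b7ec3c1b174",
          "name": "smoke-test-span",
          "kind": 1,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        }]
      }]
    }]
  }'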
Now that we have configured and deployed OTEL, it’s time to create control planes for OpenShift Service Mesh.