Picture this: A customer contacts a support engineer about an issue they are facing, perhaps packet drops or latency. The support engineer's first step is to request system information or an SOS report, then attempt to replicate the setup to reproduce the packet drops. The root cause can remain elusive, and if the issue stays unsolved, the engineer turns to a developer for guidance. This back-and-forth communication can be quite time-consuming, stretching over weeks or even months in some cases.
The idea here is to accelerate this process, speeding up communication between customers, Global Support Services (GSS), and developers so that anomalies are addressed quickly.
Open vSwitch (OVS) is the key component that provides a production-grade network for many Red Hat products (e.g., Red Hat OpenShift, Red Hat OpenStack Platform). It is vital to have visibility into the behavior of packet processing within OVS-DPDK as more Red Hat customers use Fast Datapath (FDP) in their production networks.
In this article, we'll demonstrate how to easily detect common networking problems related to Open vSwitch with the help of the Red Hat Insights tool:
- Add Insights rules for OVS coverage stats and OVS logs using Insights parsers.
- Document the existing logs and counters in OVS, explaining their causes and effects. This document serves as a handy reference for the causes, effects, and possible solutions of a particular problem.
Why is Insights a better tool for monitoring OVS-related metrics?
OVS is very complex. While it offers numerous metrics that customers and GSS teams can use to detect and troubleshoot networking issues, today one has to sift through the counters and logs manually to inspect a problem. Wouldn't it be more efficient if a single command automated this inspection? Sounds convenient, right?
To summarize, a way of exposing these metrics to GSS teams is needed. This is where the Red Hat Insights tool can be of great help.
The Insights tool is widely used by GSS teams. It is supported on Red Hat's layered products such as Red Hat OpenStack Platform and Red Hat OpenShift Container Platform. To monitor a new set of metrics, add a parser and a rule, and Insights can track them. These Insights rules can be run against any kind of archive, such as an SOS report.
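As an illustration, here is a minimal sketch of what such a parser could look like. The spec name and the expected coverage/show line format are assumptions for illustration, not the shipped implementation:

from insights import Parser
from insights.core.plugins import parser
from insights.specs import Specs

@parser(Specs.ovs_appctl_coverage)  # hypothetical spec name
class OVSCoverage(Parser):
    """Parse 'ovs-appctl coverage/show' output into {counter: {window: rate}}."""
    def parse_content(self, content):
        self.counters = {}
        for line in content:
            # Expect lines like:
            # "netlink_overflow  0.0/sec  0.0/sec  0.0039/sec  total: 42"
            parts = line.split()
            if len(parts) >= 4 and parts[1].endswith("/sec"):
                self.counters[parts[0]] = {
                    "avg_rate_5_seconds": float(parts[1][:-4]),
                    "avg_rate_1_minute": float(parts[2][:-4]),
                    "avg_rate_1_hour": float(parts[3][:-4]),
                }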
Insights is capable of integrating information from various sources. When monitoring a system, there might be a situation where data from different sources must be analyzed together to draw meaningful conclusions. Fortunately, Insights has combiners that simplify this task. Combiners allow rule authors to combine "facts" from different sources.
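For example, a combiner joining OVS coverage data with vswitchd logs could be sketched as follows; OVSCoverage is the parser sketched above, and OVSVswitchdLog is a hypothetical parser exposing raw log lines:

from insights.core.plugins import combiner

@combiner(OVSCoverage, OVSVswitchdLog)  # both parsers assumed from context
class OVSHealth:
    """Gather coverage counters and suspicious vswitchd log lines in one place."""
    def __init__(self, coverage, logs):
        self.counters = coverage.counters
        self.lost_packets = [l for l in logs.lines if "lost packet" in l]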
All these points make Insights a strong fit for monitoring OVS-related metrics and logs.
Red Hat Insights is easily extendable
Adding a new rule or extending an existing one in the Insights tool is straightforward. Currently, we have added two OVS-related rules to Insights: the ovs_coverage_rule, which observes OVS coverage metrics, and the ovs_logs_rule, which monitors ovs-vswitchd logs. As of now, these rules track selected coverage counters and logs chosen for their importance, serving as an excellent starting point. As OVS evolves, other coverage counters or logs may need tracking, especially if new vital metrics are added to OVS. Hence, it is crucial to keep these rules up to date.
I have designed these rules to be intuitive and extendable, so anyone can understand and extend them by referring to this guide.
How do OVS rules work?
For every rule, we define a set of trigger conditions which, when met, trigger the rule action. In this case, the rule action is to return the concerned metrics to the Insights front end.
Let us now look at the workings of each rule in detail.
Coverage counters rule
A thresholds dictionary is defined with the coverage counter names as keys. For each key, a sub-dictionary details the thresholds for that counter's average rate over the last 5 seconds, 1 minute, and 1 hour.
THRESHOLDS = {
    ...
    "counter_name": {"avg_rate_5_seconds": x,
                     "avg_rate_1_minute": y,
                     "avg_rate_1_hour": z},
    ...
}
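For example, the sample run shown later in this article checks the netlink_overflow counter against an avg_rate_1_hour threshold of 0. A concrete entry along those lines could look like this (the 5-second and 1-minute values here are illustrative, not the shipped ones):

THRESHOLDS = {
    "netlink_overflow": {"avg_rate_5_seconds": 0,
                         "avg_rate_1_minute": 0,
                         "avg_rate_1_hour": 0},
}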
Trigger conditions for the coverage rule:
1. OVS is running.
2. A specific coverage counter and its avg_rate_5_seconds, avg_rate_1_minute, and avg_rate_1_hour thresholds are declared in the thresholds dictionary, and its current values are greater than the thresholds defined.
If 1 and 2 hold:
- Return ERROR_KEY and errors.
If the current average rate of a counter is greater than its defined threshold, the rule is hit and the trigger conditions have been met.
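As a rough sketch, the rule body could look like the following, reusing the hypothetical OVSCoverage parser from earlier and the THRESHOLDS dictionary above; the shipped rule in shared_rules.networking.ovs_coverage_rule is the authoritative version:

from insights.core.plugins import make_fail, rule

ERROR_KEY = "OVS_COVERAGE_THRESHOLD_EXCEEDED"  # illustrative key name

@rule(OVSCoverage)
def report(coverage):
    errors = []
    for name, limits in THRESHOLDS.items():  # THRESHOLDS as shown above
        rates = coverage.counters.get(name)
        if not rates:
            continue
        for window, limit in limits.items():
            # Compare each observed average rate against its threshold.
            if rates.get(window, 0) > limit:
                errors.append("%s exceeded the threshold for %s. "
                              "Threshold is %s. Value was %s"
                              % (name, window, limit, rates[window]))
    if errors:
        return make_fail(ERROR_KEY, errors=errors)
    # Returning None means no threshold was crossed, so no anomaly is reported.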
OVS logs rule
A thresholds dictionary is defined with a distinctive part of a log message as each key. For each key, a sub-dictionary details the actual log pattern and the thresholds for its frequency over the last 1 hour and 24 hours.
THRESHOLDS = {
    ...
    "part_of_log": {"pattern": "abc",
                    "1h_threshold": y,
                    "24h_threshold": z},
    ...
}
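For example, the sample output later in this article matches the "lost packet on port channel" message; an entry for it might look like this (the threshold values are illustrative):

THRESHOLDS = {
    "lost packet on port channel": {"pattern": "lost packet on port channel",
                                    "1h_threshold": 5,
                                    "24h_threshold": 50},
}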
Trigger conditions for the OVS logs rule:
1. The ovs-vswitchd.log file is present.
2. A specific log and its occurrence thresholds are declared in the thresholds dictionary, and its current number of occurrences is greater than the threshold defined.
If 1 and 2 hold:
- Return ERROR_KEY and errors.
If the current frequency of a log is greater than its defined threshold, the rule is hit and the trigger conditions have been met.
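The windowed counting behind this rule can be sketched as follows, assuming the log entries have already been parsed into (timestamp, message) pairs from ovs-vswitchd.log:

from datetime import timedelta

def exceeded_keys(entries, thresholds, now):
    """Yield keys whose 1-hour or 24-hour match counts exceed their thresholds."""
    for key, cfg in thresholds.items():
        # Timestamps of all log lines containing this key's pattern.
        matches = [ts for ts, msg in entries if cfg["pattern"] in msg]
        last_1h = sum(1 for ts in matches if now - ts <= timedelta(hours=1))
        last_24h = sum(1 for ts in matches if now - ts <= timedelta(hours=24))
        if last_1h > cfg["1h_threshold"] or last_24h > cfg["24h_threshold"]:
            yield key, last_1h, last_24h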
How to use OVS Insights rules
OVS Insights rules can be run against SOS reports using insights-run, a tool that executes a set of components (data sources or SPECs, parsers, combiners, and rules) against a host system or one or more archive files.
Below are the steps to run an Insights OVS rule against an SOS report:
1. Get an SOS report (one way is to download it from the case/support shell).
2. Start a Python venv and install the required libraries (insights-core).
3. Run the rule against the SOS report using insights-run.
Commands to run insights-run:
# Run rule against SOS report
> pip install git+https://gitlab.cee.redhat.com/support-insights/gss-rules
> insights run -p shared_rules.networking[.<rule_name>] <sos-report_path>
If the output of the rule is Failed : 1, the rule has been hit and the trigger conditions have been met, which means there is an anomaly that needs to be addressed. If the output is S, the rule was skipped and the trigger conditions have not been met.
How to extend these rules
To extend a rule to track another coverage counter or log, first decide suitable threshold values for all the rates; the OVS team can help in deciding them. Then add the new entry to the corresponding thresholds dictionary and you are done: the rule is now capable of tracking it. The key/value structure for each rule is listed below, followed by an illustrative example.
Coverage counters rule:
- counter_name as key
- Dictionary of avg_rate_5_seconds, avg_rate_1_minute, and avg_rate_1_hour threshold rates as value
For the OVS logs rule:
- log_overview as key
- Dictionary of log_description pattern and respective thresholds as value
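For example, extending the coverage rule is just one more dictionary entry; the counter name and threshold values below are purely illustrative, so agree on real ones with the OVS team:

THRESHOLDS = {
    # ... existing entries ...
    "example_new_counter": {"avg_rate_5_seconds": 10,  # hypothetical counter and values
                            "avg_rate_1_minute": 5,
                            "avg_rate_1_hour": 1},
}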
In either case, update the document OpenvSwitch debugging: coverage counters and logs. Add an entry to the "Logs Summary" table and fill in the other columns, explaining the log, the threshold, and possible directions to resolve the anomaly.
Snippets
OVS_COVERAGE_RULE:
[supportshell-1.sush-001.prod.us-west-2.aws.redhat.com] [07:07:53+0000]
(venvdec) [vbarrenk@supportshell-1 ~]$ insights run -p shared_rules.networking.ovs_coverage_rule 03211783/0570-sosreport-worker-02-kub1-brhoaltb-2022-07-06-piqtdef.tar.xz/
Install colorama if console colors are preferred.
---------
Progress:
---------
F
--------------
Rules Executed
--------------
[FAIL] shared_rules.networking.ovs_coverage_rule.report
-------------------------------------------------------
Links:
jira:
https://issues.redhat.com/browse/XEINSIGHTS-335
product_documentation:
https://docs.google.com/document/d/15ehuyAv71lF5F5tQppZEbm_lcfNPAr_5-UpofXwLl2Q/edit?usp=sharing
Coverage counters of OVS
netlink_overflow:
Exceeded the threshold for avg_rate_1_hour. Threshold is 0. Value was 0.0039
----------------------
Rule Execution Summary
----------------------
Missing Deps: 0
Passed : 0
Failed : 1
Info : 0
Ret'd None : 0
Metadata : 0
Metadata Key: 0
Fingerprint : 0
Exceptions : 0
OVS_LOGS_RULE:
[supportshell-1.sush-001.prod.us-west-2.aws.redhat.com] [07:03:02+0000]
(venvdec) [vbarrenk@supportshell-1 ~]$ insights run -p shared_rules.networking.ovs_logs_rule 03211783/0570-sosreport-worker-02-kub1-brhoaltb-2022-07-06-piqtdef.tar.xz/
Install colorama if console colors are preferred.
---------
Progress:
---------
F
--------------
Rules Executed
--------------
[FAIL] shared_rules.networking.ovs_logs_rule.report
---------------------------------------------------
Links:
jira:
https://issues.redhat.com/browse/XEINSIGHTS-336
product_documentation:
https://docs.google.com/document/d/15ehuyAv71lF5F5tQppZEbm_lcfNPAr_5-UpofXwLl2Q/edit?usp=sharing
Vswitchd logs from OVS
- - - - - - - - - -
PAST 1 HOUR:
log overview: lost packet on port channel
Timestamp: 2022-07-06T15:40:29.168Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 2 ms ago)
Timestamp: 2022-07-06T15:52:01.137Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 1 ms ago)
Timestamp: 2022-07-06T15:52:49.021Z - log: system@ovs-system: lost packet on port channel 23 of handler 1 (last polled 2 ms ago)
Timestamp: 2022-07-06T16:05:59.144Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 0 ms ago)
Timestamp: 2022-07-06T16:10:44.112Z - log: system@ovs-system: lost packet on port channel 23 of handler 0 (last polled 1 ms ago)
Timestamp: 2022-07-06T16:12:29.121Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 1 ms ago)
Timestamp: 2022-07-06T16:18:46.386Z - log: system@ovs-system: lost packet on port channel 23 of handler 1 (last polled 3 ms ago)
Timestamp: 2022-07-06T16:22:31.131Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 0 ms ago)
.
.
.
- - - - - - - - - -
PAST 24 HOURS:
log overview: lost packet on port channel
Timestamp: 2022-07-05T16:43:39.043Z - log: system@ovs-system: lost packet on port channel 23 of handler 1 (last polled 2 ms ago)
Timestamp: 2022-07-05T16:45:14.144Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 1 ms ago)
Timestamp: 2022-07-05T16:46:31.107Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 0 ms ago)
Timestamp: 2022-07-05T16:47:16.157Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 1 ms ago)
Timestamp: 2022-07-05T16:48:03.830Z - log: system@ovs-system: lost packet on port channel 23 of handler 1 (last polled 2 ms ago)
Timestamp: 2022-07-05T16:51:31.267Z - log: system@ovs-system: lost packet on port channel 23 of handler 0 (last polled 7 ms ago)
Timestamp: 2022-07-05T16:52:29.120Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 3 ms ago)
Timestamp: 2022-07-05T16:53:59.156Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 2 ms ago)
Timestamp: 2022-07-05T16:54:31.164Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 1 ms ago)
Timestamp: 2022-07-05T16:56:01.135Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 1 ms ago)
Timestamp: 2022-07-05T17:01:59.147Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 0 ms ago)
Timestamp: 2022-07-05T17:06:06.161Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 3 ms ago)
Timestamp: 2022-07-05T17:06:41.300Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 4 ms ago)
Timestamp: 2022-07-05T17:13:08.281Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 24 ms ago)
Timestamp: 2022-07-05T17:14:08.187Z - log: system@ovs-system: lost packet on port channel 1 of handler 1 (last polled 4 ms ago)
Timestamp: 2022-07-05T17:16:44.145Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 1 ms ago)
Timestamp: 2022-07-05T17:17:59.162Z - log: system@ovs-system: lost packet on port channel 1 of handler 0 (last polled 1 ms ago)
.
.
.
----------------------
Rule Execution Summary
----------------------
Missing Deps: 0
Passed : 0
Failed : 1
Info : 0
Ret'd None : 0
Metadata : 0
Metadata Key: 0
Fingerprint : 0
Exceptions : 0
Other important links and documents
- The coverage rule and the OVS logs rule are easily extendable by following the document: Adding OVS Rules to GSS Insights repo
- A document on how to use OVS Insights rules.
- Troubleshooting documentation for coverage counters and OVS Logs: Troubleshooting Open vSwitch: coverage counters and logs
- Presentation: link
- Detailed demo: link
- Coverage rule
- OVS logs rule
Additional recommendations (OpenStack only)
Before running the Insights rules, run the OpenStack TripleO validations playbooks, a collection of Ansible roles and playbooks that detect and report potential issues during a TripleO deployment. These playbooks can catch misconfigurations early.
- GitHub - openstack/tripleo-validations by Jagananthan Palanisamy.
- Detect and report potential issues or misconfigurations early in TripleO deployments.
- Help detect issues early and prevent field engineers from wasting time on misconfigurations.
- Written in Ansible. Easily consumable.