Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container?
If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code?
We'll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.
Starting throwaway containers on Red Hat OpenShift
Most developers think about containers and OpenShift only for running long-lived applications. You create deployment configurations, stateful sets, or cron jobs that stay alive forever, creating and re-creating pods as required. Your application is always on, or at least on at fixed intervals.
The oc run command runs containers that perform a single task and then die. It creates unmanaged containers that OpenShift does not replace when they die.
I once had an application that talked to a legacy database outside of my OpenShift cluster. The application was able to access the database from my local machine, but not from OpenShift. I needed the ability to test access to the database from inside OpenShift. This way I could find out whether I got the correct environment variables. I would also see whether the container could resolve the database’s host name. Maybe there was a firewall blocking access to the database?
This is a perfect scenario for the oc run command. Just start a pod running the database container image. From that pod, you can use the database client and OS commands to troubleshoot configuration and network connectivity. After a few quick tests, you don't need the pod anymore.
$ oc run -it test --rm --restart Never \
  --image registry.access.redhat.com/rhscl/mysql-57-rhel7 bash
The previous command gives me an interactive (-it) Bash prompt on a pod named test. OpenShift never restarts this pod (--restart Never) and removes it when terminated (--rm).
The MySQL database image from Red Hat (rhscl/mysql-57-rhel7) provides the MySQL client and a few other useful commands, such as dig and host. With these, I can check that the pod resolves the server's host name, connect to the database, and verify my access credentials.
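For example, a quick session at the pod's Bash prompt might look like the following sketch. The host name and credentials are placeholders, the same ones used in the next example:

# Inside the throwaway pod: check name resolution, then try the credentials
$ dig +short myserver.domain.example.com
$ mysql -umydbuser -pmydbpassword -hmyserver.domain.example.com mydb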
Starting throwaway containers for management clients
I could start the MySQL client, or any other command available from that container image, directly from the oc run command. For example:
$ oc run -it test --rm --restart Never \
  --image registry.access.redhat.com/rhscl/mysql-57-rhel7 \
  -- mysql -umydbuser -pmydbpassword -hmyserver.domain.example.com mydb
Note the use of a double dash (--) to prevent the oc run command from interpreting the command options intended for the MySQL client. In the previous command, there is no --mysql option; there is a space between -- and mysql.
As another example, I could start the same throwaway container from the first example, then use another terminal to copy a SQL script into the container using the oc cp command. Then I can run the SQL script from the throwaway container's shell, as shown below.
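Here is a minimal sketch of that workflow, assuming the test pod from the first example is still running; init.sql is a hypothetical script in the current directory:

# From a second terminal: copy the script into the running pod
$ oc cp init.sql test:/tmp/init.sql

# Back at the pod's Bash prompt from the first example:
$ mysql -umydbuser -pmydbpassword -hmyserver.domain.example.com mydb < /tmp/init.sql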
Because the MySQL client can take SQL scripts from standard input, I could just add input redirection to the second example and be done: I just populated a test database. What about doing this from a shell script or an Ansible playbook, until I get around to writing that fancy operator that would deploy and initialize the database for my application?
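That one-liner could look like the following sketch. Note the -i without -t: standard input is forwarded to the pod, but no terminal is allocated. The init.sql script name is again hypothetical:

$ oc run -i test --rm --restart Never \
  --image registry.access.redhat.com/rhscl/mysql-57-rhel7 \
  -- mysql -umydbuser -pmydbpassword -hmyserver.domain.example.com mydb < init.sql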
Thanks to the oc run command, I can use administration clients embedded into many container images, for example, the CLI administration tools for JBoss EAP, AMQ, and single sign-on. I do not need to install any of them on my local machine. Cool, isn't it?
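For instance, I could run the JBoss EAP management CLI from a throwaway pod. The image name, CLI path, controller host, and port below are illustrative assumptions; check the documentation for the EAP image your cluster actually uses:

# Hypothetical example: run the EAP CLI against a remote controller
$ oc run -it eap-cli --rm --restart Never \
  --image registry.redhat.io/jboss-eap-7/eap74-openjdk11-openshift-rhel8 \
  -- /opt/eap/bin/jboss-cli.sh --connect \
  --controller=myeapserver.domain.example.com:9990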
Cloning a deployment to a debug container
I could add more command-line options to the oc run command and replicate all the settings of an existing deployment: environment variables, resource limits, and so on. But if my intent were to replicate the runtime environment of my application, this would be too much work and too error-prone.
This, however, is a scenario for the oc debug command. It creates a new pod that is a carbon copy of an existing pod. If your pod does not start for whatever reason, you can create the copy from its deployment configuration.
Suppose that I created my application using oc new-app and named it myapp. To create a debug pod from its deployment configuration, I would use the following command:
$ oc debug -t dc/myapp
I get a Bash shell running under the same constraints as my application: the same UID, the same SELinux context, the same environment variables, and the same container image.
If I suspect that some of these constraints may be causing a failure, I can selectively override them using options of the oc debug command. For example, adding the --as-root option to the previous example gives me a root prompt inside the pod, but only if my OpenShift user has access to a security context constraint that allows it.
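Building on the previous example, that would be:

$ oc debug -t dc/myapp --as-root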
The debug pod runs with health probes disabled. I can start my application manually to check whether the health probes are incorrect and forcing my pod to terminate. I could add options to the oc debug command to keep health probes enabled, disable init containers, disable sidecar containers, or change labels that affect pod scheduling, and thus find which, if any, of the deployment settings are not correct for my application.
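As a sketch, a few of those options look like this; the flag names come from the oc debug help, so check the help output of your oc client version:

# Keep the health probes enabled, skip init containers,
# and run only the selected container (no sidecars):
$ oc debug -t dc/myapp --keep-liveness --keep-readiness \
  --keep-init-containers=false --one-container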
Starting throwaway containers with RHEL tools container images
As with the oc run command, your actions using the oc debug command are limited by what is included in your application's container image. Fortunately, you can override the container image of your debug container. Good candidates are the rhel7/rhel-tools and the rhel8/support-tools container images from Red Hat.
$ oc debug -t dc/myapp \
  --image registry.access.redhat.com/rhel7/rhel-tools
These images provide access to standard RHEL troubleshooting commands that are not included in most application images, for example, the ping and dig commands.
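Inside that debug pod, a couple of quick checks might look like this; the service and host names are placeholders:

$ ping -c 3 myservice
$ dig +short myserver.domain.example.com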
You'll need to download the rhel8/support-tools container image from the Red Hat terms-based registry (registry.redhat.io). Access to the terms-based registry requires a pull secret. Follow the instructions from Red Hat Enterprise Linux Support Tools if needed.
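Here is a minimal sketch of wiring up such a pull secret, assuming you already have credentials for registry.redhat.io; the secret name, user name, and password are placeholders:

$ oc create secret docker-registry rh-registry \
  --docker-server=registry.redhat.io \
  --docker-username=myuser --docker-password=mypassword

# Allow the project's default service account to pull with this secret:
$ oc secrets link default rh-registry --for=pull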
Conclusion
You do not need a local container engine to run throwaway containers that perform troubleshooting and one-time tasks. You can run these containers quickly and easily on Red Hat OpenShift using the oc run and oc debug commands. And your OpenShift cluster, unless it is a Minishift instance, probably downloads container images faster and has more storage space and bandwidth than your local workstation.