When you are setting up a self-contained OpenShift v3 cluster, you usually run into the problem of how to resolve host names, local cluster addresses, application routes, and external names correctly everywhere. For a production deployment, a networking engineer would typically be responsible for working this out, but if you are just trying it out, getting properly hooked into organizational DNS can be daunting if not forbidden by policy.
Fortunately, if needed, you can handle DNS entirely yourself.
Assumptions
Let's assume you have a set of hosts on which to run OpenShift, and all you have to work with are their IP addresses - no DNS name resolves to the hosts. Let's say you would like to use the example.com domain for your hosts and a subdomain apps.example.com for your default application routes. Preferably you would have control over the domain's DNS records (perhaps this is some personal domain); otherwise external users will need to modify their network settings to resolve the domain's DNS the way you want, and that may not be possible depending on their circumstances.
Note: You can use public DNS without even registering a domain. Every IPv4 address already has public DNS set up for it via xip.io (visit xip.io for details). You can use these DNS entries as your hostnames and cloud domain, and in fact, if you're comfortable doing that, you don't need the workaround discussed here at all. But it is still handy if you want your URLs to look "pretty."
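For example, with xip.io any hostname that embeds an IPv4 address resolves to that address - so, assuming a master at 192.168.1.100 (the illustrative IP used throughout this article), you could use names like:

```
master.192.168.1.100.xip.io       resolves to 192.168.1.100
myapp.apps.192.168.1.100.xip.io   resolves to 192.168.1.100
```

Here `apps.192.168.1.100.xip.io` would serve as the cloud domain for application routes, with no DNS setup on your part.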
Custom DNS
You need a place to run your own DNS server for your domain. You can use a spare VM if you like, but the master is a good place to locate it. The first hurdle is that the master already runs a DNS server - the internal SkyDNS server that resolves local cluster URLs like kubernetes.default.svc.cluster.local. By default Kubernetes already injects this server's IP into containers; we want to keep that existing resolution while also adding our hostnames and domain names. However, you aren't given any options for configuring this DNS server the way you would like.
Instead, you can run a dnsmasq DNS server on your master; you should be able to install the dnsmasq RPM (or on Atomic Host, run a dnsmasq container with the host network). So your /etc/dnsmasq.conf might look something like:
```
# Reverse DNS record for master
host-record=master.example.com,192.168.1.100

# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/192.168.1.100
```
If you start dnsmasq like this, it reads your /etc/hosts file on the master. Populate /etc/hosts with your desired hostnames and IPs for all your hosts.
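As an illustration (the hostnames and IPs here are placeholders - substitute your own hosts), /etc/hosts on the master might contain entries like:

```
127.0.0.1       localhost localhost.localdomain
192.168.1.100   master.example.com master
192.168.1.101   node1.example.com node1
192.168.1.102   node2.example.com node2
```

dnsmasq will then answer queries for all of these names in addition to the entries in /etc/dnsmasq.conf.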
The last line points your cloud domain at the master, where we assume you have a router running; adjust as needed depending on where your router actually runs. (You will need a router in order to use application routes.)
The problem is that when you start dnsmasq, it fails with an error: it tries to bind port 53 on the host, which is already bound by the master's SkyDNS server (or, if the master is not yet running, the master will fail to bind port 53 when you do start it, because dnsmasq holds the port). The solution is to move the SkyDNS server to a different port and have dnsmasq forward local requests to it.
Moving the SkyDNS server is a simple master configuration change. In the master-config.yaml file you should change the following line:
```
dnsConfig:
  bindAddress: 0.0.0.0:53
```
Have the server instead bind to an available port on localhost:
```
dnsConfig:
  bindAddress: 127.0.0.1:8053
```
You will need to restart the master if it is running. Now you just need to add forwarding directives in /etc/dnsmasq.conf:
```
# Forward .local queries to SkyDNS
server=/local/127.0.0.1#8053

# Forward reverse queries for service network to SkyDNS.
# This is for default OpenShift SDN - change as needed.
server=/17.30.172.in-addr.arpa/127.0.0.1#8053
```
At this point you should be able to run dnsmasq and the master's SkyDNS harmoniously, and dnsmasq should be able to resolve cluster-local addresses by forwarding those queries to SkyDNS.
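You can sanity-check both halves from the master itself. These dig queries assume the example names and IPs used above, a running cluster, and that the package providing dig (bind-utils on RHEL/CentOS) is installed:

```shell
# Ask dnsmasq for a hostname defined in /etc/hosts
dig +short master.example.com @127.0.0.1

# Ask dnsmasq for a name under the wildcard application domain
dig +short myapp.apps.example.com @127.0.0.1

# Confirm cluster queries are forwarded through dnsmasq to SkyDNS
dig +short kubernetes.default.svc.cluster.local @127.0.0.1

# Query SkyDNS directly on its new port
dig +short kubernetes.default.svc.cluster.local @127.0.0.1 -p 8053
```

The first two queries should return 192.168.1.100, and the last two should agree with each other.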
There are a few more dnsmasq settings you should get in the habit of adding, especially if your host will be exposed to the internet. The first setting in particular prevents your server from becoming an open resolver, which can be used to amplify DDoS attacks:
```
# Do not read /etc/resolv.conf and forward requests
# to nameservers listed there:
no-resolv

# Never forward plain names (without a dot or domain part)
domain-needed

# Never forward addresses in the non-routed address spaces.
bogus-priv
```
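Putting the pieces together, a complete /etc/dnsmasq.conf for this setup (using the example addresses from this article - substitute your own) would look something like:

```
# Local hostnames (dnsmasq also reads /etc/hosts)
host-record=master.example.com,192.168.1.100

# Wildcard DNS for OpenShift applications - points to the router
address=/apps.example.com/192.168.1.100

# Forward cluster queries to SkyDNS on its relocated port
server=/local/127.0.0.1#8053
server=/17.30.172.in-addr.arpa/127.0.0.1#8053

# Hardening: do not act as an open resolver
no-resolv
domain-needed
bogus-priv
```

With no other server= lines and no-resolv set, dnsmasq answers only for your domain and the cluster zones, and rejects everything else - which is exactly what the host configuration below relies on.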
Host Configuration
As mentioned, pods get the master DNS server injected for free. Generally you would like your hosts to be able to use this server for DNS resolution as well, but chances are that the default DNS configuration placed only one upstream nameserver in your host's /etc/resolv.conf. You want to keep that nameserver for resolving everything outside your cluster, but you also want to use dnsmasq for resolving everything inside. Requests should go to your internal dnsmasq first; if the address is an external one, dnsmasq rejects the request and the resolver asks the next nameserver in the list. You can arrange this by adding your server's IP above the existing entry in /etc/resolv.conf, e.g.:
```
nameserver 192.168.1.100
nameserver 192.168.1.1
```
In order to make this change persist when the host reboots or receives a new lease from DHCP, you can create (or append to) the /etc/dhcp/dhclient.conf file, e.g.:
```
prepend domain-name-servers 192.168.1.100;
```
Do this on all your hosts. The same approach also works on your workstation, so that it resolves your domain's DNS properly - if your workstation is running Linux. Your mileage may vary on other operating systems; Mac OS X is particularly unforgiving of nameservers that reject requests, and you may be forced to use /etc/hosts entries instead. However, if you actually control the DNS for your domain, you can now set your master's IP as the nameserver for the domain; once this setting propagates, everyone with network access to your master will be able to resolve your domain's addresses correctly. It is slightly odd to point global DNS records at internal servers, but it should work fine.
Last updated: October 18, 2018