
DNS in Red Hat OpenShift Container Platform?

Regarding Red Hat OpenShift Container Platform, there are two locations of DNS resolution to be concerned with:

  • Corporate DNS (outside Red Hat OpenShift Container Platform) for the masters' internal/external hostnames and the router wildcard name.

  • OpenShift DNS for communication between internal services.

In this blog, I plan to talk about the latter, which is based on SkyDNS.

For demonstration purposes, a simple “hello openshift” application will be used. The following commands deploy it.

# Create DNS Demo Project
$ oc new-project dns-project

# Deploy hello world pod
$ curl https://raw.githubusercontent.com/openshift/origin/release-3.6/examples/hello-openshift/hello-pod.json | oc create -f -

# Check there is no svc
$ oc get svc
No resources found.

# Access the pod with Pod IP (Container IP - 10.x.x.x)
$ oc get pod -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP            NODE
hello-openshift   1/1       Running   0          5s        10.130.2.13   XXXX

$ curl 10.130.2.13:8080
Hello OpenShift!

# Tip: use this command to get the Pod IP from your environment
$ curl $(oc get pod hello-openshift --template '{{ .status.podIP }}'):8080
Hello OpenShift!

# Expose Service from Pod
$ oc expose pod hello-openshift 
service "hello-openshift" exposed

$ oc get svc
NAME              CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
hello-openshift   172.30.12.46   <none>        8080/TCP   5s
 

Internal DNS: Why do we need it?

In order to answer this question, I am going to use a simple example. When it comes to communication between applications in the VM world (e.g. Red Hat Virtualization, VMware), each application uses the IP address of the guest, which typically does not change. However, in the container-centric world, a container gets a new IP address whenever it spins up.

Kubernetes, which orchestrates containers, needs a stable address for each workload. The Kubernetes Service object, which provides a Service IP, was created to do this, among other things. The Service object has a hostname and an IP address; the hostname will not change, but the IP address will change if the Service object is recreated. With the Service hostname, stable internal communication between pods (services) becomes possible.

Because the Service IP can change, we don’t want to hardcode a Service IP address in the application or in DeploymentConfig environment variables. What if the Service object is recreated without the application team being notified? A new Service IP will be assigned to the service, and applications that still use the old Service IP will fail.

For these reasons, we normally use the Service hostname to avoid a hardcoded IP address in the application. The internal DNS in Red Hat OpenShift Container Platform provides this functionality. It is dynamic: whenever a Service object is recreated, the DNS is updated with new records. With this feature, each Service hostname (which follows a specific format; refer to the Red Hat OpenShift Container Platform DNS documentation) always points at the current Service IP address. As a result, we recommend using the hostname to communicate between internal services.
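The hostname follows a fixed pattern, sketched here with the names from the demo project (a reference sketch, not output from a command):

# Service hostname format:
#   <service>.<project>.svc.cluster.local
# For the demo service created above:
#   hello-openshift.dns-project.svc.cluster.local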

Check Pod IP and Service IP/Hostname:

# Check Service IP
$ oc get svc
NAME              CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
hello-openshift   172.30.12.46   <none>        8080/TCP   7s

$ dig hello-openshift.dns-project.svc.cluster.local

;; QUESTION SECTION:
;hello-openshift.dns-project.svc.cluster.local. IN A

;; ANSWER SECTION:
hello-openshift.dns-project.svc.cluster.local. 30 IN A 172.30.12.46

;; Query time: 1 msec
;; SERVER: 10.10.181.97#53(10.10.181.97)
;; WHEN: Mon Oct 30 15:56:58 EDT 2017
;; MSG SIZE  rcvd: 79

# Access the pod with the Service IP.
# The hello-openshift application is accessible via the Service IP from another application, as follows.
$ curl 172.30.12.46:8080
Hello OpenShift!

# Access the pod with the Service hostname.
# You get the same result using the Service hostname.
$ curl hello-openshift.dns-project.svc.cluster.local:8080
Hello OpenShift!
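Inside a pod, the fully qualified name is usually unnecessary: the pod's /etc/resolv.conf carries search domains for the project, so a short service name resolves as well. A quick check, run from a pod in dns-project (the nameserver IP is illustrative; it is the node's Dnsmasq listen address):

# Pod resolver configuration: project search domains plus the node's Dnsmasq IP
sh-4.2# cat /etc/resolv.conf
search dns-project.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.10.181.196

# The short service name works thanks to the search domains
sh-4.2# curl hello-openshift:8080
Hello OpenShift!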
 

Internal DNS: Where is it?

Internally, the DNS uses SkyDNS, which uses etcd. Since Red Hat OpenShift 3 was released, the internal DNS has changed twice, in Red Hat OpenShift Container Platform 3.2 and 3.6. Prior to 3.6, SkyDNS always ran on the master nodes (‘masters’), so pods in infrastructure/application nodes (‘nodes’) had to access one of the masters in order to resolve Service hostnames.

Main changes in 3.2

  • Dnsmasq is installed by default on masters and nodes

  • NetworkManager is required on masters and nodes

  • SkyDNS listens on port 8053 on the masters

  • All nodes connect to the masters on TCP/8053

  • Dnsmasq routes queries for cluster.local to the Kubernetes Service IP (172.30.0.1:53); see the sketch after this list
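In dnsmasq terms, that routing corresponds to a forwarding rule like the one below (a sketch of the 3.2 - 3.5 behavior; the exact file name and layout may differ by version):

# Forward cluster.local queries to SkyDNS behind the Kubernetes Service IP
server=/cluster.local/172.30.0.1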

Note: OpenShift 3.2 - 3.5 have the DNS structure below


Figure 1. DNS Structure for OpenShift 3.2 - 3.5

 

Main changes in Red Hat OpenShift Container Platform 3.6

  • Dnsmasq, previously optional, is now mandatory.

  • SkyDNS runs on masters and nodes

    • On masters, SkyDNS listens on port 8053 to avoid port conflict

    • On nodes, SkyDNS listens on port 53

  • Dnsmasq routes queries for cluster.local and in-addr.arpa to 127.0.0.1:53; see the check after this list
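You can confirm the master/node split with dig against the local listeners (output trimmed; the answer IP is from the demo service above):

# On a master: SkyDNS answers on port 8053
$ dig +short @127.0.0.1 -p 8053 hello-openshift.dns-project.svc.cluster.local
172.30.12.46

# On a node: SkyDNS answers on port 53
$ dig +short @127.0.0.1 hello-openshift.dns-project.svc.cluster.local
172.30.12.46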

Note: OpenShift 3.6 has the DNS structure below


Figure 2. DNS Structure for OpenShift 3.6 (master without node)
 


Figure 3. DNS Structure for OpenShift 3.6 (master with node)
 
 

Deep Dive

Daemons and port mapping:

As the above diagram (Figure 3) shows, there are three DNS daemons on masters. Let’s check which process listens on each DNS port.

# Show daemons that listen on *53
# netstat -tunlp|grep 53
tcp        0      0 0.0.0.0:8053            0.0.0.0:*               LISTEN      47645/openshift
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      47678/openshift
tcp        0      0 10.10.181.97:53         0.0.0.0:*               LISTEN      47644/dnsmasq
….

# Show process names that use *53 ports
# ps -ef|grep openshift
root      47645      1  2 13:15 ?        00:02:08 /usr/bin/openshift start master api ...
root      47678      1  4 13:15 ?        00:03:17 /usr/bin/openshift start node ..

 

Dispatch Nameserver by Dnsmasq

Dnsmasq is still responsible for dispatching queries from inside pods to the right nameserver. Through OpenShift 3.5, Dnsmasq forwarded queries for `cluster.local` to SkyDNS on the masters; now they go to 127.0.0.1:53, because SkyDNS runs on every node in Red Hat OpenShift Container Platform 3.6. In addition, reverse lookups in `in-addr.arpa` return Service hostnames. From the dnsmasq configuration files, we can see that it forwards queries for `cluster.local` and `in-addr.arpa` to 127.0.0.1:53.

# # Show all dnsmasq configuration
# grep . /etc/dnsmasq.d/*
/etc/dnsmasq.d/node-dnsmasq.conf:server=/in-addr.arpa/127.0.0.1
/etc/dnsmasq.d/node-dnsmasq.conf:server=/cluster.local/127.0.0.1
/etc/dnsmasq.d/origin-dns.conf:no-resolv
/etc/dnsmasq.d/origin-dns.conf:domain-needed
/etc/dnsmasq.d/origin-dns.conf:no-negcache
/etc/dnsmasq.d/origin-dns.conf:max-cache-ttl=1
/etc/dnsmasq.d/origin-dns.conf:enable-dbus
/etc/dnsmasq.d/origin-dns.conf:bind-interfaces
/etc/dnsmasq.d/origin-dns.conf:listen-address=10.10.181.97 # default route interface IP
/etc/dnsmasq.d/origin-upstream-dns.conf:server=10.10.182.21 # original name server IP

# # Check if *.cluster.local can be resolved
# dig hello-openshift.dns-project.svc.cluster.local

;; QUESTION SECTION:
;hello-openshift.dns-project.svc.cluster.local. IN A

;; ANSWER SECTION:
hello-openshift.dns-project.svc.cluster.local. 30 IN A 172.30.12.46

;; Query time: 1 msec
;; SERVER: 10.10.181.97#53(10.10.181.97)

# # Check if *.in-addr.arpa  can be resolved
#  dig -x 172.30.12.46

;; QUESTION SECTION:
;46.12.30.172.in-addr.arpa. IN PTR

;; ANSWER SECTION:
46.12.30.172.in-addr.arpa. 30 IN PTR hello-openshift.dns-project.svc.cluster.local.

;; Query time: 1 msec
;; SERVER: 10.10.181.97#53(10.10.181.97)

 

NetworkManager with Dnsmasq

The NetworkManager dispatcher script 99-origin-dns.sh replicates the functionality of NetworkManager's dns=dnsmasq mode. It makes Dnsmasq listen on the host's default route IP, and containers use that IP as their default nameserver.

99-origin-dns.sh logs to journald under the NetworkManager-dispatcher unit.
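To see what the dispatcher did on the last network event, read that unit's journal (standard journalctl usage):

# Show the NetworkManager-dispatcher journal; 99-origin-dns.sh output appears here
$ journalctl -u NetworkManager-dispatcher --no-pager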

This dispatch script:

  • creates the dnsmasq configuration files:

    • node-dnsmasq.conf

    • origin-dns.conf

    • origin-upstream-dns.conf

  • starts the Dnsmasq daemon when NetworkManager starts.

  • sets Dnsmasq to listen on the host's default route IP.

  • updates /etc/resolv.conf to point at the host's default route IP (see the check below).

  • creates /etc/origin/node/resolv.conf
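The net effect is visible in the host's resolver configuration. With the test environment used later in this post, it would look roughly like this (the IP and search domain are illustrative):

# The host now resolves through Dnsmasq on the default route IP
$ cat /etc/resolv.conf
search gsslab.example.com
nameserver 10.10.181.97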

 

DNS Query Flow Conditions in OpenShift 3.6


Figure 4. DNS Flow of OpenShift 3.6

 

DNS query flow in OpenShift 3.6


Figure 5. DNS Query Flow of OpenShift 3.6

Note: Nodes no longer need to reach a master to resolve Service (SVC) DNS records.

 

Debugging DNS Flow with tcpdump

Test env:

  • Master Node

    • IP: 10.10.181.97

    • Hostname: dhcp181-97.gsslab.example.com

  • App Node

    • IP: 10.10.181.196

    • Hostname: dhcp181-196.gsslab.example.com

  • SkyDNS

    • IP: 127.0.0.1

  • Upstream Nameserver

    • IP: 10.10.182.21

Execute the following tcpdump command to capture DNS traffic:

# On App Node 
$ tcpdump -xx -vvvv -s 0 -l -n -i any port 53 -w test.pcap
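The -w flag writes raw packets to test.pcap instead of printing them; you can read the capture back later with -r (standard tcpdump options):

# Replay the capture without name resolution
$ tcpdump -n -r test.pcap port 53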

 

Scenario 1. Resolve Master hostname from one of app nodes

The node's query goes to Dnsmasq, which forwards it to the upstream DNS.

# Command
$ dig dhcp181-97.gsslab.example.com

# Result
1 10.10.181.196   10.10.181.196  DNS  … dhcp181-97.gsslab.example.com OPT
2 10.10.181.196   10.10.182.21   DNS ... dhcp181-97.gsslab.example.com OPT
3 10.10.182.21    10.10.181.196   DNS ...  dhcp181-97.gsslab.example.com A 10.10.181.97 NS ns01.xxx.redhat.com NS ns02.xxx.redhat.com A x.x.x.x A x.x.x.x OPT
4 10.10.181.196   10.10.181.196   DNS dhcp181-97.gsslab.example.com A 10.10.181.97 NS ns01.xxx.redhat.com NS ns02.xxx.redhat.com A x.x.x.x A x.x.x.x OPT

 

Scenario 2. Resolve Service hostname from one of app nodes

The node's query goes to Dnsmasq, which forwards it to SkyDNS.

# Command 
$ dig  hello-openshift.dns-project.svc.cluster.local

# Result
1 10.10.181.196   10.10.181.196   DNS hello-openshift.dns-project.svc.cluster.local OPT
2 127.0.0.1       127.0.0.1       DNS hello-openshift.dns-project.svc.cluster.local OPT
3 127.0.0.1       127.0.0.1       DNS hello-openshift.dns-project.svc.cluster.local A 172.30.12.46
4 10.10.181.196   10.10.181.196   DNS hello-openshift.dns-project.svc.cluster.local A 172.30.12.46

 

Scenario 3. Resolve Service hostname inside docker container from one of app nodes

The pod's query goes to Dnsmasq, which forwards it to SkyDNS.
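To run the commands below, first open a shell inside the pod, for example with oc rsh (this assumes the pod image ships a shell and dig; if not, use any debug pod in the project):

# Open a shell in the hello-openshift pod
$ oc rsh hello-openshift
sh-4.2#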

# Command inside container
sh-4.2# ip a
   ….
   inet 10.131.2.17/23 scope global eth0
  ...

sh-4.2# dig hello-openshift.dns-project.svc.cluster.local

# Result
1 10.131.2.17   10.10.181.196   DNS Standard query A hello-openshift.dns-project.svc.cluster.local OPT
2 10.131.2.17   10.10.181.196   DNS Standard query A hello-openshift.dns-project.svc.cluster.local OPT
3 127.0.0.1     127.0.0.1       DNS Standard query A hello-openshift.dns-project.svc.cluster.local OPT
4 127.0.0.1     127.0.0.1       DNS Standard query response A hello-openshift.dns-project.svc.cluster.local A 172.30.12.46
5 10.10.181.196 10.131.2.17     DNS Standard query response A hello-openshift.dns-project.svc.cluster.local A 172.30.12.46
6 10.10.181.196 10.131.2.17     DNS Standard query response A hello-openshift.dns-project.svc.cluster.local A 172.30.12.46


Interested in learning more about Red Hat OpenShift Container Platform? Join us at the Red Hat OpenShift Roadshow Event in London Jan. 18, 2018. Learn more and register here.

 

Jooho Lee is a senior OpenShift Technical Account Manager (TAM) in Toronto supporting middleware products (EAP/DataGrid/Web Server) and cloud products (Docker/Kubernetes/OpenShift/Ansible). He is an active member of JBoss User Group Korea and the OpenShift/Ansible groups. Find more posts by Jooho at https://www.redhat.com/en/blog/authors/jooho-lee
A Red Hat Technical Account Manager (TAM) is a specialized product expert who works collaboratively with IT organizations to strategically plan for successful deployments and help realize optimal performance and growth. The TAM is part of Red Hat’s world-class Customer Experience and Engagement organization and provides proactive advice and guidance to help you identify and address potential problems before they occur. Should a problem arise, your TAM will own the issue and engage the best resources to resolve it as quickly as possible with minimal disruption to your business.

Connect with TAMs at a Red Hat Convergence event near you! Red Hat Convergence is a free, invitation-only event offering technical users an opportunity to deepen their Red Hat product knowledge and discover new ways to apply open source technology to meet their business goals. These events travel to cities around the world to provide you with a convenient, local one-day experience to learn and connect with Red Hat experts and industry peers.

