RUNNING COREDNS FOR LAB NAME RESOLUTION

Written by Sam McGeown on 24/1/2020

Up until recently I've been running a Windows Server Core VM with Active Directory, DNS and Certificate Services deployed to provide some core features in my home lab. However, I've also been conscious that running a lab on old hardware doesn't exactly have much in the way of green credentials. So, in an effort to reduce my carbon footprint (and electricity bill) I've been looking for ways to shut down my lab when it's not in use.

As a result, I've migrated a lot of the things I need running 24x7 (Plex, home automation, UniFi, OpenVPN, emonCMS) to Kubernetes on a Raspberry Pi cluster, and it works pretty well for me. One nice-to-have, though, was custom DNS resolution for these home services. I started looking at running BIND in a container on Kubernetes, but found that the available images were either not ARM-compatible or generally not great. Then it struck me that there was already a very good DNS implementation on each node of my Kubernetes cluster - CoreDNS!

CoreDNS is written in Go and has been part of Kubernetes clusters since v1.11 - it replaced kube-dns (which ran multiple containers) with a single process, and has a pretty good ecosystem of plugins to enhance functionality as required. It is actually possible to configure aliases, and even add a custom zone, on the kube-system CoreDNS server. But all of this involves modifying the kube-system CoreDNS configuration, and it just doesn't sit quite right with me. I want my Kubernetes clusters to be as simple as possible.
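
For illustration, that approach would mean editing the coredns ConfigMap in kube-system and adding an extra server block to its Corefile - something like the hypothetical sketch below, where the zone name and file path are placeholders, and the zone file would also need to be mounted into the kube-system CoreDNS pods:

# Hypothetical extra server block added to the kube-system Corefile
definit.local:53 {
    # Serve the lab zone from a standard zone file (path is a placeholder)
    file /etc/coredns/zones/db.definit.local
    log
}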

However, CoreDNS is a general-purpose DNS server, and I know it will run on my Kubernetes cluster just fine - because it already is! I set about recreating my lab DNS zone (definit.local) using CoreDNS - here's what I ended up with:

  • Namespace - a namespace to contain my CoreDNS resources
  • ConfigMaps - configuration for my Corefile and DNS zone file(s)
  • Deployment - the deployment specification for the CoreDNS pods
  • Services - MetalLB-backed LoadBalancer Services to provide HA access for my network

Namespace

The namespace is pretty self-explanatory…

kind: Namespace
apiVersion: v1
metadata:
  name: coredns

ConfigMaps

I played about with various ConfigMap layouts for the setup, and in the end decided to split the Corefile (the central CoreDNS configuration file) and the DNS zones into separate ConfigMaps.

The coredns-config ConfigMap tells CoreDNS how to run, and which plugins to load. It is mounted into the /etc/coredns folder later, in the deployment configuration. Since I don't really want to restart my CoreDNS servers every time I make a change to a DNS zone, I am using the auto plugin to automatically load zone files (either new, or with a SOA serial change) that are mounted in the /etc/coredns/zones folder. The second ConfigMap, coredns-zones, is mounted into the /etc/coredns/zones folder and provides the actual DNS zone data.

The log and errors plugins are self-explanatory: they tell CoreDNS to write query logs and errors to STDOUT. The reload plugin reloads the configuration from the Corefile whenever changes are made, checking every 30 seconds. The loadbalance plugin will round-robin DNS A, AAAA and PTR records.

Finally, the health and ready plugins provide the HTTP endpoints used by the deployment's liveness and readiness probes.

apiVersion: v1
data:
  Corefile: |
    .:53 {
        # Automatically load zone files mounted in the "zones" directory
        auto {
            directory /etc/coredns/zones
        }
        # Enable logging to STDOUT
        log
        # Print errors to STDOUT
        errors
        # Automatically load changes to the Corefile, checking every 30 seconds
        reload 30s
        # Round-robin A, AAAA and PTR records
        loadbalance
        # Return HTTP 200 OK on :8080 when CoreDNS is running
        health
        # Return HTTP 200 OK on :8181 when CoreDNS and plugins are ready
        ready
    }
kind: ConfigMap
metadata:
  name: coredns-config
  namespace: coredns

The DNS zone data itself is stored in the coredns-zones ConfigMap and, as I mentioned before, is mounted into the /etc/coredns/zones folder, from where the Corefile has specified that zones which are new, or have a SOA serial increment, will be loaded automatically. For brevity, this is just a snippet of my DNS zone!

apiVersion: v1
data:
  db.definit.local: |
    $TTL  30M
    @   IN  SOA  ns1.definit.local.  definit.local.  (
                    2019041900  ; Serial
                    4H          ; Refresh
                    1H          ; Retry
                    7D          ; Expire
                    4H )        ; Negative Cache TTL
        IN  NS  ns1

    ns1                       IN  A   192.168.21.4
    pihole.definit.local.     IN  A   192.168.21.3
    unifi.definit.local.      IN  A   192.168.21.10
    vault.definit.local.      IN  A   192.168.21.11
    syn-ds-01.definit.local.  IN  A   192.168.4.10
    edgerouter                IN  A   192.168.1.1
    sg300-10                  IN  A   192.168.9.2
    sg300-20                  IN  A   192.168.100.3

kind: ConfigMap
metadata:
  name: coredns-zones
  namespace: coredns

Deployment

The deployment spec is pretty simple: it calls for 3 replicas of the CoreDNS container image (one per worker node in my Kubernetes cluster). The -conf argument tells CoreDNS where to find the Corefile configuration file. The containers accept both TCP and UDP connections on port 53 (though CoreDNS can also serve DNS over other protocols, such as TLS, HTTPS and gRPC). The livenessProbe and readinessProbe point at the HTTP endpoints exposed by the health and ready plugins configured in the Corefile. Finally, the volumes and volumeMounts map the contents of the ConfigMaps (shown above) as files in the /etc/coredns folder.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: definit-coredns
  namespace: coredns
  labels:
    app.kubernetes.io/name: coredns
    app.kubernetes.io/instance: definit-coredns
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: coredns
      app.kubernetes.io/instance: definit-coredns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: coredns
        app.kubernetes.io/instance: definit-coredns
    spec:
      containers:
      - name: coredns-lab
        image: coredns/coredns:latest
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        ports:
          - name: dns-tcp
            containerPort: 53
            protocol: TCP
          - name: dns-udp
            containerPort: 53
            protocol: UDP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        volumeMounts:
        - mountPath: "/etc/coredns/Corefile"
          subPath: "Corefile"
          name: coredns-config-volume
        - mountPath: "/etc/coredns/zones"
          name: coredns-zone-volume
      restartPolicy: Always
      volumes:
        - name: coredns-config-volume
          configMap:
            name: coredns-config
            items:
              - key: Corefile
                path: Corefile
        - name: coredns-zone-volume
          configMap:
            name: coredns-zones

Services

Finally, in order to allow clients on my network to query the CoreDNS servers, and to provide a little bit of resilience, I have a MetalLB load balancer configured to provide access to the CoreDNS pods. Because Kubernetes doesn't yet support mixed-protocol load balancers, I have to configure separate TCP and UDP Services. By specifying the same loadBalancerIP (together with the metallb.universe.tf/allow-shared-ip annotation), MetalLB will use the same external IP address for both. From there, MetalLB peers with my physical routers via BGP and advertises the address as a /32 route.

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: coredns
    app.kubernetes.io/instance: definit-coredns
  annotations:
    metallb.universe.tf/allow-shared-ip: coredns
  name: coredns-tcp-service
  namespace: coredns
spec:
  ports:
  - port: 53
    targetPort: 53
    protocol: TCP
    name: dns-tcp
  selector:
    app.kubernetes.io/name: coredns
    app.kubernetes.io/instance: definit-coredns
  loadBalancerIP: "192.168.21.4"
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Local

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: coredns
    app.kubernetes.io/instance: definit-coredns
  annotations:
    metallb.universe.tf/allow-shared-ip: coredns
  name: coredns-udp-service
  namespace: coredns
spec:
  ports:
  - port: 53
    targetPort: 53
    protocol: UDP
    name: dns-udp
  selector:
    app.kubernetes.io/name: coredns
    app.kubernetes.io/instance: definit-coredns
  loadBalancerIP: "192.168.21.4"
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Local
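
For completeness, here's roughly what the MetalLB side of this looks like - a minimal sketch using MetalLB's ConfigMap-based configuration, where the peer address, ASNs and address pool are placeholders rather than my actual values:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    # BGP peering with the physical router (placeholder address and ASNs)
    peers:
    - peer-address: 192.168.1.1
      peer-asn: 64500
      my-asn: 64501
    # Pool that Service IPs like 192.168.21.4 are allocated from (placeholder range)
    address-pools:
    - name: lab-services
      protocol: bgp
      addresses:
      - 192.168.21.0/24

With protocol: bgp, MetalLB advertises each allocated Service IP to its peers as a /32 route, which is how 192.168.21.4 becomes reachable from the rest of the network.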

And that's that - simple! My own custom CoreDNS service running on Kubernetes. If I want to update the configuration, or a zone file, I simply update the ConfigMap YAML and apply it, and within 60 seconds the DNS zone has been updated!
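
As a rough sketch of that workflow (assuming the zone ConfigMap above is saved as coredns-zones.yaml - the filename is mine, not a requirement):

# 1. Bump the SOA serial in db.definit.local (e.g. 2019041900 -> 2019041901),
#    make the record changes you need, then re-apply the ConfigMap
kubectl apply -f coredns-zones.yaml

# 2. The kubelet syncs the updated ConfigMap into the running pods, and the
#    auto plugin spots the serial change - the CoreDNS logs will show it
kubectl logs -n coredns -l app.kubernetes.io/name=coredns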

To verify, list the resources in the coredns namespace and run a couple of test queries against the new service.
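
Here's a minimal sketch of those checks (unifi.definit.local is just one of the records from the zone above):

# List everything deployed into the coredns namespace
kubectl get configmaps,pods,services -n coredns

# Query the load-balanced CoreDNS service over UDP...
dig @192.168.21.4 unifi.definit.local +short

# ...and over TCP, to exercise the second service
dig @192.168.21.4 unifi.definit.local +tcp +short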
