SCHEDULING A RESTART OF A KUBERNETES DEPLOYMENT USING CRONJOBS
Most of my home network runs on my Raspberry Pi Kubernetes cluster, and for the most part it’s rock solid. However, applications being applications, sometimes they become less responsive than they should be (for example, when my Synology updates itself and reboots, any mounted NFS volumes can cause the running pods to degrade in performance). This isn’t a failure of service liveness, which can be mitigated with a liveness probe that restarts the pod if the service isn’t running.
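For reference, a liveness probe is just a periodic health check declared on the container. A minimal sketch for the PiHole container might look like the snippet below, assuming its web interface is served over HTTP on port 80 (the path and timings are illustrative, not taken from my actual manifest):

livenessProbe:
  httpGet:
    path: /admin/            # assumes the PiHole admin UI responds here
    port: 80
  initialDelaySeconds: 60    # give the service time to start before probing
  periodSeconds: 30          # check every 30 seconds
  failureThreshold: 3        # restart the container after 3 consecutive failures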
If my PiHole or Plex deployments become slow to respond, I can generally restart the Deployment and everything springs back into life. Typically this is just a kubectl rollout restart deployment pihole command to bounce the pods. This is pretty safe, as it creates a new ReplicaSet and ensures the new Pods are running before terminating the old ones.
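If I want to watch the restart complete rather than just fire it off and hope, kubectl can follow the rollout until the new ReplicaSet is fully available:

$ kubectl rollout restart deployment pihole -n pihole
$ kubectl rollout status deployment pihole -n pihole

The -n pihole flag assumes the deployment lives in the pihole namespace, matching the manifests below.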
Rather than waiting for performance to degrade and then manually restarting the deployment, I wanted to bake in an automated restart once a week while the family sleep. The added benefit is that I can also keep my Plex server up to date by specifying imagePullPolicy: Always. Fortunately, we can make use of the CronJob functionality to schedule the command. To do this I need to create a few objects:
ServiceAccount - an account to which I can delegate the rights to restart the deployment, as the default service account does not have them
kind: ServiceAccount
apiVersion: v1
metadata:
  name: restart-pihole
  namespace: pihole
Role - a role with the minimal permissions needed to restart the deployment
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: restart-pihole
  namespace: pihole
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: ["pihole"]
    verbs: ["get", "patch"]
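Why only get and patch? Under the hood, kubectl rollout restart triggers the restart by patching the deployment’s pod template with a kubectl.kubernetes.io/restartedAt annotation, so no delete or update permissions are required. The effect is roughly equivalent to running something like this (timestamp illustrative):

$ kubectl patch deployment pihole -n pihole \
    -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2021-02-28T00:00:00Z"}}}}}'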
RoleBinding - to bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: restart-pihole
  namespace: pihole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: restart-pihole
subjects:
  - kind: ServiceAccount
    name: restart-pihole
    namespace: pihole
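Before wiring up the CronJob, I can sanity-check the binding by impersonating the service account, which should answer yes for the permitted actions (and no for anything else, such as delete):

$ kubectl auth can-i patch deployments.apps/pihole -n pihole \
    --as=system:serviceaccount:pihole:restart-pihole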
Lastly, I need a CronJob - the cron-like task definition to run my kubectl command. I’m using the raspbernetes/kubectl image to provide a kubectl build compatible with my Raspberry Pi’s architecture - if you’re on a different architecture I’d recommend the bitnami/kubectl image.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: restart-pihole
  namespace: pihole
spec:
  concurrencyPolicy: Forbid # Do not run concurrently!
  schedule: '0 0 * * 0' # Run once a week at midnight on Sunday morning
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: restart-pihole # Run under the service account created above
          restartPolicy: Never
          containers:
            - name: kubectl
              image: raspbernetes/kubectl # Specify the kubectl image
              command: # The kubectl command to execute
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/pihole'
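A couple of notes on the spec: because the pod runs under the restart-pihole service account, Kubernetes mounts that account’s token into the container, and kubectl picks it up automatically via its in-cluster configuration - no kubeconfig required. Also, CronJob graduated to batch/v1 in Kubernetes 1.21 and batch/v1beta1 was removed in 1.25, so on newer clusters the apiVersion above needs updating. Applying everything is then a single command:

$ kubectl apply -f restart-pihole.yaml   # hypothetical file containing all four manifests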
Once these configurations have been applied, I can view my new setup:
$ kubectl get cronjobs.batch
NAME             SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
restart-pihole   0 0 * * 0   False     0        <none>          10s
One final tip: if I don’t want to wait until next week to see if the cronjob works, I can create a new job based on the cronjob configuration using the --from=cronjob flag:
$ kubectl create job --from=cronjob/restart-pihole restart-pihole-now
job.batch/restart-pihole-now created
$ kubectl get jobs
NAME                 COMPLETIONS   DURATION   AGE
restart-pihole-now   1/1           111s       2m22s
$ kubectl get pods
NAME                       READY   STATUS      RESTARTS   AGE
pihole-c8774858-tqj65      1/1     Running     0          65s
restart-pihole-now-l2r69   0/1     Completed   0          90s
$ kubectl logs restart-pihole-now-l2r69
deployment.apps/pihole restarted
From the output above I can see that the job was created and then completed (COMPLETIONS: 1/1), and that the pihole pod was created 65s ago. I can also see the restart-pihole-now job pod has a STATUS: Completed, and if I check the pod logs I can see the response to the kubectl command.
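Once I’m happy the restart works, the completed test job can be tidied away without touching the weekly CronJob itself:

$ kubectl delete job restart-pihole-now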
Hopefully that’s a useful little starter for CronJobs in Kubernetes - as with cron jobs in Linux or scheduled tasks in Windows, it’s a handy tool in the administrator’s toolbelt!