Following on from my recent post deploying Kubernetes with the NSX-T CNP, I wanted to extend my environment to make use of the vSphere Cloud Provider to enable Persistent Volumes backed by vSphere storage. This allows me to use vSphere Storage Policies to create Persistent Volumes based on policy. For example, I'm going to create two classes of storage, Fast and Slow - Fast will be vSAN based and Slow will be NFS based.
I highly recommend you review the previous post to familiarise yourself with the environment.
As always, I'm doing this in my lab as a learning exercise, and blogging it as a way of making sure I understand what I'm doing! I often stand on the shoulders of others, and in this case it is Myles Gray who deserves most of the credit. Again, I am lucky to be able to poke Myles on our corporate Slack, but you can also check out his Kubernetes on vSphere series, which was immensely helpful.
VM Pre-requisites
All the Kubernetes Node VMs need to be placed in a VM folder that's used to identify them. I've created the folder /kubernetes/k8s-01 to reflect the name of the Kubernetes cluster.
In addition, the Kubernetes Node VMs need the advanced setting disk.EnableUUID=1 configured. You can configure this in the vSphere Web Client, or via PowerCLI/govc, but the VMs need to be shut down for the change to take effect. The vSphere Cloud Provider docs say:
This step is necessary so that the VMDK always presents a consistent UUID to
the VM, thus allowing the disk to be mounted properly.
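For reference, this is roughly how the setting could be applied with govc - the VM names, folder path and credentials below are placeholders from my lab, so substitute your own:

```bash
# Sketch: set disk.EnableUUID on each (powered-off) node VM with govc.
# VM names and the inventory path are examples, not prescriptive.
export GOVC_URL='vcsa.definit.local' GOVC_USERNAME='administrator@vsphere.local' \
       GOVC_PASSWORD='...' GOVC_INSECURE=1

for vm in k8s-master-01 k8s-worker-01 k8s-worker-02; do
  govc vm.change -vm "/DefinIT/vm/kubernetes/k8s-01/${vm}" -e disk.enableUUID=1
done
```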
Assign the vcp-manage-k8s-node-vms role to the vcp-k8s-svc account on the Cluster and the VM folder in which your Kubernetes nodes run, ensuring "Propagate to children" is ticked.
Since I'll be using two different storage policies, I've added the vcp-manage-k8s-volumes role to both the vSAN and NFS datastores.
Configure the vcp-view-k8s-spbm-profile role on the vCenter object.
Finally, add the Read-only role to the Datacenter (and any Datastore Cluster or Datastore Folder, if you have them).
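If you prefer the CLI, the same assignments can be made with govc permissions.set. The sketch below assumes my lab's inventory paths, datastore names and an @vsphere.local SSO domain - they are illustrations, not exact values from this environment:

```bash
# Sketch: grant the VCP roles to the service account with govc.
# Inventory paths, datastore names and the SSO domain are assumptions.
PRINCIPAL='vcp-k8s-svc@vsphere.local'

# Node VM management on the cluster and VM folder, propagating to children
govc permissions.set -principal "$PRINCIPAL" -role vcp-manage-k8s-node-vms -propagate=true \
  '/DefinIT/host/Cluster02' '/DefinIT/vm/kubernetes/k8s-01'

# Volume management on the datastores used by the storage policies
govc permissions.set -principal "$PRINCIPAL" -role vcp-manage-k8s-volumes -propagate=false \
  '/DefinIT/datastore/vsanDatastore' '/DefinIT/datastore/SYN-NFS-01'

# SPBM profile view on the vCenter root object
govc permissions.set -principal "$PRINCIPAL" -role vcp-view-k8s-spbm-profile -propagate=false '/'

# Read-only on the Datacenter
govc permissions.set -principal "$PRINCIPAL" -role ReadOnly -propagate=true '/DefinIT'
```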
Create a Kubernetes Secret
To secure the vSphere credentials they can be stored as a Kubernetes Secret. First, convert the credentials to base64:
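I haven't reproduced my exact manifest here, but as a rough sketch the vSphere Cloud Provider expects data keys of the form <vcenter-address>.username and <vcenter-address>.password in the Secret named by secret-name/secret-namespace in vsphere.conf. The account and password below are placeholders:

```bash
# Base64-encode the vCenter service account credentials (placeholder values)
echo -n 'vcp-k8s-svc@vsphere.local' | base64
echo -n 'SuperSecretPassword' | base64
```

Then drop the output into the Secret referenced from vsphere.conf:

```yaml
# Sketch of the vcp-vcenter Secret in kube-system
apiVersion: v1
kind: Secret
metadata:
  name: vcp-vcenter
  namespace: kube-system
type: Opaque
data:
  vcsa.definit.local.username: <base64 output from above>
  vcsa.definit.local.password: <base64 output from above>
```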
Configuring the Kubernetes vSphere Cloud Provider
On the Kubernetes Master node
Create vsphere.conf
Create the vSphere configuration file in /etc/kubernetes/vcp/vsphere.conf -
you’ll need to create the folder.
```ini
[Global]
# Global settings are defaults for all VirtualCenters defined below
secret-name = "vcp-vcenter"
secret-namespace = "kube-system"
port = "443"
insecure-flag = "1"

[VirtualCenter "vcsa.definit.local"]
datacenters = "DefinIT"

[Workspace]
# Defines
server = "vcsa.definit.local"
datacenter = "DefinIT"
default-datastore = "SYN-NFS-01"
resourcepool-path = "Cluster02/Resources"
folder = "kubernetes/k8s-01"

[Disk]
# Required...
scsicontrollertype = pvscsi
```
Modify the kubelet service
The vSphere Storage for Kubernetes docs are a little out of date on this - you don't modify the systemd unit file directly (which is likely to be overwritten by upgrades). Instead, there's an environment file referenced by the kubelet's systemd drop-in: append the --cloud-provider and --cloud-config flags to /var/lib/kubelet/kubeadm-flags.env.
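On a kubeadm-built node that file contains a single KUBELET_KUBEADM_ARGS variable. The existing flags vary from cluster to cluster, so treat this as a sketch - only the two cloud-provider flags at the end are the addition:

```bash
# /var/lib/kubelet/kubeadm-flags.env (existing flags will differ per node)
KUBELET_KUBEADM_ARGS="--network-plugin=cni --cloud-provider=vsphere --cloud-config=/etc/kubernetes/vcp/vsphere.conf"
```

Restart the kubelet afterwards (systemctl restart kubelet) so it picks up the new flags.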
The container manifests for the Kubernetes API server and controller manager also need to be updated with the --cloud-provider and --cloud-config flags. These are located in the /etc/kubernetes/manifests folder.
Add the following to the spec:containers:command array, paying attention to
whitespace (it is yaml, after all!)
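The original snippet isn't reproduced here, but as a sketch the command array in /etc/kubernetes/manifests/kube-controller-manager.yaml (and likewise kube-apiserver.yaml) ends up looking something like this - the existing flags shown are illustrative, only the last two lines are new:

```yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf  # existing flags vary
    - --cloud-provider=vsphere
    - --cloud-config=/etc/kubernetes/vcp/vsphere.conf
```

The kubelet watches the manifests folder and restarts the static pods automatically when the files change; depending on your setup you may also need to expose /etc/kubernetes/vcp into the containers as a hostPath volume so the pods can read vsphere.conf.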
Each Kubernetes node will need a providerID set so that the created volumes are mounted to the correct node. The manual method for doing this is to look up the VM's UUID in vSphere, then patch the node configuration with kubectl to set the providerID. You can check whether the providerID is set by running:
```bash
kubectl get nodes -o json | jq '.items[]|[.metadata.name, .spec.providerID, .status.nodeInfo.systemUUID]'
```
If the output contains null for the providerID, then you need to set it:
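As a sketch, the manual patch looks something like this - the node name and UUID below are placeholders, and the providerID takes the form vsphere://<vm-uuid>:

```bash
# Patch a single node with its VM UUID (placeholder node name and UUID)
kubectl patch node k8s-worker-01 \
  -p '{"spec":{"providerID":"vsphere://42001234-abcd-ef01-2345-6789abcdef01"}}'
```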
Fortunately, the documentation provides a handy script for doing so; it just needs govc, jq and kubectl configured on each machine. If you're running on macOS like me, you can use Homebrew to install what you need.
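If you just want the gist of that script, a minimal version of the same idea looks roughly like this - it assumes the Kubernetes node names match the VM names in vSphere, skips jq in favour of awk, and uses placeholder govc credentials:

```bash
# Sketch: patch providerID on every node, assuming node name == VM name in vSphere
export GOVC_URL='vcsa.definit.local' GOVC_USERNAME='vcp-k8s-svc@vsphere.local' \
       GOVC_PASSWORD='...' GOVC_INSECURE=1

for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  uuid=$(govc vm.info "${node}" | awk '/UUID:/ {print $2}')
  kubectl patch node "${node}" -p "{\"spec\":{\"providerID\":\"vsphere://${uuid}\"}}"
done
```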
Consuming vSphere Storage
Create Storage Class
To consume the new storage provider, we need to create some storage classes.
I've got two vSphere Storage Policies configured - one for vSAN ("One-node vSAN Policy") and one for NFS ("NFS-Storage"). The vSAN policy is the default policy for my vSAN provider on the host the nodes run on, and the NFS policy is a tag-based policy that will allow placement on any datastore tagged with the "NFS" tag.
To use these policies I'm creating two storage classes - fast for vSAN and slow for NFS:
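I haven't reproduced my exact manifests here, but a sketch of the two classes looks something like this - the in-tree provisioner is kubernetes.io/vsphere-volume and storagePolicyName references the SPBM policies above (the diskformat choice and the second file name are my assumptions):

```yaml
# sc-vsan-policy.yaml - "fast" class backed by the vSAN storage policy
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: "One-node vSAN Policy"
```

```yaml
# sc-nfs-policy.yaml (hypothetical file name) - "slow" class backed by the NFS tag policy
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: "NFS-Storage"
```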
The storage classes are created using the kubectl apply -f sc-vsan-policy.yaml command, and we can check a class has been created using the kubectl describe storageclass fast command.
Create a Persistent Volume Claim
To be able to consume the Storage Class we need to create a Persistent Volume Claim, or PVC, that uses the storage class:
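Something like the following, where the claim name and requested size are just examples:

```yaml
# pvc-fast.yaml (example name) - a claim against the "fast" storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fast
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
```

Once applied with kubectl apply -f pvc-fast.yaml, the claim should go Bound and a VMDK should be provisioned on a datastore that satisfies the vSAN policy.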