Kubernetes Development: Vagrant and Kubeadm
When experimenting with Kubernetes it is valuable to have an easily created development environment. Not needing to rely on an environment deployed to a cloud provider, or on a complex bare metal installation, makes development much faster.
Vagrant is a widely used and well understood tool for creating virtual machines. It works well with both Docker and Kubernetes, which makes it a good choice for our development environment. Installing Kubernetes can be a complex, even overwhelming, task, especially for a development environment. Kubeadm makes installing and configuring a simple Kubernetes cluster for local development fast and easy.
Using Vagrant and kubeadm we will create a development environment with a single-node Kubernetes cluster which we can easily provision and destroy as needed.
The source code for this exercise can be found here: https://github.com/cvgw/vagrant_kubeadm
The first step is to create our Vagrantfile and specify the desired configuration.
Set the box and provisioner as follows:
config.vm.box = "bento/ubuntu-16.04"
config.vm.provision "docker"
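These lines, like the other Vagrantfile snippets in this post, live inside the standard Vagrant.configure block; a minimal sketch of the surrounding file:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-16.04"
  # the docker provisioner installs Docker inside the machine
  config.vm.provision "docker"
  # the network, provider, and shell provisioner settings from the sections below go here too
end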
We want our Vagrant machine accessible via a static IP from our host machine in case we wish to make use of the Kubernetes API there. The IP address chosen is up to you.
config.vm.network "private_network", ip: "192.168.66.100"
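Once the cluster is up and the admin credentials have been copied out of the VM (see the end of the provisioning script below), kubectl on the host can talk to the API server at this address; kubeadm's API server listens on port 6443 by default. A sketch, assuming the kubeconfig has been saved on the host as admin.conf:
kubectl --kubeconfig admin.conf --server https://192.168.66.100:6443 get nodes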
During our experiments we may want to run a number of Kubernetes resources simultaneously; to ensure there will be enough resources available on the node we need to provision adequate CPU and memory.
config.vm.provider "virtualbox" do |v|
v.memory = 4096
v.cpus = 4
end
Note: you may be using a different provider, so be sure to follow the correct process for specifying resources for your provider.
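For example, with the libvirt provider (via the vagrant-libvirt plugin) the equivalent block might look like the following; this is a sketch, so check your provider's documentation for the exact attribute names:
config.vm.provider "libvirt" do |v|
  v.memory = 4096
  v.cpus = 4
end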
We don't want to manually execute kubeadm every time we create a new Vagrant machine, so we will add a script which runs when the machine is provisioned.
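The script is wired up with Vagrant's shell provisioner; here we assume it is saved next to the Vagrantfile as provision.sh (the file name is just an example):
config.vm.provision "shell", path: "provision.sh"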
The first step in the script is to install kubeadm, kubelet, and kubectl:
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
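Optionally, verify what was installed and hold the packages so that a routine apt upgrade does not unexpectedly move the cluster to a new Kubernetes version:
kubeadm version
kubectl version --client
apt-mark hold kubelet kubeadm kubectl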
Next we disable swap, since the kubelet requires swap to be off:
echo kubelet requires swap off
swapoff -a
echo keep swap off after reboot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
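To confirm swap is disabled, swapon should report no devices:
swapon --summary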
Next we set the kubelet cgroup driver to cgroupfs so that it matches the driver used by Docker:
echo update cgroups
sed -i '0,/ExecStart=/s//Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"\n&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
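Because this edits a systemd drop-in, reloading systemd (and restarting the kubelet) may be needed for the change to take effect; kubeadm init restarts the kubelet anyway, so this is just to be safe:
systemctl daemon-reload
systemctl restart kubelet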
We need to add the static IP of our Vagrant machine to the kube-apiserver's certificate so that it can serve traffic on that address. Additionally, we will want to enable some of the alpha and beta features in Kubernetes. Using a config file with kubeadm makes both of these things much easier.
We create the config file and then initialize the Kubernetes cluster:
echo initialize kubernetes
cat << 'EOF' > /tmp/config.yaml
# apiVersion/kind are for the v1alpha1-era kubeadm config format; adjust to match your kubeadm version
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
nodeName: vagrant
apiServerCertSANs:
- 192.168.66.100
apiServerExtraArgs:
enable-admission-plugins: MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Initializers
runtime-config: admissionregistration.k8s.io/v1alpha1
EOF
kubeadm init --config /tmp/config.yaml
Finally, we copy the admin credentials so they can be used by the vagrant user:
echo Copying credentials to /home/vagrant...
sudo --user=vagrant mkdir -p /home/vagrant/.kube
cp /etc/kubernetes/admin.conf /home/vagrant/.kube/config # no -i: the script runs non-interactively
chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config
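As an optional last step in the script, you can sanity-check that the copied credentials work for the vagrant user:
sudo --user=vagrant kubectl --kubeconfig /home/vagrant/.kube/config get nodes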
Start the Vagrant machine and SSH to it once it is ready; the commands for this are shown below. Once logged into the vagrant shell we will need to do two more things before our Kubernetes cluster is ready.
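To bring the machine up and log in, run the following from the directory containing the Vagrantfile:
vagrant up
vagrant ssh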
The first is to install a CNI network plugin. For this demo we will use Weave Net:
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')
The second is to remove the master taint from the node so that we can schedule pods on it. The master node is tainted by default to prevent pods from being scheduled there, but since we are running a single-node cluster we want to allow this.
kubectl taint nodes --all node-role.kubernetes.io/master-
The Kubernetes cluster is now ready to use.
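A quick smoke test is to confirm that the node reports Ready and that the system pods are running:
kubectl get nodes
kubectl get pods --all-namespaces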
A huge thank you to Liz Rice; her post was the source for much of this information, and without it I would not have been able to write this:
https://medium.com/@lizrice/kubernetes-in-vagrant-with-kubeadm-21979ded6c63