Introduction
This article aims to show you how to create a single-master, bare-metal Kubernetes cluster with the kubeadm tool on a remote Debian 9 server.
Requirements
- 2 GB of RAM minimum per machine.
- 2 CPUs minimum.
- Full network connectivity between all machines in the cluster.
- Unique hostname, MAC address, and product_uuid for every node.
- Swap disabled.
- Certain ports have to be open on your machines.

Port details for the master node:
- TCP, inbound, 6443*: Kubernetes API server
- TCP, inbound, 2379-2380: etcd server client API
- TCP, inbound, 10250: Kubelet API
- TCP, inbound, 10251: kube-scheduler
- TCP, inbound, 10252: kube-controller-manager
- TCP, inbound, 10255: Read-only Kubelet API

Port details for the worker nodes:
- TCP, inbound, 10250: Kubelet API
- TCP, inbound, 10255: Read-only Kubelet API
- TCP, inbound, 30000-32767: NodePort Services**
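Before provisioning, it is worth checking the uniqueness requirements above on each node. A minimal sketch (the product_uuid path is the standard location on Linux, but may not exist on every platform):

```shell
# Each of these values must be unique across all nodes in the cluster.
hostname                                  # node hostname
ip link show | grep link/ether            # MAC addresses of the interfaces
sudo cat /sys/class/dmi/id/product_uuid   # machine product_uuid
```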
Provisioning
Note that you can get away with a single host for testing, but it is highly recommended to have at least one master and two worker nodes in your cluster.
Now, run the following commands on each node as root (Gist). The script below installs everything you need to run Kubernetes on your machine(s):
Linux utils
- apt-transport-https
- ca-certificates
- curl
- software-properties-common
Docker + Kubernetes
- docker-ce (Docker Community Edition)
- kubelet (the agent that runs on every machine in the cluster and does things like starting pods and containers)
- kubeadm (bootstraps the cluster; only needs to be installed on the master)
- kubectl (controls the cluster)
$ curl -sL https://gist.githubusercontent.com/rimiti/7827217049de4e9cb9b398e7a5f2cc12/raw/c69df2657cc0de78e8783065644be7ce62d9f753/installations.sh | sh
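If you prefer not to pipe a remote script into sh, the installation can be sketched manually. This is an assumption about what the script does, based on the package list above; the repository URLs are the standard Docker and Kubernetes apt repositories for Debian 9 (stretch):

```shell
# Run as root. Install the base utilities first.
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common

# Add the Docker CE repository.
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable"

# Add the Kubernetes repository.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

# Install Docker and the Kubernetes components.
apt-get update && apt-get install -y docker-ce kubelet kubeadm kubectl
```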
Disable swap
Since Kubernetes 1.8, you need to disable swap on each server.
$ sudo swapoff -a
Support for swap is non-trivial. Guaranteed pods should never require swap. Burstable pods should have their requests met without requiring swap. BestEffort pods have no guarantee. The kubelet right now lacks the smarts to provide the right amount of predictable behavior here across pods. (@derekwaynecarr)
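Note that swapoff -a only lasts until the next reboot. A common way to make it permanent, assuming your swap devices are declared in /etc/fstab, is to comment out the swap entries (back the file up first):

```shell
# Keep a backup, then comment out every fstab line whose type field is "swap".
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```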
Create the cluster
At this step, connect via SSH to your remote master server.
$ ssh -i ~/.ssh/my_secret_key kube@kube01.dimsolution.com
As root, initialize your cluster with kubeadm.
$ sudo kubeadm init
You should see output similar to this:
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
At this point, your Kubernetes cluster is created, but to start using it you need to run the following steps (as a regular user, not root):
$ sudo su - kube
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Trick
To avoid passing the --kubeconfig argument to every kubectl command, for example:
$ kubectl --kubeconfig $HOME/.kube/config get all
you can set an environment variable to tell kubectl to use this config file by default.
$ echo "export KUBECONFIG=$HOME/.kube/config" | tee -a ~/.bashrc
$ source ~/.bashrc
Now you can use kubectl without the --kubeconfig argument.
Join the cluster (optional for this tutorial)
If you have workers, you can easily join them to the cluster with the command generated by the previous kubeadm command.
$ sudo kubeadm join --token c30633.d178035db2b4bb9a 10.0.0.5:6443 --discovery-token-ca-cert-hash sha256:<hash>
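As the kubeadm init output above warns, bootstrap tokens expire after 24 hours by default. If yours has expired by the time you add a worker, you can generate a fresh join command on the master:

```shell
# On the master: create a new bootstrap token and print the full join command.
sudo kubeadm token create --print-join-command
```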
Create a network (optional for this tutorial)
There are many network providers available for Kubernetes, but none are included by default. Weave Net (created by Weaveworks) is one of the most popular providers in the Kubernetes community. One of its many benefits is that it works directly without any configuration.
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Apply your first chart
Before applying the kubernetes-dashboard chart, we check the cluster and node status.
$ kubectl cluster-info
Kubernetes master is running at https://163.173.23.173:6443
KubeDNS is running at https://163.173.23.173:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
kube-01   Ready     master    2h        v1.11.0
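As noted earlier, a single host can be enough for testing. By default, however, kubeadm taints the master so that regular workloads are not scheduled on it; if you want a single-node cluster, you can remove that taint (this assumes the node-role.kubernetes.io/master taint key used by kubeadm in this version):

```shell
# Allow regular workloads to be scheduled on the master node.
kubectl taint nodes --all node-role.kubernetes.io/master-
```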
Everything is OK, so we can apply the recommended kubernetes-dashboard chart (GitHub).
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
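The dashboard service is not exposed publicly by default. One common way to reach it, assuming the kube-system namespace and kubernetes-dashboard service name used by this recommended manifest, is through a local API-server proxy:

```shell
# Start a local proxy to the API server (listens on 127.0.0.1:8001 by default).
kubectl proxy
# Then open in your browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```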