Kubernetes is a production-grade container orchestrator.
I wrote this guide so that you can get your Kubernetes cluster running simply by going through it step by step and copy-pasting the commands.
Please let me know about any issues or suggestions; I will be happy to improve this doc.
You will need 4x 2 GB RAM VMs: k8s-controller-1, k8s-controller-2, k8s-worker-1 and k8s-worker-2.
Additionally, each node will be running Kubernetes essential services such as kube-dns, kube-proxy, weave-net and Docker.
for i in k8s-controller-1 k8s-controller-2 k8s-worker-1 k8s-worker-2; do
ssh -o StrictHostKeyChecking=no root@$i \
"timedatectl set-local-rtc 0; timedatectl set-timezone Europe/Prague"
grep -w k8s /etc/hosts | ssh root@$i "tee -a /etc/hosts"
echo -e "net.bridge.bridge-nf-call-iptables=1\n\
net.bridge.bridge-nf-call-ip6tables=1\n\
net.ipv4.ip_forward=1" \
| ssh root@$i "tee /etc/sysctl.d/kubernetes.conf && \
modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf"
done
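To confirm the settings took effect, a quick loop like this can help (same host list as above; just a sanity check):
for i in k8s-controller-1 k8s-controller-2 k8s-worker-1 k8s-worker-2; do
  ssh root@$i "hostname; sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"
done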
ssh -A root@k8s-controller-1
Create an OpenSSL configuration file:
CONTROLLER1_IP=$(getent ahostsv4 k8s-controller-1 | tail -1 | awk '{print $1}')
CONTROLLER2_IP=$(getent ahostsv4 k8s-controller-2 | tail -1 | awk '{print $1}')
SERVICE_IP="10.96.0.1"
mkdir -p /etc/kubernetes/pki
cd /etc/kubernetes/pki
cat > openssl.cnf << EOF
[ req ]
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, digitalSignature, keyEncipherment, keyCertSign
[ v3_req_server ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
[ v3_req_client ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
[ v3_req_apiserver ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names_cluster
[ v3_req_etcd ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names_etcd
[ alt_names_cluster ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-controller-1
DNS.6 = k8s-controller-2
# DNS.7 = ${KUBERNETES_PUBLIC_ADDRESS}
IP.1 = ${CONTROLLER1_IP}
IP.2 = ${CONTROLLER2_IP}
IP.3 = ${SERVICE_IP}
# IP.4 = ${KUBERNETES_PUBLIC_IP}
[ alt_names_etcd ]
DNS.1 = k8s-controller-1
DNS.2 = k8s-controller-2
IP.1 = ${CONTROLLER1_IP}
IP.2 = ${CONTROLLER2_IP}
EOF
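Before signing anything, it is worth checking that the controller IPs were actually expanded into the SAN sections:
grep -E '^(DNS|IP)\.' openssl.cnf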
Kubernetes CA cert used to sign the rest of the K8s certs.
openssl ecparam -name secp521r1 -genkey -noout -out ca.key
chmod 0600 ca.key
openssl req -x509 -new -sha256 -nodes -key ca.key -days 3650 -out ca.crt \
-subj "/CN=kubernetes-ca" -extensions v3_ca -config ./openssl.cnf
kube-apiserver cert used as the default x509 apiserver serving cert.
openssl ecparam -name secp521r1 -genkey -noout -out kube-apiserver.key
chmod 0600 kube-apiserver.key
openssl req -new -sha256 -key kube-apiserver.key -subj "/CN=kube-apiserver" \
| openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial \
-out kube-apiserver.crt -days 365 \
-extensions v3_req_apiserver -extfile ./openssl.cnf
apiserver-kubelet-client cert used for x509 client authentication to the kubelet’s HTTPS endpoint.
openssl ecparam -name secp521r1 -genkey -noout -out apiserver-kubelet-client.key
chmod 0600 apiserver-kubelet-client.key
openssl req -new -key apiserver-kubelet-client.key \
-subj "/CN=kube-apiserver-kubelet-client/O=system:masters" \
| openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial \
-out apiserver-kubelet-client.crt -days 365 \
-extensions v3_req_client -extfile ./openssl.cnf
admin cert used by a human to administer the cluster.
openssl ecparam -name secp521r1 -genkey -noout -out admin.key
chmod 0600 admin.key
openssl req -new -key admin.key -subj "/CN=kubernetes-admin/O=system:masters" \
| openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial \
-out admin.crt -days 365 -extensions v3_req_client \
-extfile ./openssl.cnf
As per https://github.com/kubernetes/kubernetes/issues/22351#issuecomment-26082410
The service account token private key (sa.key), given only to the controller manager, is used to sign the tokens. The masters only need the public key portion (sa.pub) in order to verify the tokens signed by the controller manager.
The service account public key has to be the same for all. In an HA setup that means you need to explicitly give it to each apiserver (recommended) or make all the apiservers use the same serving cert/private TLS key (not recommended).
As a convenience, you can provide a private key to both, and the public key portion of it will be used by the api server to verify token signatures.
As a further convenience, the api server’s private key for its serving certificate is used to verify service account tokens if you don’t specify --service-account-key-file.
--tls-cert-file and --tls-private-key-file are used to provide the serving cert and key to the api server. If you don’t specify these, the api server will make a self-signed cert/key pair and store it at apiserver.crt/apiserver.key
openssl ecparam -name secp521r1 -genkey -noout -out sa.key
openssl ec -in sa.key -outform PEM -pubout -out sa.pub
chmod 0600 sa.key
openssl req -new -sha256 -key sa.key \
-subj "/CN=system:kube-controller-manager" \
| openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial \
-out sa.crt -days 365 -extensions v3_req_client \
-extfile ./openssl.cnf
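If you want to double-check that sa.pub really is the public half of sa.key, comparing the derived public key against the file works (uses only the files generated above):
diff <(openssl ec -in sa.key -pubout 2>/dev/null) sa.pub && echo "sa.key and sa.pub match"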
kube-scheduler cert used to allow access to the resources required by the kube-scheduler component.
openssl ecparam -name secp521r1 -genkey -noout -out kube-scheduler.key
chmod 0600 kube-scheduler.key
openssl req -new -sha256 -key kube-scheduler.key \
-subj "/CN=system:kube-scheduler" \
| openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial \
-out kube-scheduler.crt -days 365 -extensions v3_req_client \
-extfile ./openssl.cnf
front-proxy CA cert used to sign the front-proxy client cert.
openssl ecparam -name secp521r1 -genkey -noout -out front-proxy-ca.key
chmod 0600 front-proxy-ca.key
openssl req -x509 -new -sha256 -nodes -key front-proxy-ca.key -days 3650 \
-out front-proxy-ca.crt -subj "/CN=front-proxy-ca" \
-extensions v3_ca -config ./openssl.cnf
front-proxy-client cert used to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers.
openssl ecparam -name secp521r1 -genkey -noout -out front-proxy-client.key
chmod 0600 front-proxy-client.key
openssl req -new -sha256 -key front-proxy-client.key \
-subj "/CN=front-proxy-client" \
| openssl x509 -req -sha256 -CA front-proxy-ca.crt \
-CAkey front-proxy-ca.key -CAcreateserial \
-out front-proxy-client.crt -days 365 \
-extensions v3_req_client -extfile ./openssl.cnf
Create the kube-proxy x509 cert only if you want to use a kube-proxy role instead of a kube-proxy service account with its JWT token (Kubernetes secret) for authentication.
openssl ecparam -name secp521r1 -genkey -noout -out kube-proxy.key
chmod 0600 kube-proxy.key
openssl req -new -key kube-proxy.key \
-subj "/CN=kube-proxy/O=system:node-proxier" \
| openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAcreateserial \
-out kube-proxy.crt -days 365 -extensions v3_req_client \
-extfile ./openssl.cnf
etcd CA cert used to sign the rest of etcd certs.
openssl ecparam -name secp521r1 -genkey -noout -out etcd-ca.key
chmod 0600 etcd-ca.key
openssl req -x509 -new -sha256 -nodes -key etcd-ca.key -days 3650 \
-out etcd-ca.crt -subj "/CN=etcd-ca" -extensions v3_ca \
-config ./openssl.cnf
etcd cert used for securing connections to etcd (client-to-server).
openssl ecparam -name secp521r1 -genkey -noout -out etcd.key
chmod 0600 etcd.key
openssl req -new -sha256 -key etcd.key -subj "/CN=etcd" \
| openssl x509 -req -sha256 -CA etcd-ca.crt -CAkey etcd-ca.key \
-CAcreateserial -out etcd.crt -days 365 \
-extensions v3_req_etcd -extfile ./openssl.cnf
etcd peer cert used for securing connections between peers (server-to-server).
openssl ecparam -name secp521r1 -genkey -noout -out etcd-peer.key
chmod 0600 etcd-peer.key
openssl req -new -sha256 -key etcd-peer.key -subj "/CN=etcd-peer" \
| openssl x509 -req -sha256 -CA etcd-ca.crt -CAkey etcd-ca.key \
-CAcreateserial -out etcd-peer.crt -days 365 \
-extensions v3_req_etcd -extfile ./openssl.cnf
# for i in *crt; do
echo $i:;
openssl x509 -subject -issuer -noout -in $i;
echo;
done
admin.crt:
subject= /CN=kubernetes-admin/O=system:masters
issuer= /CN=kubernetes-ca
apiserver-kubelet-client.crt:
subject= /CN=kube-apiserver-kubelet-client/O=system:masters
issuer= /CN=kubernetes-ca
ca.crt:
subject= /CN=kubernetes-ca
issuer= /CN=kubernetes-ca
etcd-ca.crt:
subject= /CN=etcd-ca
issuer= /CN=etcd-ca
etcd.crt:
subject= /CN=etcd
issuer= /CN=etcd-ca
etcd-peer.crt:
subject= /CN=etcd-peer
issuer= /CN=etcd-ca
front-proxy-ca.crt:
subject= /CN=front-proxy-ca
issuer= /CN=front-proxy-ca
front-proxy-client.crt:
subject= /CN=front-proxy-client
issuer= /CN=front-proxy-ca
kube-apiserver.crt:
subject= /CN=kube-apiserver
issuer= /CN=kubernetes-ca
kube-scheduler.crt:
subject= /CN=system:kube-scheduler
issuer= /CN=kubernetes-ca
sa.crt:
subject= /CN=system:kube-controller-manager
issuer= /CN=kubernetes-ca
# Optional:
kube-proxy.crt:
subject= /CN=kube-proxy/O=system:node-proxier
issuer= /CN=kubernetes-ca
ssh -o StrictHostKeyChecking=no root@k8s-controller-2 "mkdir /etc/kubernetes"
scp -pr -- /etc/kubernetes/pki/ k8s-controller-2:/etc/kubernetes/
Install these binaries on all controller nodes:
TAG=v1.6.4
URL=https://storage.googleapis.com/kubernetes-release/release/$TAG/bin/linux/amd64
curl -# -L -o /usr/bin/kube-apiserver $URL/kube-apiserver
curl -# -L -o /usr/bin/kube-controller-manager $URL/kube-controller-manager
curl -# -L -o /usr/bin/kube-scheduler $URL/kube-scheduler
curl -# -L -o /usr/bin/kube-proxy $URL/kube-proxy
curl -# -L -o /usr/bin/kubelet $URL/kubelet
curl -# -L -o /usr/bin/kubectl $URL/kubectl
chmod +x -- /usr/bin/kube*
mkdir -p /var/lib/{kubelet,kube-proxy}
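A quick check that the binaries were downloaded intact (any of them will do):
kube-apiserver --version
kubectl version --client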
Kubeconfig files are used by services and users to authenticate themselves to the API server.
CONTROLLER1_IP=$(getent ahostsv4 k8s-controller-1 | tail -1 | awk '{print $1}')
CONTROLLER2_IP=$(getent ahostsv4 k8s-controller-2 | tail -1 | awk '{print $1}')
INTERNAL_IP=$(hostname -I | awk '{print $1}')
KUBERNETES_PUBLIC_ADDRESS=$INTERNAL_IP
CLUSTER_NAME="default"
KCONFIG=controller-manager.kubeconfig
KUSER="system:kube-controller-manager"
KCERT=sa
cd /etc/kubernetes/
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} \
--client-certificate=pki/${KCERT}.crt \
--client-key=pki/${KCERT}.key \
--embed-certs=true \
--kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
CONTROLLER1_IP=$(getent ahostsv4 k8s-controller-1 | tail -1 | awk '{print $1}')
CONTROLLER2_IP=$(getent ahostsv4 k8s-controller-2 | tail -1 | awk '{print $1}')
INTERNAL_IP=$(hostname -I | awk '{print $1}')
KUBERNETES_PUBLIC_ADDRESS=$INTERNAL_IP
CLUSTER_NAME="default"
KCONFIG=scheduler.kubeconfig
KUSER="system:kube-scheduler"
KCERT=kube-scheduler
cd /etc/kubernetes/
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} \
--client-certificate=pki/${KCERT}.crt \
--client-key=pki/${KCERT}.key \
--embed-certs=true \
--kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
CONTROLLER1_IP=$(getent ahostsv4 k8s-controller-1 | tail -1 | awk '{print $1}')
CONTROLLER2_IP=$(getent ahostsv4 k8s-controller-2 | tail -1 | awk '{print $1}')
INTERNAL_IP=$(hostname -I | awk '{print $1}')
KUBERNETES_PUBLIC_ADDRESS=$INTERNAL_IP
CLUSTER_NAME="default"
KCONFIG=admin.kubeconfig
KUSER="kubernetes-admin"
KCERT=admin
cd /etc/kubernetes/
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} \
--client-certificate=pki/${KCERT}.crt \
--client-key=pki/${KCERT}.key \
--embed-certs=true \
--kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
etcd is a distributed key-value store used for storing the state of distributed applications like Kubernetes.
Recommendation: run etcd under the etcd user.
TAG=v3.1.8
URL=https://github.com/coreos/etcd/releases/download/$TAG
cd
curl -# -LO $URL/etcd-$TAG-linux-amd64.tar.gz
tar xvf etcd-$TAG-linux-amd64.tar.gz
chown -Rh root:root etcd-$TAG-linux-amd64/
find etcd-$TAG-linux-amd64/ -xdev -type f -exec chmod 0755 '{}' \;
cp etcd-$TAG-linux-amd64/etcd* /usr/bin/
mkdir -p /var/lib/etcd
CONTROLLER1_IP=$(getent ahostsv4 k8s-controller-1 | tail -1 | awk '{print $1}')
CONTROLLER2_IP=$(getent ahostsv4 k8s-controller-2 | tail -1 | awk '{print $1}')
INTERNAL_IP=$(hostname -I | awk '{print $1}')
ETCD_CLUSTER_TOKEN="default-27a5f27fe2" # this should be unique per cluster
ETCD_NAME=$(hostname --short)
ETCD_CERT_FILE=/etc/kubernetes/pki/etcd.crt
ETCD_CERT_KEY_FILE=/etc/kubernetes/pki/etcd.key
ETCD_PEER_CERT_FILE=/etc/kubernetes/pki/etcd-peer.crt
ETCD_PEER_KEY_FILE=/etc/kubernetes/pki/etcd-peer.key
ETCD_CA_FILE=/etc/kubernetes/pki/etcd-ca.crt
ETCD_PEER_CA_FILE=/etc/kubernetes/pki/etcd-ca.crt
cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=etcd
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
ExecStart=/usr/bin/etcd \\
--name ${ETCD_NAME} \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--data-dir=/var/lib/etcd \\
--cert-file=${ETCD_CERT_FILE} \\
--key-file=${ETCD_CERT_KEY_FILE} \\
--peer-cert-file=${ETCD_PEER_CERT_FILE} \\
--peer-key-file=${ETCD_PEER_KEY_FILE} \\
--trusted-ca-file=${ETCD_CA_FILE} \\
--peer-trusted-ca-file=${ETCD_PEER_CA_FILE} \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--initial-cluster-token ${ETCD_CLUSTER_TOKEN} \\
--initial-cluster k8s-controller-1=https://${CONTROLLER1_IP}:2380,k8s-controller-2=https://${CONTROLLER2_IP}:2380 \\
--initial-cluster-state new
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd -l
Using etcdctl:
etcdctl \
--ca-file=/etc/kubernetes/pki/etcd-ca.crt \
--cert-file=/etc/kubernetes/pki/etcd.crt \
--key-file=/etc/kubernetes/pki/etcd.key \
cluster-health
etcdctl \
--ca-file=/etc/kubernetes/pki/etcd-ca.crt \
--cert-file=/etc/kubernetes/pki/etcd.crt \
--key-file=/etc/kubernetes/pki/etcd.key \
member list
Using openssl:
echo -e "GET /health HTTP/1.1\nHost: $INTERNAL_IP\n" \
| timeout 2s openssl s_client -CAfile /etc/kubernetes/pki/etcd-ca.crt \
-cert /etc/kubernetes/pki/etcd.crt \
-key /etc/kubernetes/pki/etcd.key \
-connect $INTERNAL_IP:2379 \
-ign_eof
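The etcdctl flags above use the v2 API; the same health check with the v3 API looks roughly like this (note the different flag names, and that the endpoint must match a SAN in the etcd cert, so use the node IP rather than 127.0.0.1):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://${INTERNAL_IP}:2379 \
  --cacert=/etc/kubernetes/pki/etcd-ca.crt \
  --cert=/etc/kubernetes/pki/etcd.crt \
  --key=/etc/kubernetes/pki/etcd.key \
  endpoint health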
The Kubernetes control plane consists of the kube-apiserver, kube-controller-manager and kube-scheduler.
CONTROLLER1_IP=$(getent ahostsv4 k8s-controller-1 | tail -1 | awk '{print $1}')
CONTROLLER2_IP=$(getent ahostsv4 k8s-controller-2 | tail -1 | awk '{print $1}')
INTERNAL_IP=$(hostname -I | awk '{print $1}')
SERVICE_CLUSTER_IP_RANGE="10.96.0.0/12"
cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-apiserver \\
--apiserver-count=2 \\
--allow-privileged=true \\
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds \\
--authorization-mode=RBAC \\
--secure-port=6443 \\
--bind-address=0.0.0.0 \\
--advertise-address=${INTERNAL_IP} \\
--insecure-port=0 \\
--insecure-bind-address=127.0.0.1 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-audit.log \\
--client-ca-file=/etc/kubernetes/pki/ca.crt \\
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.crt \\
--etcd-certfile=/etc/kubernetes/pki/etcd.crt \\
--etcd-keyfile=/etc/kubernetes/pki/etcd.key \\
--etcd-servers=https://${CONTROLLER1_IP}:2379,https://${CONTROLLER2_IP}:2379 \\
--service-account-key-file=/etc/kubernetes/pki/sa.pub \\
--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/etc/kubernetes/pki/kube-apiserver.crt \\
--tls-private-key-file=/etc/kubernetes/pki/kube-apiserver.key \\
--experimental-bootstrap-token-auth=true \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \\
--requestheader-username-headers=X-Remote-User \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-allowed-names=front-proxy-client \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver -l
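As a quick smoke test, you can hit the healthz endpoint with the admin client cert generated earlier (the server name must match a SAN in kube-apiserver.crt, so use the node IP rather than 127.0.0.1):
curl --cacert /etc/kubernetes/pki/ca.crt \
  --cert /etc/kubernetes/pki/admin.crt \
  --key /etc/kubernetes/pki/admin.key \
  https://${INTERNAL_IP}:6443/healthz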
CLUSTER_CIDR="10.96.0.0/16"
SERVICE_CLUSTER_IP_RANGE="10.96.0.0/12"
CLUSTER_NAME="default"
cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-controller-manager \\
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
--address=127.0.0.1 \\
--leader-elect=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
--insecure-experimental-approve-all-kubelet-csrs-for-group=system:bootstrappers \\
--cluster-cidr=${CLUSTER_CIDR} \\
--cluster-name=${CLUSTER_NAME} \\
--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key \\
--root-ca-file=/etc/kubernetes/pki/ca.crt \\
--use-service-account-credentials=true \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager -l
cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-scheduler \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig \\
--address=127.0.0.1 \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler -l
export KUBECONFIG=/etc/kubernetes/admin.kubeconfig
kubectl version
kubectl get componentstatuses
etcd will report “bad certificate” with Kubernetes 1.6; this is fixed in 1.7: https://github.com/kubernetes/kubernetes/pull/39716#issuecomment-296811189
This might be handy:
yum -y install bash-completion
source /etc/profile.d/bash_completion.sh
source <(kubectl completion bash)
Run this once and remember the BOOTSTRAP_TOKEN:
TOKEN_PUB=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
BOOTSTRAP_TOKEN="${TOKEN_PUB}.${TOKEN_SECRET}"
kubectl -n kube-system create secret generic bootstrap-token-${TOKEN_PUB} \
--type 'bootstrap.kubernetes.io/token' \
--from-literal description="cluster bootstrap token" \
--from-literal token-id=${TOKEN_PUB} \
--from-literal token-secret=${TOKEN_SECRET} \
--from-literal usage-bootstrap-authentication=true \
--from-literal usage-bootstrap-signing=true
kubectl -n kube-system get secret/bootstrap-token-${TOKEN_PUB} -o yaml
INTERNAL_IP=$(hostname -I | awk '{print $1}')
KUBERNETES_PUBLIC_ADDRESS=$INTERNAL_IP
CLUSTER_NAME="default"
KCONFIG="bootstrap.kubeconfig"
KUSER="kubelet-bootstrap"
cd /etc/kubernetes
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
Make sure the bootstrap kubeconfig file does not contain the bootstrap token before you expose it via the cluster-info configmap.
kubectl -n kube-public create configmap cluster-info \
--from-file /etc/kubernetes/pki/ca.crt \
--from-file /etc/kubernetes/bootstrap.kubeconfig
Allow the anonymous user to access the cluster-info configmap:
kubectl -n kube-public create role system:bootstrap-signer-clusterinfo \
--verb get --resource configmaps
kubectl -n kube-public create rolebinding kubeadm:bootstrap-signer-clusterinfo \
--role system:bootstrap-signer-clusterinfo --user system:anonymous
Allow a bootstrapping worker node to join the cluster:
kubectl create clusterrolebinding kubeadm:kubelet-bootstrap \
--clusterrole system:node-bootstrapper --group system:bootstrappers
Install Docker and the kubelet on all controllers and workers.
I used CentOS 7.3 here; adjust the Docker settings for your installation if you are going to use a different distribution.
yum install -y docker
sed -i 's;\(^DOCKER_NETWORK_OPTIONS=$\);\1--iptables=false;' \
/etc/sysconfig/docker-network
sed -i 's;\(^DOCKER_STORAGE_OPTIONS=$\);\1--storage-driver overlay;' \
/etc/sysconfig/docker-storage
systemctl daemon-reload
systemctl enable docker
systemctl start docker
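To verify that Docker is running and picked up the overlay storage driver:
systemctl is-active docker
docker info 2>/dev/null | grep -i 'storage driver'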
Install these binaries on all worker nodes:
TAG=v1.6.4
URL=https://storage.googleapis.com/kubernetes-release/release/$TAG/bin/linux/amd64
curl -# -L -o /usr/bin/kube-proxy $URL/kube-proxy
curl -# -L -o /usr/bin/kubelet $URL/kubelet
curl -# -L -o /usr/bin/kubectl $URL/kubectl
chmod +x -- /usr/bin/kube*
mkdir -p /var/lib/{kubelet,kube-proxy}
Note that this skips the TLS verification. You might choose to use an alternative method for obtaining the CA certificate.
mkdir -p /etc/kubernetes/pki
kubectl -n kube-public get cm/cluster-info \
--server https://k8s-controller-1:6443 --insecure-skip-tls-verify=true \
--username=system:anonymous --output=jsonpath='{.data.ca\.crt}' \
| tee /etc/kubernetes/pki/ca.crt
kubectl -n kube-public get cm/cluster-info \
--server https://k8s-controller-1:6443 --insecure-skip-tls-verify=true \
--username=system:anonymous \
--output=jsonpath='{.data.bootstrap\.kubeconfig}' \
| tee /etc/kubernetes/bootstrap.kubeconfig
Now write the previously generated BOOTSTRAP_TOKEN to the bootstrap kubeconfig:
read -r -s -p "BOOTSTRAP_TOKEN: " BOOTSTRAP_TOKEN
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/etc/kubernetes/bootstrap.kubeconfig
Container Network Interface (CNI) - networking for Linux containers.
To find the latest CNI binary release, refer to https://github.com/kubernetes/kubernetes/blob/master/cluster/images/hyperkube/Makefile
cd
mkdir -p /etc/cni/net.d /opt/cni
ARCH=amd64
CNI_RELEASE=0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff
URL=https://storage.googleapis.com/kubernetes-release/network-plugins
curl -sSL $URL/cni-${ARCH}-${CNI_RELEASE}.tar.gz | tar -xz -C /opt/cni
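The CNI plugin binaries (bridge, loopback, host-local and friends) should now be under the directory the kubelet will be pointed at via --cni-bin-dir below:
ls /opt/cni/bin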
CLUSTER_DNS_IP=10.96.0.10
mkdir -p /etc/kubernetes/manifests
cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \\
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.conf \\
--require-kubeconfig=true \\
--pod-manifest-path=/etc/kubernetes/manifests \\
--allow-privileged=true \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--cluster-dns=${CLUSTER_DNS_IP} \\
--cluster-domain=cluster.local \\
--authorization-mode=Webhook \\
--client-ca-file=/etc/kubernetes/pki/ca.crt \\
--cgroup-driver=systemd \\
--cert-dir=/etc/kubernetes
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet -l
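Back on a controller (with KUBECONFIG pointing to the admin kubeconfig as before), the kubelet's certificate signing request should show up and, thanks to the auto-approve flag passed to the controller manager, get approved, after which the node registers:
export KUBECONFIG=/etc/kubernetes/admin.kubeconfig
kubectl get csr
kubectl get nodes -o wide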
Make the controller nodes unschedulable for regular pods:
for i in k8s-controller-1 k8s-controller-2; do
kubectl label node $i node-role.kubernetes.io/master=
kubectl taint nodes $i node-role.kubernetes.io/master=:NoSchedule
done
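You can confirm the label and taint were applied:
kubectl describe node k8s-controller-1 | grep -iE 'taints|node-role'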
Essential Kubernetes components: kube-proxy, kube-dns and weave-net.
Install kube-proxy on all controllers & workers.
Create a kube-proxy service account:
Create a kube-proxy service account only if you are not planning to use the x509 certificate to authenticate the kube-proxy role.
The JWT token will be automatically created once you create a service account.
kubectl -n kube-system create serviceaccount kube-proxy
Create a kube-proxy kubeconfig:
INTERNAL_IP=$(hostname -I | awk '{print $1}')
KUBERNETES_PUBLIC_ADDRESS=$INTERNAL_IP
export KUBECONFIG=/etc/kubernetes/admin.kubeconfig
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
CLUSTER_NAME="default"
KCONFIG="kube-proxy.kubeconfig"
cd /etc/kubernetes
kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
kubectl config set-context ${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=default \
--namespace=default \
--kubeconfig=${KCONFIG}
kubectl config set-credentials ${CLUSTER_NAME} \
--token=${JWT_TOKEN} \
--kubeconfig=${KCONFIG}
kubectl config use-context ${CLUSTER_NAME} --kubeconfig=${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
Bind the kube-proxy service account (from the kube-system namespace) to the system:node-proxier cluster role so that RBAC allows it to act as a node proxier:
kubectl create clusterrolebinding kubeadm:node-proxier \
--clusterrole system:node-proxier \
--serviceaccount kube-system:kube-proxy
Deliver kube-proxy kubeconfig to the rest of worker nodes:
for i in k8s-worker-1 k8s-worker-2; do
scp -p -- /etc/kubernetes/kube-proxy.kubeconfig $i:/etc/kubernetes/
done
Create a kube-proxy service file and run it:
mkdir -p /var/lib/kube-proxy
cat > /etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy -l
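kube-proxy in the default iptables mode programs the KUBE-SERVICES chain; a quick look confirms it is writing rules:
iptables -t nat -L KUBE-SERVICES -n | head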
export DNS_DOMAIN="cluster.local"
export DNS_SERVER_IP="10.96.0.10"
cd /etc/kubernetes/manifests/
URL=https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns
curl -sSL -o - $URL/kubedns-controller.yaml.sed \
| sed -e "s/\\\$DNS_DOMAIN/${DNS_DOMAIN}/g" > kubedns-controller.yaml
curl -sSL -o - $URL/kubedns-svc.yaml.sed \
| sed -e "s/\\\$DNS_SERVER_IP/${DNS_SERVER_IP}/g" > kubedns-svc.yaml
curl -sSLO $URL/kubedns-sa.yaml
curl -sSLO $URL/kubedns-cm.yaml
for i in kubedns-{sa,cm,controller,svc}.yaml; do
kubectl --namespace=kube-system apply -f $i;
done
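Check that kube-dns comes up (the addon resources are labeled k8s-app=kube-dns):
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system get svc kube-dns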
https://www.weave.works/docs/net/latest/kube-addon/
cd /etc/kubernetes/manifests
curl -sSL -o weave-kube-1.6.yaml https://git.io/weave-kube-1.6
kubectl apply -f weave-kube-1.6.yaml
To view weave-net status:
curl -sSL -o /usr/local/bin/weave \
https://github.com/weaveworks/weave/releases/download/latest_release/weave \
&& chmod +x /usr/local/bin/weave
weave status
weave status connections
weave status peers
That’s all folks!
The Kubernetes cluster is now up & running.
You can view its state by running:
kubectl get csr
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide
kubectl get svc --all-namespaces -o wide
kubectl get all --all-namespaces --show-labels
To enable weave-net encryption, it is enough to set the WEAVE_PASSWORD
environment variable in the weave-net containers and restart the relevant pods:
[root@k8s-controller-1 ~]# diff -Nur weave-kube-1.6.yaml /etc/kubernetes/manifests/weave-kube-1.6.yaml
--- weave-kube-1.6.yaml 2017-05-26 22:02:53.793355946 +0200
+++ /etc/kubernetes/manifests/weave-kube-1.6.yaml 2017-05-26 22:04:24.215869495 +0200
@@ -59,6 +59,9 @@
image: weaveworks/weave-kube:1.9.5
command:
- /home/weave/launch.sh
+ env:
+ - name: WEAVE_PASSWORD
+ value: "Tr7W2wTpjG5fzFXCV5PmXCp9ay4WLN21"
livenessProbe:
initialDelaySeconds: 30
httpGet:
Delete the weave-net pods so that the weave-net DaemonSet automatically redeploys them with the new configuration:
kubectl -n kube-system delete pods -l name=weave-net
Check the status:
[root@k8s-controller-1 ~]# weave status connections
-> [redacted]:6783 established encrypted fastdp f6:50:45:ba:df:9d(k8s-controller-2) encrypted=true mtu=1376
<- [redacted]:44629 established encrypted fastdp 3a:34:e8:06:06:e2(k8s-worker-1) encrypted=true mtu=1376
<- [redacted]:55055 established encrypted fastdp fe:4c:df:33:4a:8e(k8s-worker-2) encrypted=true mtu=1376
-> [redacted]:6783 failed cannot connect to ourself, retry: never
Since you now have a running Kubernetes cluster, you may want to deploy some apps.
cd /etc/kubernetes/manifests
curl -sSLO https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard.yaml
kubectl -n kube-system expose deployment kubernetes-dashboard \
--name kubernetes-dashboard-nodeport --type=NodePort
At the moment you cannot specify the nodePort with the “kubectl expose” command, so you need to find it this way:
# kubectl -n kube-system get svc/kubernetes-dashboard-nodeport
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard-nodeport 10.97.194.198 <nodes> 9090:32685/TCP 1m
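Or, using the same jsonpath trick as for Grafana later on:
kubectl -n kube-system get svc/kubernetes-dashboard-nodeport \
  --output=jsonpath='{.spec.ports[0].nodePort}'; echo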
After that you can access the Kubernetes Dashboard by visiting any node’s IP at the nodePort found above.
https://www.weave.works/docs/scope/latest/installing/#k8s
cd /etc/kubernetes/manifests
curl -sSL -o scope.yaml \
"https://cloud.weave.works/k8s/v1.6/scope.yaml?k8s-service-type=NodePort"
kubectl -n kube-system apply -f scope.yaml
Find out the nodePort:
# kubectl -n kube-system get svc/weave-scope-app
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
weave-scope-app 10.100.167.248 <nodes> 80:30830/TCP 31s
Now you can access Weave Scope by visiting any node’s IP at the nodePort found above.
https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb
mkdir /etc/kubernetes/manifests/monitoring
cd /etc/kubernetes/manifests/monitoring
URL=https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config
curl -sSLO $URL/influxdb/influxdb.yaml
curl -sSLO $URL/rbac/heapster-rbac.yaml
curl -sSLO $URL/influxdb/heapster.yaml
curl -sSLO $URL/influxdb/grafana.yaml
kubectl apply -f influxdb.yaml
kubectl apply -f heapster-rbac.yaml
kubectl apply -f heapster.yaml
kubectl apply -f grafana.yaml
kubectl -n kube-system expose deployment monitoring-grafana \
--name monitoring-grafana-nodeport --type=NodePort
Find out the nodePort:
# kubectl -n kube-system get svc/monitoring-grafana-nodeport
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
monitoring-grafana-nodeport 10.108.103.241 <nodes> 3000:31358/TCP 23s
Or alternatively:
kubectl -n kube-system get svc/monitoring-grafana \
--output=jsonpath='{.spec.clusterIP}:{.spec.ports[0].nodePort}'; echo
Now you can access Grafana by visiting any node’s IP at the nodePort found above.
Sock Shop is a pretty heavy app and will take more resources than you have available by this point.
Hence, you might want to join an additional worker node to your cluster or delete the apps you have just deployed (Grafana, Heapster, InfluxDB, Weave Scope, Kubernetes Dashboard).
kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://raw.githubusercontent.com/microservices-demo/microservices-demo/fe48e0fb465ab694d50d0c9e51299ac75a7e3e47/deploy/kubernetes/complete-demo.yaml"
Find out the nodePort:
# kubectl -n sock-shop get svc front-end
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end 10.110.164.38 <nodes> 80:30001/TCP 19s
# kubectl get pods -n sock-shop -o wide
Now you can access the Sock Shop by visiting any node’s IP at the nodePort found above.
To uninstall the Sock Shop sample app, just remove its namespace:
kubectl delete namespace sock-shop
Make sure to deploy your containers on different worker nodes.
kubectl run c1 --image centos:7 --labels k8s-app=my-centos -- sleep 3600
kubectl run c2 --image centos:7 --labels k8s-app=my-centos -- sleep 3600
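For the iperf3 runs below, note each pod's IP and node, then exec into one pod to run the iperf3 server and into the other to run the client (pod names are generated, so yours will differ; iperf3 itself needs to be installed inside the containers, e.g. from EPEL):
kubectl get pods -l k8s-app=my-centos -o wide
kubectl exec -it $(kubectl get pods -l k8s-app=my-centos \
  --output=jsonpath='{.items[0].metadata.name}') -- bash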
With weave-net encryption and fastdp enabled:
[root@c1-281931205-nflnh ~]# iperf3 -c 10.32.0.3
Connecting to host 10.32.0.3, port 5201
[ 4] local 10.34.0.0 port 57352 connected to 10.32.0.3 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 75.0 MBytes 628 Mbits/sec 58 556 KBytes
[ 4] 1.00-2.00 sec 71.1 MBytes 598 Mbits/sec 222 446 KBytes
[ 4] 2.00-3.00 sec 77.2 MBytes 647 Mbits/sec 0 557 KBytes
[ 4] 3.00-4.00 sec 76.3 MBytes 640 Mbits/sec 33 640 KBytes
[ 4] 4.00-5.00 sec 81.2 MBytes 682 Mbits/sec 154 720 KBytes
[ 4] 5.00-6.00 sec 84.4 MBytes 707 Mbits/sec 0 798 KBytes
[ 4] 6.00-7.00 sec 70.7 MBytes 593 Mbits/sec 35 630 KBytes
[ 4] 7.00-8.00 sec 76.9 MBytes 645 Mbits/sec 175 696 KBytes
[ 4] 8.00-9.00 sec 71.1 MBytes 596 Mbits/sec 0 763 KBytes
[ 4] 9.00-10.00 sec 78.3 MBytes 658 Mbits/sec 0 833 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 762 MBytes 639 Mbits/sec 677 sender
[ 4] 0.00-10.00 sec 760 MBytes 637 Mbits/sec receiver
Without weave-net encryption and without fastdp:
[root@c1-281931205-nflnh /]# iperf3 -c 10.32.0.3 -P1
Connecting to host 10.32.0.3, port 5201
[ 4] local 10.34.0.0 port 59676 connected to 10.32.0.3 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.01 sec 5.43 MBytes 45.3 Mbits/sec 27 68.5 KBytes
[ 4] 1.01-2.00 sec 5.44 MBytes 45.7 Mbits/sec 22 64.6 KBytes
[ 4] 2.00-3.00 sec 5.83 MBytes 49.1 Mbits/sec 17 82.8 KBytes
[ 4] 3.00-4.00 sec 6.00 MBytes 50.4 Mbits/sec 25 76.3 KBytes
[ 4] 4.00-5.00 sec 5.20 MBytes 43.5 Mbits/sec 21 64.6 KBytes
[ 4] 5.00-6.00 sec 5.26 MBytes 44.0 Mbits/sec 23 60.8 KBytes
[ 4] 6.00-7.00 sec 5.44 MBytes 45.9 Mbits/sec 16 54.3 KBytes
[ 4] 7.00-8.00 sec 6.04 MBytes 50.7 Mbits/sec 22 51.7 KBytes
[ 4] 8.00-9.00 sec 5.82 MBytes 48.8 Mbits/sec 15 60.8 KBytes
[ 4] 9.00-10.00 sec 5.75 MBytes 48.3 Mbits/sec 5 78.9 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 56.2 MBytes 47.2 Mbits/sec 193 sender
[ 4] 0.00-10.00 sec 56.2 MBytes 47.2 Mbits/sec receiver
iperf Done.
Surprisingly, I got worse bandwidth without the encryption.
This happened because fastdp (fast datapath) was not enabled when running weave-net without encryption.
sleeve indicates that Weave Net’s fall-back encapsulation method is in use:
[root@k8s-worker-1 ~]# weave status connections
<- [redacted]:57823 established sleeve f6:50:45:ba:df:9d(k8s-controller-2) mtu=1438
<- [redacted]:37717 established sleeve 46:d6:a6:c6:1e:f2(k8s-controller-1) mtu=1438
<- [redacted]:51252 established sleeve fe:4c:df:33:4a:8e(k8s-worker-2) mtu=1438
-> [redacted]:6783 failed cannot connect to ourself, retry: never
Here is the result without encryption but with fastdp enabled:
Note: tested after cluster reinstallation.
[root@c1-281931205-z5z3m /]# iperf3 -c 10.40.0.2
Connecting to host 10.40.0.2, port 5201
[ 4] local 10.32.0.2 port 59414 connected to 10.40.0.2 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 79.9 MBytes 670 Mbits/sec 236 539 KBytes
[ 4] 1.00-2.00 sec 69.7 MBytes 584 Mbits/sec 0 625 KBytes
[ 4] 2.00-3.00 sec 69.9 MBytes 586 Mbits/sec 0 698 KBytes
[ 4] 3.00-4.00 sec 73.3 MBytes 615 Mbits/sec 38 577 KBytes
[ 4] 4.00-5.00 sec 88.8 MBytes 745 Mbits/sec 19 472 KBytes
[ 4] 5.00-6.00 sec 85.9 MBytes 721 Mbits/sec 0 586 KBytes
[ 4] 6.00-7.00 sec 92.1 MBytes 772 Mbits/sec 0 688 KBytes
[ 4] 7.00-8.00 sec 84.8 MBytes 712 Mbits/sec 39 575 KBytes
[ 4] 8.00-9.00 sec 80.2 MBytes 673 Mbits/sec 0 668 KBytes
[ 4] 9.00-10.00 sec 88.3 MBytes 741 Mbits/sec 19 568 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 813 MBytes 682 Mbits/sec 351 sender
[ 4] 0.00-10.00 sec 811 MBytes 680 Mbits/sec receiver
iperf Done.
# weave status connections
<- [redacted]:34158 established fastdp 56:ae:60:6b:be:79(k8s-controller-2) mtu=1376
-> [redacted]:6783 established fastdp 12:af:67:0d:0d:1a(k8s-worker-1) mtu=1376
<- [redacted]:52937 established fastdp 86:27:10:95:00:e5(k8s-worker-2) mtu=1376
-> [redacted]:6783 failed cannot connect to ourself, retry: never
More reading on weave-net fast datapath:
Cleanup:
kubectl delete deployments/c1
kubectl delete deployments/c2