W1214 08:46:14.303158    8461 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1214 08:46:14.303772    8461 version.go:102] falling back to the local client version: v1.17.0
W1214 08:46:14.304223    8461 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1214 08:46:14.304609    8461 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
# The image list matches the output of 'kubeadm config images list' above
images=(kube-apiserver:v1.17.0 kube-controller-manager:v1.17.0 kube-scheduler:v1.17.0 kube-proxy:v1.17.0 pause:3.1 etcd:3.4.3-0 coredns:1.6.5)

for imageName in ${images[@]} ; do
    # pull from the Aliyun mirror, retag to the name kubeadm expects, then drop the mirror tag
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
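Once the loop finishes, the retagged images can be verified with a quick listing (a verification step, not from the original post):

# docker images | grep k8s.gcr.io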
Running kubeadm init then produces the following output (the exact command line used is not shown in this capture):

W1221 17:44:19.880281    2865 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1221 17:44:19.880405    2865 version.go:102] falling back to the local client version: v1.17.0
W1221 17:44:19.880539    2865 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1221 17:44:19.880546    2865 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.0.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.0.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 17:50:12.262505    2865 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 17:50:12.268198    2865 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.504683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1rpp8b.axfud1xrsvx4q8nw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
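kubeadm v1.17 prints the standard kubeconfig setup commands at this point:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config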
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
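The join command itself is elided here. It has the general form below, using the bootstrap token from the log above; the CA cert hash is not shown in this log, so the placeholder must be replaced with the value printed by the real kubeadm init output:

kubeadm join 192.168.0.102:6443 --token 1rpp8b.axfud1xrsvx4q8nw \
    --discovery-token-ca-cert-hash sha256:<hash-from-init-output>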
Note: running kubectl before the admin kubeconfig is in place fails with:

The connection to the server localhost:8080 was refused - did you specify the right host or port?
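A quick workaround when running as root, instead of copying the config as shown above, is to point kubectl directly at the admin kubeconfig:

# export KUBECONFIG=/etc/kubernetes/admin.conf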
During initialization, any missing images are downloaded automatically. After initialization, the images are as follows:
# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.0   7d54289267dc   13 days ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.0   0cae8d5cc64c   13 days ago   171MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0   5eb3b7486872   13 days ago   161MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.0   78c190f736b1   13 days ago   94.4MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5     70f311871ae1   6 weeks ago   41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   8 weeks ago   288MB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   2 years ago   742kB
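The kube-flannel.yml manifest used next is not created by kubeadm; assuming the usual upstream location for flannel v0.11.0, it can be fetched with something like:

# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

With the manifest on disk, apply it: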
# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
If the CoreDNS pods are still not Running once the network plugin is up, deleting them forces them to be recreated:

# kubectl delete pod coredns-9d85f5447-67qtv coredns-9d85f5447-cg87c -n kube-system
pod "coredns-9d85f5447-67qtv" deleted
pod "coredns-9d85f5447-cg87c" deleted
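The CoreDNS Deployment recreates the deleted pods immediately; whether they reach Running can be checked with:

# kubectl get pod -n kube-system | grep coredns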
On each worker node, running the kubeadm join command printed above produces:

[preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
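For example, on the master (the new node may report NotReady briefly until its flannel pod is running):

# kubectl get nodes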
The images pulled on the worker node are:

REPOSITORY                                           TAG             IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.0         7d54289267dc   2 weeks ago     116MB
registry.aliyuncs.com/google_containers/coredns      1.6.5           70f311871ae1   7 weeks ago     41.6MB
quay.io/coreos/flannel                               v0.11.0-amd64   ff281650a721   11 months ago   52.6MB
registry.aliyuncs.com/google_containers/pause        3.1             da86e6ba6ca1   2 years ago     742kB
# docker ps
CONTAINER ID   IMAGE                                                COMMAND                  CREATED         STATUS         PORTS   NAMES
2fde9bb78fd7   ff281650a721                                         "/opt/bin/flanneld -…"   7 minutes ago   Up 7 minutes           k8s_kube-flannel_kube-flannel-ds-amd64-28p6z_kube-system_f40a2875-70eb-468b-827d-fcb59be3416b_1
aa7ca3d5825e   registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   8 minutes ago   Up 8 minutes           k8s_kube-proxy_kube-proxy-n6xv5_kube-system_3df8b7ae-e5b8-4256-9857-35bd24f7e025_0
ac61ed8d7295   registry.aliyuncs.com/google_containers/pause:3.1    "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_kube-flannel-ds-amd64-28p6z_kube-system_f40a2875-70eb-468b-827d-fcb59be3416b_0
423f9e42c082   registry.aliyuncs.com/google_containers/pause:3.1    "/pause"                 8 minutes ago   Up 8 minutes           k8s_POD_kube-proxy-n6xv5_kube-system_3df8b7ae-e5b8-4256-9857-35bd24f7e025_0
To test the cluster, start a busybox pod interactively:

# kubectl run -i --tty busybox --image=latelee/busybox --restart=Never -- sh
After a moment, a busybox shell appears:
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ # uname -a
Linux busybox 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 GNU/Linux
In another terminal, check the pod's status:
# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          74s   10.244.1.4   node   <none>           <none>
The pod is in the Running state and is scheduled on node. Check on the node machine:
# docker ps | grep busybox
ba5d1a480294   latelee/busybox                                     "sh"       2 minutes ago   Up 2 minutes   k8s_busybox_busybox_default_20d757f7-8ea7-4e51-93fc-514029065a59_0
8c643171ac09   registry.aliyuncs.com/google_containers/pause:3.1   "/pause"   2 minutes ago   Up 2 minutes   k8s_POD_busybox_default_20d757f7-8ea7-4e51-93fc-514029065a59_0
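Once the test is done, the pod can be removed (a cleanup step, assumed rather than taken from the log):

# kubectl delete pod busybox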
6.1 Resetting the master

On the master, run kubeadm reset and confirm:

root@ubuntu:~/k8s# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y    // !!! enter y
[preflight] Running pre-flight checks
[reset] Removing info for node "ubuntu" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W1215 11:50:28.154924    6848 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints .Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file.
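A manual cleanup matching these warnings might look like the following. Note that this flushes all iptables rules on the host, not only those created by Kubernetes, so it is only safe on a machine dedicated to the cluster (a sketch, not from the original post):

# flush all iptables rules and delete user-defined chains
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# clear IPVS tables, if IPVS mode was used
ipvsadm --clear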
Run the following commands to clean up the remaining directories and delete the network devices:
rm -rf $HOME/.kube/*
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ip link delete cni0
ip link delete flannel.1
6.2 Removing a worker node
Run the following on the master. 1. Drain the node:
# kubectl drain node
node/node cordoned
evicting pod "busybox"    // Note: this message appears because a pod was running on the node
pod/busybox evicted
node/node evicted
The node status now shows:
# kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
node     Ready,SchedulingDisabled   <none>   18h   v1.17.0
ubuntu   Ready                      master   23h   v1.17.0
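Draining only cordons the node and evicts its pods; to remove the node object from the cluster entirely, the usual follow-up (assuming the node is being removed permanently) is:

# kubectl delete node node

Here the node's name is literally "node", as shown in the listing above.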
Finally, clean up the network devices and directories on the node machine itself:

ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm /var/lib/cni/ -rf
rm /etc/cni/net.d/ -rf
rm /etc/kubernetes/ -rf
rm /var/lib/kubelet/ -rf