
K8s-node

View cluster information

[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.19:6443
CoreDNS is running at https://192.168.1.19:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
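If deeper debugging is needed, the dump subcommand mentioned above can also write the full cluster state to files for offline inspection. A small sketch; the output directory is just an example path:

# Dump the state of all namespaces to a directory instead of stdout
kubectl cluster-info dump --all-namespaces --output-directory=/tmp/cluster-dump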

View node information

Node list

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   11h   v1.21.0
k8s-node01     Ready    <none>                 64m   v1.21.0
k8s-node02     Ready    <none>                 63m   v1.21.0

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   11h   v1.21.0
k8s-node01     Ready    <none>                 64m   v1.21.0
k8s-node02     Ready    <none>                 63m   v1.21.0
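Once the cluster has more than a handful of nodes, the list can be narrowed with the usual label selectors. A small sketch using the role label that also feeds the ROLES column:

# List only the control-plane node(s)
kubectl get nodes -l node-role.kubernetes.io/control-plane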

Node details

[root@k8s-master01 ~]# kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master01   Ready    control-plane,master   11h   v1.21.0   192.168.1.19   <none>        CentOS Linux 7 (Core)   5.19.8-1.el7.elrepo.x86_64   docker://20.10.9
k8s-node01     Ready    <none>                 64m   v1.21.0   192.168.1.16   <none>        CentOS Linux 7 (Core)   5.19.8-1.el7.elrepo.x86_64   docker://20.10.9
k8s-node02     Ready    <none>                 64m   v1.21.0   192.168.1.17   <none>        CentOS Linux 7 (Core)   5.19.8-1.el7.elrepo.x86_64   docker://20.10.9
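When only one or two of the wide columns are needed, jsonpath output keeps the lines short. A small sketch that prints each node's name and internal IP:

# Name and InternalIP of every node, one per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'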

Full node description

[root@k8s-master01 ~]# kubectl describe node k8s-master01
Name:               k8s-master01
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        csi.volume.kubernetes.io/nodeid: {"csi.tigera.io":"k8s-master01"}
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.1.19/24
                    projectcalico.org/IPv4VXLANTunnelAddr: 10.244.32.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 10 Sep 2022 23:17:35 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master01
  AcquireTime:     <unset>
  RenewTime:       Sun, 11 Sep 2022 11:06:35 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 11 Sep 2022 09:32:20 +0800   Sun, 11 Sep 2022 09:32:20 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sun, 11 Sep 2022 11:03:56 +0800   Sat, 10 Sep 2022 23:17:34 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 11 Sep 2022 11:03:56 +0800   Sat, 10 Sep 2022 23:17:34 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 11 Sep 2022 11:03:56 +0800   Sat, 10 Sep 2022 23:17:34 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 11 Sep 2022 11:03:56 +0800   Sun, 11 Sep 2022 09:32:07 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.1.19
  Hostname:    k8s-master01
Capacity:
  cpu:                2
  ephemeral-storage:  17394Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8117452Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  16415037823
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8015052Ki
  pods:               110
System Info:
  Machine ID:                 4d61410eef834edf9b7843c926bbe691
  System UUID:                75bc4d56-c008-db5f-4848-59f0939491af
  Boot ID:                    1d543183-4871-45f1-a263-92d73fb9ca31
  Kernel Version:             5.19.8-1.el7.elrepo.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.9
  Kubelet Version:            v1.21.0
  Kube-Proxy Version:         v1.21.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (14 in total)
  Namespace         Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------         ----                                      ------------  ----------  ---------------  -------------  ---
  calico-apiserver  calico-apiserver-665569779-4wr5l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93m
  calico-apiserver  calico-apiserver-665569779-r76t6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93m
  calico-system     calico-kube-controllers-78687bb75f-46l2t  0 (0%)        0 (0%)      0 (0%)           0 (0%)         96m
  calico-system     calico-node-xfr2n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96m
  calico-system     calico-typha-75444c4b8-nmmj5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         96m
  calico-system     csi-node-driver-92q4k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94m
  kube-system       coredns-558bd4d5db-5j4gq                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11h
  kube-system       coredns-558bd4d5db-79qsm                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11h
  kube-system       etcd-k8s-master01                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11h
  kube-system       kube-apiserver-k8s-master01               250m (12%)    0 (0%)      0 (0%)           0 (0%)         11h
  kube-system       kube-controller-manager-k8s-master01      200m (10%)    0 (0%)      0 (0%)           0 (0%)         11h
  kube-system       kube-proxy-v2ltp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11h
  kube-system       kube-scheduler-k8s-master01               100m (5%)     0 (0%)      0 (0%)           0 (0%)         11h
  tigera-operator   tigera-operator-6f669b6c4f-qmdbc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         103m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  0 (0%)
  memory             240Mi (3%)  340Mi (4%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
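The Taints line above is why ordinary Pods are not scheduled onto the master. The taint can be inspected (and, if the master should also run workloads, removed) without reading the whole describe output. A small sketch:

# Show only the taints of the master node
kubectl get node k8s-master01 -o jsonpath='{.spec.taints}'

# Optional: allow normal Pods on the master by removing the taint
kubectl taint nodes k8s-master01 node-role.kubernetes.io/master:NoSchedule-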

Remove a node

Drain the Pods first

Before removing a node, migrate the workloads running on it to other nodes:

kubectl drain <node-to-remove> --delete-local-data --force --ignore-daemonsets
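A drain is usually preceded by a cordon so that no new Pods land on the node while it is being emptied; note that newer kubectl versions rename --delete-local-data to --delete-emptydir-data. A small sketch, using k8s-node02 as the example node:

# Mark the node unschedulable first
kubectl cordon k8s-node02

# Evict everything except DaemonSet Pods; emptyDir data on the node is lost
kubectl drain k8s-node02 --delete-local-data --force --ignore-daemonsets

# If the node is only down for maintenance, make it schedulable again afterwards
kubectl uncordon k8s-node02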

Delete and clean up the node

# On the master, delete the node object
kubectl delete nodes <node-name>

# On the removed node, wipe the local cluster state
kubeadm reset -f
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
ipvsadm -C
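Back on the master it is worth confirming that the node is really gone before reusing or re-joining the machine:

# The removed node should no longer appear in the list
kubectl get nodes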

Add a node

Create a token and join the node

# On the master, list the existing tokens
kubeadm token list
# Compute the CA certificate hash with openssl
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Create a new token in the cluster; append --ttl 0 so the token never expires, otherwise it expires after 24 hours

kubeadm token create --print-join-command

# On the node, run the printed join command to join the cluster
kubeadm join host:6443 --token xxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxx
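After the join command succeeds it can take a short while for the kubelet to register and report Ready. This can be watched from the master, and the new node can optionally be given a worker role label (the node name below is a placeholder):

# Watch the new node go from NotReady to Ready
kubectl get nodes -w

# Optional: make the ROLES column show "worker" instead of <none>
kubectl label node <new-node-name> node-role.kubernetes.io/worker=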

Managing the cluster from a worker node

  • With a kubeasz installation, every node (masters and workers alike) can already manage the cluster.

  • With a kubeadm installation, running kubectl on a worker node fails with the error below even though kubectl is installed:

    [root@k8s-node01 ~]# kubectl get nodes

    The connection to the server localhost:8080 was refused - did you specify the right host or port?

Copying the admin kubeconfig /etc/kubernetes/admin.conf from the master to $HOME/.kube/config on a worker node is enough to let that node manage the cluster with kubectl as well.

1. On the worker node, create the .kube directory in root's home directory

[root@k8s-node01 ~]# mkdir /root/.kube

2. On the master node, copy the admin kubeconfig to the worker node

scp /etc/kubernetes/admin.conf root@k8s-node01:/root/.kube/config

3. Verify on the worker node

[root@k8s-node01 ~]# kubectl get node
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   11h   v1.21.0
k8s-node01     Ready    <none>                 75m   v1.21.0
k8s-node02     Ready    <none>                 75m   v1.21.0
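The same admin.conf can also be used without copying it to the default location, by pointing kubectl at it explicitly (the path below is just an example):

# Per command ...
kubectl --kubeconfig /root/admin.conf get nodes

# ... or for the whole shell session
export KUBECONFIG=/root/admin.conf
kubectl get nodes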