Fixing dashboard Pods stuck in Pending when deploying the dashboard on k8s
Just use the offline package. The dashboard Pods stay Pending because the nodes are still NotReady without a network plugin; loading the Calico images offline and applying the manifest fixes that. The offline package can be downloaded from https://download.csdn.net/download/weixin_42759398/90192045
Commands:
[root@k8s-master ~]# docker load -i calico-image-v3.25.0.tar
[root@k8s-master ~]# kubectl apply -f calico.yaml
The full run:
[root@k8s-master ~]# docker load -i calico-image-v3.25.0.tar
2115854292b7: Loading layer [==================================================>] 13.82kB/13.82kB
f2cd7a8887ad: Loading layer [==================================================>] 2.56kB/2.56kB
e53823ea1ab6: Loading layer [==================================================>] 2.048kB/2.048kB
aab16c21b5f0: Loading layer [==================================================>] 2.048kB/2.048kB
80a0311a6f35: Loading layer [==================================================>] 152.1kB/152.1kB
f915be43d1f2: Loading layer [==================================================>] 2.096MB/2.096MB
7aba39e8ebcd: Loading layer [==================================================>] 1.124MB/1.124MB
8e47df0af359: Loading layer [==================================================>] 31.74kB/31.74kB
05cbe103e488: Loading layer [==================================================>] 56.83kB/56.83kB
445109866ec0: Loading layer [==================================================>] 2.56kB/2.56kB
f17fc9408cc0: Loading layer [==================================================>] 4.608kB/4.608kB
a8764b36cebb: Loading layer [==================================================>] 65.42MB/65.42MB
6b6e0e9a04c1: Loading layer [==================================================>] 2.744MB/2.744MB
Loaded image: calico/kube-controllers:v3.25.0
14a282cea6ec: Loading layer [==================================================>] 88.58kB/88.58kB
2553397e07e0: Loading layer [==================================================>] 13.82kB/13.82kB
476b7b4979ae: Loading layer [==================================================>] 1.124MB/1.124MB
17d9f5d187d9: Loading layer [==================================================>] 152.1kB/152.1kB
e391db31906a: Loading layer [==================================================>] 2.096MB/2.096MB
97a5923546c2: Loading layer [==================================================>] 2.56kB/2.56kB
0ed010669301: Loading layer [==================================================>] 4.608kB/4.608kB
3cf983e25bde: Loading layer [==================================================>] 194.4MB/194.4MB
5f70bf18a086: Loading layer [==================================================>] 1.024kB/1.024kB
Loaded image: calico/cni:v3.25.0
350fd2fc4cdb: Loading layer [==================================================>] 246.8MB/246.8MB
4e86ebda2314: Loading layer [==================================================>] 13.82kB/13.82kB
Loaded image: calico/node:v3.25.0
[root@k8s-master ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   3h39m   v1.28.0
k8s-node1    NotReady   <none>          3h37m   v1.28.0
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   Ready      control-plane   3h39m   v1.28.0
k8s-node1    NotReady   <none>          3h37m   v1.28.0
[root@k8s-master ~]#
Once the nodes turn Ready, you are done (k8s-node1 just needs a bit longer to catch up here).
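To confirm the fix actually landed, a quick check (a minimal sketch, assuming the stock calico.yaml, which puts the Calico Pods in kube-system; the dashboard namespace is kubernetes-dashboard as above):

kubectl get pods -n kube-system -o wide        # calico-node and calico-kube-controllers should reach Running
kubectl get pods -n kubernetes-dashboard       # the previously Pending dashboard Pods should now get scheduled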
If there are still problems, check the logs and events:
Check node events
Command: kubectl describe nodes
Purpose: shows each node's overall state, resource allocation, and any problems it has run into. For example, if a node is short on resources and Pods cannot be scheduled, a hint will show up here.
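For example, to zero in on one node's taints and conditions (a sketch; the node names come from the output above):

kubectl describe node k8s-node1 | grep -i -A 3 taints
kubectl describe node k8s-node1 | grep -i -A 6 conditions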
Check events in the Pod's namespace
Command: kubectl get events -n kubernetes-dashboard (kubectl describe namespace kubernetes-dashboard mostly shows quotas and limits, so get events is the more useful view here)
Purpose: lists the events in that namespace, including Pod creation and scheduling events, which can reveal why a Pod is stuck in Pending, for example resource limits or scheduling-policy issues.
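For example, to list the namespace's recent events in time order (a sketch):

kubectl get events -n kubernetes-dashboard --sort-by=.metadata.creationTimestamp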
Check the Pod's details
Command: kubectl describe pod -n kubernetes-dashboard <pod name>, replacing <pod name> with the actual name, e.g. dashboard-metrics-scraper-55c5ccb8fb-744lz or kubernetes-dashboard-586567c756-hvwvk.
Purpose: gives the Pod's full description, including its status, events, container states, and scheduling information. The Events section records everything that happened to the Pod, such as scheduling failures or image-pull problems, and is the most direct way to find out why it is Pending.
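For instance, with the Pod names mentioned above, the Events section alone usually explains a Pending Pod (a sketch):

kubectl describe pod dashboard-metrics-scraper-55c5ccb8fb-744lz -n kubernetes-dashboard | grep -A 10 Events
kubectl get pods -n kubernetes-dashboard -o wide     # also shows which node, if any, the Pod was assigned to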
Check the kubelet logs
Command: view the kubelet service logs on the node; the exact command depends on the OS and how logs are managed. On systems using systemd, for example, sudo journalctl -u kubelet -f.
Purpose: the kubelet is the component that actually runs Pods on each node, so its logs carry the details of Pod creation and startup, such as the exact error behind a failed image pull or a container that refuses to start.
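For example, on a systemd-managed node (a sketch; adjust the time window as needed):

sudo journalctl -u kubelet --since "1 hour ago" -e
sudo journalctl -u kubelet -f | grep -i -E "error|fail"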
Check the container runtime logs
Command: if Docker is the container runtime, use docker logs <container ID>. First find the node the Pod is on with kubectl get pods -n kubernetes-dashboard -o wide, then run docker ps on that node to locate the container ID.
Purpose: shows what is happening inside the container, such as application startup errors or missing dependencies, i.e. problems that keep the Pod from ever running normally.
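Note that on Kubernetes v1.28 the runtime is often containerd rather than Docker; in that case crictl plays the role of the docker CLI on the node (a sketch, run on the node hosting the Pod):

crictl ps -a | grep dashboard      # find the container ID of the dashboard containers
crictl logs <container ID>         # view that container's logs

Also keep in mind that a Pod that is still Pending has not started any containers yet, so runtime logs only become useful once scheduling succeeds.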