Installing a Kubernetes Cluster with kubeadm
Setup approach
Install with kubeadm
Server requirements
3 servers
Name | IP |
---|---|
k8s-master | 192.168.225.128 |
k8s-node1 | 192.168.225.129 |
k8s-node2 | 192.168.225.130 |
Minimum configuration
2 CPU cores, 2 GB RAM, 20 GB disk
Software environment
OS: CentOS 7
Docker: 20+
Kubernetes: 1.23.6
Installation steps
1. Initial setup
Disable the firewall (run on all three machines)
systemctl disable firewalld # disable auto-start
systemctl stop firewalld # stop the firewall
systemctl status firewalld # check firewall status
Disable SELinux (run on all three machines)
sed -i 's/enforcing/disabled/' /etc/sysconfig/selinux # permanent
setenforce 0 # temporary
Disable the swap partition (run on all three machines)
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
lsblk # check the swap partition status
After disabling swap, reboot the VMs.
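After the reboot, a quick check that swap is really off; the Swap row from free should show all zeros and swapon should print nothing:
free -h # the Swap row should read 0B
swapon -s # no output means no active swap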
Set hostnames (run the matching command on each machine)
hostnamectl set-hostname <hostname>
Respectively:
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
Add hosts entries on the master node (run on the master node only)
cat >> /etc/hosts << EOF
192.168.225.128 k8s-master
192.168.225.129 k8s-node1
192.168.225.130 k8s-node2
EOF
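A quick sanity check that the new entries resolve (run on the master):
ping -c 2 k8s-node1
ping -c 2 k8s-node2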
Pass bridged IPv4 traffic to the iptables chains (run on all three machines)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Apply the settings:
sysctl --system
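Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about them, load the module first and re-apply:
modprobe br_netfilter # load the bridge netfilter module
lsmod | grep br_netfilter # confirm it is loaded
sysctl --system # re-apply; the net.bridge.* keys should now take effect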
Time synchronization (run on all three machines)
yum install ntpdate -y
ntpdate time.windows.com
2. Install base software (all nodes)
2.1 Install Docker
# Remove old versions
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
# Install the required packages
sudo yum install -y yum-utils
# Set up the package repository (Aliyun, or Huawei Cloud)
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#sudo yum-config-manager --add-repo https://mirrors.huaweicloud.com/repository/conf/CentOS-7-reg.repo
# Refresh the package index
yum makecache fast
# Install the latest Docker Engine (CE, community edition)
sudo yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Configure the Aliyun registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
# Start Docker
systemctl start docker
# Check that it started successfully
docker version
# Enable Docker at boot
systemctl enable docker.service
Huawei Cloud registry mirror (recommended) or Aliyun registry mirror: each account has its own unique accelerator ID, so replace the xxx placeholder above with yours.
2.2 Configure the Aliyun Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.3 Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
Enable kubelet at boot
systemctl enable kubelet
3. Deploy the Kubernetes master
Run on the master node.
Change --apiserver-advertise-address to your own master node's IP.
kubeadm init \
  --apiserver-advertise-address=192.168.225.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
If the installation fails
- Troubleshoot
journalctl -xefu kubelet
The cgroupfs problem: kubelet expects the systemd cgroup driver, but Docker is using cgroupfs.
[root@localhost ~]# docker info | grep Driver
 Storage Driver: overlay2
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
- Fix (apply the same change on the node machines too)
vi /etc/docker/daemon.json
Add the new setting:
"exec-opts": ["native.cgroupdriver=systemd"]
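For reference, a complete /etc/docker/daemon.json after merging this with the mirror from step 2.1 would look roughly like this (the mirror URL is still your own placeholder):
{
  "registry-mirrors": ["https://xxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}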
Restart:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
- Reset
kubeadm reset
Then rerun the init command.
On success
Copy and run the following:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Once that is done, the kubectl command is available.
[root@localhost ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 2m34s v1.23.6
4. Join the Kubernetes nodes
The init output includes a token.
If you accidentally cleared the screen, you can list it again:
kubeadm token list
[root@localhost ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
2oenv5.d428229ust3k74q3 23h 2024-09-21T08:03:35Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
If the token has expired, create a new one:
kubeadm token create
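Tip: kubeadm can also regenerate the whole join command (token plus CA cert hash) in one step, which saves recomputing the hash by hand:
kubeadm token create --print-join-command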
Run the following on k8s-node1 and k8s-node2 to join them as worker nodes.
The sha256 value is also generated during init.
kubeadm join 192.168.225.128:6443 --token 2oenv5.d428229ust3k74q3 --discovery-token-ca-cert-hash sha256:125db916ec5078d6d297a2c3cc011c3eab5c57007c32526cae1e3b1823757814
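If only the hash was lost, it can be recomputed on the master from the cluster CA certificate; this openssl pipeline is the standard one from the kubeadm documentation:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'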
Check the result:
[root@localhost ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 21m v1.23.6
k8s-node1 NotReady <none> 3m18s v1.23.6
k8s-node2 NotReady <none> 3m29s v1.23.6
5. Deploy the CNI network plugin
The nodes are still in NotReady status.
[root@localhost ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-kcpp6 0/1 Pending 0 24m
coredns-6d8c4cb4d-mjfhp 0/1 Pending 0 24m
etcd-k8s-master 1/1 Running 1 24m
kube-apiserver-k8s-master 1/1 Running 1 24m
kube-controller-manager-k8s-master 1/1 Running 1 24m
kube-proxy-mhzdx 1/1 Running 0 24m
kube-proxy-mzbrd 1/1 Running 0 7m14s
kube-proxy-v42ss 1/1 Running 0 7m3s
kube-scheduler-k8s-master 1/1 Running 1 24m
coredns stays in Pending.
This is a networking issue: no CNI plugin is installed yet.
Download the Calico manifest (run on the master node):
curl https://docs.tigera.io/archive/v3.25/manifests/calico.yaml -O
[root@localhost k8s]# ll
total 4
-rw-r--r-- 1 root root 83 Sep 20 16:31 calico.yaml
Edit the calico.yaml file.
Change CALICO_IPV4POOL_CIDR to the subnet passed as --pod-network-cidr=10.244.0.0/16 during init.
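The variable ships commented out in the manifest; after uncommenting it and setting the value, the env entry in the calico-node DaemonSet should read roughly like this (keep the indentation of the surrounding entries):
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"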
Remove the docker.io/ prefix from the image references in calico.yaml to avoid slow pulls:
sed -i 's#docker.io/##g' calico.yaml
[root@localhost k8s]# grep image calico.yaml
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/kube-controllers:v3.25.0
          imagePullPolicy: IfNotPresent
Apply the manifest:
kubectl apply -f calico.yaml
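To watch the rollout, re-run the pod listing until the calico and coredns pods are all Running (this may take a few minutes while images are pulled):
kubectl get pods -n kube-system -o wide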
Error handling
This is a node taint problem.
Run:
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node.kubernetes.io/not-ready-
Another error: missing directory (run on all nodes)
# Upgrade the kernel
yum update -y kernel
# Reboot so the new kernel takes effect
reboot
# Check that the new kernel version is active
uname -r
# Create the directory
mkdir -p /sys/fs/bpf
6. Final check
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 43m v1.23.6
k8s-node1 Ready <none> 42m v1.23.6
k8s-node2 Ready <none> 42m v1.23.6
All nodes are Ready.