Deployment environment

OS          IP             Hostname     Docker version   k8s version
CentOS 7.9  10.201.10.201  k8s-master1  24.0.2           v1.21.14
CentOS 7.9  10.201.10.202  k8s-work2    24.0.2           v1.21.14
CentOS 7.9  10.201.10.203  k8s-work3    24.0.2           v1.21.14

Note: newer Kubernetes releases have dropped built-in Docker support (dockershim was removed in v1.24). If Docker is used as the container runtime, installing a newer k8s version is not recommended.


1 Preparation

1.1 k8s-master1 node

1.1.1 Set the hostname
$ hostnamectl set-hostname k8s-master1
1.1.2 Edit the hosts file
$ vi /etc/hosts
10.201.10.201   k8s-master1
10.201.10.202   k8s-work2
10.201.10.203   k8s-work3
1.1.3 Disable the firewall
# Stop the firewalld service
$ systemctl stop firewalld
# Disable firewalld at boot
$ systemctl disable firewalld
# Reset iptables
$ iptables -F  && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
1.1.4 Disable SELinux
$ sed -i 's|=enforcing|=disabled|g' /etc/selinux/config
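The sed edit only takes effect after a reboot. To also stop enforcement for the current session and confirm the result, a quick sketch using standard CentOS commands (not part of the original steps):
# Switch SELinux to permissive mode immediately (Disabled applies after the reboot in 1.1.10)
$ setenforce 0
# Confirm the current mode
$ getenforce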
1.1.5 Disable swap
$ sed -i 's|/dev/mapper/centos-swap|#/dev/mapper/centos-swap|g' /etc/fstab
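Commenting out the fstab entry only prevents swap from being mounted at the next boot. To release swap immediately and verify, a quick sketch:
# Disable all active swap devices now
$ swapoff -a
# The Swap row should report 0B
$ free -h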
1.1.6 Set up time synchronization
# Install ntpdate
$ yum install ntpdate -y
# Sync the time manually once
$ ntpdate us.pool.ntp.org
# Schedule periodic sync
$ crontab -l > crontab.bak 2>/dev/null; echo "0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP" >> crontab.bak && crontab crontab.bak && rm -f crontab.bak
1.1.7 Install Docker
# Install dependencies
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun mirror repo
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the pinned Docker version
$ yum install -y docker-ce-24.0.2 docker-ce-cli-24.0.2 containerd.io
# Adjust the default Docker configuration
$ vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://mtzlgvlw.mirror.aliyuncs.com"],
    "data-root": "/data/docker",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
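As a non-interactive alternative to vim, the same configuration can be written with a heredoc; this is just a sketch producing exactly the file shown above:
$ mkdir -p /etc/docker /data/docker
$ cat > /etc/docker/daemon.json <<'EOF'
{
    "registry-mirrors": ["https://mtzlgvlw.mirror.aliyuncs.com"],
    "data-root": "/data/docker",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
EOF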
# Start Docker and enable it at boot
$ systemctl start docker
$ systemctl enable docker
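A quick sanity check that the daemon is up and the configuration took effect:
# Server Version should be 24.0.2 and Docker Root Dir should be /data/docker
$ docker info | grep -E 'Server Version|Docker Root Dir'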
1.1.8 Enable IP forwarding
$ vim /etc/sysctl.d/kubernetes.conf
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
fs.inotify.max_user_watches = 89100

# Load the br_netfilter module required by the bridge-nf sysctls
$ modprobe br_netfilter
# Apply the configuration file
$ sysctl -p /etc/sysctl.d/kubernetes.conf
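A plain modprobe does not survive a reboot. A small sketch to persist the module load via systemd's modules-load.d mechanism:
# Load br_netfilter automatically at every boot
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Confirm it is loaded right now
$ lsmod | grep br_netfilter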
1.1.9 Passwordless SSH
# Generate an SSH key pair (press Enter through all prompts)
$ ssh-keygen -t rsa
$ ssh-copy-id k8s-work2
$ ssh-copy-id k8s-work3
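A quick check that passwordless login works: each command should print the remote hostname without asking for a password.
$ ssh k8s-work2 hostname
$ ssh k8s-work3 hostname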
1.1.10 Distribute configuration files
# Copy the hosts file to the other nodes
$ scp /etc/hosts k8s-work2:/etc
$ scp /etc/hosts k8s-work3:/etc

# Copy kubernetes.conf to the other nodes
$ scp /etc/sysctl.d/kubernetes.conf  k8s-work2:/etc/sysctl.d/
$ scp /etc/sysctl.d/kubernetes.conf  k8s-work3:/etc/sysctl.d/
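Since the same two files go to both workers, the four scp commands above can also be collapsed into one loop:
$ for node in k8s-work2 k8s-work3; do scp /etc/hosts $node:/etc/; scp /etc/sysctl.d/kubernetes.conf $node:/etc/sysctl.d/; done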

# Reboot the server
$ reboot

1.2 k8s-work2 node

1.2.1 Set the hostname
$ hostnamectl set-hostname k8s-work2
1.2.2 Disable the firewall
# Stop the firewalld service
$ systemctl stop firewalld
# Disable firewalld at boot
$ systemctl disable firewalld
# Reset iptables
$ iptables -F  && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
1.2.3 Disable SELinux
$ sed -i 's|=enforcing|=disabled|g' /etc/selinux/config
1.2.4 Disable swap
$ sed -i 's|/dev/mapper/centos-swap|#/dev/mapper/centos-swap|g' /etc/fstab
1.2.5 Set up time synchronization
# Install ntpdate
$ yum install ntpdate -y
# Sync the time manually once
$ ntpdate us.pool.ntp.org
# Schedule periodic sync
$ crontab -l > crontab.bak 2>/dev/null; echo "0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP" >> crontab.bak && crontab crontab.bak && rm -f crontab.bak
1.2.6 Install Docker
# Install dependencies
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun mirror repo
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the pinned Docker version
$ yum install -y docker-ce-24.0.2 docker-ce-cli-24.0.2 containerd.io
# Adjust the default Docker configuration
$ vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://mtzlgvlw.mirror.aliyuncs.com"],
    "data-root": "/data/docker",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
# Start Docker and enable it at boot
$ systemctl start docker
$ systemctl enable docker
1.2.7 Enable IP forwarding

The kernel-parameter file kubernetes.conf was already copied over from the k8s-master1 node, so it only needs to be applied here.

# Load the br_netfilter module required by the bridge-nf sysctls
$ modprobe br_netfilter
# Apply the configuration file
$ sysctl -p /etc/sysctl.d/kubernetes.conf
1.2.8 Passwordless SSH
# Generate an SSH key pair
$ ssh-keygen -t rsa  # press Enter through all prompts
$ ssh-copy-id k8s-master1
$ ssh-copy-id k8s-work3
1.2.9 Node initialization
# Create the cluster configuration directory
$ mkdir -p /data/glory/
# Reboot the server
$ reboot

1.3 k8s-work3 node

1.3.1 Set the hostname
$ hostnamectl set-hostname k8s-work3
1.3.2 Disable the firewall
# Stop the firewalld service
$ systemctl stop firewalld
# Disable firewalld at boot
$ systemctl disable firewalld
# Reset iptables
$ iptables -F  && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
1.3.3 Disable SELinux
$ sed -i 's|=enforcing|=disabled|g' /etc/selinux/config
1.3.4 Disable swap
$ sed -i 's|/dev/mapper/centos-swap|#/dev/mapper/centos-swap|g' /etc/fstab
1.3.5 Set up time synchronization
# Install ntpdate
$ yum install ntpdate -y
# Sync the time manually once
$ ntpdate us.pool.ntp.org
# Schedule periodic sync
$ crontab -l > crontab.bak 2>/dev/null; echo "0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP" >> crontab.bak && crontab crontab.bak && rm -f crontab.bak
1.3.6 Install Docker
# Install dependencies
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun mirror repo
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the pinned Docker version
$ yum install -y docker-ce-24.0.2 docker-ce-cli-24.0.2 containerd.io
# Adjust the default Docker configuration
$ vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://mtzlgvlw.mirror.aliyuncs.com"],
    "data-root": "/data/docker",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    }
}
# Start Docker and enable it at boot
$ systemctl start docker
$ systemctl enable docker
1.3.7 Enable IP forwarding

The kernel-parameter file kubernetes.conf was already copied over from the k8s-master1 node, so it only needs to be applied here.

# Load the br_netfilter module required by the bridge-nf sysctls
$ modprobe br_netfilter
# Apply the configuration file
$ sysctl -p /etc/sysctl.d/kubernetes.conf
1.3.8 Passwordless SSH
# Generate an SSH key pair
$ ssh-keygen -t rsa  # press Enter through all prompts
$ ssh-copy-id k8s-master1
$ ssh-copy-id k8s-work2
1.3.9 Node initialization
# Create the cluster configuration directory
$ mkdir -p /data/glory/
# Reboot the server
$ reboot

2 Install the Kubernetes tools

2.1 k8s-master1 node

2.1.1 Configure the Aliyun Kubernetes repo
$ vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2.1.2 Install the Kubernetes tools
# Verify the repo is reachable
$ yum repolist
# Build the local cache
$ yum makecache fast
# Install the Kubernetes tools
$ yum install -y kubelet-1.21.14 kubeadm-1.21.14 kubectl-1.21.14
# Enable kubelet at boot
$ systemctl enable kubelet
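kubelet will restart in a crash loop at this point because the node has not been initialized yet; that is expected until kubeadm init runs in section 3. To confirm the pinned versions landed:
$ kubeadm version -o short
$ kubectl version --client --short
$ kubelet --version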

2.2 k8s-work2 node

2.2.1 Configure the Aliyun Kubernetes repo
$ vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2.2.2 Install the Kubernetes tools
# Verify the repo is reachable
$ yum repolist
# Build the local cache
$ yum makecache fast
# Install the Kubernetes tools (pin kubectl as well; it is used for kubectl apply on this node in 3.3.3)
$ yum install -y kubelet-1.21.14 kubeadm-1.21.14 kubectl-1.21.14
# Enable kubelet at boot
$ systemctl enable kubelet

2.3 k8s-work3 node

2.3.1 Configure the Aliyun Kubernetes repo
$ vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2.3.2 Install the Kubernetes tools
# Verify the repo is reachable
$ yum repolist
# Build the local cache
$ yum makecache fast
# Install the Kubernetes tools (pin kubectl as well; it is used for kubectl apply on this node in 3.4.3)
$ yum install -y kubelet-1.21.14 kubeadm-1.21.14 kubectl-1.21.14
# Enable kubelet at boot
$ systemctl enable kubelet

3 Build the Kubernetes cluster

3.1 k8s-master1 node

3.1.1 Create the configuration directory
$ mkdir -p /data/glory/working
$ cd /data/glory/working
3.1.2 Generate the cluster initialization config file
$ kubeadm config print init-defaults > init.default.yaml
$ vim init.default.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.201.10.201  # set to the k8s-master1 node IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1  # set to the k8s-master1 hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # switched to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.42.0.0/16  # Pod CIDR; must match the flannel Network value in 3.2.2
  serviceSubnet: 10.96.0.0/12
scheduler: {}
3.1.3 Pull the required images
# List the images that will be pulled
$ kubeadm config images list --config ./init.default.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:v1.8.0
# Pull the images
$ kubeadm config images pull --config ./init.default.yaml
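Once the pull finishes, the images should be visible in the local Docker cache:
# List the images pulled from the Aliyun mirror
$ docker images | grep 'registry.aliyuncs.com/google_containers'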
3.1.4 Initialize the cluster
# Record the join token and discovery hash printed at the end of the output
$ kubeadm init --config ./init.default.yaml

# Set up kubectl access for the current user
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
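kubectl should now be able to reach the API server. Until the flannel network is deployed in 3.2, it is normal for the master to report a NotReady status:
$ kubectl get nodes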
3.1.5 Start kubelet and enable it at boot
$ systemctl start kubelet
$ systemctl enable kubelet
3.1.6 Copy admin.conf (the cluster credentials) to the two worker nodes
$ scp /etc/kubernetes/admin.conf root@k8s-work2:/data/glory/
$ scp /etc/kubernetes/admin.conf root@k8s-work3:/data/glory/
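The join token from kubeadm init has a 24h TTL (see ttl in init.default.yaml). If it expires or is misplaced before the workers join, a fresh join command, including the CA certificate hash, can be generated on the master at any time:
$ kubeadm token create --print-join-command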

Note: if errors occur during cluster initialization, see the referenced URL for troubleshooting.


3.2 Deploy the flannel network for intra-cluster communication

3.2.1 Download kube-flannel.yml
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
3.2.2 Change the subnet configuration to the Pod subnet set earlier
$ vim kube-flannel.yml
  net-conf.json: |
    {
      "Network": "10.42.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
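Assuming the upstream manifest still ships flannel's default Network value of 10.244.0.0/16, the same edit can be made non-interactively; verify the resulting block afterwards:
$ sed -i 's|10.244.0.0/16|10.42.0.0/16|g' kube-flannel.yml
$ grep '"Network"' kube-flannel.yml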
3.2.3 Create the flannel network
$ kubectl apply -f ./kube-flannel.yml
3.2.4 Check that the flannel Pods are running
$ kubectl get pods --all-namespaces
NAMESPACE      NAME                                  READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-8h7hs                 1/1     Running   9          6h41m
kube-flannel   kube-flannel-ds-9sxht                 1/1     Running   9          6h41m
kube-flannel   kube-flannel-ds-vpv24                 1/1     Running   11         6h41m
kube-system    coredns-59d64cd4d4-g476g              1/1     Running   5          7h33m
kube-system    coredns-59d64cd4d4-hc42k              1/1     Running   5          7h33m
kube-system    etcd-k8s-master1                      1/1     Running   5          7h33m
kube-system    kube-apiserver-k8s-master1            1/1     Running   5          7h33m
kube-system    kube-controller-manager-k8s-master1   1/1     Running   5          7h13m
kube-system    kube-proxy-2m4h8                      1/1     Running   3          7h19m
kube-system    kube-proxy-jdrgs                      1/1     Running   3          7h19m
kube-system    kube-proxy-qbhh8                      1/1     Running   5          7h33m
kube-system    kube-scheduler-k8s-master1            1/1     Running   5          7h13m
3.2.5 Copy the flannel manifest to the two worker nodes
$ scp /data/glory/working/kube-flannel.yml root@k8s-work2:/data/glory/kube-flannel.yml
$ scp /data/glory/working/kube-flannel.yml root@k8s-work3:/data/glory/kube-flannel.yml

Note: if the Pods keep failing to start, inspect their logs to find the cause; see the referenced URL.


3.3 k8s-work2 node

3.3.1 Configure the cluster credentials file
$ mkdir -p $HOME/.kube
$ cp -i /data/glory/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
3.3.2 Join the cluster
$ kubeadm join 10.201.10.201:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:f167df54ce5d997ee2d5d0ef52a98a998809cf58024c5d6a16392af429cd17a4
3.3.3 Configure the flannel network
$ cd /data/glory
$ kubectl apply -f kube-flannel.yml
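Since admin.conf was installed in 3.3.1 (and kubectl was pinned in 2.2.2), the join can be verified directly from this node; the new node may stay NotReady briefly until its flannel Pod is running:
$ kubectl get nodes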

3.4 k8s-work3 node

3.4.1 Configure the cluster credentials file
$ mkdir -p $HOME/.kube
$ cp -i /data/glory/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
3.4.2 Join the cluster
$ kubeadm join 10.201.10.201:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:f167df54ce5d997ee2d5d0ef52a98a998809cf58024c5d6a16392af429cd17a4
3.4.3 Configure the flannel network
$ cd /data/glory
$ kubectl apply -f kube-flannel.yml

4 Check the cluster status

4.1 Check the component status after initialization

$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"} 

The status check reports two Unhealthy components, along with the warning "v1 ComponentStatus is deprecated in v1.19+". The Unhealthy results come from the --port=0 flag in the scheduler and controller-manager manifests, which disables the insecure health-check ports that kubectl get cs probes.


4.2 Edit the kube-controller-manager and kube-scheduler manifests

$ vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#- --port=0   # comment out this flag
$ vim /etc/kubernetes/manifests/kube-scheduler.yaml
#- --port=0   # comment out this flag
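Hand-editing static Pod manifests is easy to get wrong; a sketch of the same change with sed. kubelet watches the manifests directory and recreates the Pods automatically, so the restart in 4.3 simply forces the point:
$ sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml
$ sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml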

4.3 Restart kubelet and check the cluster status again

$ systemctl restart kubelet
# Check the component status
$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok            
scheduler            Healthy   ok            
etcd-0               Healthy   {"health":"true"} 
# Check the node status
$ kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   8h      v1.21.14
k8s-work2     Ready    <none>                 7h46m   v1.21.14
k8s-work3     Ready    <none>                 7h46m   v1.21.14