Deploying a k8s Cluster on Rocky Linux 9

Community K8s

0. Preface

We have previously deployed Docker on Rocky Linux, used docker-compose, and set up Harbor; today we will build a k8s cluster.

1. Preparation

1.1 Server information

Hostname     OS                        IP address        CPU      Memory
k8s-master   Rocky Linux release 9.4   192.168.159.164   4 cores  8 GB
k8s-node1    Rocky Linux release 9.4   192.168.159.165   4 cores  8 GB
k8s-node2    Rocky Linux release 9.4   192.168.159.166   4 cores  8 GB
1.2 Disable the firewall

Run the following on each of the three servers:

systemctl stop firewalld
systemctl disable firewalld
1.3 Change the SELinux mode

Run the following on each of the three servers:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
1.4 Disable swap

Run the following on each of the three servers:

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
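
Optionally verify that swap is now off; swapon --show printing nothing means no swap device remains active:

free -h
swapon --show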
1.5 Configure the hosts file

Add the following entries to /etc/hosts on every server:

192.168.159.164 k8s-master
192.168.159.165 k8s-node1
192.168.159.166 k8s-node2
1.6 Mutual trust and passwordless SSH between servers

To make deploying the cluster easier, we set up mutual trust between the servers so they can access each other without passwords.

(1) Generate SSH key pairs
Run the following command on all three servers:

ssh-keygen

After running it, just press Enter at each prompt unless you have special requirements.

(2) Set up passwordless access
On k8s-master, run:

ssh-copy-id k8s-master
ssh-copy-id k8s-node1
ssh-copy-id k8s-node2

On k8s-node1, run:

ssh-copy-id k8s-master
ssh-copy-id k8s-node1
ssh-copy-id k8s-node2

On k8s-node2, run:

ssh-copy-id k8s-master
ssh-copy-id k8s-node1
ssh-copy-id k8s-node2

During this step, enter the user's password when prompted.

1.7 Configure ulimit

Run the following on every server:
(1) Set the limits for the current session:

ulimit -SHn 65535

(2) Edit the configuration file:

cat /etc/security/limits.conf 
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
1.8 Install ipvsadm

(1) Install the packages on every server:

dnf install -y ipvsadm ipset sysstat conntrack libseccomp

IPVS is one of the proxy modes supported by k8s; it performs better on large clusters. For a small test cluster like this one, iptables mode would also be perfectly adequate.

(2) Configure the ipvs modules:
Create the /etc/modules-load.d/ipvs.conf file with the following content:

cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

(3) Reload the kernel modules:

systemctl restart systemd-modules-load.service

(4) Check the ipvs status:

lsmod | grep ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 237568  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          217088  3 nf_nat,nft_ct,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
1.9 Tune kernel parameters

Run the following on all three servers:
(1) Create the /etc/sysctl.d/k8s-sysctl.conf file with the following content:

cat /etc/sysctl.d/k8s-sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1

fs.may_detach_mounts = 1
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
vm.max_map_count=655360
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

(2) Reload the kernel configuration:

sysctl --system
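
Optionally confirm the settings took effect. Note that the net.bridge.* keys only resolve once the br_netfilter module is loaded, which the containerd module config below takes care of:

sysctl net.ipv4.ip_forward vm.swappiness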
1.10 Install a toolset
dnf install -y wget tree curl bash-completion jq vim net-tools telnet git lrzsz epel-release

These are common utilities that may come in handy during deployment, so we install them up front.

2. Deployment Plan

2.1 Component layout

k8s-master node: apiserver, scheduler, controller-manager, etcd, kubectl, kubelet, kube-proxy, containerd
k8s-node1 node: kubelet, kube-proxy, containerd
k8s-node2 node: kubelet, kube-proxy, containerd
For now it is enough to know which components each node runs; follow-up articles will walk through what each component does.

2.2 Network plan

Plan the pod and service CIDRs before deploying so they are ready to use later:
Server subnet: 192.168.159.0/24
Pod CIDR: 172.30.0.0/16
Service CIDR: 10.96.0.0/16

2.3 Component versions

The versions used in this deployment are:

  • kubernetes: v1.30.5
  • containerd: v1.7.22
  • etcd: 3.5.16
  • cfssl: 1.6.1
  • coredns: 1.9.3
  • metrics-server: 0.6.1

3. Deployment

3.1 Deploy cfssl

(1) Download the cfssl binaries

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64 -O /usr/local/bin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64 -O /usr/local/bin/cfssljson

(2) Make them executable

chmod +x /usr/local/bin/cfssl*

(3) Check the installed binaries

ll /usr/local/bin/cfssl*
-rwxr-xr-x. 1 root root 16659824 Dec  7  2021 /usr/local/bin/cfssl
-rwxr-xr-x. 1 root root 11029744 Dec  7  2021 /usr/local/bin/cfssljson

At this point, the cfssl tools are installed.
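
As a quick sanity check, print the version; the output should report 1.6.1:

cfssl version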

3.2 Deploy containerd

3.2.1 Create the installation directories

 mkdir -p /data/containerd/{app,bin,cnibin,config,service}

3.2.2 Download and extract the release tarball

cd /data/containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.22/cri-containerd-cni-1.7.22-linux-amd64.tar.gz -O ./app/containerd.tar.gz
tar -xf app/containerd.tar.gz --strip-components=3 -C bin --wildcards usr/local/bin/{containerd*,crictl,ctr}
tar -xf app/containerd.tar.gz --strip-components=3 -C cnibin opt/cni/bin/*

3.2.3 Install runc

(1) Download the runc binary

wget https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64 -O bin/runc

(2) Make it executable

chmod +x bin/runc

3.2.4 Create the service file and module config for containerd

(1) Create the containerd service file

cat service/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

(2) Create the containerd modules file

cat config/containerd.conf
overlay
br_netfilter

3.2.5 Generate the config.toml file

./bin/containerd config default > config/config.toml

Then change the following settings in config/config.toml:

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
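
If you prefer to script the two edits, something like this works against the stock containerd 1.7 default config, where SystemdCgroup defaults to false (adjust the patterns if your defaults differ):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"#' config/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' config/config.toml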

3.2.6 Distribute the files to their target paths

Create the directories on every server and copy the files into place:

for i in k8s-master k8s-node1 k8s-node2; do \
ssh $i "mkdir -p /etc/containerd"; \
ssh $i "mkdir -p /opt/cni/bin"; \
ssh $i "mkdir -p /opt/containerd"; \
ssh $i "mkdir -p /etc/cni/net.d"; \
scp bin/* $i:/usr/local/bin/; \
scp cnibin/* $i:/opt/cni/bin/; \
scp service/containerd.service $i:/usr/lib/systemd/system/; \
scp config/config.toml $i:/etc/containerd/; \
scp config/containerd.conf $i:/etc/modules-load.d/; \
done

3.2.7 Start the containerd service

Start containerd on all servers:

for i in k8s-master k8s-node1 k8s-node2; do \
ssh $i "systemctl restart systemd-modules-load.service"; \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable containerd"; \
ssh $i "systemctl restart containerd --no-block"; \
ssh $i "systemctl is-active containerd"; \
done

3.2.8 Try out containerd

(1) List namespaces

ctr ns list

(2) Create a namespace

ctr ns create test

(3) Pull the busybox image

ctr images pull docker.m.daocloud.io/library/busybox:latest

(4) List images

ctr images list

The ctr CLI is fairly low-level and does not match common workflows; crictl can be used instead.

(5) Create the crictl.yaml file
Before using crictl, create crictl.yaml in the config directory:

cat ./config/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false

Copy the file to all servers:

for i in k8s-master k8s-node1 k8s-node2; do \
scp config/crictl.yaml $i:/etc/
done

(6) Try crictl
List images:

crictl images

Pull an image:

crictl pull docker.m.daocloud.io/library/busybox:latest

As you can see, crictl usage is essentially the same as docker's.
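
A few more everyday commands behave just as you would expect (output depends on what is currently running):

crictl ps -a    # list all containers
crictl pods     # list pod sandboxes
crictl info     # show runtime status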

3.3 Deploy etcd

3.3.1 Download the release

(1) Create the working directories

mkdir -p /data/etcd/{bin,config,service,ssl,app}

(2) Download etcd v3.5.16 and extract it into the bin directory

cd /data/etcd/
wget https://github.com/etcd-io/etcd/releases/download/v3.5.16/etcd-v3.5.16-linux-amd64.tar.gz -O app/etcd.tar.gz
tar -xf app/etcd.tar.gz --strip-components=1 -C bin/ etcd-v3.5.16-linux-amd64/etcd{,ctl}

(3) Create the etcd certificates
The etcd CA config file:

cat ./ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "peer": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}

The etcd CA certificate signing request (CSR) file:

cat ./etcd-ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}

Generate the etcd CA root certificate:

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare ssl/etcd-ca

The etcd server CSR file:

cat ./etcd-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}

Finally, generate the etcd certificate:
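
Note: ${HOSTNAME} below must contain every name and address etcd will serve on; the shell's default HOSTNAME is just the machine name and lacks the IP SANs. Set it first, for example (adjust to your environment):

HOSTNAME="127.0.0.1,k8s-master,192.168.159.164"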

cfssl gencert \
	-ca=ssl/etcd-ca.pem \
	-ca-key=ssl/etcd-ca-key.pem \
	-config=ca-config.json \
	-hostname=${HOSTNAME} \
	-profile=peer etcd-csr.json | cfssljson -bare ssl/etcd

At this point, all the certificates are ready.
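
To double-check a certificate, you can inspect its subject, validity, and SANs with openssl (the -ext option needs a reasonably recent OpenSSL, which Rocky 9 has):

openssl x509 -in ssl/etcd.pem -noout -subject -dates
openssl x509 -in ssl/etcd.pem -noout -ext subjectAltName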

(4) Create the etcd deployment directories

mkdir -p /opt/etcd/{config,ssl,data}

(5) Copy the binaries and certificates

cp ./bin/etcd* /usr/local/bin
cp ./ssl/etcd{,-key,-ca}.pem /opt/etcd/ssl

(6) Create the configuration file

cat /opt/etcd/config/etcd.config.yaml
name: 'k8s-master'
data-dir: /opt/etcd/data
wal-dir: /opt/etcd/data/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.159.164:2380'
listen-client-urls: 'https://192.168.159.164:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.159.164:2380'
advertise-client-urls: 'https://192.168.159.164:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master=https://192.168.159.164:2380,'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

(7) Create the service file

cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
--config-file=/opt/etcd/config/etcd.config.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

(8) Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd --no-block
systemctl is-active etcd

If the last command prints active, etcd started successfully.

(9) Check etcd health

ENDPOINTS="https://192.168.159.164:2379"
etcdctl \
  --endpoints="${ENDPOINTS}" \
  --cacert=/opt/etcd/ssl/etcd-ca.pem \
  --cert=/opt/etcd/ssl/etcd.pem \
  --key=/opt/etcd/ssl/etcd-key.pem \
  endpoint health --write-out=table

If HEALTH shows true, the etcd service is healthy.

(10) What etcd does
etcd is a key-value metadata store. In a k8s cluster it provides data storage and service discovery, which makes it absolutely central. Many of its features rest on very clever designs that would be worth exploring together in a future article.
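
To get a feel for the key-value model, here is a throwaway example reusing the ENDPOINTS variable and certificates from the health check above (the cluster itself will store its data under the /registry/ prefix):

etcdctl --endpoints="${ENDPOINTS}" --cacert=/opt/etcd/ssl/etcd-ca.pem \
  --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
  put /demo/hello world
etcdctl --endpoints="${ENDPOINTS}" --cacert=/opt/etcd/ssl/etcd-ca.pem \
  --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem \
  get /demo/hello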

3.4 Deploy the k8s control-plane components

Three components need to be deployed on the master node:
(1) kube-apiserver
(2) kube-controller-manager
(3) kube-scheduler
The kubernetes release tarball is kubernetes-server-linux-amd64.tar.gz (version 1.30.5),
available on GitHub: https://github.com/kubernetes/kubernetes/releases

3.4.1 Deploy kube-apiserver

(1) Create the working directories and extract the tarball

mkdir -p /data/k8s/{app,bin,config,kubeconfig,service,ssl}
cd /data/k8s
tar -xf app/kubernetes-server.tar.gz  --strip-components=3 -C bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Place the downloaded tarball in /data/k8s/app as kubernetes-server.tar.gz before extracting.

(2) Generate the CA certificates for k8s
The signing configuration:

cat ./ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "peer": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

The CA CSR file:

cat ./ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "ShenZhen",
            "O": "Kubernetes",
            "OU": "System"
        }
    ],
  "ca": {
    "expiry": "876000h"
  }
}

Generate the CA certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ssl/ca

(3) Generate the apiserver certificate
Define an APISERVER_NAME variable, which is used shortly:

APISERVER_NAME="127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.96.0.1,192.168.159.164"

The CSR file:

cat ./kube-apiserver-csr.json
{
    "CN": "kube-apiserver",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GuangDong",
            "L": "ShenZhen",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate and key files:

cfssl gencert -ca=ssl/ca.pem -ca-key=ssl/ca-key.pem -config=ca-config.json \
-hostname=${APISERVER_NAME} \
-profile=peer kube-apiserver-csr.json | cfssljson -bare ssl/kube-apiserver

The front-proxy CA CSR file:

cat ./front-proxy-ca-csr.json
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}

Generate the front-proxy CA certificate:

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare ssl/front-proxy-ca

The front-proxy client CSR file:

cat ./front-proxy-client-csr.json
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}

Generate the front-proxy client certificate:

cfssl gencert -ca=ssl/front-proxy-ca.pem -ca-key=ssl/front-proxy-ca-key.pem -config=ca-config.json -profile=peer front-proxy-client-csr.json | cfssljson -bare ssl/front-proxy-client

Create the ServiceAccount key pair:

openssl genrsa -out ssl/sa.key 2048 
openssl rsa -in ssl/sa.key -pubout -out ssl/sa.pub

(4) Create the kube-apiserver.service file:

cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=192.168.159.164 \
--advertise-address=192.168.159.164 \
--secure-port=6443 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.159.164:2379 \
--etcd-cafile=/opt/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/opt/etcd/ssl/etcd.pem \
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \
--client-ca-file=/opt/k8s/ssl/ca.pem \
--tls-cert-file=/opt/k8s/ssl/kube-apiserver.pem \
--tls-private-key-file=/opt/k8s/ssl/kube-apiserver-key.pem \
--kubelet-client-certificate=/opt/k8s/ssl/kube-apiserver.pem \
--kubelet-client-key=/opt/k8s/ssl/kube-apiserver-key.pem \
--service-account-key-file=/opt/k8s/ssl/sa.pub \
--service-account-signing-key-file=/opt/k8s/ssl/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--enable-aggregator-routing=true \
--proxy-client-cert-file=/opt/k8s/ssl/front-proxy-client.pem \
--proxy-client-key-file=/opt/k8s/ssl/front-proxy-client-key.pem \
--requestheader-client-ca-file=/opt/k8s/ssl/front-proxy-ca.pem \
--requestheader-allowed-names=front-proxy-client \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# token-file authentication is disabled here:
#--token-auth-file=${K8S_CONF_DIR}/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Create the kube-apiserver installation directories and copy the files:

mkdir -p /opt/k8s/{ssl,config,log}
cp bin/kube-apiserver /usr/local/bin/
cp ssl/{kube*.pem,ca{,-key}.pem,front-proxy-client*.pem,front-proxy-ca.pem,sa.*} /opt/k8s/ssl/

Start kube-apiserver:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver --no-block
systemctl is-active kube-apiserver

If it prints active, kube-apiserver started successfully.
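
As an extra check, the health endpoint can be queried; with the default RBAC rules, /healthz is readable even without authentication:

curl -k https://192.168.159.164:6443/healthz
# expected output: ok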

3.4.2 Deploy kubectl

(1) Create the kubectl CSR file

cat ./kubectl-csr.json
{
  "CN": "clusteradmin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}

(2) Generate the certificate

cfssl gencert -ca=ssl/ca.pem -ca-key=ssl/ca-key.pem -config=ca-config.json -profile=peer kubectl-csr.json | cfssljson -bare ssl/kubectl

(3) Generate the kubectl config file
First define some parameters, which are used below:

APISERVER_IP=192.168.159.164
K8S_CERT_DIR=ssl
PORT=6443
KUBE_APISERVER=https://${APISERVER_IP}:${PORT}
CLUSTER_NAME=kubernetes
USERNAME=clusteradmin
KUBECONFIG_FILE=kubeconfig/kubectl.kubeconfig
CONTEXT_NAME=${USERNAME}@${CLUSTER_NAME}
CERT_PRFIX=kubectl

Generate the kubeconfig:

./bin/kubectl config set-cluster ${CLUSTER_NAME} \
    --certificate-authority=${K8S_CERT_DIR}/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-credentials ${USERNAME} \
	--client-certificate=${K8S_CERT_DIR}/${CERT_PRFIX}.pem \
	--client-key=${K8S_CERT_DIR}/${CERT_PRFIX}-key.pem \
	--embed-certs=true \
	--kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-context ${CONTEXT_NAME} \
    --cluster=${CLUSTER_NAME} \
    --user=${USERNAME} \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config use-context ${CONTEXT_NAME} \
    --kubeconfig=${KUBECONFIG_FILE}

Install kubectl and the kubeconfig file:

mkdir -p $HOME/.kube/
cp bin/kubectl /usr/local/bin
cp kubeconfig/kubectl.kubeconfig $HOME/.kube/config

Enable kubectl command completion:

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

kubectl is now installed; run two commands to see it in action:

kubectl cluster-info
Kubernetes control plane is running at https://192.168.159.164:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get componentstatus

OK, next let's deploy the remaining components.

3.4.3 Deploy kube-controller-manager

(1) Prepare the kube-controller-manager CSR file:

cat kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}

(2) Generate the certificate

CONTROLLER_IP="127.0.0.1,192.168.159.164"
cfssl gencert -ca=ssl/ca.pem -ca-key=ssl/ca-key.pem -config=ca-config.json -hostname=${CONTROLLER_IP} -profile=peer kube-controller-manager-csr.json | cfssljson -bare ssl/kube-controller-manager

(3) Create the kubeconfig file
Define some parameters first:

APISERVER_IP=192.168.159.164
K8S_CERT_DIR=ssl
PORT=6443
KUBE_APISERVER=https://${APISERVER_IP}:${PORT}
KUBECONFIG_FILE=kubeconfig/kube-controller-manager.kubeconfig
CLUSTER_NAME=kubernetes
USERNAME=system:kube-controller-manager
CONTEXT_NAME=${USERNAME}@${CLUSTER_NAME}
CERT_PRFIX=kube-controller-manager

Set the cluster parameters:

./bin/kubectl config set-cluster ${CLUSTER_NAME} \
    --certificate-authority=${K8S_CERT_DIR}/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=${KUBECONFIG_FILE}

# set the user credentials
./bin/kubectl config set-credentials ${USERNAME} \
	--client-certificate=${K8S_CERT_DIR}/${CERT_PRFIX}.pem \
	--client-key=${K8S_CERT_DIR}/${CERT_PRFIX}-key.pem \
	--embed-certs=true \
	--kubeconfig=${KUBECONFIG_FILE}

# set the context, binding the user to the cluster
./bin/kubectl config set-context ${CONTEXT_NAME} \
    --cluster=${CLUSTER_NAME} \
    --user=${USERNAME} \
    --kubeconfig=${KUBECONFIG_FILE}

# set the default context
./bin/kubectl config use-context ${CONTEXT_NAME} \
    --kubeconfig=${KUBECONFIG_FILE}

After these commands, kube-controller-manager.kubeconfig is generated in the kubeconfig directory.
(4) Create the kube-controller-manager.service file:

cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--bind-address=127.0.0.1 \
--root-ca-file=/opt/k8s/ssl/ca.pem \
--cluster-signing-cert-file=/opt/k8s/ssl/ca.pem \
--cluster-signing-key-file=/opt/k8s/ssl/ca-key.pem \
--service-account-private-key-file=/opt/k8s/ssl/sa.key \
--tls-cert-file=/opt/k8s/ssl/kube-controller-manager.pem \
--tls-private-key-file=/opt/k8s/ssl/kube-controller-manager-key.pem \
--kubeconfig=/opt/k8s/config/kube-controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.30.0.0/16 \
--requestheader-client-ca-file=/opt/k8s/ssl/front-proxy-ca.pem \
--node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

(5) Put the files in place

mkdir -p /opt/k8s/{ssl,config}
cp ./bin/kube-controller-manager /usr/local/bin/
cp ./ssl/kube-controller*.pem /opt/k8s/ssl/
cp ./kubeconfig/kube-controller-manager.kubeconfig /opt/k8s/config/

(6) Start the kube-controller-manager service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager --no-block
systemctl is-active kube-controller-manager

If the final output shows active, the service started successfully.

3.4.4 Deploy kube-scheduler

(1) The kube-scheduler CSR file

cat kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}

(2) Generate the certificate

cfssl gencert -ca=ssl/ca.pem -ca-key=ssl/ca-key.pem \
-config=ca-config.json \
-hostname=192.168.159.164 \
-profile=peer kube-scheduler-csr.json | cfssljson -bare ssl/kube-scheduler

(3) Generate the kubeconfig file
Define the parameters first:

APISERVER_IP=192.168.159.164
K8S_CERT_DIR=ssl
PORT=6443
KUBE_APISERVER=https://${APISERVER_IP}:${PORT}
KUBECONFIG_FILE=kubeconfig/kube-scheduler.kubeconfig
CLUSTER_NAME=kubernetes
USERNAME=system:kube-scheduler
CONTEXT_NAME=${USERNAME}@${CLUSTER_NAME}
CERT_PRFIX=kube-scheduler

Run the kubeconfig commands:

./bin/kubectl config set-cluster ${CLUSTER_NAME} \
    --certificate-authority=${K8S_CERT_DIR}/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-credentials ${USERNAME} \
	--client-certificate=${K8S_CERT_DIR}/${CERT_PRFIX}.pem \
	--client-key=${K8S_CERT_DIR}/${CERT_PRFIX}-key.pem \
	--embed-certs=true \
	--kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-context ${CONTEXT_NAME} \
    --cluster=${CLUSTER_NAME} \
    --user=${USERNAME} \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config use-context ${CONTEXT_NAME} \
    --kubeconfig=${KUBECONFIG_FILE}

Create the service file:

cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--bind-address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/opt/k8s/config/kube-scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

(4) Put the files in place

mkdir -p /opt/k8s/{ssl,config}
cp ./bin/kube-scheduler /usr/local/bin
cp ./ssl/kube-scheduler*.pem /opt/k8s/ssl/
cp ./kubeconfig/kube-scheduler.kubeconfig /opt/k8s/config/

(5) Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler --no-block
systemctl is-active kube-scheduler

If the final output shows active, the scheduler started successfully.

(6) Check the cluster

kubectl get cs

At last, the three control-plane components are all deployed.

3.5 Deploy kubelet

3.5.1 Configure TLS Bootstrap

(1) Generate the bootstrap.kubeconfig file

APISERVER_IP=192.168.159.164
K8S_CERT_DIR=ssl
K8S_CONF_DIR=/opt/k8s/config
PORT=6443
KUBE_APISERVER=https://${APISERVER_IP}:${PORT}
KUBECONFIG_FILE=kubeconfig/bootstrap.kubeconfig
CLUSTER_NAME=kubernetes

TOKEN_ID=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}

USERNAME=system:bootstrap:${TOKEN_ID}
CONTEXT_NAME=${USERNAME}@${CLUSTER_NAME}

Run the kubeconfig commands:

./bin/kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=${K8S_CERT_DIR}/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-credentials ${USERNAME} \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-context ${CONTEXT_NAME}  \
--cluster=kubernetes \
--user=${USERNAME} \
--kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config use-context ${CONTEXT_NAME} --kubeconfig=${KUBECONFIG_FILE}

Create the bootstrap token secret; use a heredoc so the token variables are expanded into the file:

cat > config/bootstrap-token-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
EOF

Apply the bootstrap secret and check it:

kubectl apply -f config/bootstrap-token-secret.yaml
kubectl get secret -n kube-system

3.5.2 Create the kubelet configuration files

Set some parameters:

K8S_CONF_DIR=/opt/k8s/config 
K8S_CERT_DIR=/opt/k8s/ssl 
CLUSTER_DNS=10.96.0.10

Create the kubelet options file (again via heredoc so the variables expand):

cat > config/kubelet.conf <<EOF
KUBELET_OPTS="--v=4 \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--runtime-cgroups=/systemd/system.slice \\
--kubeconfig=${K8S_CONF_DIR}/kubelet.kubeconfig \\
--bootstrap-kubeconfig=${K8S_CONF_DIR}/bootstrap.kubeconfig \\
--config=${K8S_CONF_DIR}/kubelet.yaml \\
--cert-dir=${K8S_CERT_DIR} \\
--node-labels=node.kubernetes.io/node="
EOF

Create the kubelet YAML configuration:

cat > config/kubelet.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: ${K8S_CERT_DIR}/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
runtimeRequestTimeout: 15m
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- ${CLUSTER_DNS}
clusterDomain: cluster.local
EOF

Create the kubelet.service file:

cat service/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=containerd.service
Requires=containerd.service

[Service]
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/hugetlb/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/blkio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpuset/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/devices/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/net_cls,net_prio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/perf_event/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/freezer/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/memory/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/pids/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/systemd/systemd/system.slice
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
# CPUAccounting and MemoryAccounting need to be enabled on CentOS-family systems
CPUAccounting=true
MemoryAccounting=true
EnvironmentFile=/opt/k8s/config/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS

Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Put the important files into their target directories on the master node:

mkdir -p /opt/k8s/{config,ssl,manifests}
cp ./bin/kubelet /usr/local/bin
cp config/kubelet.{conf,yaml} /opt/k8s/config
cp kubeconfig/bootstrap.kubeconfig /opt/k8s/config
cp ssl/ca.pem /opt/k8s/ssl
cp service/kubelet.service /usr/lib/systemd/system/

Grant the bootstrap group permission to create CSRs:

kubectl create clusterrolebinding create-csrs-for-bootstrapping \
  --clusterrole=system:node-bootstrapper \
  --group=system:bootstrappers:default-node-token 

kubectl get clusterrolebinding create-csrs-for-bootstrapping

Start the kubelet service:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet --no-block
systemctl is-active kubelet

If the final output is active, the start-up succeeded.

Set up automatic certificate issuance, approval, and renewal:

kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers

kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes

kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes

3.5.3 Install kubelet on the worker nodes

Distribute the files:

for i in k8s-node1 k8s-node2; do \
ssh $i "mkdir -p /opt/k8s/{config,ssl,manifests}"; \
scp bin/kubelet $i:/usr/local/bin/; \
scp config/kubelet.{conf,yaml} $i:/opt/k8s/config/; \
scp kubeconfig/bootstrap.kubeconfig $i:/opt/k8s/config/; \
scp ssl/ca.pem $i:/opt/k8s/ssl; \
scp service/kubelet.service $i:/usr/lib/systemd/system/; \
done

Start the kubelet service:

for i in k8s-node1 k8s-node2; do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kubelet"; \
ssh $i "systemctl restart kubelet --no-block"; \
ssh $i "systemctl is-active kubelet"; \
done

If both show active, the kubelet service is up on the worker nodes.

3.5.4 Check the cluster

Check the CSRs:

kubectl get csr

Label the nodes:

kubectl label nodes k8s-master node-role.kubernetes.io/master=
kubectl label nodes k8s-node1 node-role.kubernetes.io/worker=
kubectl label nodes k8s-node2 node-role.kubernetes.io/worker=

List the nodes:

kubectl get nodes


3.5.5 Deploy kube-proxy

The kube-proxy CSR file:

cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}

Generate the certificate:

cfssl gencert -ca=ssl/ca.pem -ca-key=ssl/ca-key.pem -config=ca-config.json -profile=peer kube-proxy-csr.json | cfssljson -bare ssl/kube-proxy

Generate the kubeconfig file. First set some parameters:

K8S_CONF_DIR=/opt/k8s/config

APISERVER_IP=192.168.159.164
K8S_CERT_DIR=ssl
CLUSTER_CIDR=172.30.0.0/16
PORT=6443
CLUSTER_NAME=kubernetes
KUBE_APISERVER=https://${APISERVER_IP}:${PORT}
KUBECONFIG_FILE=kubeconfig/kube-proxy.kubeconfig
USERNAME=system:kube-proxy
CONTEXT_NAME=${USERNAME}@${CLUSTER_NAME}
CERT_PRFIX=kube-proxy

Run the kubeconfig commands:

./bin/kubectl config set-cluster ${CLUSTER_NAME} \
    --certificate-authority=${K8S_CERT_DIR}/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-credentials ${USERNAME} \
    --client-certificate=${K8S_CERT_DIR}/${CERT_PRFIX}.pem \
    --client-key=${K8S_CERT_DIR}/${CERT_PRFIX}-key.pem \
    --embed-certs=true \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config set-context ${CONTEXT_NAME} \
    --cluster=${CLUSTER_NAME} \
    --user=${USERNAME} \
    --kubeconfig=${KUBECONFIG_FILE}

./bin/kubectl config use-context ${CONTEXT_NAME} \
    --kubeconfig=${KUBECONFIG_FILE}

Create the kube-proxy configuration file (as a heredoc so the variables expand; CLUSTER_CIDR is the pod CIDR planned in section 2.2):

cat > config/kube-proxy.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: ${K8S_CONF_DIR}/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: ${CLUSTER_CIDR}
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Create the kube-proxy service file:

cat service/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
    --config=/opt/k8s/config/kube-proxy.yaml \
    --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

Distribute the files to all nodes:

for i in k8s-master k8s-node1 k8s-node2; do \
scp bin/kube-proxy $i:/usr/local/bin/; \
scp config/kube-proxy.yaml $i:/opt/k8s/config/; \
scp kubeconfig/kube-proxy.kubeconfig $i:/opt/k8s/config/; \
scp service/kube-proxy.service $i:/usr/lib/systemd/system/; \
scp ssl/front-proxy-ca.pem $i:/opt/k8s/ssl/; \
done

Start kube-proxy on every node:

for i in k8s-master k8s-node1 k8s-node2; do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-proxy"; \
ssh $i "systemctl restart kube-proxy --no-block"; \
ssh $i "systemctl is-active kube-proxy"; \
done

If every node reports active, the kube-proxy services started successfully.

Check the listening ports:

ss -ntlp |grep kube-proxy

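Since kube-proxy runs in ipvs mode here, you can also list the virtual-server rules it programs; service addresses in the 10.96.0.0/16 range should show up as virtual servers:

ipvsadm -Ln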

3.5.6 Network plugin: calico

Preparation:
(1) Create the installation directory

mkdir -p /data/addons
cd /data/addons

(2) Download the full release bundle
GitHub: https://github.com/projectcalico
This deployment uses version 3.28.2: release-v3.28.2.tgz
Extract the bundle twice and you will find the manifests and images directories. manifests contains the calico.yaml file, and images contains all the images needed to deploy calico; both are required, so upload them to /data/addons on the server.

(3) Check which images calico needs

cat ./calico.yaml |grep image |grep -v Pull
image: docker.io/calico/cni:v3.28.2
image: docker.io/calico/cni:v3.28.2
image: docker.io/calico/node:v3.28.2
image: docker.io/calico/node:v3.28.2
image: docker.io/calico/kube-controllers:v3.28.2

Three images are required:
docker.io/calico/cni:v3.28.2,
docker.io/calico/node:v3.28.2,
docker.io/calico/kube-controllers:v3.28.2

(4) Import the images
Import the three image tarballs from the images directory into the k8s.io namespace:

ctr -n k8s.io images import calico-cni.tar
ctr -n k8s.io images import calico-kube-controllers.tar
ctr -n k8s.io images import calico-node.tar

Check the imported images:

crictl images


(5) Deploy calico
Apply the calico manifest:

kubectl apply -f calico.yaml

Check that the calico pods are running:

kubectl get pod -n kube-system

The calico pods are all running normally.
Now check the nodes again:

kubectl get node

All the nodes are now in Ready state.

3.5.7 Deploy coredns

The coredns manifest is as follows:

cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Apply it to the cluster:

kubectl apply -f coredns.yaml

Check that the coredns pod is running:

kubectl get pod -n kube-system |grep coredns

coredns is running normally.

4. Try Out the k8s Cluster

After all this effort the k8s cluster is finally deployed; let's try it out and see whether the basic functionality works.
(1) Deploy busybox
The busybox manifest:

mkdir -p /data/yaml
cd /data/yaml
cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.m.daocloud.io/library/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

The busybox1 manifest:

mkdir -p /data/yaml
cd /data/yaml
cat busybox1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.m.daocloud.io/library/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

The busybox2 manifest:

mkdir -p /data/yaml
cd /data/yaml
cat busybox2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  namespace: test
spec:
  containers:
  - name: busybox
    image: docker.m.daocloud.io/library/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Apply them to the cluster:

kubectl create ns test
kubectl apply -f busybox.yaml
kubectl apply -f busybox1.yaml
kubectl apply -f busybox2.yaml

Check the busybox pods:


kubectl get pod -A -o wide|grep busybox

The pods are running normally.

(2) Verify cluster networking
Ping one pod from another, using the pod IPs shown by the previous command:

kubectl exec busybox -- ping -c 3 172.30.36.66


kubectl exec busybox -- ping -c 3 172.30.235.194

The pods can reach each other, so pod networking is working.
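
In-cluster DNS can be verified the same way against the kube-dns service deployed above (some busybox builds ship a quirky nslookup; substitute an image with a full nslookup if the output looks odd):

kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local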

5. Summary

At last, a basic working k8s cluster is deployed. The whole process is fairly involved; using kubeadm would be simpler, since kubeadm runs the core components as containers inside the cluster (a kubeadm-based deployment will be covered in a future article) and spares you from creating certificates by hand. But deploying this way gives a much clearer picture of the whole cluster and of how the certificates are used. You will no doubt hit plenty of problems along the way; face them head-on, because every problem solved is a little more skill gained. Happy hacking.
