How to schedule pods managed by the same workload onto different hosts

Problem Description

During scheduling, pods managed by the same workload may all be placed on the same host. If that host goes down, every pod becomes unavailable at once. How can pods managed by the same workload be spread across different hosts?

Problem Analysis

Pod anti-affinity solves this. It comes in two forms:

1. requiredDuringSchedulingIgnoredDuringExecution: a hard rule. The scheduler must satisfy it; a pod that cannot be placed stays Pending.
2. preferredDuringSchedulingIgnoredDuringExecution: a soft rule. The scheduler tries to satisfy it, but still places the pod if it cannot.
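Both rules live under spec.affinity.podAntiAffinity in the pod template. A minimal sketch of the structure (app: my-app is a placeholder label; a real spec would normally use only one of the two forms):

affinity:
  podAntiAffinity:
    # hard rule: never co-locate two pods carrying app=my-app on one node
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname
    # soft rule: prefer not to co-locate them; weight ranges 1-100
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname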

Solution

I. requiredDuringSchedulingIgnoredDuringExecution

1. The cluster has three nodes

kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.127   Ready    <none>   73d   v1.20.6
192.168.0.132   Ready    <none>   73d   v1.20.6
192.168.0.186   Ready    <none>   73d   v1.20.6

2. Use the hard pod anti-affinity rule so that pods with the same label must never be scheduled onto the same node

cat antiaffinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: a node may run at most one pod labeled app=nginx
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            # "the same node" is defined by this node label
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx-server
        image: nginx:latest
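topologyKey decides what counts as "the same place". With kubernetes.io/hostname the rule is per node; swapping in the well-known zone label spreads the pods across availability zones instead (a sketch, assuming the nodes carry that label):

            topologyKey: "topology.kubernetes.io/zone"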

3. Deploy nginx

kubectl apply -f antiaffinity.yaml
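Optionally, wait for the rollout to complete before checking placement:

kubectl rollout status deployment/nginx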

4. Check which node each pod landed on (one pod runs on each node)

kubectl get pod -n default -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
nginx-5c4b6b7bcf-6r7lz             1/1     Running   0          14s     172.26.128.187   192.168.0.127   <none>           <none>
nginx-5c4b6b7bcf-7txt6             1/1     Running   0          14s     172.26.128.113   192.168.0.132   <none>           <none>
nginx-5c4b6b7bcf-8vv4t             1/1     Running   0          14s     172.26.128.15    192.168.0.186   <none>           <none>
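Because the rule is hard, a fourth replica has nowhere to go on this three-node cluster: every node already runs a pod labeled app=nginx, so the extra pod stays Pending (kubectl describe pod on it shows a FailedScheduling event). To try it:

kubectl scale deployment/nginx --replicas=4
kubectl get pod -l app=nginx -o wide
kubectl scale deployment/nginx --replicas=3   # scale back down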

II. preferredDuringSchedulingIgnoredDuringExecution

1. The cluster has three nodes

kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.127   Ready    <none>   73d   v1.20.6
192.168.0.132   Ready    <none>   73d   v1.20.6
192.168.0.186   Ready    <none>   73d   v1.20.6

2. Mark one of the nodes as unschedulable, leaving only two schedulable nodes for three replicas; this forces the soft rule to be violated so its behavior can be observed

kubectl cordon 192.168.0.127
kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
192.168.0.127   Ready,SchedulingDisabled   <none>   73d   v1.20.6
192.168.0.132   Ready                      <none>   73d   v1.20.6
192.168.0.186   Ready                      <none>   73d   v1.20.6

3. Use the soft pod anti-affinity rule so that pods with the same label are kept on different nodes where possible

cat antiaffinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx-antiaffinity
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-antiaffinity
    spec:
      affinity:
        podAntiAffinity:
          # soft rule: avoid co-locating pods labeled app=nginx-antiaffinity if possible
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100   # 1-100; a higher weight makes this preference count for more
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx-antiaffinity
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx

4. Deploy nginx

kubectl apply -f antiaffinity.yaml
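If this apply is rejected with a spec.selector "field is immutable" error, the Deployment from section I is still present: the manifest reuses the name nginx but changes spec.selector, which cannot be modified in place. Delete the old Deployment and apply again:

kubectl delete deployment nginx
kubectl apply -f antiaffinity.yaml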

5. Check which node each pod landed on (two pods end up on the same node: with only two schedulable nodes the soft rule cannot be fully satisfied, but the pods are still scheduled instead of staying Pending)

kubectl get pod -n default -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
nginx-7975f67cc8-2zj9c             1/1     Running   0          3m47s   172.26.128.16    192.168.0.186   <none>           <none>
nginx-7975f67cc8-ktkvr             1/1     Running   0          3m43s   172.26.128.115   192.168.0.132   <none>           <none>
nginx-7975f67cc8-vvxwm             1/1     Running   0          3m50s   172.26.128.114   192.168.0.132   <none>           <none>
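When finished, make the cordoned node schedulable again and remove the test Deployment:

kubectl uncordon 192.168.0.127
kubectl delete -f antiaffinity.yaml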

References

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

If you have any other questions, please contact Volcengine technical support.
