Notes on Deployments in Kubernetes

Preface

- These notes come from my K8s studies, written down to help commit things to memory.
- This post covers:
  - Creating a deployment
  - Scaling Pods up and down through a deployment
  - Rolling updates and rollbacks of container images through a deployment
- Scaling Pods via HPA still has some issues (possibly my machine); I'll add to this once I've solved it.
- This topic came out a bit messy and still needs further tidying.

"Love arises one knows not whence, and grows ever deeper; yet sadly most of it turns from deep to shallow, until we forget each other like strangers — so it is with me." — Feng Huo Xi Zhu Hou, Sword Snow Stride (《雪中悍刀行》)
deployment

Deployment is a concept introduced in Kubernetes v1.2 to better solve the problem of Pod orchestration. Internally, a Deployment uses a ReplicaSet to achieve this. Whether you look at a Deployment's purpose, its YAML definition, or its concrete command-line operations, it can be regarded as an upgrade of the RC (ReplicationController): the two are more than 90% similar.

The biggest upgrade of Deployment over RC is that we can know the progress of the Pod "deployment" at any time. Because creating a Pod, scheduling it, binding it to a node, and starting its containers on the target Node takes time, the desired state of "N Pod replicas running" is really the final state of a continuously changing "deployment process".
Typical use cases for Deployments:

- Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see whether it succeeds.
- Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate.
- Roll back to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
- Scale up the Deployment to take on more load.
- Pause the Deployment to apply multiple fixes to its PodTemplateSpec, then resume it to start a new rollout.
- Use the status of the Deployment to determine whether a rollout has become stuck.
- Clean up older ReplicaSets that are no longer needed.
ReplicaSet

The purpose of a ReplicaSet is to maintain a stable set of Pod replicas running at any given time. It is therefore often used to guarantee the availability of a specified number of identical Pods.

How a ReplicaSet works

A ReplicaSet is defined by a set of fields, including:

- a selector that identifies the set of Pods it can acquire,
- a number indicating how many replicas it should maintain,
- a Pod template specifying the Pods it should create to meet the replica count, and so on.
A ReplicaSet fulfills its purpose by creating and deleting Pods as needed to reach the desired replica count. When a ReplicaSet needs to create new Pods, it uses the provided Pod template.

A ReplicaSet is linked to its Pods via the Pods' metadata.ownerReferences field, which specifies the resource that owns the current object. All Pods acquired by a ReplicaSet carry their owning ReplicaSet's identifying information in their ownerReferences field. It is through this link that the ReplicaSet knows the state of the Pods it maintains, and plans its actions accordingly.

A ReplicaSet uses its selector to identify the Pods to acquire. If a Pod has no OwnerReference, or its OwnerReference is not a controller, and it matches a ReplicaSet's selector, it is immediately acquired by that ReplicaSet.
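As an illustration of that ownership link, this is roughly what the ownerReferences entry on an acquired Pod looks like (a sketch; the Pod name and UID below are made up):

```yaml
# Excerpt of `kubectl get pod <pod-name> -o yaml` (illustrative values)
metadata:
  name: frontend-b2zdv            # hypothetical Pod name
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend                # the owning ReplicaSet
    uid: f391f6db-...             # hypothetical UID
    controller: true              # marks the owner as the managing controller
    blockOwnerDeletion: true
```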
When to use a ReplicaSet

A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with many other useful features. We therefore recommend using Deployments instead of directly using ReplicaSets, unless you need custom update orchestration or don't require updates at all.

This actually means that you may never need to manipulate ReplicaSet objects directly: use a Deployment instead, and define your application in the spec section.

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: nginx
```
Preparing the learning environment

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ dir=k8s-deploy-create; mkdir $dir; cd $dir
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get ns
NAME              STATUS   AGE
default           Active   78m
kube-node-lease   Active   79m
kube-public       Active   79m
kube-system       Active   79m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl create ns liruilong-deploy-create
namespace/liruilong-deploy-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl config set-context $(kubectl config current-context) --namespace=liruilong-deploy-create
Context "kubernetes-admin@kubernetes" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl config view | grep namespace
    namespace: liruilong-deploy-create
```
Creating a deployment from a YAML file

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl create deployment web1 --image=nginx --dry-run=client -o yaml > ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ vim ngixndeplog.yaml
```

ngixndeplog.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
```

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl apply -f ngixndeplog.yaml
deployment.apps/web1 created
```
Viewing the created deployment

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get deploy -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
web1   2/3     3            2           37s   nginx        nginx    app=web1
```
Viewing the created ReplicaSet

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR
web1-66b5fd9bc8   3         3         3       4m28s   nginx        nginx    app=web1,pod-template-hash=66b5fd9bc8
```
Viewing the created Pods

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          3m45s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-9lxh2   1/1     Running   0          3m45s   10.244.171.130   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          3m45s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
```
Scaling Pods up and down

In real production systems we often run into scenarios where a service needs to be scaled out, or where the number of service instances needs to be reduced because resources are tight or the workload has dropped. In those cases we can use the scale mechanism of a Deployment/RC to do the work. Kubernetes provides both a manual and an automatic mode for scaling Pods.

In manual mode, running the kubectl scale command against a Deployment/RC sets the Pod replica count in one step.

In automatic mode, the user specifies a range for the Pod replica count based on some performance metric or a custom business metric, and the system automatically adjusts the count within that range as the metric changes.
Manual mode

Changing the replica count on the command line: kubectl scale deployment web1 --replicas=2

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl scale deployment web1 --replicas=2
deployment.apps/web1 scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          8m19s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          8m19s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
```
Changing it interactively: kubectl edit deployment web1

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl edit deployment web1
deployment.apps/web1 edited
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pod -o wide
NAME                    READY   STATUS              RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running             0          9m56s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-9lnds   0/1     ContainerCreating   0          6s      <none>           vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running             0          9m56s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
```
Changing the YAML file and re-applying it

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ sed -i 's/replicas: 3/replicas: 2/' ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl apply -f ngixndeplog.yaml
deployment.apps/web1 configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          12m   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          12m   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
```
HPA automatic mode

Starting with Kubernetes v1.1, a controller called the Horizontal Pod Autoscaler (HPA) provides automatic Pod scale-out and scale-in based on CPU usage.

The HPA controller runs on a period defined by the kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period (default 30s). It periodically monitors the CPU usage of the target Pods and, when the conditions are met, adjusts the replica count of the ReplicationController or Deployment to match the average Pod CPU utilization defined by the user. Pod CPU usage comes from the metrics-server component, so metrics-server must be installed in advance.

The HPA can scale dynamically based on memory, CPU, or request concurrency.

An HPA can be created quickly with the kubectl autoscale command, or from a YAML file. Before creating the HPA, a Deployment/RC object must already exist, and the Pods in that Deployment/RC must define a resources.requests.cpu value. If that value is not set, metrics-server cannot collect the Pods' CPU usage and the HPA will not work properly.
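As a sketch of the YAML route, the same HPA that `kubectl autoscale` creates can be declared as a manifest. The example below uses the `autoscaling/v2` API (GA in v1.23; on the v1.22 cluster used in these notes the equivalent shape lives under `autoscaling/v2beta2`), and targets the `web1` Deployment from the earlier example:

```yaml
# Hypothetical manifest equivalent to:
#   kubectl autoscale deployment web1 --min=2 --max=10 --cpu-percent=80
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web1
spec:
  scaleTargetRef:           # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web1
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # target average CPU, as % of requests.cpu
```

The v2 API is also where memory-based metrics are expressed (`name: memory` with an `AverageValue` target), which the v1 `--cpu-percent` flag cannot do.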
Setting up metrics-server monitoring

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$ kubectl top nodes
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
vms81.liruilongs.github.io   401m         20%    1562Mi          40%
vms82.liruilongs.github.io   228m         11%    743Mi           19%
vms83.liruilongs.github.io   221m         11%    720Mi           18%
```
Configuring the HPA

Set the replica count to a minimum of 2 and a maximum of 10, scaling when CPU usage exceeds 80%.

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl autoscale deployment web1 --min=2 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/web1 autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/80%   2         10        2          15s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl delete hpa web1
horizontalpodautoscaler.autoscaling "web1" deleted
```
Fixing the current CPU usage showing as `<unknown>` — I don't have a solution for this yet. CPU requests/limits were added to the Pod template:

ngixndeplog.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
```
Testing the HPA

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ cat ngixndeplog.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginxdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: web
        resources:
          requests:
            cpu: 100m
      restartPolicy: Always
```

Set up the HPA: kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get deployments.apps
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginxdep   2/2     2            2           8m8s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/nginxdep autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   0          97s   10.244.171.140   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-cb57p   1/1     Running   0          97s   10.244.70.10     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get hpa -o wide
NAME       REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginxdep   Deployment/nginxdep   <unknown>/50%   1         5         2          21s
```
Create a Service, then simulate calls against it

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl expose --name=nginxsvc deployment nginxdep --port=80
service/nginxsvc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get svc -o wide
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
nginxsvc   ClusterIP   10.104.147.65   <none>        80/TCP    9s    app=nginx
```
Testing a call to the Service

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ ansible 192.168.26.83 -m shell -a "curl http://10.104.147.65"
192.168.26.83 | CHANGED | rc=0 >>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   615  100   615    0     0   304k      0 --:--:-- --:--:-- --:--:--  600k
```
Install httpd-tools (the Apache HTTP load-testing toolkit) and simulate load with ab

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ ansible 192.168.26.83 -m shell -a "yum install httpd-tools -y"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ ansible 192.168.26.83 -m shell -a "ab -t 600 -n 1000000 -c 1000 http://10.104.147.65/" &
[1] 123433
```
Then watch how the Pods change.
deployment robustness test

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ kubectl scale deployment nginxdep --replicas=3
deployment.apps/nginxdep scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS        AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (3m19s ago)   47m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0               30s   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-vz5qt   1/1     Running   0               30s   10.244.70.11     vms83.liruilongs.github.io   <none>           <none>
```
After shutting down vms83.liruilongs.github.io and waiting a while, all the Pods end up running on vms82.liruilongs.github.io:

```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready      control-plane,master   47h   v1.22.2
vms82.liruilongs.github.io   Ready      <none>                 47h   v1.22.2
vms83.liruilongs.github.io   NotReady   <none>                 47h   v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl get pods -o wide
NAME                        READY   STATUS        RESTARTS      AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running       1 (20m ago)   64m     10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running       0             17m     10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running       0             9m48s   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-vz5qt   1/1     Terminating   0             17m     10.244.70.11     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS      AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (27m ago)   71m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0             24m   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running   0             16m   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl top pods
NAME                        CPU(cores)   MEMORY(bytes)
nginxdep-645bf755b9-27hzn   0m           4Mi
nginxdep-645bf755b9-4dkpp   0m           1Mi
nginxdep-645bf755b9-9hzf2   0m           1Mi
```
When vms83.liruilongs.github.io comes back up, the Pods do not move back to it:

```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   2d    v1.22.2
vms82.liruilongs.github.io   Ready    <none>                 2d    v1.22.2
vms83.liruilongs.github.io   Ready    <none>                 2d    v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS      AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (27m ago)   71m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0             24m   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running   0             16m   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
```
deployment: updating and rolling back images

When a service in the cluster needs to be upgraded, we have to stop all Pods currently associated with that service, pull the new image, and create new Pods. In a large cluster this becomes a real challenge, and a stop-everything-then-upgrade approach leads to a long window of service unavailability.

Kubernetes provides a rolling-update feature to solve this. If the Pods were created by a Deployment, the user can modify the Deployment's Pod definition (spec.template) or image name at runtime and apply it to the Deployment object, and the system completes the Deployment's update automatically. If an error occurs during the update, the Pod version can be restored with a rollback operation.
Preparing the environment

```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$ kubectl scale deployment nginxdep --replicas=5
deployment.apps/nginxdep scaled
```

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ ansible node -m shell -a "docker pull nginx:1.9"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ ansible node -m shell -a "docker pull nginx:1.7.9"
```
deployment: rolling-updating the image

Now the Pods' image needs to be updated to nginx:1.9. We can set a new image name for the Deployment with kubectl set image deployment/<deployment-name> <container-name>=nginx:1.9 --record:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl set image deployment/nginxdep web=nginx:1.9 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods
NAME                        READY   STATUS              RESTARTS      AGE
nginxdep-59d7c6b6f-6hdb8    0/1     ContainerCreating   0             26s
nginxdep-59d7c6b6f-bd5z2    0/1     ContainerCreating   0             26s
nginxdep-59d7c6b6f-jb2j7    1/1     Running             0             26s
nginxdep-59d7c6b6f-jd5df    0/1     ContainerCreating   0             4s
nginxdep-645bf755b9-27hzn   1/1     Running             1 (51m ago)   95m
nginxdep-645bf755b9-4dkpp   1/1     Running             0             48m
nginxdep-645bf755b9-hkcqx   1/1     Running             0             18m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods
NAME                        READY   STATUS              RESTARTS      AGE
nginxdep-59d7c6b6f-6hdb8    0/1     ContainerCreating   0             51s
nginxdep-59d7c6b6f-bd5z2    1/1     Running             0             51s
nginxdep-59d7c6b6f-jb2j7    1/1     Running             0             51s
nginxdep-59d7c6b6f-jd5df    0/1     ContainerCreating   0             29s
nginxdep-59d7c6b6f-prfzd    0/1     ContainerCreating   0             14s
nginxdep-645bf755b9-27hzn   1/1     Running             1 (51m ago)   96m
nginxdep-645bf755b9-4dkpp   1/1     Running             0             49m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
nginxdep-59d7c6b6f-6hdb8   1/1     Running   0          2m28s
nginxdep-59d7c6b6f-bd5z2   1/1     Running   0          2m28s
nginxdep-59d7c6b6f-jb2j7   1/1     Running   0          2m28s
nginxdep-59d7c6b6f-jd5df   1/1     Running   0          2m6s
nginxdep-59d7c6b6f-prfzd   1/1     Running   0          111s
```
The AGE column shows the nginx version being rolled from latest to 1.9, and then on to 1.7.9:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl set image deployment/nginxdep web=nginx:1.7.9 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
nginxdep-66587778f6-9jqfz   1/1     Running   0          4m37s
nginxdep-66587778f6-jbsww   1/1     Running   0          5m2s
nginxdep-66587778f6-lwkpg   1/1     Running   0          5m1s
nginxdep-66587778f6-tmd4l   1/1     Running   0          4m41s
nginxdep-66587778f6-v9f28   1/1     Running   0          5m2s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl describe pods nginxdep-66587778f6-jbsww | grep Image:
    Image:          nginx:1.7.9
```
kubectl rollout pause deployment nginxdep can be used to pause the rollout, so that a complex multi-step change can be assembled before resuming:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl rollout pause deployment nginxdep
deployment.apps/nginxdep paused
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get deployments -o wide
NAME       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES        SELECTOR
nginxdep   5/5     5            5           147m   web          nginx:1.7.9   app=nginx
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl set image deployment/nginxdep web=nginx
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl rollout history deployment nginxdep
deployment.apps/nginxdep
REVISION  CHANGE-CAUSE
4         kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
5         kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
6         kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl rollout resume deployment nginxdep
deployment.apps/nginxdep resumed
```
deployment: rolling back the image

This works much like git: you can roll back to any revision by its ID.

Viewing the revision history:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl rollout history deployment nginxdep
deployment.apps/nginxdep
REVISION  CHANGE-CAUSE
1         kubectl set image deployment/nginxdep nginxdep=nginx:1.9 --record=true
2         kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
3         kubectl set image deployment/nginxdep web=nginx:1.7.9 --record=true
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get deployments nginxdep -o wide
NAME       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES        SELECTOR
nginxdep   5/5     5            5           128m   web          nginx:1.7.9   app=nginx
```
Rolling back to a revision:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl rollout undo deployment nginxdep --to-revision=2
deployment.apps/nginxdep rolled back
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods
NAME                        READY   STATUS              RESTARTS   AGE
nginxdep-59d7c6b6f-ctdh2    0/1     ContainerCreating   0          6s
nginxdep-59d7c6b6f-dk67c    0/1     ContainerCreating   0          6s
nginxdep-59d7c6b6f-kr74k    0/1     ContainerCreating   0          6s
nginxdep-66587778f6-9jqfz   1/1     Running             0          23m
nginxdep-66587778f6-jbsww   1/1     Running             0          23m
nginxdep-66587778f6-lwkpg   1/1     Running             0          23m
nginxdep-66587778f6-v9f28   1/1     Running             0          23m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get pods
NAME                        READY   STATUS              RESTARTS   AGE
nginxdep-59d7c6b6f-7j9z7    0/1     ContainerCreating   0          37s
nginxdep-59d7c6b6f-ctdh2    1/1     Running             0          59s
nginxdep-59d7c6b6f-dk67c    1/1     Running             0          59s
nginxdep-59d7c6b6f-f2sb4    0/1     ContainerCreating   0          21s
nginxdep-59d7c6b6f-kr74k    1/1     Running             0          59s
nginxdep-66587778f6-jbsww   1/1     Running             0          24m
```
Viewing the details of a revision:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl rollout history deployment nginxdep --revision=4
deployment.apps/nginxdep with revision #4
Pod Template:
  Labels:       app=nginx
        pod-template-hash=59d7c6b6f
  Annotations:  kubernetes.io/change-cause: kubectl set image deployment/nginxdep web=nginx:1.9 --record=true
  Containers:
   web:
    Image:      nginx:1.9
    Port:       <none>
    Host Port:  <none>
    Requests:
      cpu:      100m
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
```
Rolling-update parameters

- maxSurge: how many extra Pods may be created at once during an update; old plus new replicas will not exceed the desired count plus this value (an absolute number or a percentage).
- maxUnavailable: how many Pods may be unavailable during the update, i.e. how many Pods may be removed at once.

These can be changed with:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl edit deployments nginxdep
```

The defaults:

```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$ kubectl get deployments nginxdep -o yaml | grep -A 5 strategy:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
```
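As a sketch, if you wanted a conservative rollout that never drops below the desired replica count, the two parameters could be set explicitly in the Deployment spec (values here are illustrative, not defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 Pod above the desired count at any time
      maxUnavailable: 0    # never go below the desired count during the update
```

With these values the Deployment creates one new Pod, waits for it to become Ready, then removes one old Pod, repeating until the rollout completes.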
- type
  - Recreate: setting spec.strategy.type: Recreate means that when updating Pods, the Deployment first kills all running Pods and then creates new ones.
  - RollingUpdate: setting spec.strategy.type: RollingUpdate means the Deployment updates Pods one by one in a rolling fashion. The two parameters under spec.strategy.rollingUpdate (maxUnavailable and maxSurge) control the rolling-update process.
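For comparison, the Recreate strategy (useful when old and new versions cannot run side by side, e.g. they share an exclusive volume) needs no sub-parameters at all (sketch):

```yaml
spec:
  strategy:
    type: Recreate   # terminate every old Pod before any new Pod is created
```

The trade-off is a full service outage between terminating the old Pods and the new ones becoming Ready, which is exactly what the rolling update above avoids.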