Q1: spec.containers: Required value
Error message:
[root@master231 pods]# kubectl apply -f 01-pods-xiuxian.yaml
The Pod "oldboyedu-linux95-xiuxian" is invalid: spec.containers: Required value
[root@master231 pods]#
Cause:
The "spec.containers" field is missing from the manifest.
Solution:
Define "spec.containers" in the manifest, as in the sketch below.
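A minimal sketch of a Pod manifest with "spec.containers" defined (the image tag is an assumption, not from the original transcript):
apiVersion: v1
kind: Pod
metadata:
  name: oldboyedu-linux95-xiuxian
spec:
  containers:
  - name: xiuxian
    image: nginx:1.24    # any valid image works; nginx:1.24 is illustrative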
Q2: spec.containers[0].image: Required value
Error message:
[root@master231 pods]# kubectl apply -f 01-pods-xiuxian.yaml
The Pod "oldboyedu-linux95-xiuxian" is invalid: spec.containers[0].image: Required value
[root@master231 pods]#
Cause:
The "spec.containers[0].image" field is missing: the first container entry has no image.
Solution:
Add an "image" field to every entry under "spec.containers", as shown below.
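To look up the fields available under containers, kubectl explain can be consulted directly against the cluster; required fields are flagged with -required- in its output:
kubectl explain pod.spec.containers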
Q3: pods "oldboyedu-linux95-xiuxian" already exists
Error message:
[root@master231 pods]# kubectl create -f 01-pods-xiuxian.yaml
Error from server (AlreadyExists): error when creating "01-pods-xiuxian.yaml": pods "oldboyedu-linux95-xiuxian" already exists
[root@master231 pods]#
Cause:
A Pod named "oldboyedu-linux95-xiuxian" already exists in the cluster.
Solution (either option works, see below):
- 1. Delete the existing Pod first;
- 2. Use a different Pod name.
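For example, with the same file name as in the transcript, either recreate the Pod or update the existing object in place:
kubectl delete -f 01-pods-xiuxian.yaml && kubectl create -f 01-pods-xiuxian.yaml
# or let apply patch the existing object instead of failing with AlreadyExists:
kubectl apply -f 01-pods-xiuxian.yaml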
Q4: invalid label spec: class=linux95,school=oldboyedu
Error message:
[root@master231 pods]# kubectl label pod xiuxian class=linux95,school=oldboyedu
error: invalid label spec: class=linux95,school=oldboyedu
[root@master231 pods]#
Cause:
Wrong label syntax: kubectl label expects multiple key=value pairs separated by spaces, not commas.
Solution:
Separate the labels with spaces, as shown below.
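The same labels applied with the correct, space-separated syntax:
kubectl label pod xiuxian class=linux95 school=oldboyedu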
Q5: error: 'school' already has a value (oldboyedu), and --overwrite is false
Error message:
[root@master231 pods]# kubectl label pod xiuxian school=laonanhai
error: 'school' already has a value (oldboyedu), and --overwrite is false
[root@master231 pods]#
Cause:
The label key already has a value; changing it requires the "--overwrite" flag.
Solution:
Add the "--overwrite" flag, as shown below.
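The same command from the transcript, with the flag added:
kubectl label pod xiuxian school=laonanhai --overwrite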
Q6: metadata.name: Invalid value: "oldboyedu-linux95-nodeName": a lowercase
Error message:
[root@master231 pods]# kubectl apply -f 03-pods-xiuxian-nodeName.yaml
The Pod "oldboyedu-linux95-nodeName" is invalid: metadata.name: Invalid value: "oldboyedu-linux95-nodeName": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[root@master231 pods]#
Cause:
Metadata names must be lowercase RFC 1123 subdomains, i.e. match the regex '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'.
Solution:
Use a compliant, all-lowercase name.
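For example, lowercasing the name from the transcript makes it pass validation:
metadata:
  name: oldboyedu-linux95-nodename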
Q7: unable to retrieve container logs for …
Error message:
[root@master231 pods]# kubectl logs -f oldboyedu-multiple-logs -c c1 -p
unable to retrieve container logs for docker://7a10e5330ee28f2ba44552ba36acd9b120258d85e07c11d9074ebb8af9f5bda2
Cause:
The previous container has been removed, so its logs can no longer be retrieved.
Solution:
Drop the -p option, since the previous container no longer exists.
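The same command without -p tails the current container's logs instead:
kubectl logs -f oldboyedu-multiple-logs -c c1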
Q8: error validating data: ValidationError(Pod.spec.containers[0]): unknown field "restartPolicy"
Error message:
[root@master231 pods]# kubectl apply -f 13-pods-explain.yaml
error: error validating "13-pods-explain.yaml": error validating data: ValidationError(Pod.spec.containers[0]): unknown field "restartPolicy" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
[root@master231 pods]#
Cause:
The field is written at the wrong level: "restartPolicy" belongs to the Pod spec, not to an individual container.
Solution:
Use the kubectl explain command to verify field names and where they belong, as shown below.
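A sketch of the corrected placement (container name and image are assumptions):
spec:
  restartPolicy: Always    # Pod-level field, not under containers[]
  containers:
  - name: c1
    image: nginx:1.24
kubectl explain pod.spec.restartPolicy prints the field's documentation and confirms its position.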
Q9: [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
Error message:
[root@master231 pods]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-casedemo-mysql 0/1 Error 3 (35s ago) 52s 10.0.0.233 worker233
[root@master231 pods]#
[root@master231 pods]# kubectl logs oldboyedu-casedemo-mysql
2025-02-11 08:40:56+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.36-1.el8 started.
2025-02-11 08:40:56+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2025-02-11 08:40:56+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.36-1.el8 started.
2025-02-11 08:40:56+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
You need to specify one of the following as an environment variable:
- MYSQL_ROOT_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
[root@master231 pods]#
Cause:
The MySQL image requires a password-related environment variable at startup.
Solution:
Pass the environment variable via the "env" field, as in the sketch below.
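A minimal sketch of the env block (the image tag matches the log above; the password value is a placeholder):
spec:
  containers:
  - name: mysql
    image: mysql:8.0.36
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "changeme"    # placeholder password, set your own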
Q10: spec.containers.env.value of type string
Error message:
[root@master231 pods]# kubectl apply -f 19-pods-casedemo-mysql.yaml
Error from server (BadRequest): error when creating "19-pods-casedemo-mysql.yaml": Pod in version "v1" cannot be handled as a Pod: json: cannot unmarshal number into Go struct field EnvVar.spec.containers.env.value of type string
[root@master231 pods]#
Cause:
The value passed does not match the type the API expects: "env.value" must be a string, but YAML parsed the unquoted value as a number.
Solution:
Wrap special values such as "true", "false", numbers, "yes" and "no" in double quotes so they are forced to strings.
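For example, quoting forces the value to a string (sketch):
env:
- name: MYSQL_ROOT_PASSWORD
  value: "123456"    # unquoted, YAML would parse 123456 as a number and the API would reject it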
Q11: The ReplicationController "oldboyedu-rc" is invalid: spec.selector: Required value
Error message:
[root@master231 replicationcontrollers]# kubectl apply -f 01-rc-xiuxian.yaml
The ReplicationController "oldboyedu-rc" is invalid: spec.selector: Required value
[root@master231 replicationcontrollers]#
Cause:
The ReplicationController manifest is missing the "spec.selector" field.
Solution:
Fill in the "spec.selector" field in the manifest, as in the sketch below.
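A minimal sketch of an RC manifest with the selector filled in (label key/value and image are assumptions):
apiVersion: v1
kind: ReplicationController
metadata:
  name: oldboyedu-rc
spec:
  replicas: 3
  selector:
    apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian    # must match spec.selector, see Q12
    spec:
      containers:
      - name: c1
        image: nginx:1.24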
Q12: selector does not match template labels
Error message:
[root@master231 replicationcontrollers]# kubectl apply -f 01-rc-xiuxian.yaml
The ReplicationController "oldboyedu-rc" is invalid: spec.template.metadata.labels: Invalid value: map[string]string(nil): `selector` does not match template `labels`
[root@master231 replicationcontrollers]#
Cause:
The "spec.selector" does not match the labels under "spec.template.metadata.labels" (here the template labels are missing entirely, hence map[string]string(nil)).
Solution:
Make sure every key/value in the selector also appears in the template labels, as shown below.
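The key point, as a sketch (label key/value assumed): the selector and the template labels must carry identical entries:
selector:
  apps: xiuxian
template:
  metadata:
    labels:
      apps: xiuxian    # same key and value as the selector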
Q13: spec.template.spec.restartPolicy: Required value: valid values: "OnFailure", "Never"
Error message:
[root@master231 jobs]# kubectl apply -f 01-jobs-xiuxian.yaml
The Job "jobs-xiuxian" is invalid: spec.template.spec.restartPolicy: Required value: valid values: "OnFailure", "Never"
[root@master231 jobs]#
Cause:
A Job must declare a restart policy explicitly, and "Always" is not supported.
Solution:
Set "restartPolicy" to "OnFailure" or "Never" in the manifest, as in the sketch below.
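A minimal Job sketch with the restart policy declared (image and command are assumptions):
apiVersion: batch/v1
kind: Job
metadata:
  name: jobs-xiuxian
spec:
  template:
    spec:
      restartPolicy: Never    # or OnFailure; Always is rejected for Jobs
      containers:
      - name: c1
        image: alpine:3.18
        command: ["echo", "done"]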
Q14: missing required field "schedule" in io.k8s.api.batch.v1.CronJobSpec
Error message:
[root@master231 cronjobs]# kubectl apply -f 01-cj-xiuxian.yaml
error: error validating "01-cj-xiuxian.yaml": error validating data: ValidationError(CronJob.spec): missing required field "schedule" in io.k8s.api.batch.v1.CronJobSpec; if you choose to ignore these errors, turn validation off with --validate=false
[root@master231 cronjobs]#
Cause:
The CronJob manifest is missing the required "schedule" field.
Solution:
Add the "schedule" field to the manifest, as in the sketch below.
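A minimal CronJob sketch with the schedule field added (image and command are assumptions):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cj-xiuxian
spec:
  schedule: "* * * * *"    # standard cron syntax; here: every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c1
            image: alpine:3.18
            command: ["date"]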
Q15: line 9: did not find expected alphabetic or numeric character
Error message:
[root@master231 cronjobs]# kubectl apply -f 01-cj-xiuxian.yaml
error: error parsing 01-cj-xiuxian.yaml: error converting YAML to JSON: yaml: line 9: did not find expected alphabetic or numeric character
[root@master231 cronjobs]#
Cause:
There is a YAML syntax error at line 9 of the manifest, typically bad indentation or a stray character.
Solution:
Check the syntax and indentation at line 9 of the file; validating client-side first also helps, as shown below.
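A client-side dry run parses and validates the manifest without touching the cluster, surfacing such YAML errors early:
kubectl apply --dry-run=client -f 01-cj-xiuxian.yaml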
Q16: error: unable to drain node "worker233" due to error: cannot delete DaemonSet-managed Pods
Error message:
[root@master231 scheduler]# kubectl drain worker233
node/worker233 cordoned
error: unable to drain node "worker233" due to error:cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-flannel/kube-flannel-ds-57zww, kube-system/kube-proxy-pzjt7, continuing command...
There are pending nodes to be drained:
worker233
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-flannel/kube-flannel-ds-57zww, kube-system/kube-proxy-pzjt7
[root@master231 scheduler]#
Cause:
kubectl drain refuses to evict Pods managed by a DaemonSet.
Solution:
Add the "--ignore-daemonsets" flag to skip DaemonSet-managed Pods when draining, as shown below.
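The same drain command with the flag added:
kubectl drain worker233 --ignore-daemonsets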