ALT Linux Bugzilla – Attachment 15959 Details for Bug 50058
Some pods on the master node remain in the Init:0/1, Pending, or ContainerCreating state
kubectl describe output for the non-working pods
describe.txt (text/plain), 12.43 KB, created by Artem Varaksa on 2024-04-24 12:53:26 MSK
Description:  kubectl describe output for the non-working pods
Filename:     describe.txt
MIME Type:    text/plain
Creator:      Artem Varaksa
Created:      2024-04-24 12:53:26 MSK
Size:         12.43 KB
Name:                 kube-flannel-ds-p7mqt
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 podsec-master/<machine IP address>
Start Time:           Wed, 24 Apr 2024 12:16:29 +0300
Labels:               app=flannel
                      controller-revision-hash=78b9cfb6c5
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Pending
IP:                   <machine IP address>
IPs:
  IP:  <machine IP address>
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni:
    Container ID:
    Image:         registry.altlinux.org/k8s-sisyphus/flannel:v0.24.2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grnmq (ro)
Containers:
  kube-flannel:
    Container ID:
    Image:         registry.altlinux.org/k8s-sisyphus/flannel:v0.24.2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:           kube-flannel-ds-p7mqt (v1:metadata.name)
      POD_NAMESPACE:      kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:  5000
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grnmq (ro)
Conditions:
  Type             Status
  Initialized      False
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-grnmq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---- ----               -------
  Normal  Scheduled  11m  default-scheduler  Successfully assigned kube-flannel/kube-flannel-ds-p7mqt to podsec-master
  Normal  Pulled     11m  kubelet            Container image "registry.altlinux.org/k8s-sisyphus/flannel:v0.24.2" already present on machine
  Normal  Created    11m  kubelet            Created container install-cni
  Normal  Started    11m  kubelet            Started container install-cni


Name:                 coredns-74b4f8d87d-hprkh
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 podsec-master/<machine IP address>
Start Time:           Wed, 24 Apr 2024 12:16:29 +0300
Labels:               k8s-app=kube-dns
                      pod-template-hash=74b4f8d87d
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-74b4f8d87d
Containers:
  coredns:
    Container ID:
    Image:       registry.altlinux.org/k8s-sisyphus/coredns:v1.9.3
    Image ID:
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fl4wt (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-fl4wt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---- ----               -------
  Normal  Scheduled  12m  default-scheduler  Successfully assigned kube-system/coredns-74b4f8d87d-hprkh to podsec-master


Name:                 coredns-74b4f8d87d-k4sj4
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 podsec-master/
Labels:               k8s-app=kube-dns
                      pod-template-hash=74b4f8d87d
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-74b4f8d87d
Containers:
  coredns:
    Image:       registry.altlinux.org/k8s-sisyphus/coredns:v1.9.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2xbd9 (ro)
Conditions:
  Type          Status
  PodScheduled  True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-2xbd9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---- ----               -------
  Normal  Scheduled  12m  default-scheduler  Successfully assigned kube-system/coredns-74b4f8d87d-k4sj4 to podsec-master


Name:                 kube-proxy-9jbtj
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 podsec-master/<machine IP address>
Start Time:           Wed, 24 Apr 2024 12:16:29 +0300
Labels:               controller-revision-hash=6b6d469555
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   <machine IP address>
IPs:
  IP:  <machine IP address>
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:
    Image:         registry.altlinux.org/k8s-sisyphus/kube-proxy:v1.26.11
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      NODE_NAME:  (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f9qnm (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  kube-api-access-f9qnm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---- ----               -------
  Normal  Scheduled  12m  default-scheduler  Successfully assigned kube-system/kube-proxy-9jbtj to podsec-master
  Normal  Pulled     12m  kubelet            Container image "registry.altlinux.org/k8s-sisyphus/kube-proxy:v1.26.11" already present on machine
  Normal  Created    12m  kubelet            Created container kube-proxy
  Normal  Started    12m  kubelet            Started container kube-proxy
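As a quick sketch of how to skim output like the above (assuming the attachment has been saved locally, here under the placeholder name describe.txt), the pod names and their blocking reasons can be pulled out with a small shell helper:

```shell
#!/bin/sh
# Sketch: summarize which containers in a saved `kubectl describe pod` dump
# are stuck and why. The file name is a placeholder for wherever the
# attachment was saved; the grep keeps each "Name:" header line plus the
# indented "Reason:" lines (e.g. PodInitializing, ContainerCreating).
summarize_pods() {
  grep -E '^(Name:|[[:space:]]+Reason:)' "$1"
}

# Example usage (uncomment once the attachment is saved locally):
# summarize_pods describe.txt
```

On this report's data the helper would show the flannel pod waiting in PodInitializing while the coredns and kube-proxy pods wait in ContainerCreating.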