Bug 52587 - Add a dependency on containerd
Summary: Add a dependency on containerd
Status: CLOSED WORKSFORME
Alias: None
Product: Sisyphus
Classification: Development
Component: kubernetes1.31-kubelet
Version: unstable
Hardware: x86_64 Linux
Importance: P5 normal
Assignee: geochip@altlinux.org
QA Contact: qa-sisyphus
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-12-28 16:22 MSK by Vladislav Glinkin
Modified: 2025-04-23 18:46 MSK
CC List: 2 users

See Also:


Attachments

Description Vladislav Glinkin 2024-12-28 16:22:21 MSK
Reproducible regardless of the image.

Package version:
kubernetes1.31-kubelet-1.31.2-alt1.x86_64

According to https://www.altlinux.org/Kubernetes, it is not possible to deploy a cluster because kubelet fails to start:
# kubelet 
I1228 15:48:34.965162    6687 server.go:467] "Kubelet version" kubeletVersion="v1.28.14"
I1228 15:48:34.965222    6687 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1228 15:48:34.965411    6687 server.go:630] "Standalone mode, no API client"
W1228 15:48:34.965895    6687 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "/run/containerd/containerd.sock", }. Err: connection error: desc = "transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory"
E1228 15:48:34.966551    6687 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
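
For reference, kubelet here dials its default CRI endpoint unix:///run/containerd/containerd.sock, which is exactly what fails above. A minimal manual workaround sketch (assuming the Sisyphus containerd package ships a containerd.service unit that creates this socket):
# apt-get install containerd
# systemctl enable --now containerd
# test -S /run/containerd/containerd.sock && echo "containerd socket present"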

# kubeadm init --pod-network-cidr=10.244.0.0/16
...
Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

This also applies to the other kubernetes1.*-kubelet packages.
Comment 1 Artem Varaksa 2025-04-23 18:46:34 MSK
Could not reproduce on sisyphus, p11+378611, p11.

When starting kubelet manually, the container runtime unix:///run/crio/crio.sock has to be specified via a command-line argument or a configuration file. It is better to start kubelet.service instead.
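
For illustration, a sketch of the two ways to point kubelet at CRI-O (the flag and the KubeletConfiguration field are standard upstream kubelet options, not package-specific settings):
# kubelet --container-runtime-endpoint=unix:///run/crio/crio.sock
or, in the KubeletConfiguration file:
containerRuntimeEndpoint: "unix:///run/crio/crio.sock"
and the preferred way, as noted above:
# systemctl enable --now kubelet.service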

containerd is not needed when crio is used.
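
For example, a kubeadm run against CRI-O instead of containerd might look like this (a sketch; --cri-socket is a standard kubeadm flag, and /run/crio/crio.sock is assumed to be provided by the cri-o package):
# systemctl enable --now crio
# kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///run/crio/crio.sock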

However, 1.31 has the bug https://bugzilla.altlinux.org/53963, which may have been the cause of the problem.