Oct 25 17:11:14 podsec-master useradd[15412]: new group: name=u7s_admin_temp, GID=501
Oct 25 17:11:14 podsec-master useradd[15412]: new user: name=u7s_admin_temp, UID=501, GID=501, home=/home/u7s_admin_temp, shell=/bin/bash, from=/dev/pts/0
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/passwd» was written to
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/passwd» was moved into place, adding watch
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/group» was written to
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/group» was moved into place, adding watch
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/passwd» (19)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/group» (20)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/passwd» (19)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/group» (20)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/passwd» (19)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/group» (20)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master userdel[15423]: delete user 'u7s_admin_temp'
Oct 25 17:11:14 podsec-master userdel[15423]: removed group 'u7s_admin_temp' owned by 'u7s_admin_temp'
Oct 25 17:11:14 podsec-master userdel[15423]: removed shadow group 'u7s_admin_temp' owned by 'u7s_admin_temp'
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/passwd» was written to
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/passwd» was moved into place, adding watch
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/group» was written to
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitored file «/etc/group» was moved into place, adding watch
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/passwd» (21)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/group» (22)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/passwd» (21)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/nsswitch.conf» (5)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring file «/etc/group» (22)
Oct 25 17:11:14 podsec-master nscd[2500]: 2500 monitoring directory «/etc» (2)
Oct 25 17:11:14 podsec-master dbus-daemon[2351]: [system] Activating via systemd: service name='org.freedesktop.machine1' unit='dbus-org.freedesktop.machine1.service' requested by ':1.12' (uid=0 pid=15438 comm="machinectl shell u7s-admin@ /bin/sh /usr/libexec/p")
Oct 25 17:11:14 podsec-master systemd[1]: Created slice machine.slice - Virtual Machine and Container Slice.
Oct 25 17:11:14 podsec-master systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 25 17:11:14 podsec-master systemd[1]: Starting systemd-machined.service - Virtual Machine and Container Registration Service...
Oct 25 17:11:14 podsec-master dbus-daemon[2351]: [system] Successfully activated service 'org.freedesktop.machine1'
Oct 25 17:11:14 podsec-master systemd[1]: Started systemd-machined.service - Virtual Machine and Container Registration Service.
Oct 25 17:11:14 podsec-master systemd[1]: Created slice system-container\x2dshell.slice - Slice /system/container-shell.
Oct 25 17:11:14 podsec-master systemd[1]: Started container-shell@2.service - Shell for User u7s-admin.
Oct 25 17:11:14 podsec-master (sh)[15440]: pam_tcb(login:session): Session opened for u7s-admin by u7s-admin(uid=0)
Oct 25 17:11:14 podsec-master systemd-logind[2365]: New session 7 of user u7s-admin.
Oct 25 17:11:14 podsec-master systemd[1]: Created slice user-482.slice - User Slice of UID 482.
Oct 25 17:11:14 podsec-master systemd[1]: Starting user-runtime-dir@482.service - User Runtime Directory /run/user/482...
Oct 25 17:11:14 podsec-master systemd[1]: Finished user-runtime-dir@482.service - User Runtime Directory /run/user/482.
Oct 25 17:11:14 podsec-master systemd[1]: Starting user@482.service - User Manager for UID 482...
Oct 25 17:11:14 podsec-master (systemd)[15444]: pam_tcb(systemd-user:session): Session opened for u7s-admin by (uid=0)
Oct 25 17:11:14 podsec-master systemd[15444]: Queued start job for default target default.target.
Oct 25 17:11:15 podsec-master systemd[15444]: Created slice app.slice - User Application Slice.
Oct 25 17:11:15 podsec-master systemd[15444]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Oct 25 17:11:15 podsec-master systemd[15444]: Reached target paths.target - Paths.
Oct 25 17:11:15 podsec-master systemd[15444]: Reached target timers.target - Timers.
Oct 25 17:11:15 podsec-master systemd[15444]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 25 17:11:15 podsec-master systemd[15444]: Starting systemd-tmpfiles-setup.service - Create User's Volatile Files and Directories...
Oct 25 17:11:15 podsec-master systemd[15444]: Finished systemd-tmpfiles-setup.service - Create User's Volatile Files and Directories.
Oct 25 17:11:15 podsec-master systemd[15444]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 25 17:11:15 podsec-master systemd[15444]: Reached target sockets.target - Sockets.
Oct 25 17:11:15 podsec-master systemd[15444]: Reached target basic.target - Basic System.
Oct 25 17:11:15 podsec-master systemd[15444]: Reached target default.target - Main User Target.
Oct 25 17:11:15 podsec-master systemd[15444]: Startup finished in 86ms.
Oct 25 17:11:15 podsec-master systemd[1]: Started user@482.service - User Manager for UID 482.
Oct 25 17:11:15 podsec-master systemd[1]: Started session-7.scope - Session 7 of User u7s-admin.
Oct 25 17:11:15 podsec-master u7s-admin[15463]: =============================================== KUBEADM =====================================
Oct 25 17:11:15 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.435673 delay 0.000446, next query 31s
Oct 25 17:11:16 podsec-master systemd[15444]: rootlesskit.service: unit configures an IP firewall, but not running as root.
Oct 25 17:11:16 podsec-master systemd[15444]: rootlesskit.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 25 17:11:16 podsec-master systemd[15444]: Started rootlesskit.service - Usernetes RootlessKit service (crio).
Oct 25 17:11:16 podsec-master kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap0: link becomes ready
Oct 25 17:11:16 podsec-master rootlesskit.sh[15514]: [INFO] RootlessKit ready, PID=15484, state directory=/run/user/482/usernetes/rootlesskit .
Oct 25 17:11:16 podsec-master rootlesskit.sh[15514]: [INFO] Hint: You can enter RootlessKit namespaces by running `nsenter -U --preserve-credential -n -m -t 15484`.
Oct 25 17:11:16 podsec-master rootlesskit.sh[15552]: 1
Oct 25 17:11:16 podsec-master rootlesskit.sh[15558]: 2
Oct 25 17:11:16 podsec-master rootlesskit.sh[15564]: 3
Oct 25 17:11:16 podsec-master rootlesskit.sh[15570]: 4
Oct 25 17:11:16 podsec-master rootlesskit.sh[15577]: 5
Oct 25 17:11:16 podsec-master rootlesskit.sh[15583]: 6
Oct 25 17:11:16 podsec-master rootlesskit.sh[15589]: 7
Oct 25 17:11:16 podsec-master rootlesskit.sh[15595]: 8
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.434769721+03:00" level=info msg="Starting CRI-O, version: 1.26.4, git: unknown(clean)"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.438943626+03:00" level=info msg="Node configuration value for hugetlb cgroup is false"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.438963828+03:00" level=info msg="Node configuration value for pid cgroup is true"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.439018468+03:00" level=info msg="Node configuration value for memoryswap cgroup is true"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.439029250+03:00" level=info msg="Node configuration value for cgroup v2 is true"
Oct 25 17:11:16 podsec-master systemd[15444]: Created slice session.slice - User Core Session Slice.
Oct 25 17:11:16 podsec-master systemd[15444]: Starting dbus.service - D-Bus User Message Bus...
Oct 25 17:11:16 podsec-master systemd[15444]: Started dbus.service - D-Bus User Message Bus.
Oct 25 17:11:16 podsec-master dbus-daemon[15617]: [session uid=482 pid=15617] Successfully activated service 'org.freedesktop.systemd1'
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.466875434+03:00" level=info msg="Node configuration value for systemd CollectMode is true"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.474482803+03:00" level=info msg="Node configuration value for systemd AllowedCPUs is true"
Oct 25 17:11:16 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/compat1422179571/lower1' does not support file handles, falling back to xino=off.
Oct 25 17:11:16 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/metacopy-check2014869882/l1' does not support file handles, falling back to xino=off.
Oct 25 17:11:16 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/opaque-bug-check2877385725/l2' does not support file handles, falling back to xino=off.
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.491977234+03:00" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494443486+03:00" level=info msg="Checkpoint/restore support disabled"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494465401+03:00" level=info msg="Using seccomp default profile when unspecified: true"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494473859+03:00" level=info msg="Using the internal default seccomp profile"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494482459+03:00" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494490328+03:00" level=info msg="No blockio config file specified, blockio not configured"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494497520+03:00" level=info msg="RDT not available in the host system"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.494508742+03:00" level=info msg="Using conmon executable: /usr/bin/conmon"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.496774992+03:00" level=info msg="Conmon does support the --sync option"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.496796112+03:00" level=info msg="Conmon does support the --log-global-size-max option"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.496811922+03:00" level=info msg="Using conmon executable: /usr/bin/conmon"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.498569966+03:00" level=info msg="Conmon does support the --sync option"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.498587651+03:00" level=info msg="Conmon does support the --log-global-size-max option"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.504320721+03:00" level=info msg="Found CNI network cbr0 (type=flannel) at /etc/cni/net.d/10-flannel.conflist"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.513964181+03:00" level=info msg="Found CNI network u7s-bridge (type=bridge) at /etc/cni/net.d/50-bridge.conf"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.529241724+03:00" level=info msg="Found CNI network 99-loopback.conf (type=loopback) at /etc/cni/net.d/99-loopback.conf"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.529277999+03:00" level=info msg="Updated default CNI network name to cbr0"
Oct 25 17:11:16 podsec-master kernel: bpfilter: Loaded bpfilter_umh pid 15656
Oct 25 17:11:16 podsec-master unknown: Started bpfilter
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.560766845+03:00" level=warning msg="Error encountered when checking whether cri-o should wipe containers: open /run/user/482/usernetes/crio/version: no such file or directory"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.560985244+03:00" level=info msg="Starting seccomp notifier watcher"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.561068552+03:00" level=info msg="Create NRI interface"
Oct 25 17:11:16 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:16.561079462+03:00" level=info msg="NRI interface is disabled in the configuration."
Oct 25 17:11:19 podsec-master root[15676]: /usr/libexec/podsec/u7s/bin/_kubeadm.sh: TIME=17:11:19.077890116 UID=0 PID=15484 PARS=init
Oct 25 17:11:19 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:19.675183266+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3" id=afbcf9a7-d7cd-4853-81c1-a6be32c2d1a9 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:11:19 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:19.675731627+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3 not found" id=afbcf9a7-d7cd-4853-81c1-a6be32c2d1a9 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:11:19 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:19.700022505+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3" id=4e5fb160-4ee0-44b1-a7ec-3cbdf877cf6c name=/runtime.v1.ImageService/PullImage
Oct 25 17:11:19 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:19.700384050+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3\""
Oct 25 17:11:19 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:11:19.982267862+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3\""
Oct 25 17:11:31 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.423575 delay 0.043510, next query 32s
Oct 25 17:11:39 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.421254 delay 0.003200, next query 31s
Oct 25 17:11:46 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.417319 delay 0.000618, next query 34s
Oct 25 17:12:03 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.404632 delay 0.043598, next query 34s
Oct 25 17:12:09 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:09.807377884+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/kube-apiserver@sha256:0b0e983766f2a725ec43aa525dbb97d8e030aeb416064588c9d16fa985801a0b" id=4e5fb160-4ee0-44b1-a7ec-3cbdf877cf6c name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:09 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:09.829672961+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3" id=55aa6d4a-da29-4bae-8c71-e186f0eeaad1 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:09 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:09.829853035+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3 not found" id=55aa6d4a-da29-4bae-8c71-e186f0eeaad1 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:09 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:09.852766777+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3" id=bf6d1367-d4db-45bc-ac07-a746360427c8 name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:09 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:09.853242835+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3\""
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.114148790+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3\""
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.514980644+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:b99f200d4ff21b4f76e350d7f5d73ccba6443c4d761220da59dd68578ff51358" id=bf6d1367-d4db-45bc-ac07-a746360427c8 name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.539726981+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3" id=8fabf603-106b-41c7-a361-b185600de42f name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.539929906+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3 not found" id=8fabf603-106b-41c7-a361-b185600de42f name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.566852952+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3" id=5737505d-9b4f-49d3-8ba1-cd9a24578aea name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.567164263+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3\""
Oct 25 17:12:10 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:10.841421475+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3\""
Oct 25 17:12:10 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.402853 delay 0.003110, next query 33s
Oct 25 17:12:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:11.463285444+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/kube-scheduler@sha256:59fba96e02bdfe545447117b643f0865990d9ebd8237d5eed0b7b423c21fdb16" id=5737505d-9b4f-49d3-8ba1-cd9a24578aea name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:11.487742099+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3" id=dea40db0-da4b-424e-a218-b4d0aef8867a name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:11.487943661+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3 not found" id=dea40db0-da4b-424e-a218-b4d0aef8867a name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:11.511003776+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3" id=483269e2-a213-4727-be95-ba93409b2bf5 name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:11.511289724+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3\""
Oct 25 17:12:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:11.741441121+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3\""
Oct 25 17:12:20 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.397296 delay 0.000590, next query 34s
Oct 25 17:12:20 podsec-master ntpd[2619]: adjusting local clock by 0.423575s
Oct 25 17:12:20 podsec-master ntpd[2619]: interval 259.595 olddelta 0.484 (delta - olddelta) -0.060
Oct 25 17:12:20 podsec-master ntpd[2619]: error_ppm -116.249 freq_delta -1064868 tick_delta -1
Oct 25 17:12:38 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.386384 delay 0.043824, next query 32s
Oct 25 17:12:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:41.367066626+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/kube-proxy@sha256:0b5407b4b2d8609f624802ac16d12806d65a0170fc2d2594a496280352a4bde5" id=483269e2-a213-4727-be95-ba93409b2bf5 name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:41.390768038+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/pause:3.9" id=a4139e1c-d715-4382-b1fa-836b32d523f7 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:41.390964599+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/pause:3.9 not found" id=a4139e1c-d715-4382-b1fa-836b32d523f7 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:41.414234020+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/pause:3.9" id=63b22421-f069-4c61-b97e-8f23543a1245 name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:41.414466049+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/pause:3.9\""
Oct 25 17:12:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:41.667969426+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/pause:3.9\""
Oct 25 17:12:42 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:42.352173410+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/pause@sha256:60eaff526530c6133f8367ea53d0f78880e437fd9be6008d366c7341c9e3e5a9" id=63b22421-f069-4c61-b97e-8f23543a1245 name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:42 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:42.374008932+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/etcd:3.5.6-0" id=02900b4d-f3de-4c1a-a874-b0cba9c929d8 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:42 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:42.374196257+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/etcd:3.5.6-0 not found" id=02900b4d-f3de-4c1a-a874-b0cba9c929d8 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:42 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:42.398079134+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/etcd:3.5.6-0" id=71d178ab-19b5-4c69-978c-9a479e8dc39a name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:42 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:42.398382848+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/etcd:3.5.6-0\""
Oct 25 17:12:42 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:42.687559556+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/etcd:3.5.6-0\""
Oct 25 17:12:43 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.386492 delay 0.004204, next query 30s
Oct 25 17:12:54 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.381714 delay 0.000722, next query 33s
Oct 25 17:12:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:54.074741659+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/etcd@sha256:248b71517776f8b76dd9a805a923c9f2568222c9a478cd08f3a4f5453b186896" id=71d178ab-19b5-4c69-978c-9a479e8dc39a name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:54.097634433+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/coredns:v1.9.3" id=757ba8e7-0e58-41ed-8233-8cf140dac264 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:54.097784709+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/coredns:v1.9.3 not found" id=757ba8e7-0e58-41ed-8233-8cf140dac264 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:12:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:54.118919226+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/coredns:v1.9.3" id=d7fd3dbc-fbe6-4b92-abb4-b65b3f1a746c name=/runtime.v1.ImageService/PullImage
Oct 25 17:12:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:54.119295976+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/coredns:v1.9.3\""
Oct 25 17:12:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:54.381593101+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/coredns:v1.9.3\""
Oct 25 17:12:57 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:12:57.177598872+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/coredns@sha256:00e9130a6bef9ba103693951da35424d4df820c885589ab4754016ed1622c07a" id=d7fd3dbc-fbe6-4b92-abb4-b65b3f1a746c name=/runtime.v1.ImageService/PullImage
Oct 25 17:13:00 podsec-master systemd[15444]: Reloading requested from client PID 15902 ('systemctl')...
Oct 25 17:13:00 podsec-master systemd[15444]: Reloading...
Oct 25 17:13:00 podsec-master systemd[15444]: Reloading finished in 97 ms.
Oct 25 17:13:00 podsec-master systemd[15444]: Starting kubelet.service - Usernetes kubelet service (crio)...
Oct 25 17:13:00 podsec-master nsenter_u7s[15915]: [INFO] Entering RootlessKit namespaces:
Oct 25 17:13:00 podsec-master nsenter_u7s[15920]: OK
Oct 25 17:13:00 podsec-master kubelet.sh[15932]: [INFO] Entering RootlessKit namespaces:
Oct 25 17:13:00 podsec-master kubelet.sh[15937]: OK
Oct 25 17:13:00 podsec-master root[15945]: =============================================== KUBELET =====================================
Oct 25 17:13:00 podsec-master root[15950]: /usr/libexec/podsec/u7s/bin/_kubelet.sh: TIME=17:13:00.862502978 UID=0 PID=15484 PARS=
Oct 25 17:13:00 podsec-master kubelet.sh[15942]: /usr/libexec/podsec/u7s/bin/_kubelet.sh: TIME=17:13:00.866781329 UID=0 PID=15484 PARS=
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.041641 411 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.267138 411 server.go:412] "Kubelet version" kubeletVersion="v1.26.9"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.267162 411 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.267374 411 server.go:836] "Client rotation is on, will bootstrap in background"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.270491 411 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.271742 411 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.88.11.114:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": EOF
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.278524 411 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.278739 411 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.278791 411 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/u7s-admin/.local/share/usernetes/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.03} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.278814 411 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.278829 411 container_manager_linux.go:307] "Creating device plugin manager"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.278932 411 state_mem.go:36] "Initialized new in-memory state store"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.479549 411 server.go:775] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.482578 411 kubelet.go:398] "Attempting to sync node with API server"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.482600 411 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.482623 411 kubelet.go:297] "Adding apiserver pod source"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.482639 411 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.483300 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.483699 411 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="cri-o" version="1.26.4" apiVersion="v1"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: W1025 17:13:01.483978 411 probe.go:268] Flexvolume plugin directory at /var/lib/u7s-admin/.local/share/usernetes/kubelet-plugins-exec does not exist. Recreating.
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.484248 411 server.go:1175] "Failed to set rlimit on max file handles" err="operation not permitted"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.484267 411 server.go:1186] "Started kubelet"
Oct 25 17:13:01 podsec-master systemd[15444]: Started kubelet.service - Usernetes kubelet service (crio).
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.484872 411 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.486723 411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.487387 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.487413 411 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.487427 411 server.go:451] "Adding debug handlers to kubelet server"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.487739 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d0698a961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 484222817, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 484222817, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Post "https://10.88.11.114:8443/api/v1/namespaces/default/events": EOF'(may retry after sleeping)
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.488026 411 volume_manager.go:293] "Starting Kubelet Volume Manager"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.488237 411 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.488451 411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"podsec-master\" not found"
Oct 25 17:13:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:01.489858771+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/pause:3.9" id=494352fe-c336-4aec-bb86-78af41ffe66f name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:01.490678429+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e5ea918be71e188ac31bdec1ce20c8ab8dcfd7373f6d89525691be2c9227054,RepoTags:[registry.altlinux.org/k8s-p10/pause:3.9],RepoDigests:[registry.altlinux.org/k8s-p10/pause@sha256:60eaff526530c6133f8367ea53d0f78880e437fd9be6008d366c7341c9e3e5a9 registry.altlinux.org/k8s-p10/pause@sha256:f14315ad18ed3dc1672572c3af9f6b28427cf036a43cc00ebac885e919b59548],Size_:753507,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d73f1c8561f2df848a2403b7dca50b9664628029c89a82d4fa1ea137c9534738,org.opencontainers.image.base.name: ,},},Pinned:false,},Info:map[string]string{},}" id=494352fe-c336-4aec-bb86-78af41ffe66f name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: W1025 17:13:01.492267 411 manager.go:289] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: operation not permitted
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.502031 411 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.502046 411 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.502077 411 state_mem.go:36] "Initialized new in-memory state store"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.502961 411 policy_none.go:49] "None policy: Start"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.503351 411 memory_manager.go:169] "Starting memorymanager" policy="None"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.503393 411 state_mem.go:35] "Initializing new in-memory state store"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.505100 411 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.505253 411 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.508278 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.590272 411 kubelet_node_status.go:70] "Attempting to register node" node="podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.591084 411 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.88.11.114:8443/api/v1/nodes\": EOF" node="podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.594046 411 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.616138 411 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.616164 411 status_manager.go:176] "Starting to sync pod status with apiserver"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.616183 411 kubelet.go:2113] "Starting kubelet main sync loop"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.616219 411 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.706026 411 container_manager_linux.go:515] "Failed to ensure process in container with oom score" err="failed to apply oom score -999 to PID 411: write /proc/411/oom_score_adj: permission denied"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.717206 411 topology_manager.go:210] "Topology Admit Handler" podUID=856d0d139624bdb53580e2252a20221c podNamespace="kube-system" podName="kube-apiserver-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.720250 411 topology_manager.go:210] "Topology Admit Handler" podUID=0aaa188ac577d167ba76d603260425f4 podNamespace="kube-system" podName="kube-controller-manager-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.721445 411 topology_manager.go:210] "Topology Admit Handler" podUID=9195923b431ed610c910b4ac34fb23b8 podNamespace="kube-system" podName="kube-scheduler-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.722578 411 topology_manager.go:210] "Topology Admit Handler" podUID=cd4d19bb32159c3e645c996c49d65155 podNamespace="kube-system" podName="etcd-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.795268 411 kubelet_node_status.go:70] "Attempting to register node" node="podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: E1025 17:13:01.796711 411 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.88.11.114:8443/api/v1/nodes\": EOF" node="podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.889977 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-usr-share-ca-certificates\") pod \"kube-apiserver-podsec-master\" (UID: \"856d0d139624bdb53580e2252a20221c\") " pod="kube-system/kube-apiserver-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890028 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aaa188ac577d167ba76d603260425f4-ca-certs\") pod \"kube-controller-manager-podsec-master\" (UID: \"0aaa188ac577d167ba76d603260425f4\") " pod="kube-system/kube-controller-manager-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890050 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0aaa188ac577d167ba76d603260425f4-kubeconfig\") pod \"kube-controller-manager-podsec-master\" (UID: \"0aaa188ac577d167ba76d603260425f4\") " pod="kube-system/kube-controller-manager-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890077 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/cd4d19bb32159c3e645c996c49d65155-etcd-data\") pod \"etcd-podsec-master\" (UID: \"cd4d19bb32159c3e645c996c49d65155\") " pod="kube-system/etcd-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890109 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-ca-certs\") pod \"kube-apiserver-podsec-master\" (UID: \"856d0d139624bdb53580e2252a20221c\") " pod="kube-system/kube-apiserver-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890133 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-etc-pki\") pod \"kube-apiserver-podsec-master\" (UID: \"856d0d139624bdb53580e2252a20221c\") " pod="kube-system/kube-apiserver-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890161 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/0aaa188ac577d167ba76d603260425f4-etc-pki\") pod \"kube-controller-manager-podsec-master\" (UID: \"0aaa188ac577d167ba76d603260425f4\") " pod="kube-system/kube-controller-manager-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890208 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aaa188ac577d167ba76d603260425f4-usr-share-ca-certificates\") pod \"kube-controller-manager-podsec-master\" (UID: \"0aaa188ac577d167ba76d603260425f4\") " pod="kube-system/kube-controller-manager-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890233 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9195923b431ed610c910b4ac34fb23b8-kubeconfig\") pod \"kube-scheduler-podsec-master\" (UID: \"9195923b431ed610c910b4ac34fb23b8\") " pod="kube-system/kube-scheduler-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890256 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/cd4d19bb32159c3e645c996c49d65155-etcd-certs\") pod \"etcd-podsec-master\" (UID: \"cd4d19bb32159c3e645c996c49d65155\") " pod="kube-system/etcd-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890280 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-k8s-certs\") pod \"kube-apiserver-podsec-master\" (UID: \"856d0d139624bdb53580e2252a20221c\") " pod="kube-system/kube-apiserver-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890319 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0aaa188ac577d167ba76d603260425f4-flexvolume-dir\") pod \"kube-controller-manager-podsec-master\" (UID: \"0aaa188ac577d167ba76d603260425f4\") " pod="kube-system/kube-controller-manager-podsec-master"
Oct 25 17:13:01 podsec-master kubelet.sh[15960]: I1025 17:13:01.890349 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aaa188ac577d167ba76d603260425f4-k8s-certs\") pod \"kube-controller-manager-podsec-master\" (UID: \"0aaa188ac577d167ba76d603260425f4\") " pod="kube-system/kube-controller-manager-podsec-master"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025299122+03:00" level=info msg="Running pod sandbox: kube-system/kube-apiserver-podsec-master/POD" id=cd89663e-cd8d-4e81-b799-f2a4688af8a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025332434+03:00" level=info msg="Running pod sandbox: kube-system/kube-controller-manager-podsec-master/POD" id=c286e0cd-aabf-4895-a1d1-ceee56586372 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025361541+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025377759+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025403686+03:00" level=info msg="Running pod sandbox: kube-system/kube-scheduler-podsec-master/POD" id=5c46199c-3dda-4808-b621-21467a062a91 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025459332+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.025973701+03:00" level=info msg="Running pod sandbox: kube-system/etcd-podsec-master/POD" id=b712c199-492b-4a8c-8d0d-81accb22796d name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.026017810+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master kernel: overlayfs: failed to set xattr on upper
Oct 25 17:13:02 podsec-master kernel: overlayfs: ...falling back to index=off,metacopy=off.
Oct 25 17:13:02 podsec-master kernel: overlayfs: ...falling back to xino=off.
Oct 25 17:13:02 podsec-master kernel: overlayfs: try mounting with 'userxattr' option
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master kubelet.sh[15960]: W1025 17:13:02.033679 411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod856d0d139624bdb53580e2252a20221c/crio-57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95 WatchSource:0}: Error finding container 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95: Status 404 returned error can't find the container with id 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.034318796+03:00" level=info msg="Ran pod sandbox 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95 with infra container: kube-system/kube-apiserver-podsec-master/POD" id=cd89663e-cd8d-4e81-b799-f2a4688af8a3 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.034879985+03:00" level=info msg="Ran pod sandbox 4f0b7a18308c90a43d5ee6fdf886ddfaead0f2ec65112d8efd3b6cbf49664f5b with infra container: kube-system/kube-scheduler-podsec-master/POD" id=5c46199c-3dda-4808-b621-21467a062a91 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.035812579+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3" id=896582a8-e6bd-4640-954f-d1174144fc19 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.035975303+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a0bbad5012c4355ce91f0ff52e959a720eb5a23bec54941ee883654eaa52cee3,RepoTags:[registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-apiserver@sha256:0b0e983766f2a725ec43aa525dbb97d8e030aeb416064588c9d16fa985801a0b registry.altlinux.org/k8s-p10/kube-apiserver@sha256:d95502eaf9ae689a7f130bcca210d7b9033712fb5b1bf64ccac7b26ec1bf0eda],Size_:473578156,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=896582a8-e6bd-4640-954f-d1174144fc19 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.036347230+03:00" level=info msg="Ran pod sandbox 4d9fbf2f5ac2933b80eba6af172b6958563ab4c8a569bbb68256666580e61519 with infra container: kube-system/etcd-podsec-master/POD" id=b712c199-492b-4a8c-8d0d-81accb22796d name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.036651593+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3" id=eda0b857-8d01-4095-99a3-384649af29a9 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.036830554+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a0bbad5012c4355ce91f0ff52e959a720eb5a23bec54941ee883654eaa52cee3,RepoTags:[registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-apiserver@sha256:0b0e983766f2a725ec43aa525dbb97d8e030aeb416064588c9d16fa985801a0b registry.altlinux.org/k8s-p10/kube-apiserver@sha256:d95502eaf9ae689a7f130bcca210d7b9033712fb5b1bf64ccac7b26ec1bf0eda],Size_:473578156,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=eda0b857-8d01-4095-99a3-384649af29a9 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.037191526+03:00" level=info msg="Ran pod sandbox a248ae9b0553ee474bfcf02db0e40918bcb9cc40f0d8546825a3e23729b23058 with infra container: kube-system/kube-controller-manager-podsec-master/POD" id=c286e0cd-aabf-4895-a1d1-ceee56586372 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.037705798+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/etcd:3.5.6-0" id=eb89e935-ecd3-4eb3-b237-cd859adbfac1 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.037813703+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3" id=a55c32a0-3365-4f70-b536-155a3c9289d6 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.038001134+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3" id=cd82dd58-5b11-45c8-8090-0373f1e90b8d name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.038010736+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7643eb036300b7db8da4b9e570ced76f5321943f933b2ccb5ce368b3161eb919,RepoTags:[registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-scheduler@sha256:59fba96e02bdfe545447117b643f0865990d9ebd8237d5eed0b7b423c21fdb16 registry.altlinux.org/k8s-p10/kube-scheduler@sha256:c9adf804319a08d2e6c31fdb321392f67ed78af3d625da5986b95f647842c6bc],Size_:473578155,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=a55c32a0-3365-4f70-b536-155a3c9289d6 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.038124210+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45177cfb7eb98d03913767f8c9f01d87e74af115ba5e4b214f0447ad945371ba,RepoTags:[registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:b99f200d4ff21b4f76e350d7f5d73ccba6443c4d761220da59dd68578ff51358 registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:f30fe66841ee0c5cbbb6cff5f2fccf7ef957508a1d4e65c54824c2007576fe28],Size_:473578174,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=cd82dd58-5b11-45c8-8090-0373f1e90b8d name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.037844567+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9d9b33d642dc058b93888358fdc9983b9a771055df227238df78bf9e2567b178,RepoTags:[registry.altlinux.org/k8s-p10/etcd:3.5.6-0],RepoDigests:[registry.altlinux.org/k8s-p10/etcd@sha256:248b71517776f8b76dd9a805a923c9f2568222c9a478cd08f3a4f5453b186896 registry.altlinux.org/k8s-p10/etcd@sha256:bfc6d8255e0a3d623f2973018af97dca473ce5e79a15e9d4ba47a021a1a178c5],Size_:198945139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:c2fd0f957ce3637be8013bdf76a256bd7a83253321e4168d5047210507d5d76c,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=eb89e935-ecd3-4eb3-b237-cd859adbfac1 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.038968018+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/etcd:3.5.6-0" id=cd4f6fa5-ee99-4e30-b731-a9b1f3cabe5f name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.039129202+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9d9b33d642dc058b93888358fdc9983b9a771055df227238df78bf9e2567b178,RepoTags:[registry.altlinux.org/k8s-p10/etcd:3.5.6-0],RepoDigests:[registry.altlinux.org/k8s-p10/etcd@sha256:248b71517776f8b76dd9a805a923c9f2568222c9a478cd08f3a4f5453b186896 registry.altlinux.org/k8s-p10/etcd@sha256:bfc6d8255e0a3d623f2973018af97dca473ce5e79a15e9d4ba47a021a1a178c5],Size_:198945139,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:c2fd0f957ce3637be8013bdf76a256bd7a83253321e4168d5047210507d5d76c,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=cd4f6fa5-ee99-4e30-b731-a9b1f3cabe5f name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.038976392+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3" id=acf6764c-1e7d-499b-8443-8ddf3c5c85fe name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.039370297+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45177cfb7eb98d03913767f8c9f01d87e74af115ba5e4b214f0447ad945371ba,RepoTags:[registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:b99f200d4ff21b4f76e350d7f5d73ccba6443c4d761220da59dd68578ff51358 registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:f30fe66841ee0c5cbbb6cff5f2fccf7ef957508a1d4e65c54824c2007576fe28],Size_:473578174,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=acf6764c-1e7d-499b-8443-8ddf3c5c85fe name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.038991368+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3" id=35710001-86df-4f42-a833-c234bb807272 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.039517515+03:00" level=info msg="Creating container: kube-system/kube-apiserver-podsec-master/kube-apiserver" id=6763131f-0e39-4179-bb84-5f4d32905cd6 name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.039637859+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.039711627+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7643eb036300b7db8da4b9e570ced76f5321943f933b2ccb5ce368b3161eb919,RepoTags:[registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-scheduler@sha256:59fba96e02bdfe545447117b643f0865990d9ebd8237d5eed0b7b423c21fdb16 registry.altlinux.org/k8s-p10/kube-scheduler@sha256:c9adf804319a08d2e6c31fdb321392f67ed78af3d625da5986b95f647842c6bc],Size_:473578155,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=35710001-86df-4f42-a833-c234bb807272 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.040355954+03:00" level=info msg="Creating container: kube-system/etcd-podsec-master/etcd" id=5ad7bf86-8a31-4f7f-ab4f-dc02c72ab201 name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.040416719+03:00" level=info msg="Creating container: kube-system/kube-scheduler-podsec-master/kube-scheduler" id=71a235bb-a0a5-4362-83d0-a236342c654c name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.040442542+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.040363702+03:00" level=info msg="Creating container: kube-system/kube-controller-manager-podsec-master/kube-controller-manager" id=4643f340-ff59-4ef8-aa24-dd9e0a167274 name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.040538005+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.040495789+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/RSKIUEXXY5HIDDGIFNWI43OUDO' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/GKZUYBTV7O6AWQBQYEQKPUNK2G' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/RSKIUEXXY5HIDDGIFNWI43OUDO' does not support file handles, falling back to xino=off.
Oct 25 17:13:02 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/RSKIUEXXY5HIDDGIFNWI43OUDO' does not support file handles, falling back to xino=off.
окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.082088656+03:00" level=info msg="Created container ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb: kube-system/kube-apiserver-podsec-master/kube-apiserver" id=6763131f-0e39-4179-bb84-5f4d32905cd6 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.082455975+03:00" level=info msg="Created container 9e1e89c32be6b052fd5ca8ef4a9e3523e57de19986de3a4f4c7fe6b74c7878f0: kube-system/kube-scheduler-podsec-master/kube-scheduler" id=71a235bb-a0a5-4362-83d0-a236342c654c name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.083177737+03:00" level=info msg="Starting container: 9e1e89c32be6b052fd5ca8ef4a9e3523e57de19986de3a4f4c7fe6b74c7878f0" id=2f5451ac-b4a0-4c75-977b-10e506c9ed44 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.083343070+03:00" level=info msg="Starting container: ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb" id=010aa558-4050-4d2c-8868-b6485456a676 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.087448463+03:00" level=info msg="Started container" PID=485 containerID=ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb description=kube-system/kube-apiserver-podsec-master/kube-apiserver id=010aa558-4050-4d2c-8868-b6485456a676 name=/runtime.v1.RuntimeService/StartContainer sandboxID=57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95 окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.087983901+03:00" level=info msg="Created container fb375e1180e30eada0c61617858225c526a84bac7493f8cf94c62c62a497b2ad: kube-system/kube-controller-manager-podsec-master/kube-controller-manager" id=4643f340-ff59-4ef8-aa24-dd9e0a167274 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.090350280+03:00" level=info msg="Started container" PID=491 containerID=9e1e89c32be6b052fd5ca8ef4a9e3523e57de19986de3a4f4c7fe6b74c7878f0 description=kube-system/kube-scheduler-podsec-master/kube-scheduler id=2f5451ac-b4a0-4c75-977b-10e506c9ed44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f0b7a18308c90a43d5ee6fdf886ddfaead0f2ec65112d8efd3b6cbf49664f5b окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.090540283+03:00" level=info msg="Starting container: fb375e1180e30eada0c61617858225c526a84bac7493f8cf94c62c62a497b2ad" id=b84a8692-59cb-490d-8e9a-041991b75495 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.091822859+03:00" level=info msg="Created container 6db06fc58cfca4700c592c79f1cf1edf596670c32dd3cd2ae15d5d163566d954: kube-system/etcd-podsec-master/etcd" id=5ad7bf86-8a31-4f7f-ab4f-dc02c72ab201 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.092220674+03:00" level=info msg="Starting container: 6db06fc58cfca4700c592c79f1cf1edf596670c32dd3cd2ae15d5d163566d954" id=5916e46e-dec0-4d09-bc0b-93a221b02866 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.095542083+03:00" level=info msg="Started container" PID=496 
containerID=6db06fc58cfca4700c592c79f1cf1edf596670c32dd3cd2ae15d5d163566d954 description=kube-system/etcd-podsec-master/etcd id=5916e46e-dec0-4d09-bc0b-93a221b02866 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d9fbf2f5ac2933b80eba6af172b6958563ab4c8a569bbb68256666580e61519 окт 25 17:13:02 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:02.097775839+03:00" level=info msg="Started container" PID=495 containerID=fb375e1180e30eada0c61617858225c526a84bac7493f8cf94c62c62a497b2ad description=kube-system/kube-controller-manager-podsec-master/kube-controller-manager id=b84a8692-59cb-490d-8e9a-041991b75495 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a248ae9b0553ee474bfcf02db0e40918bcb9cc40f0d8546825a3e23729b23058 окт 25 17:13:02 podsec-master kubelet.sh[15960]: I1025 17:13:02.198544 411 kubelet_node_status.go:70] "Attempting to register node" node="podsec-master" окт 25 17:13:02 podsec-master kubelet.sh[15960]: E1025 17:13:02.199951 411 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.88.11.114:8443/api/v1/nodes\": EOF" node="podsec-master" окт 25 17:13:03 podsec-master kubelet.sh[15960]: I1025 17:13:03.001797 411 kubelet_node_status.go:70] "Attempting to register node" node="podsec-master" окт 25 17:13:03 podsec-master kubelet.sh[15960]: E1025 17:13:03.002724 411 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.88.11.114:8443/api/v1/nodes\": EOF" node="podsec-master" окт 25 17:13:03 podsec-master kubelet.sh[15960]: E1025 17:13:03.399286 411 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.88.11.114:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": EOF окт 25 17:13:03 podsec-master systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... окт 25 17:13:03 podsec-master systemd-tmpfiles[16095]: /lib/tmpfiles.d/faillock.conf:1: Line references path below legacy directory /var/run/, updating /var/run/faillock → /run/faillock; please update the tmpfiles.d/ drop-in file accordingly. окт 25 17:13:03 podsec-master systemd-tmpfiles[16095]: /lib/tmpfiles.d/multipath.conf:1: Line references path below legacy directory /var/run/, updating /var/run/multipath → /run/multipath; please update the tmpfiles.d/ drop-in file accordingly. окт 25 17:13:03 podsec-master systemd-tmpfiles[16095]: /lib/tmpfiles.d/nslcd.conf:1: Line references path below legacy directory /var/run/, updating /var/run/nslcd → /run/nslcd; please update the tmpfiles.d/ drop-in file accordingly. окт 25 17:13:03 podsec-master systemd-tmpfiles[16095]: /lib/tmpfiles.d/pesign.conf:1: Line references path below legacy directory /var/run/, updating /var/run/pesign → /run/pesign; please update the tmpfiles.d/ drop-in file accordingly. окт 25 17:13:03 podsec-master systemd-tmpfiles[16095]: /lib/tmpfiles.d/pesign.conf:2: Line references path below legacy directory /var/run/, updating /var/run/pesign/socketdir → /run/pesign/socketdir; please update the tmpfiles.d/ drop-in file accordingly. окт 25 17:13:03 podsec-master systemd-tmpfiles[16095]: /lib/tmpfiles.d/screen.conf:1: Line references path below legacy directory /var/run/, updating /var/run/screen → /run/screen; please update the tmpfiles.d/ drop-in file accordingly. окт 25 17:13:03 podsec-master systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
окт 25 17:13:03 podsec-master systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. окт 25 17:13:04 podsec-master kubelet.sh[15960]: I1025 17:13:04.604744 411 kubelet_node_status.go:70] "Attempting to register node" node="podsec-master" окт 25 17:13:04 podsec-master kubelet.sh[15960]: E1025 17:13:04.884405 411 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.88.11.114:8443/api/v1/nodes\": EOF" node="podsec-master" окт 25 17:13:06 podsec-master kubelet.sh[15960]: E1025 17:13:06.503846 411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"podsec-master\" not found" node="podsec-master" окт 25 17:13:07 podsec-master kubelet.sh[15960]: I1025 17:13:07.487246 411 apiserver.go:52] "Watching apiserver" окт 25 17:13:08 podsec-master kubelet.sh[15960]: I1025 17:13:08.085486 411 kubelet_node_status.go:70] "Attempting to register node" node="podsec-master" окт 25 17:13:08 podsec-master kubelet.sh[15960]: I1025 17:13:08.289270 411 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" окт 25 17:13:08 podsec-master kubelet.sh[15960]: I1025 17:13:08.329487 411 reconciler.go:41] "Reconciler: start to sync state" окт 25 17:13:08 podsec-master kubelet.sh[15960]: I1025 17:13:08.887112 411 kubelet_node_status.go:73] "Successfully registered node" node="podsec-master" окт 25 17:13:09 podsec-master kubelet.sh[15960]: I1025 17:13:09.916128 411 kubelet_node_status.go:493] "Fast updating node status as it just became ready" окт 25 17:13:10 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.371252 delay 0.043708, next query 33s окт 25 17:13:10 podsec-master systemd[1]: container-shell@2.service: Deactivated successfully. окт 25 17:13:10 podsec-master systemd[1]: session-7.scope: Deactivated successfully. окт 25 17:13:10 podsec-master systemd[1]: session-7.scope: Consumed 4.646s CPU time. окт 25 17:13:10 podsec-master systemd-logind[2365]: Removed session 7. окт 25 17:13:10 podsec-master systemd[1]: Started container-shell@2.service - Shell for User u7s-admin. окт 25 17:13:10 podsec-master (nter_u7s)[16103]: pam_tcb(login:session): Session opened for u7s-admin by u7s-admin(uid=0) окт 25 17:13:10 podsec-master systemd-logind[2365]: New session 9 of user u7s-admin. окт 25 17:13:10 podsec-master systemd[1]: Started session-9.scope - Session 9 of User u7s-admin. окт 25 17:13:10 podsec-master systemd[1]: container-shell@2.service: Deactivated successfully. окт 25 17:13:10 podsec-master systemd[1]: session-9.scope: Deactivated successfully. окт 25 17:13:10 podsec-master systemd-logind[2365]: Removed session 9. окт 25 17:13:10 podsec-master systemd[1]: Started container-shell@2.service - Shell for User u7s-admin. окт 25 17:13:10 podsec-master (nter_u7s)[16122]: pam_tcb(login:session): Session opened for u7s-admin by u7s-admin(uid=0) окт 25 17:13:10 podsec-master systemd-logind[2365]: New session 10 of user u7s-admin. окт 25 17:13:10 podsec-master systemd[1]: Started session-10.scope - Session 10 of User u7s-admin. окт 25 17:13:10 podsec-master systemd[1]: container-shell@2.service: Deactivated successfully. окт 25 17:13:10 podsec-master systemd[1]: session-10.scope: Deactivated successfully. окт 25 17:13:10 podsec-master systemd-logind[2365]: Removed session 10. окт 25 17:13:10 podsec-master systemd[1]: Started container-shell@2.service - Shell for User u7s-admin. 
окт 25 17:13:10 podsec-master (ystemctl)[16141]: pam_tcb(login:session): Session opened for u7s-admin by u7s-admin(uid=0) окт 25 17:13:10 podsec-master systemd-logind[2365]: New session 11 of user u7s-admin. окт 25 17:13:10 podsec-master systemd[1]: Started session-11.scope - Session 11 of User u7s-admin. окт 25 17:13:10 podsec-master systemd[15444]: Reached target u7s.target - Usernetes target (all components in the single node). окт 25 17:13:10 podsec-master systemd[1]: container-shell@2.service: Deactivated successfully. окт 25 17:13:10 podsec-master systemd[1]: session-11.scope: Deactivated successfully. окт 25 17:13:10 podsec-master systemd-logind[2365]: Removed session 11. окт 25 17:13:10 podsec-master systemd[1]: Reloading requested from client PID 16147 ('systemctl') (unit session-1.scope)... окт 25 17:13:10 podsec-master systemd[1]: Reloading... окт 25 17:13:10 podsec-master systemd-sysv-generator[16185]: SysV service '/etc/rc.d/init.d/clock' lacks a native systemd unit file. ~ Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof. ! This compatibility logic is deprecated, expect removal soon. ! окт 25 17:13:10 podsec-master systemd-sysv-generator[16185]: SysV service '/etc/rc.d/init.d/ifplugd' lacks a native systemd unit file. ~ Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof. ! This compatibility logic is deprecated, expect removal soon. ! окт 25 17:13:10 podsec-master systemd[1]: /lib/systemd/system/nslcd.service:7: PIDFile= references a path below legacy directory /var/run/, updating /var/run/nslcd/nslcd.pid → /run/nslcd/nslcd.pid; please update the unit file accordingly. окт 25 17:13:10 podsec-master systemd[1]: /lib/systemd/system/cups.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/cups/cups.sock → /run/cups/cups.sock; please update the unit file accordingly. окт 25 17:13:10 podsec-master systemd[1]: /lib/systemd/system/nscd.socket:5: ListenDatagram= references a path below legacy directory /var/run/, updating /var/run/nscd/socket → /run/nscd/socket; please update the unit file accordingly. окт 25 17:13:10 podsec-master systemd[1]: Reloading finished in 254 ms. окт 25 17:13:10 podsec-master systemd[1]: Starting u7s.service - Usernet (rootless kubernetes) U7S Service... окт 25 17:13:10 podsec-master systemd[1]: Started u7s.service - Usernet (rootless kubernetes) U7S Service. окт 25 17:13:10 podsec-master machinectl[16191]: Connected to the local host. Press ^] three times within 1s to exit session. окт 25 17:13:10 podsec-master systemd[1]: Started container-shell@2.service - Shell for User u7s-admin. окт 25 17:13:10 podsec-master (ystemctl)[16195]: pam_tcb(login:session): Session opened for u7s-admin by u7s-admin(uid=0) окт 25 17:13:10 podsec-master systemd-logind[2365]: New session 12 of user u7s-admin. окт 25 17:13:10 podsec-master systemd[1]: Started session-12.scope - Session 12 of User u7s-admin. окт 25 17:13:10 podsec-master machinectl[16191]: Enqueued anchor job 66 u7s.target/start. 
окт 25 17:13:11 podsec-master kubelet.sh[15960]: E1025 17:13:11.191971 411 file.go:108] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver.yaml\": open /etc/kubernetes/manifests/kube-apiserver.yaml: permission denied" окт 25 17:13:11 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:11.193144927+03:00" level=info msg="Stopping container: ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb (timeout: 30s)" id=4a7eba43-82a3-4172-a6ef-55b2370ad46a name=/runtime.v1.RuntimeService/StopContainer окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.193716 411 topology_manager.go:210] "Topology Admit Handler" podUID=3ff1accd3d927b76db0fedd722c8c899 podNamespace="kube-system" podName="kube-apiserver-podsec-master" окт 25 17:13:11 podsec-master kubelet.sh[15960]: E1025 17:13:11.193797 411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856d0d139624bdb53580e2252a20221c" containerName="kube-apiserver" окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.193849 411 memory_manager.go:346] "RemoveStaleState removing state" podUID="856d0d139624bdb53580e2252a20221c" containerName="kube-apiserver" окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.248077 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ff1accd3d927b76db0fedd722c8c899-usr-share-ca-certificates\") pod \"kube-apiserver-podsec-master\" (UID: \"3ff1accd3d927b76db0fedd722c8c899\") " pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.248133 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ff1accd3d927b76db0fedd722c8c899-ca-certs\") pod \"kube-apiserver-podsec-master\" (UID: \"3ff1accd3d927b76db0fedd722c8c899\") " pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.248157 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/3ff1accd3d927b76db0fedd722c8c899-etc-pki\") pod \"kube-apiserver-podsec-master\" (UID: \"3ff1accd3d927b76db0fedd722c8c899\") " pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.248181 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ff1accd3d927b76db0fedd722c8c899-k8s-certs\") pod \"kube-apiserver-podsec-master\" (UID: \"3ff1accd3d927b76db0fedd722c8c899\") " pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:11 podsec-master kubelet.sh[15960]: I1025 17:13:11.248204 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/host-path/3ff1accd3d927b76db0fedd722c8c899-audit\") pod \"kube-apiserver-podsec-master\" (UID: \"3ff1accd3d927b76db0fedd722c8c899\") " pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:11 podsec-master kubelet.sh[15960]: E1025 17:13:11.509349 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:13:11 
podsec-master kubelet.sh[15960]: E1025 17:13:11.559712 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d07a23b60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node podsec-master status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 721379111, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Patch "https://10.88.11.114:8443/api/v1/namespaces/default/events/podsec-master.17915f2d07a23b60": EOF'(may retry after sleeping) окт 25 17:13:12 podsec-master kubelet.sh[15960]: E1025 17:13:12.016078 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-scheduler-podsec-master" окт 25 17:13:13 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.372216 delay 0.003231, next query 32s окт 25 17:13:16 podsec-master kubelet.sh[15960]: E1025 17:13:16.458222 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d07a23b60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node podsec-master status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 721379111, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Patch "https://10.88.11.114:8443/api/v1/namespaces/default/events/podsec-master.17915f2d07a23b60": EOF'(may retry after sleeping) окт 25 17:13:21 podsec-master kubelet.sh[15960]: E1025 17:13:21.487967 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: 
&core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml" окт 25 17:13:21 podsec-master kubelet.sh[15960]: E1025 17:13:21.511261 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:13:21 podsec-master kubelet.sh[15960]: E1025 17:13:21.835678 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-scheduler-podsec-master" окт 25 17:13:21 podsec-master conmon[16043]: conmon fb375e1180e30eada0c6 : container 495 exited with status 1 окт 25 17:13:22 podsec-master conmon[16039]: conmon 9e1e89c32be6b052fd5c : container 491 exited with status 1 окт 25 17:13:22 podsec-master kubelet.sh[15960]: I1025 17:13:22.559000 411 status_manager.go:698] "Failed to get status for pod" podUID=856d0d139624bdb53580e2252a20221c pod="kube-system/kube-apiserver-podsec-master" err="Get \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-podsec-master\": EOF" окт 25 17:13:22 podsec-master kubelet.sh[15960]: I1025 17:13:22.655889 411 scope.go:115] "RemoveContainer" containerID="fb375e1180e30eada0c61617858225c526a84bac7493f8cf94c62c62a497b2ad" окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.657260819+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3" id=7df7c459-19a5-46a4-ba84-4997a805ca8d name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.657491721+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45177cfb7eb98d03913767f8c9f01d87e74af115ba5e4b214f0447ad945371ba,RepoTags:[registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:b99f200d4ff21b4f76e350d7f5d73ccba6443c4d761220da59dd68578ff51358 registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:f30fe66841ee0c5cbbb6cff5f2fccf7ef957508a1d4e65c54824c2007576fe28],Size_:473578174,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=7df7c459-19a5-46a4-ba84-4997a805ca8d name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master kubelet.sh[15960]: E1025 17:13:22.658282 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post 
\"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-scheduler-podsec-master" окт 25 17:13:22 podsec-master kubelet.sh[15960]: I1025 17:13:22.658364 411 scope.go:115] "RemoveContainer" containerID="9e1e89c32be6b052fd5ca8ef4a9e3523e57de19986de3a4f4c7fe6b74c7878f0" окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.658509152+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3" id=9a135042-7f15-48c2-99c8-62a66db63242 name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.658743215+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45177cfb7eb98d03913767f8c9f01d87e74af115ba5e4b214f0447ad945371ba,RepoTags:[registry.altlinux.org/k8s-p10/kube-controller-manager:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:b99f200d4ff21b4f76e350d7f5d73ccba6443c4d761220da59dd68578ff51358 registry.altlinux.org/k8s-p10/kube-controller-manager@sha256:f30fe66841ee0c5cbbb6cff5f2fccf7ef957508a1d4e65c54824c2007576fe28],Size_:473578174,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=9a135042-7f15-48c2-99c8-62a66db63242 name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.658810582+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3" id=d0282f16-6935-463a-80d4-ef82dbba7851 name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.658954294+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7643eb036300b7db8da4b9e570ced76f5321943f933b2ccb5ce368b3161eb919,RepoTags:[registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-scheduler@sha256:59fba96e02bdfe545447117b643f0865990d9ebd8237d5eed0b7b423c21fdb16 registry.altlinux.org/k8s-p10/kube-scheduler@sha256:c9adf804319a08d2e6c31fdb321392f67ed78af3d625da5986b95f647842c6bc],Size_:473578155,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=d0282f16-6935-463a-80d4-ef82dbba7851 name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.659469900+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3" id=3ce27bc1-4167-4edf-bf37-408a2657f893 name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.659579380+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7643eb036300b7db8da4b9e570ced76f5321943f933b2ccb5ce368b3161eb919,RepoTags:[registry.altlinux.org/k8s-p10/kube-scheduler:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-scheduler@sha256:59fba96e02bdfe545447117b643f0865990d9ebd8237d5eed0b7b423c21fdb16 
registry.altlinux.org/k8s-p10/kube-scheduler@sha256:c9adf804319a08d2e6c31fdb321392f67ed78af3d625da5986b95f647842c6bc],Size_:473578155,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=3ce27bc1-4167-4edf-bf37-408a2657f893 name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.659703938+03:00" level=info msg="Creating container: kube-system/kube-controller-manager-podsec-master/kube-controller-manager" id=ff7eabb6-dcb2-4b5e-8e79-3dabf531e609 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.659807837+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.660245989+03:00" level=info msg="Creating container: kube-system/kube-scheduler-podsec-master/kube-scheduler" id=1a38116f-c901-48dd-b007-eee9e493ccf2 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.660342035+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:13:22 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/RSKIUEXXY5HIDDGIFNWI43OUDO' does not support file handles, falling back to xino=off. окт 25 17:13:22 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/RSKIUEXXY5HIDDGIFNWI43OUDO' does not support file handles, falling back to xino=off. 
окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.706120975+03:00" level=info msg="Created container 9ec6a7b904436fde855bff76f1029a2e47e4770ff041823e2d99e570e9ba53ea: kube-system/kube-scheduler-podsec-master/kube-scheduler" id=1a38116f-c901-48dd-b007-eee9e493ccf2 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.706249209+03:00" level=info msg="Created container 46949b07be13d04e653db6aa8535cd09981021bc455754f421f4151670ca3a65: kube-system/kube-controller-manager-podsec-master/kube-controller-manager" id=ff7eabb6-dcb2-4b5e-8e79-3dabf531e609 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.706796397+03:00" level=info msg="Starting container: 46949b07be13d04e653db6aa8535cd09981021bc455754f421f4151670ca3a65" id=d12f260f-4273-4195-96a5-bf6394ddbe08 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.706799983+03:00" level=info msg="Starting container: 9ec6a7b904436fde855bff76f1029a2e47e4770ff041823e2d99e570e9ba53ea" id=0b67816c-531b-4dfa-a1d9-14c1f6fc2e8d name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.710109419+03:00" level=info msg="Started container" PID=559 containerID=46949b07be13d04e653db6aa8535cd09981021bc455754f421f4151670ca3a65 description=kube-system/kube-controller-manager-podsec-master/kube-controller-manager id=d12f260f-4273-4195-96a5-bf6394ddbe08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a248ae9b0553ee474bfcf02db0e40918bcb9cc40f0d8546825a3e23729b23058 окт 25 17:13:22 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:22.710726703+03:00" level=info msg="Started container" PID=560 containerID=9ec6a7b904436fde855bff76f1029a2e47e4770ff041823e2d99e570e9ba53ea description=kube-system/kube-scheduler-podsec-master/kube-scheduler id=0b67816c-531b-4dfa-a1d9-14c1f6fc2e8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f0b7a18308c90a43d5ee6fdf886ddfaead0f2ec65112d8efd3b6cbf49664f5b окт 25 17:13:23 podsec-master kubelet.sh[15960]: E1025 17:13:23.661609 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-scheduler-podsec-master" окт 25 17:13:26 podsec-master kubelet.sh[15960]: E1025 17:13:26.460308 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d07a23b60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node podsec-master status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 721379111, 
time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Patch "https://10.88.11.114:8443/api/v1/namespaces/default/events/podsec-master.17915f2d07a23b60": EOF'(may retry after sleeping) окт 25 17:13:26 podsec-master kubelet.sh[15960]: E1025 17:13:26.735689 411 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.88.11.114:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/podsec-master?timeout=10s": context deadline exceeded - error from a previous attempt: EOF окт 25 17:13:27 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.365795 delay 0.000499, next query 31s окт 25 17:13:29 podsec-master kubelet.sh[15960]: E1025 17:13:29.274017 411 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"podsec-master\": Get \"https://10.88.11.114:8443/api/v1/nodes/podsec-master?resourceVersion=0&timeout=10s\": context deadline exceeded - error from a previous attempt: EOF" окт 25 17:13:31 podsec-master kubelet.sh[15960]: E1025 17:13:31.512727 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:13:31 podsec-master kubelet.sh[15960]: E1025 17:13:31.830704 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-scheduler-podsec-master" окт 25 17:13:32 podsec-master kubelet.sh[15960]: I1025 17:13:32.574562 411 status_manager.go:698] "Failed to get status for pod" podUID=0aaa188ac577d167ba76d603260425f4 pod="kube-system/kube-controller-manager-podsec-master" err="Get \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-podsec-master\": EOF" окт 25 17:13:36 podsec-master kubelet.sh[15960]: E1025 17:13:36.462349 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d07a23b60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node podsec-master status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 721379111, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Patch "https://10.88.11.114:8443/api/v1/namespaces/default/events/podsec-master.17915f2d07a23b60": EOF'(may retry after 
sleeping) окт 25 17:13:36 podsec-master kubelet.sh[15960]: E1025 17:13:36.951886 411 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.88.11.114:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/podsec-master?timeout=10s": context deadline exceeded - error from a previous attempt: EOF окт 25 17:13:39 podsec-master kubelet.sh[15960]: E1025 17:13:39.287924 411 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"podsec-master\": Get \"https://10.88.11.114:8443/api/v1/nodes/podsec-master?timeout=10s\": context deadline exceeded - error from a previous attempt: EOF" окт 25 17:13:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:41.197002947+03:00" level=warning msg="Stopping container ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb with stop signal timed out: timeout reached after 30 seconds waiting for container process to exit" id=4a7eba43-82a3-4172-a6ef-55b2370ad46a name=/runtime.v1.RuntimeService/StopContainer окт 25 17:13:41 podsec-master conmon[16035]: conmon ab0d9aaf3371237acb01 : container 485 exited with status 137 окт 25 17:13:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:41.316405561+03:00" level=info msg="Stopped container ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb: kube-system/kube-apiserver-podsec-master/kube-apiserver" id=4a7eba43-82a3-4172-a6ef-55b2370ad46a name=/runtime.v1.RuntimeService/StopContainer окт 25 17:13:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:41.317043125+03:00" level=info msg="Stopping pod sandbox: 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" id=a1917cef-5af4-41be-bcb4-4da0a7181f6f name=/runtime.v1.RuntimeService/StopPodSandbox окт 25 17:13:41 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:41.317606105+03:00" level=info msg="Stopped pod sandbox: 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" id=a1917cef-5af4-41be-bcb4-4da0a7181f6f name=/runtime.v1.RuntimeService/StopPodSandbox окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.406975 411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-ca-certs\") pod \"856d0d139624bdb53580e2252a20221c\" (UID: \"856d0d139624bdb53580e2252a20221c\") " окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407016 411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-etc-pki\") pod \"856d0d139624bdb53580e2252a20221c\" (UID: \"856d0d139624bdb53580e2252a20221c\") " окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407042 411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-usr-share-ca-certificates\") pod \"856d0d139624bdb53580e2252a20221c\" (UID: \"856d0d139624bdb53580e2252a20221c\") " окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407060 411 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-k8s-certs\") pod \"856d0d139624bdb53580e2252a20221c\" (UID: \"856d0d139624bdb53580e2252a20221c\") " окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407090 411 
operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "856d0d139624bdb53580e2252a20221c" (UID: "856d0d139624bdb53580e2252a20221c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407118 411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "856d0d139624bdb53580e2252a20221c" (UID: "856d0d139624bdb53580e2252a20221c"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue "" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407116 411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-etc-pki" (OuterVolumeSpecName: "etc-pki") pod "856d0d139624bdb53580e2252a20221c" (UID: "856d0d139624bdb53580e2252a20221c"). InnerVolumeSpecName "etc-pki". PluginName "kubernetes.io/host-path", VolumeGidValue "" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.407101 411 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "856d0d139624bdb53580e2252a20221c" (UID: "856d0d139624bdb53580e2252a20221c"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" окт 25 17:13:41 podsec-master kubelet.sh[15960]: E1025 17:13:41.486027 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.507736 411 reconciler_common.go:295] "Volume detached for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-usr-share-ca-certificates\") on node \"podsec-master\" DevicePath \"\"" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.507768 411 reconciler_common.go:295] "Volume detached for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-k8s-certs\") on node \"podsec-master\" DevicePath \"\"" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.507780 411 reconciler_common.go:295] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-ca-certs\") on node \"podsec-master\" DevicePath \"\"" 
окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.507790 411 reconciler_common.go:295] "Volume detached for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/856d0d139624bdb53580e2252a20221c-etc-pki\") on node \"podsec-master\" DevicePath \"\"" окт 25 17:13:41 podsec-master kubelet.sh[15960]: E1025 17:13:41.513854 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.618405 411 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=856d0d139624bdb53580e2252a20221c path="/var/lib/u7s-admin/.local/share/usernetes/kubelet/pods/856d0d139624bdb53580e2252a20221c/volumes" окт 25 17:13:41 podsec-master kubelet.sh[15960]: I1025 17:13:41.687117 411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" окт 25 17:13:41 podsec-master kubelet.sh[15960]: E1025 17:13:41.835486 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-scheduler-podsec-master" окт 25 17:13:42 podsec-master kubelet.sh[15960]: I1025 17:13:42.589860 411 status_manager.go:698] "Failed to get status for pod" podUID=cd4d19bb32159c3e645c996c49d65155 pod="kube-system/etcd-podsec-master" err="Get \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods/etcd-podsec-master\": EOF" окт 25 17:13:43 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.355733 delay 0.043502, next query 30s окт 25 17:13:45 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.357017 delay 0.003041, next query 32s окт 25 17:13:46 podsec-master kubelet.sh[15960]: E1025 17:13:46.464166 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d07a23b60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node podsec-master status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 721379111, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Patch "https://10.88.11.114:8443/api/v1/namespaces/default/events/podsec-master.17915f2d07a23b60": EOF'(may retry after sleeping) окт 25 17:13:47 podsec-master kubelet.sh[15960]: E1025 17:13:47.367971 411 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get 
"https://10.88.11.114:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/podsec-master?timeout=10s": context deadline exceeded - error from a previous attempt: EOF окт 25 17:13:49 podsec-master kubelet.sh[15960]: E1025 17:13:49.300751 411 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"podsec-master\": Get \"https://10.88.11.114:8443/api/v1/nodes/podsec-master?timeout=10s\": context deadline exceeded - error from a previous attempt: EOF" окт 25 17:13:51 podsec-master kubelet.sh[15960]: E1025 17:13:51.515353 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:13:53 podsec-master kubelet.sh[15960]: I1025 17:13:53.611309 411 status_manager.go:698] "Failed to get status for pod" podUID=856d0d139624bdb53580e2252a20221c pod="kube-system/kube-apiserver-podsec-master" err="Get \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-podsec-master\": EOF" окт 25 17:13:54 podsec-master kubelet.sh[15960]: E1025 17:13:54.211630 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.212365061+03:00" level=info msg="Running pod sandbox: kube-system/kube-apiserver-podsec-master/POD" id=e14db96b-2158-45b2-b898-b18f612b6578 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.212800271+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:13:54 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off. 
окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.217879304+03:00" level=info msg="Ran pod sandbox b65bb38a73f3efb1b906418a80fb5f0fd6b1b0c7453998708bfd4ed7cb4bedb2 with infra container: kube-system/kube-apiserver-podsec-master/POD" id=e14db96b-2158-45b2-b898-b18f612b6578 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.218984248+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3" id=2fa0d609-5918-48b8-97a5-5f9c491a520e name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.219131424+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a0bbad5012c4355ce91f0ff52e959a720eb5a23bec54941ee883654eaa52cee3,RepoTags:[registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-apiserver@sha256:0b0e983766f2a725ec43aa525dbb97d8e030aeb416064588c9d16fa985801a0b registry.altlinux.org/k8s-p10/kube-apiserver@sha256:d95502eaf9ae689a7f130bcca210d7b9033712fb5b1bf64ccac7b26ec1bf0eda],Size_:473578156,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=2fa0d609-5918-48b8-97a5-5f9c491a520e name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.219665444+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3" id=c3db4bd5-a08b-4025-858e-dac02bc67d1b name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.219774514+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a0bbad5012c4355ce91f0ff52e959a720eb5a23bec54941ee883654eaa52cee3,RepoTags:[registry.altlinux.org/k8s-p10/kube-apiserver:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-apiserver@sha256:0b0e983766f2a725ec43aa525dbb97d8e030aeb416064588c9d16fa985801a0b registry.altlinux.org/k8s-p10/kube-apiserver@sha256:d95502eaf9ae689a7f130bcca210d7b9033712fb5b1bf64ccac7b26ec1bf0eda],Size_:473578156,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d8fc2cd894e2fe4807c0eb2df52a930c36929147afcf2cf4a3f643896be53a3b,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=c3db4bd5-a08b-4025-858e-dac02bc67d1b name=/runtime.v1.ImageService/ImageStatus окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.222582428+03:00" level=info msg="Creating container: kube-system/kube-apiserver-podsec-master/kube-apiserver" id=be35445f-be7e-4dce-9dfa-c150692e3a10 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.222645893+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:13:54 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/RSKIUEXXY5HIDDGIFNWI43OUDO' does not support file handles, falling back to xino=off. 
окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.262163620+03:00" level=info msg="Created container 8fe2019dcfd2a0d1c7208a7d90624a56011ce488cdea226a288d50bc32a9e80a: kube-system/kube-apiserver-podsec-master/kube-apiserver" id=be35445f-be7e-4dce-9dfa-c150692e3a10 name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.262638577+03:00" level=info msg="Starting container: 8fe2019dcfd2a0d1c7208a7d90624a56011ce488cdea226a288d50bc32a9e80a" id=2901356e-2911-41d4-ab4c-790720f4049a name=/runtime.v1.RuntimeService/StartContainer окт 25 17:13:54 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:13:54.265984099+03:00" level=info msg="Started container" PID=591 containerID=8fe2019dcfd2a0d1c7208a7d90624a56011ce488cdea226a288d50bc32a9e80a description=kube-system/kube-apiserver-podsec-master/kube-apiserver id=2901356e-2911-41d4-ab4c-790720f4049a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b65bb38a73f3efb1b906418a80fb5f0fd6b1b0c7453998708bfd4ed7cb4bedb2 окт 25 17:13:55 podsec-master kubelet.sh[15960]: E1025 17:13:55.211880 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:56 podsec-master kubelet.sh[15960]: E1025 17:13:56.211418 411 kubelet.go:1802] "Failed creating a mirror pod for" err="Post \"https://10.88.11.114:8443/api/v1/namespaces/kube-system/pods\": EOF" pod="kube-system/kube-apiserver-podsec-master" окт 25 17:13:56 podsec-master kubelet.sh[15960]: E1025 17:13:56.465812 411 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"podsec-master.17915f2d07a23b60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"278", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"podsec-master", UID:"podsec-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node podsec-master status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"podsec-master"}, FirstTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 0, time.Local), LastTimestamp:time.Date(2023, time.October, 25, 17, 13, 1, 721379111, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"podsec-master"}': 'Patch "https://10.88.11.114:8443/api/v1/namespaces/default/events/podsec-master.17915f2d07a23b60": EOF'(may retry after sleeping) окт 25 17:13:58 podsec-master kubelet.sh[15960]: E1025 17:13:58.186076 411 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.88.11.114:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/podsec-master?timeout=10s": context deadline exceeded - error from a previous attempt: EOF окт 25 17:13:58 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.351025 delay 0.000520, next query 33s окт 25 17:14:00 podsec-master 
kubelet.sh[15960]: I1025 17:14:00.812832 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-podsec-master" podStartSLOduration=50.812523567 pod.CreationTimestamp="2023-10-25 17:13:10 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:14:00.615583021 +0300 MSK m=+59.617861340" watchObservedRunningTime="2023-10-25 17:14:00.812523567 +0300 MSK m=+59.814801878" окт 25 17:14:00 podsec-master kubelet.sh[15960]: I1025 17:14:00.812953 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-podsec-master" podStartSLOduration=50.812932799 pod.CreationTimestamp="2023-10-25 17:13:10 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:13:59.622530606 +0300 MSK m=+58.624808905" watchObservedRunningTime="2023-10-25 17:14:00.812932799 +0300 MSK m=+59.815211118" окт 25 17:14:01 podsec-master kubelet.sh[15960]: E1025 17:14:01.485513 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml" окт 25 17:14:01 podsec-master kubelet.sh[15960]: I1025 17:14:01.495171 411 scope.go:115] "RemoveContainer" containerID="ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb" окт 25 17:14:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:01.497022819+03:00" level=info msg="Removing container: ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb" id=25aaf2f5-34ef-49e7-907f-0aab61852afb name=/runtime.v1.RuntimeService/RemoveContainer окт 25 17:14:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:01.503625387+03:00" level=info msg="Removed container ab0d9aaf3371237acb01cdee01299d9218e3b8e8ba5f1c5d6dc3a7d405356aeb: kube-system/kube-apiserver-podsec-master/kube-apiserver" id=25aaf2f5-34ef-49e7-907f-0aab61852afb name=/runtime.v1.RuntimeService/RemoveContainer окт 25 17:14:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:01.504587379+03:00" level=info msg="Stopping pod sandbox: 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" id=227d6bb2-8c37-41ee-a982-b279e010e2e3 name=/runtime.v1.RuntimeService/StopPodSandbox окт 25 17:14:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:01.504700117+03:00" level=info msg="Stopped pod sandbox (already stopped): 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" id=227d6bb2-8c37-41ee-a982-b279e010e2e3 name=/runtime.v1.RuntimeService/StopPodSandbox окт 25 17:14:01 podsec-master 
rootlesskit.sh[15603]: time="2023-10-25 17:14:01.505140449+03:00" level=info msg="Removing pod sandbox: 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" id=fef955db-6725-465e-b9c5-0e90541858b2 name=/runtime.v1.RuntimeService/RemovePodSandbox окт 25 17:14:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:01.506880203+03:00" level=info msg="Removed pod sandbox: 57f68c5e6f1a6b2749cd6877b440c5cf16ffd6249658aecc0e980f1d95988c95" id=fef955db-6725-465e-b9c5-0e90541858b2 name=/runtime.v1.RuntimeService/RemovePodSandbox окт 25 17:14:01 podsec-master kubelet.sh[15960]: E1025 17:14:01.517199 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:14:04 podsec-master kubelet.sh[15960]: E1025 17:14:04.729092 411 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-podsec-master\" already exists" pod="kube-system/kube-apiserver-podsec-master" окт 25 17:14:04 podsec-master kubelet.sh[15960]: I1025 17:14:04.739312 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-podsec-master" podStartSLOduration=0.739263977 pod.CreationTimestamp="2023-10-25 17:14:04 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:14:04.739083986 +0300 MSK m=+63.741362305" watchObservedRunningTime="2023-10-25 17:14:04.739263977 +0300 MSK m=+63.741542302" окт 25 17:14:11 podsec-master kubelet.sh[15960]: E1025 17:14:11.519147 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:14:13 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.341427 delay 0.043512, next query 33s окт 25 17:14:17 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.341784 delay 0.003227, next query 31s окт 25 17:14:21 podsec-master kubelet.sh[15960]: E1025 17:14:21.485682 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml" окт 25 17:14:21 podsec-master kubelet.sh[15960]: E1025 17:14:21.520601 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 
17:14:28 podsec-master kubelet.sh[15960]: I1025 17:14:28.755744 411 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" окт 25 17:14:28 podsec-master kubelet.sh[15960]: I1025 17:14:28.756324 411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" окт 25 17:14:31 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.335307 delay 0.000601, next query 31s окт 25 17:14:31 podsec-master kubelet.sh[15960]: I1025 17:14:31.275507 411 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" окт 25 17:14:31 podsec-master kubelet.sh[15960]: E1025 17:14:31.523309 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:14:41 podsec-master kubelet.sh[15960]: E1025 17:14:41.485733 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml" окт 25 17:14:41 podsec-master kubelet.sh[15960]: E1025 17:14:41.524645 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.279078 411 topology_manager.go:210] "Topology Admit Handler" podUID=482b0013-059a-4dac-a5e4-c7666fe7d4e2 podNamespace="kube-system" podName="kube-proxy-2qz9b" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.282115 411 topology_manager.go:210] "Topology Admit Handler" podUID=2481443c-c5fd-483c-ad53-6a01d1999769 podNamespace="kube-system" podName="coredns-9987f98bf-m6b7g" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.282202 411 topology_manager.go:210] "Topology Admit Handler" podUID=f0e23be4-a4da-43f8-997d-fdabec7cafe0 podNamespace="kube-flannel" podName="kube-flannel-ds-mnq5f" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.282282 411 topology_manager.go:210] "Topology Admit Handler" podUID=c26a93a0-fe02-4d3a-8e64-476813228743 podNamespace="kube-system" podName="coredns-9987f98bf-v8fzb" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.388938 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/482b0013-059a-4dac-a5e4-c7666fe7d4e2-kube-proxy\") pod \"kube-proxy-2qz9b\" (UID: 
\"482b0013-059a-4dac-a5e4-c7666fe7d4e2\") " pod="kube-system/kube-proxy-2qz9b" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389016 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2481443c-c5fd-483c-ad53-6a01d1999769-config-volume\") pod \"coredns-9987f98bf-m6b7g\" (UID: \"2481443c-c5fd-483c-ad53-6a01d1999769\") " pod="kube-system/coredns-9987f98bf-m6b7g" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389051 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/f0e23be4-a4da-43f8-997d-fdabec7cafe0-flannel-cfg\") pod \"kube-flannel-ds-mnq5f\" (UID: \"f0e23be4-a4da-43f8-997d-fdabec7cafe0\") " pod="kube-flannel/kube-flannel-ds-mnq5f" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389099 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e23be4-a4da-43f8-997d-fdabec7cafe0-xtables-lock\") pod \"kube-flannel-ds-mnq5f\" (UID: \"f0e23be4-a4da-43f8-997d-fdabec7cafe0\") " pod="kube-flannel/kube-flannel-ds-mnq5f" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389185 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2pz6\" (UniqueName: \"kubernetes.io/projected/f0e23be4-a4da-43f8-997d-fdabec7cafe0-kube-api-access-h2pz6\") pod \"kube-flannel-ds-mnq5f\" (UID: \"f0e23be4-a4da-43f8-997d-fdabec7cafe0\") " pod="kube-flannel/kube-flannel-ds-mnq5f" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389292 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svvt\" (UniqueName: \"kubernetes.io/projected/482b0013-059a-4dac-a5e4-c7666fe7d4e2-kube-api-access-7svvt\") pod \"kube-proxy-2qz9b\" (UID: \"482b0013-059a-4dac-a5e4-c7666fe7d4e2\") " pod="kube-system/kube-proxy-2qz9b" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389334 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/f0e23be4-a4da-43f8-997d-fdabec7cafe0-cni\") pod \"kube-flannel-ds-mnq5f\" (UID: \"f0e23be4-a4da-43f8-997d-fdabec7cafe0\") " pod="kube-flannel/kube-flannel-ds-mnq5f" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389372 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/482b0013-059a-4dac-a5e4-c7666fe7d4e2-xtables-lock\") pod \"kube-proxy-2qz9b\" (UID: \"482b0013-059a-4dac-a5e4-c7666fe7d4e2\") " pod="kube-system/kube-proxy-2qz9b" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389409 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56bv\" (UniqueName: \"kubernetes.io/projected/2481443c-c5fd-483c-ad53-6a01d1999769-kube-api-access-x56bv\") pod \"coredns-9987f98bf-m6b7g\" (UID: \"2481443c-c5fd-483c-ad53-6a01d1999769\") " pod="kube-system/coredns-9987f98bf-m6b7g" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389443 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/f0e23be4-a4da-43f8-997d-fdabec7cafe0-run\") pod \"kube-flannel-ds-mnq5f\" (UID: \"f0e23be4-a4da-43f8-997d-fdabec7cafe0\") " pod="kube-flannel/kube-flannel-ds-mnq5f" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389480 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jld2z\" (UniqueName: \"kubernetes.io/projected/c26a93a0-fe02-4d3a-8e64-476813228743-kube-api-access-jld2z\") pod \"coredns-9987f98bf-v8fzb\" (UID: \"c26a93a0-fe02-4d3a-8e64-476813228743\") " pod="kube-system/coredns-9987f98bf-v8fzb" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389535 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c26a93a0-fe02-4d3a-8e64-476813228743-config-volume\") pod \"coredns-9987f98bf-v8fzb\" (UID: \"c26a93a0-fe02-4d3a-8e64-476813228743\") " pod="kube-system/coredns-9987f98bf-v8fzb" окт 25 17:14:45 podsec-master kubelet.sh[15960]: I1025 17:14:45.389558 411 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/482b0013-059a-4dac-a5e4-c7666fe7d4e2-lib-modules\") pod \"kube-proxy-2qz9b\" (UID: \"482b0013-059a-4dac-a5e4-c7666fe7d4e2\") " pod="kube-system/kube-proxy-2qz9b" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.183859775+03:00" level=info msg="Running pod sandbox: kube-system/coredns-9987f98bf-m6b7g/POD" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.183948742+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.183867398+03:00" level=info msg="Running pod sandbox: kube-system/coredns-9987f98bf-v8fzb/POD" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.184041929+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.189833020+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-m6b7g Namespace:kube-system ID:8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c UID:2481443c-c5fd-483c-ad53-6a01d1999769 NetNS:/run/user/482/usernetes/crio/ns/netns/4ac5756b-baca-4902-8319-2c24f88de6a4 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.189862981+03:00" level=info msg="Adding pod kube-system_coredns-9987f98bf-m6b7g to CNI network \"cbr0\" (type=flannel)" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.190034848+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-v8fzb Namespace:kube-system ID:f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7 UID:c26a93a0-fe02-4d3a-8e64-476813228743 NetNS:/run/user/482/usernetes/crio/ns/netns/060b08ef-efec-412a-aefc-3254f9ff53b2 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.190060798+03:00" level=info msg="Adding pod 
kube-system_coredns-9987f98bf-v8fzb to CNI network \"cbr0\" (type=flannel)" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.192665081+03:00" level=info msg="NetworkStart: stopping network for sandbox 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.192725316+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-m6b7g Namespace:kube-system ID:8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c UID:2481443c-c5fd-483c-ad53-6a01d1999769 NetNS:/run/user/482/usernetes/crio/ns/netns/4ac5756b-baca-4902-8319-2c24f88de6a4 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.192746282+03:00" level=error msg="error loading cached network config: network \"cbr0\" not found in CNI cache" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.192755539+03:00" level=warning msg="falling back to loading from existing plugins on disk" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.192764155+03:00" level=info msg="Deleting pod kube-system_coredns-9987f98bf-m6b7g from CNI network \"cbr0\" (type=flannel)" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.193264162+03:00" level=info msg="NetworkStart: stopping network for sandbox f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.193310358+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-v8fzb Namespace:kube-system ID:f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7 UID:c26a93a0-fe02-4d3a-8e64-476813228743 NetNS:/run/user/482/usernetes/crio/ns/netns/060b08ef-efec-412a-aefc-3254f9ff53b2 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.193329169+03:00" level=error msg="error loading cached network config: network \"cbr0\" not found in CNI cache" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.193337163+03:00" level=warning msg="falling back to loading from existing plugins on disk" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.193345201+03:00" level=info msg="Deleting pod kube-system_coredns-9987f98bf-v8fzb from CNI network \"cbr0\" (type=flannel)" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.197961519+03:00" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198108400+03:00" level=info msg="runSandbox: cleaning up namespaces after failing to run sandbox f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198121444+03:00" level=info msg="runSandbox: deleting 
pod ID 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c from idIndex" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198167972+03:00" level=info msg="runSandbox: removing pod sandbox 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198194272+03:00" level=info msg="runSandbox: deleting container ID from idIndex for sandbox 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198210599+03:00" level=info msg="runSandbox: unmounting shmPath for sandbox 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198212056+03:00" level=info msg="runSandbox: deleting pod ID f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7 from idIndex" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198250262+03:00" level=info msg="runSandbox: removing pod sandbox f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198267291+03:00" level=info msg="runSandbox: deleting container ID from idIndex for sandbox f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198280493+03:00" level=info msg="runSandbox: unmounting shmPath for sandbox f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198311137+03:00" level=info msg="runSandbox: removing pod sandbox from storage: 8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.198876255+03:00" level=info msg="runSandbox: removing pod sandbox from storage: f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.199660371+03:00" level=info msg="runSandbox: releasing container name: k8s_POD_coredns-9987f98bf-m6b7g_kube-system_2481443c-c5fd-483c-ad53-6a01d1999769_0" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.199689267+03:00" level=info msg="runSandbox: releasing pod sandbox name: k8s_coredns-9987f98bf-m6b7g_kube-system_2481443c-c5fd-483c-ad53-6a01d1999769_0" id=ed6edcb6-fda1-4b33-92ed-27e8cdc5e3ea 
name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.199938 411 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-m6b7g_kube-system_2481443c-c5fd-483c-ad53-6a01d1999769_0(8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c): error adding pod kube-system_coredns-9987f98bf-m6b7g to CNI network \"cbr0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.199995 411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-m6b7g_kube-system_2481443c-c5fd-483c-ad53-6a01d1999769_0(8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c): error adding pod kube-system_coredns-9987f98bf-m6b7g to CNI network \"cbr0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-9987f98bf-m6b7g" окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.200028 411 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-m6b7g_kube-system_2481443c-c5fd-483c-ad53-6a01d1999769_0(8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c): error adding pod kube-system_coredns-9987f98bf-m6b7g to CNI network \"cbr0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-9987f98bf-m6b7g" окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.200090 411 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-9987f98bf-m6b7g_kube-system(2481443c-c5fd-483c-ad53-6a01d1999769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-9987f98bf-m6b7g_kube-system(2481443c-c5fd-483c-ad53-6a01d1999769)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-m6b7g_kube-system_2481443c-c5fd-483c-ad53-6a01d1999769_0(8a9c2ab5294d5897a41a7e4ca4ce80bd3034605d9d7577e8ed1d3d70f46d8f9c): error adding pod kube-system_coredns-9987f98bf-m6b7g to CNI network \\\"cbr0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-9987f98bf-m6b7g" podUID=2481443c-c5fd-483c-ad53-6a01d1999769 окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.201098603+03:00" level=info msg="runSandbox: releasing container name: k8s_POD_coredns-9987f98bf-v8fzb_kube-system_c26a93a0-fe02-4d3a-8e64-476813228743_0" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.201139021+03:00" level=info msg="runSandbox: releasing pod sandbox name: k8s_coredns-9987f98bf-v8fzb_kube-system_c26a93a0-fe02-4d3a-8e64-476813228743_0" id=bf5d27be-7ca2-4d7d-808d-4b71416bbed8 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.201348 411 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create 
pod network sandbox k8s_coredns-9987f98bf-v8fzb_kube-system_c26a93a0-fe02-4d3a-8e64-476813228743_0(f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7): error adding pod kube-system_coredns-9987f98bf-v8fzb to CNI network \"cbr0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.201409 411 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-v8fzb_kube-system_c26a93a0-fe02-4d3a-8e64-476813228743_0(f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7): error adding pod kube-system_coredns-9987f98bf-v8fzb to CNI network \"cbr0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-9987f98bf-v8fzb" окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.201435 411 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-v8fzb_kube-system_c26a93a0-fe02-4d3a-8e64-476813228743_0(f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7): error adding pod kube-system_coredns-9987f98bf-v8fzb to CNI network \"cbr0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-9987f98bf-v8fzb" окт 25 17:14:46 podsec-master kubelet.sh[15960]: E1025 17:14:46.201507 411 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-9987f98bf-v8fzb_kube-system(c26a93a0-fe02-4d3a-8e64-476813228743)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-9987f98bf-v8fzb_kube-system(c26a93a0-fe02-4d3a-8e64-476813228743)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-9987f98bf-v8fzb_kube-system_c26a93a0-fe02-4d3a-8e64-476813228743_0(f78ba8cb38a2325e1f2a92de8022b0daf7ae5888d2e60c2eb8e1026952d6bca7): error adding pod kube-system_coredns-9987f98bf-v8fzb to CNI network \\\"cbr0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-9987f98bf-v8fzb" podUID=c26a93a0-fe02-4d3a-8e64-476813228743 окт 25 17:14:46 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.325709 delay 0.043571, next query 34s окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.486591115+03:00" level=info msg="Running pod sandbox: kube-flannel/kube-flannel-ds-mnq5f/POD" id=e5830b30-0185-48a3-b084-37bc1c21d79c name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.486645925+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:46 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off. 
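This is the expected bootstrap failure for the pod network: the flannel CNI plugin reads /run/flannel/subnet.env, and that file is only written once the kube-flannel DaemonSet pod (admitted above as kube-flannel-ds-mnq5f) is actually running. Until then every coredns sandbox attempt fails with "loadFlannelSubnetEnv failed" and kubelet simply retries. The recurring "Could not process manifest file /etc/kubernetes/manifests/kube-flannel.yml" errors are related but harmless: the multi-document flannel manifest begins with a Namespace object, and the static-pod directory only accepts Pod objects, so kubelet keeps rejecting it; the DaemonSet itself is evidently created through the API, since its pod is scheduled here. Once flannel is up, the file it writes typically looks like this (illustrative content; the subnet and MTU match the delegate configuration CRI-O prints further below, and the path is as seen from inside the rootless namespaces):

    cat /run/flannel/subnet.env
    FLANNEL_NETWORK=10.244.0.0/16
    FLANNEL_SUBNET=10.244.0.1/24
    FLANNEL_MTU=65470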
окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.491724290+03:00" level=info msg="Ran pod sandbox 24223a413f5e286d1d3bbfa6035e3dd8d464984f3b720032ebc78cefd68e8dde with infra container: kube-flannel/kube-flannel-ds-mnq5f/POD" id=e5830b30-0185-48a3-b084-37bc1c21d79c name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.492550726+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/flannel:v0.19.2" id=73419dc3-ddcd-478e-a282-46afbb4f0de1 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.492718722+03:00" level=info msg="Image registry.altlinux.org/k8s-p10/flannel:v0.19.2 not found" id=73419dc3-ddcd-478e-a282-46afbb4f0de1 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.493317349+03:00" level=info msg="Pulling image: registry.altlinux.org/k8s-p10/flannel:v0.19.2" id=e91de680-d9d3-4500-a890-df5f6c030d4f name=/runtime.v1.ImageService/PullImage окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.493533990+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/flannel:v0.19.2\"" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.748736847+03:00" level=info msg="Trying to access \"registry.altlinux.org/k8s-p10/flannel:v0.19.2\"" окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.781175899+03:00" level=info msg="Running pod sandbox: kube-system/kube-proxy-2qz9b/POD" id=97decd23-3454-4324-9c95-4633d8d5d074 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.781377681+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:46 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off. 
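The flannel image is the only one not already cached, so CRI-O pulls registry.altlinux.org/k8s-p10/flannel:v0.19.2 from the registry while the kube-proxy sandbox starts immediately from the local store. On slow links this pull is what delays the CNI setup; it can be avoided by pre-pulling the image into the rootless store as u7s-admin, for example with crictl (the socket path here is an assumption, inferred from the /run/user/482/usernetes/crio paths in this log):

    crictl --runtime-endpoint unix:///run/user/482/usernetes/crio/crio.sock \
        pull registry.altlinux.org/k8s-p10/flannel:v0.19.2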
окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.786386426+03:00" level=info msg="Ran pod sandbox 529d998fbb5e2fa0b43e89aacca8626e599691ee888c838a1178481e25fc9c2c with infra container: kube-system/kube-proxy-2qz9b/POD" id=97decd23-3454-4324-9c95-4633d8d5d074 name=/runtime.v1.RuntimeService/RunPodSandbox окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.787026332+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3" id=0704294b-4201-493d-8a01-fbe91f75f4f3 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.787181900+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:50238aea4c6f7a242f2a54d922ff2459915bd52f23b24f90b81d287a95f6f6de,RepoTags:[registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-proxy@sha256:0b5407b4b2d8609f624802ac16d12806d65a0170fc2d2594a496280352a4bde5 registry.altlinux.org/k8s-p10/kube-proxy@sha256:f7997aeadf75fc571ee639fbc988ff6b0646f55b0b72f6b20dc8622b6d153a65],Size_:352779985,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:350ce844173cb559b5539f51995b7336f0268d56d0d4db4007bf8bf9567aac6c,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=0704294b-4201-493d-8a01-fbe91f75f4f3 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.787803392+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3" id=5167fa3e-7bd3-434c-9c5c-9265a683fc35 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.788026747+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:50238aea4c6f7a242f2a54d922ff2459915bd52f23b24f90b81d287a95f6f6de,RepoTags:[registry.altlinux.org/k8s-p10/kube-proxy:v1.26.3],RepoDigests:[registry.altlinux.org/k8s-p10/kube-proxy@sha256:0b5407b4b2d8609f624802ac16d12806d65a0170fc2d2594a496280352a4bde5 registry.altlinux.org/k8s-p10/kube-proxy@sha256:f7997aeadf75fc571ee639fbc988ff6b0646f55b0b72f6b20dc8622b6d153a65],Size_:352779985,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:350ce844173cb559b5539f51995b7336f0268d56d0d4db4007bf8bf9567aac6c,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=5167fa3e-7bd3-434c-9c5c-9265a683fc35 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.788611941+03:00" level=info msg="Creating container: kube-system/kube-proxy-2qz9b/kube-proxy" id=25baf7d7-50c9-4e2a-8aad-7fcbe7b9cabe name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.788697049+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:46 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/VXHJ7REYDRK77QQFHFJVE5MGJS' does not support file handles, falling back to xino=off. 
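kube-proxy is created from the locally cached v1.26.3 image; its configuration comes from the kube-proxy ConfigMap mounted earlier as the "kube-proxy" volume. With a working kubeconfig for this cluster, the proxy mode in use can be read straight from that ConfigMap:

    kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'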
окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.813880549+03:00" level=info msg="Created container f153ea23ad9fd19e37c796b7f3c430f22ef66ddb3ea95328e1c5589323a5b864: kube-system/kube-proxy-2qz9b/kube-proxy" id=25baf7d7-50c9-4e2a-8aad-7fcbe7b9cabe name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.814437278+03:00" level=info msg="Starting container: f153ea23ad9fd19e37c796b7f3c430f22ef66ddb3ea95328e1c5589323a5b864" id=cbf76031-6e3e-4bab-8609-9df0d8ea9901 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:14:46 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:46.818537144+03:00" level=info msg="Started container" PID=654 containerID=f153ea23ad9fd19e37c796b7f3c430f22ef66ddb3ea95328e1c5589323a5b864 description=kube-system/kube-proxy-2qz9b/kube-proxy id=cbf76031-6e3e-4bab-8609-9df0d8ea9901 name=/runtime.v1.RuntimeService/StartContainer sandboxID=529d998fbb5e2fa0b43e89aacca8626e599691ee888c838a1178481e25fc9c2c окт 25 17:14:47 podsec-master kubelet.sh[15960]: I1025 17:14:47.793143 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2qz9b" podStartSLOduration=23.793106211 pod.CreationTimestamp="2023-10-25 17:14:24 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:14:47.793066374 +0300 MSK m=+106.795344693" watchObservedRunningTime="2023-10-25 17:14:47.793106211 +0300 MSK m=+106.795384533" окт 25 17:14:48 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.326932 delay 0.003313, next query 30s окт 25 17:14:51 podsec-master kubelet.sh[15960]: E1025 17:14:51.525647 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.524889381+03:00" level=info msg="Pulled image: registry.altlinux.org/k8s-p10/flannel@sha256:5fb1a2308d8e962daf8fba82dd93d35d19ec39d80c1a4cb6339edff53234e2a9" id=e91de680-d9d3-4500-a890-df5f6c030d4f name=/runtime.v1.ImageService/PullImage окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.528142417+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/flannel:v0.19.2" id=53589ca7-e7e0-4721-a944-2d979f5c6d26 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.529267080+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:691044311f83bdab7633a1f7a7d4af9273c1c0184399b4ef31504f4719e693f3,RepoTags:[registry.altlinux.org/k8s-p10/flannel:v0.19.2],RepoDigests:[registry.altlinux.org/k8s-p10/flannel@sha256:5fb1a2308d8e962daf8fba82dd93d35d19ec39d80c1a4cb6339edff53234e2a9 registry.altlinux.org/k8s-p10/flannel@sha256:99cfc9797a1dee8b25a11fb50d96999f99c481f72205e98b0f1c95c01920abd9],Size_:236988975,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:a9f2639740e29b05e729c046fdd5b10632fd46ebdc74f332bbc6310035121700,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=53589ca7-e7e0-4721-a944-2d979f5c6d26 name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: 
time="2023-10-25 17:14:52.530096717+03:00" level=info msg="Creating container: kube-flannel/kube-flannel-ds-mnq5f/install-cni" id=d11966da-5c50-45e1-ab68-ad130d08c09e name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.530250965+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:52 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/UXE7FCZXQISTBVXOLVCFUYHNHZ' does not support file handles, falling back to xino=off. окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.550556434+03:00" level=info msg="Created container 416564f2e1174b09920caf28abee2da02521c8f40bb68953ede98dabb22d17ab: kube-flannel/kube-flannel-ds-mnq5f/install-cni" id=d11966da-5c50-45e1-ab68-ad130d08c09e name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.551038032+03:00" level=info msg="Starting container: 416564f2e1174b09920caf28abee2da02521c8f40bb68953ede98dabb22d17ab" id=66f28bfb-0318-484c-ba97-f555405b6779 name=/runtime.v1.RuntimeService/StartContainer окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.554482328+03:00" level=info msg="Started container" PID=783 containerID=416564f2e1174b09920caf28abee2da02521c8f40bb68953ede98dabb22d17ab description=kube-flannel/kube-flannel-ds-mnq5f/install-cni id=66f28bfb-0318-484c-ba97-f555405b6779 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24223a413f5e286d1d3bbfa6035e3dd8d464984f3b720032ebc78cefd68e8dde окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.556282839+03:00" level=info msg="CNI monitoring event WRITE \"/etc/cni/net.d/10-flannel.conflist\"" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.562390621+03:00" level=info msg="Found CNI network cbr0 (type=flannel) at /etc/cni/net.d/10-flannel.conflist" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.566734770+03:00" level=info msg="Found CNI network u7s-bridge (type=bridge) at /etc/cni/net.d/50-bridge.conf" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.571641548+03:00" level=info msg="Found CNI network 99-loopback.conf (type=loopback) at /etc/cni/net.d/99-loopback.conf" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.571662796+03:00" level=info msg="Updated default CNI network name to cbr0" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.571684380+03:00" level=info msg="CNI monitoring event WRITE \"/etc/cni/net.d/10-flannel.conflist\"" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.577267525+03:00" level=info msg="Found CNI network cbr0 (type=flannel) at /etc/cni/net.d/10-flannel.conflist" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.580433440+03:00" level=info msg="Found CNI network u7s-bridge (type=bridge) at /etc/cni/net.d/50-bridge.conf" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.583297712+03:00" level=info msg="Found CNI network 99-loopback.conf (type=loopback) at /etc/cni/net.d/99-loopback.conf" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.583321816+03:00" level=info msg="Updated default CNI network name to cbr0" окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 
17:14:52.794385705+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/flannel:v0.19.2" id=3735c03f-ea8e-4417-8262-e38b1d3751df name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.795773054+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:691044311f83bdab7633a1f7a7d4af9273c1c0184399b4ef31504f4719e693f3,RepoTags:[registry.altlinux.org/k8s-p10/flannel:v0.19.2],RepoDigests:[registry.altlinux.org/k8s-p10/flannel@sha256:5fb1a2308d8e962daf8fba82dd93d35d19ec39d80c1a4cb6339edff53234e2a9 registry.altlinux.org/k8s-p10/flannel@sha256:99cfc9797a1dee8b25a11fb50d96999f99c481f72205e98b0f1c95c01920abd9],Size_:236988975,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:a9f2639740e29b05e729c046fdd5b10632fd46ebdc74f332bbc6310035121700,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=3735c03f-ea8e-4417-8262-e38b1d3751df name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.796615721+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/flannel:v0.19.2" id=a2f2c186-f690-4f5c-ab0e-f4065471da0c name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.797943269+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:691044311f83bdab7633a1f7a7d4af9273c1c0184399b4ef31504f4719e693f3,RepoTags:[registry.altlinux.org/k8s-p10/flannel:v0.19.2],RepoDigests:[registry.altlinux.org/k8s-p10/flannel@sha256:5fb1a2308d8e962daf8fba82dd93d35d19ec39d80c1a4cb6339edff53234e2a9 registry.altlinux.org/k8s-p10/flannel@sha256:99cfc9797a1dee8b25a11fb50d96999f99c481f72205e98b0f1c95c01920abd9],Size_:236988975,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:a9f2639740e29b05e729c046fdd5b10632fd46ebdc74f332bbc6310035121700,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=a2f2c186-f690-4f5c-ab0e-f4065471da0c name=/runtime.v1.ImageService/ImageStatus окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.798781058+03:00" level=info msg="Creating container: kube-flannel/kube-flannel-ds-mnq5f/kube-flannel" id=3afa8079-aa50-430c-bd08-aca2b34c623a name=/runtime.v1.RuntimeService/CreateContainer окт 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.798891823+03:00" level=warning msg="Allowed annotations are specified for workload []" окт 25 17:14:52 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/UXE7FCZXQISTBVXOLVCFUYHNHZ' does not support file handles, falling back to xino=off. 
Oct 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.820174591+03:00" level=info msg="Created container 8dc23f5b74a3d76a3b96e0adccc0ee8dbe26ef2dee108ae3b8e8056e744dd5cc: kube-flannel/kube-flannel-ds-mnq5f/kube-flannel" id=3afa8079-aa50-430c-bd08-aca2b34c623a name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.820837940+03:00" level=info msg="Starting container: 8dc23f5b74a3d76a3b96e0adccc0ee8dbe26ef2dee108ae3b8e8056e744dd5cc" id=071f7f92-c668-4b28-ae62-a8c8e4bbd277 name=/runtime.v1.RuntimeService/StartContainer
Oct 25 17:14:52 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:52.823873525+03:00" level=info msg="Started container" PID=842 containerID=8dc23f5b74a3d76a3b96e0adccc0ee8dbe26ef2dee108ae3b8e8056e744dd5cc description=kube-flannel/kube-flannel-ds-mnq5f/kube-flannel id=071f7f92-c668-4b28-ae62-a8c8e4bbd277 name=/runtime.v1.RuntimeService/StartContainer sandboxID=24223a413f5e286d1d3bbfa6035e3dd8d464984f3b720032ebc78cefd68e8dde
Oct 25 17:14:53 podsec-master kubelet.sh[15960]: I1025 17:14:53.806067 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mnq5f" podStartSLOduration=-9.223372007048754e+09 pod.CreationTimestamp="2023-10-25 17:14:24 +0300 MSK" firstStartedPulling="2023-10-25 17:14:46.492933436 +0300 MSK m=+105.495211750" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:14:53.805918845 +0300 MSK m=+112.808197165" watchObservedRunningTime="2023-10-25 17:14:53.806021271 +0300 MSK m=+112.808299597"
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.617078090+03:00" level=info msg="Running pod sandbox: kube-system/coredns-9987f98bf-m6b7g/POD" id=0b0e348a-14a4-4c0a-9fd2-7681a0f8512f name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.617138453+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.621851477+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-m6b7g Namespace:kube-system ID:10ff4c49c2a2b3d14026821f5a5b7565034f055c972b1cb22c5be328fabb4f9f UID:2481443c-c5fd-483c-ad53-6a01d1999769 NetNS:/run/user/482/usernetes/crio/ns/netns/02d6aaad-5460-4cb6-8320-aee967baa310 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.621881446+03:00" level=info msg="Adding pod kube-system_coredns-9987f98bf-m6b7g to CNI network \"cbr0\" (type=flannel)"
Oct 25 17:14:58 podsec-master kernel: cni0: port 1(veth56a3291c) entered blocking state
Oct 25 17:14:58 podsec-master kernel: cni0: port 1(veth56a3291c) entered disabled state
Oct 25 17:14:58 podsec-master kernel: device veth56a3291c entered promiscuous mode
Oct 25 17:14:58 podsec-master kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 25 17:14:58 podsec-master kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth56a3291c: link becomes ready
Oct 25 17:14:58 podsec-master kernel: cni0: port 1(veth56a3291c) entered blocking state
Oct 25 17:14:58 podsec-master kernel: cni0: port 1(veth56a3291c) entered forwarding state
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.244.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000148f0), "name":"cbr0", "type":"bridge"}
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: delegateAdd: netconf sent to delegate plugin:
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.0.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":65470,"name":"cbr0","type":"bridge"}time="2023-10-25 17:14:58.642964090+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-m6b7g Namespace:kube-system ID:10ff4c49c2a2b3d14026821f5a5b7565034f055c972b1cb22c5be328fabb4f9f UID:2481443c-c5fd-483c-ad53-6a01d1999769 NetNS:/run/user/482/usernetes/crio/ns/netns/02d6aaad-5460-4cb6-8320-aee967baa310 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.643119804+03:00" level=info msg="Checking pod kube-system_coredns-9987f98bf-m6b7g for CNI network cbr0 (type=flannel)"
Oct 25 17:14:58 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off.
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.645254368+03:00" level=info msg="Ran pod sandbox 10ff4c49c2a2b3d14026821f5a5b7565034f055c972b1cb22c5be328fabb4f9f with infra container: kube-system/coredns-9987f98bf-m6b7g/POD" id=0b0e348a-14a4-4c0a-9fd2-7681a0f8512f name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.646219994+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/coredns:v1.9.3" id=f62d202f-992d-43e5-b87f-8458f6eaf1b7 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.646361915+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6dafeaaf74e8e7139ee77a24c6810495f6a9c0df76d8fd8090d5f95e3a1b0eee,RepoTags:[registry.altlinux.org/k8s-p10/coredns:v1.9.3],RepoDigests:[registry.altlinux.org/k8s-p10/coredns@sha256:00e9130a6bef9ba103693951da35424d4df820c885589ab4754016ed1622c07a registry.altlinux.org/k8s-p10/coredns@sha256:785c5093dc1f8ccd9d6aa05bfe93564e1888aa76ecc250500ba4c7236613bc65],Size_:173554322,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:49fb93fd41f89790bb6f64ea44ff34b893c96a9625c00d4ac38e1fa93b0a8c00,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=f62d202f-992d-43e5-b87f-8458f6eaf1b7 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.647033588+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/coredns:v1.9.3" id=6bcea532-9787-452e-8f4e-7c7d0fa5228b name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.647263255+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6dafeaaf74e8e7139ee77a24c6810495f6a9c0df76d8fd8090d5f95e3a1b0eee,RepoTags:[registry.altlinux.org/k8s-p10/coredns:v1.9.3],RepoDigests:[registry.altlinux.org/k8s-p10/coredns@sha256:00e9130a6bef9ba103693951da35424d4df820c885589ab4754016ed1622c07a registry.altlinux.org/k8s-p10/coredns@sha256:785c5093dc1f8ccd9d6aa05bfe93564e1888aa76ecc250500ba4c7236613bc65],Size_:173554322,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:49fb93fd41f89790bb6f64ea44ff34b893c96a9625c00d4ac38e1fa93b0a8c00,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=6bcea532-9787-452e-8f4e-7c7d0fa5228b name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.648041395+03:00" level=info msg="Creating container: kube-system/coredns-9987f98bf-m6b7g/coredns" id=c4988791-49ea-4c33-bfc2-e0b35114ee85 name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.648147215+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:14:58 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/VXXZYXMYOI2TWYI5SCZTMI6MIX' does not support file handles, falling back to xino=off.
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.669815632+03:00" level=info msg="Created container cf5baea820de23bb81351914f4be25488c3fea05389301a3b604d94ad31adc0c: kube-system/coredns-9987f98bf-m6b7g/coredns" id=c4988791-49ea-4c33-bfc2-e0b35114ee85 name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.670361514+03:00" level=info msg="Starting container: cf5baea820de23bb81351914f4be25488c3fea05389301a3b604d94ad31adc0c" id=18ea59f9-8fc2-4c8e-9697-923e9d4b3c06 name=/runtime.v1.RuntimeService/StartContainer
Oct 25 17:14:58 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:14:58.673684317+03:00" level=info msg="Started container" PID=911 containerID=cf5baea820de23bb81351914f4be25488c3fea05389301a3b604d94ad31adc0c description=kube-system/coredns-9987f98bf-m6b7g/coredns id=18ea59f9-8fc2-4c8e-9697-923e9d4b3c06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=10ff4c49c2a2b3d14026821f5a5b7565034f055c972b1cb22c5be328fabb4f9f
Oct 25 17:14:58 podsec-master kubelet.sh[15960]: I1025 17:14:58.812767 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-9987f98bf-m6b7g" podStartSLOduration=34.812716546 pod.CreationTimestamp="2023-10-25 17:14:24 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:14:58.812382642 +0300 MSK m=+117.814660957" watchObservedRunningTime="2023-10-25 17:14:58.812716546 +0300 MSK m=+117.814994869"
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.617155708+03:00" level=info msg="Running pod sandbox: kube-system/coredns-9987f98bf-v8fzb/POD" id=56fa7211-b23c-4be1-9ba6-bbff09418005 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.617224389+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.622399120+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-v8fzb Namespace:kube-system ID:a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b UID:c26a93a0-fe02-4d3a-8e64-476813228743 NetNS:/run/user/482/usernetes/crio/ns/netns/cffaf3d4-b9d5-45b9-a97b-cda2f0827691 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.622444623+03:00" level=info msg="Adding pod kube-system_coredns-9987f98bf-v8fzb to CNI network \"cbr0\" (type=flannel)"
Oct 25 17:15:00 podsec-master kernel: cni0: port 2(vethca543039) entered blocking state
Oct 25 17:15:00 podsec-master kernel: cni0: port 2(vethca543039) entered disabled state
Oct 25 17:15:00 podsec-master kernel: device vethca543039 entered promiscuous mode
Oct 25 17:15:00 podsec-master kernel: cni0: port 2(vethca543039) entered blocking state
Oct 25 17:15:00 podsec-master kernel: cni0: port 2(vethca543039) entered forwarding state
Oct 25 17:15:00 podsec-master kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethca543039: link becomes ready
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.244.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000148f0), "name":"cbr0", "type":"bridge"}
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: delegateAdd: netconf sent to delegate plugin:
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.0.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":65470,"name":"cbr0","type":"bridge"}time="2023-10-25 17:15:00.640389596+03:00" level=info msg="Got pod network &{Name:coredns-9987f98bf-v8fzb Namespace:kube-system ID:a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b UID:c26a93a0-fe02-4d3a-8e64-476813228743 NetNS:/run/user/482/usernetes/crio/ns/netns/cffaf3d4-b9d5-45b9-a97b-cda2f0827691 Networks:[] RuntimeConfig:map[cbr0:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}] Aliases:map[]}"
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.640504612+03:00" level=info msg="Checking pod kube-system_coredns-9987f98bf-v8fzb for CNI network cbr0 (type=flannel)"
Oct 25 17:15:00 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/ESEP4RMP3LO2EAO6I6JPXTY52A' does not support file handles, falling back to xino=off.
Oct 25 17:15:00 podsec-master kubelet.sh[15960]: W1025 17:15:00.642445 411 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods/burstable/podc26a93a0-fe02-4d3a-8e64-476813228743/crio-a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b WatchSource:0}: Error finding container a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b: Status 404 returned error can't find the container with id a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.642999189+03:00" level=info msg="Ran pod sandbox a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b with infra container: kube-system/coredns-9987f98bf-v8fzb/POD" id=56fa7211-b23c-4be1-9ba6-bbff09418005 name=/runtime.v1.RuntimeService/RunPodSandbox
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.643918267+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/coredns:v1.9.3" id=d1323068-57ee-49ec-9ecd-896273a1a6fa name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.644085003+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6dafeaaf74e8e7139ee77a24c6810495f6a9c0df76d8fd8090d5f95e3a1b0eee,RepoTags:[registry.altlinux.org/k8s-p10/coredns:v1.9.3],RepoDigests:[registry.altlinux.org/k8s-p10/coredns@sha256:00e9130a6bef9ba103693951da35424d4df820c885589ab4754016ed1622c07a registry.altlinux.org/k8s-p10/coredns@sha256:785c5093dc1f8ccd9d6aa05bfe93564e1888aa76ecc250500ba4c7236613bc65],Size_:173554322,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:49fb93fd41f89790bb6f64ea44ff34b893c96a9625c00d4ac38e1fa93b0a8c00,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=d1323068-57ee-49ec-9ecd-896273a1a6fa name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.644658651+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/coredns:v1.9.3" id=3dd2f8f7-c5a3-42dc-a36d-fcbde27e2f75 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.644867363+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6dafeaaf74e8e7139ee77a24c6810495f6a9c0df76d8fd8090d5f95e3a1b0eee,RepoTags:[registry.altlinux.org/k8s-p10/coredns:v1.9.3],RepoDigests:[registry.altlinux.org/k8s-p10/coredns@sha256:00e9130a6bef9ba103693951da35424d4df820c885589ab4754016ed1622c07a registry.altlinux.org/k8s-p10/coredns@sha256:785c5093dc1f8ccd9d6aa05bfe93564e1888aa76ecc250500ba4c7236613bc65],Size_:173554322,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:49fb93fd41f89790bb6f64ea44ff34b893c96a9625c00d4ac38e1fa93b0a8c00,org.opencontainers.image.base.name: registry.altlinux.org/alt/alt:p10,},},Pinned:false,},Info:map[string]string{},}" id=3dd2f8f7-c5a3-42dc-a36d-fcbde27e2f75 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.645594539+03:00" level=info msg="Creating container: kube-system/coredns-9987f98bf-v8fzb/coredns" id=804ab83b-90d0-470e-83f6-dd5693dc38ea name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.645756920+03:00" level=warning msg="Allowed annotations are specified for workload []"
Oct 25 17:15:00 podsec-master kernel: overlayfs: fs on '/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay/l/VXXZYXMYOI2TWYI5SCZTMI6MIX' does not support file handles, falling back to xino=off.
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.665086215+03:00" level=info msg="Created container 42658fc90acaafe37305f68d424301ed3d0f8c8253023fd2044d6eb3fdf47ee6: kube-system/coredns-9987f98bf-v8fzb/coredns" id=804ab83b-90d0-470e-83f6-dd5693dc38ea name=/runtime.v1.RuntimeService/CreateContainer
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.665680672+03:00" level=info msg="Starting container: 42658fc90acaafe37305f68d424301ed3d0f8c8253023fd2044d6eb3fdf47ee6" id=487d8f78-189a-4a58-855b-19439673dca3 name=/runtime.v1.RuntimeService/StartContainer
Oct 25 17:15:00 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:15:00.669324018+03:00" level=info msg="Started container" PID=1014 containerID=42658fc90acaafe37305f68d424301ed3d0f8c8253023fd2044d6eb3fdf47ee6 description=kube-system/coredns-9987f98bf-v8fzb/coredns id=487d8f78-189a-4a58-855b-19439673dca3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a40a1034e6779004a4e51ec55d136293163696c3403235dfc7a6def2756b631b
Oct 25 17:15:00 podsec-master kubelet.sh[15960]: I1025 17:15:00.819183 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-9987f98bf-v8fzb" podStartSLOduration=36.81914875 pod.CreationTimestamp="2023-10-25 17:14:24 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:15:00.819079322 +0300 MSK m=+119.821357641" watchObservedRunningTime="2023-10-25 17:15:00.81914875 +0300 MSK m=+119.821427069"
Oct 25 17:15:01 podsec-master kubelet.sh[15960]: E1025 17:15:01.485375 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:15:01 podsec-master kubelet.sh[15960]: E1025 17:15:01.527109 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:15:01 podsec-master kubelet.sh[15960]: I1025 17:15:01.821354 411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-podsec-master" podStartSLOduration=1.8213143889999999 pod.CreationTimestamp="2023-10-25 17:15:00 +0300 MSK" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-25 17:15:01.624973088 +0300 MSK m=+120.627251412" watchObservedRunningTime="2023-10-25 17:15:01.821314389 +0300 MSK m=+120.823592703"
Oct 25 17:15:02 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.320607 delay 0.000525, next query 30s
Oct 25 17:15:05 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.319768 delay 0.034483, next query 6s
Oct 25 17:15:11 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.316908 delay 0.034472, next query 9s
Oct 25 17:15:11 podsec-master kubelet.sh[15960]: E1025 17:15:11.528790 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:15:18 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.312819 delay 0.003154, next query 33s
Oct 25 17:15:20 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.312570 delay 0.034506, next query 7s
Oct 25 17:15:20 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.309501 delay 0.043536, next query 34s
Oct 25 17:15:21 podsec-master kubelet.sh[15960]: E1025 17:15:21.485956 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:15:21 podsec-master kubelet.sh[15960]: E1025 17:15:21.530547 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:15:27 podsec-master ntpd[2620]: peer 91.206.16.3 now valid
Oct 25 17:15:27 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.309349 delay 0.034523, next query 8s
Oct 25 17:15:31 podsec-master kubelet.sh[15960]: E1025 17:15:31.532226 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:15:32 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.306341 delay 0.000557, next query 30s
Oct 25 17:15:35 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.305395 delay 0.034426, next query 8s
Oct 25 17:15:41 podsec-master kubelet.sh[15960]: E1025 17:15:41.485755 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:15:41 podsec-master kubelet.sh[15960]: E1025 17:15:41.534337 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:15:43 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.301660 delay 0.034488, next query 7s
Oct 25 17:15:50 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.298293 delay 0.034342, next query 31s
Oct 25 17:15:51 podsec-master kubelet.sh[15960]: E1025 17:15:51.536289 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:15:51 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.297045 delay 0.003152, next query 30s
Oct 25 17:15:54 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.293266 delay 0.043647, next query 32s
Oct 25 17:16:01 podsec-master kubelet.sh[15960]: E1025 17:16:01.485088 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:16:01 podsec-master kubelet.sh[15960]: E1025 17:16:01.537816 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:16:02 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.292053 delay 0.000587, next query 30s
Oct 25 17:16:11 podsec-master kubelet.sh[15960]: E1025 17:16:11.539063 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:16:21 podsec-master kubelet.sh[15960]: E1025 17:16:21.485883 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:16:21 podsec-master kubelet.sh[15960]: E1025 17:16:21.541231 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:16:21 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.282739 delay 0.003227, next query 34s
Oct 25 17:16:21 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.283516 delay 0.034383, next query 34s
Oct 25 17:16:26 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.278032 delay 0.043641, next query 34s
Oct 25 17:16:31 podsec-master kubelet.sh[15960]: E1025 17:16:31.542605 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:16:32 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.277706 delay 0.000488, next query 31s
Oct 25 17:16:32 podsec-master ntpd[2619]: adjusting local clock by 0.327013s
Oct 25 17:16:32 podsec-master ntpd[2619]: interval 251.783 olddelta 0.298 (delta - olddelta) 0.029
Oct 25 17:16:32 podsec-master ntpd[2619]: error_ppm 58.457 freq_delta 3831039 tick_delta 0
Oct 25 17:16:39 podsec-master systemd[15444]: Created slice background.slice - User Background Tasks Slice.
Oct 25 17:16:39 podsec-master systemd[15444]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Oct 25 17:16:39 podsec-master systemd[15444]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Oct 25 17:16:41 podsec-master kubelet.sh[15960]: E1025 17:16:41.485672 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:16:41 podsec-master kubelet.sh[15960]: E1025 17:16:41.543662 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:16:51 podsec-master kubelet.sh[15960]: E1025 17:16:51.545188 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:16:55 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.265317 delay 0.003190, next query 32s
Oct 25 17:16:55 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.266057 delay 0.034381, next query 32s
Oct 25 17:17:00 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.260232 delay 0.043664, next query 34s
Oct 25 17:17:01 podsec-master kubelet.sh[15960]: E1025 17:17:01.486046 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:17:01 podsec-master kubelet.sh[15960]: E1025 17:17:01.547153 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:17:03 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.261174 delay 0.000343, next query 32s
Oct 25 17:17:11 podsec-master kubelet.sh[15960]: E1025 17:17:11.548830 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:17:21 podsec-master kubelet.sh[15960]: E1025 17:17:21.488012 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:17:21 podsec-master kubelet.sh[15960]: E1025 17:17:21.550691 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:17:27 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.248181 delay 0.002995, next query 31s
Oct 25 17:17:27 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.248542 delay 0.033488, next query 31s
Oct 25 17:17:31 podsec-master kubelet.sh[15960]: E1025 17:17:31.552694 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:17:34 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.242044 delay 0.043729, next query 30s
Oct 25 17:17:35 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.244046 delay 0.000412, next query 30s
Oct 25 17:17:41 podsec-master kubelet.sh[15960]: E1025 17:17:41.485536 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:17:41 podsec-master kubelet.sh[15960]: E1025 17:17:41.554376 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:17:51 podsec-master kubelet.sh[15960]: E1025 17:17:51.555934 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:17:58 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.231695 delay 0.003287, next query 32s
Oct 25 17:17:58 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.232030 delay 0.033476, next query 34s
Oct 25 17:18:01 podsec-master kubelet.sh[15960]: E1025 17:18:01.486155 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:18:01 podsec-master kubelet.sh[15960]: E1025 17:18:01.488732 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:18:01 podsec-master kubelet.sh[15960]: E1025 17:18:01.488765 411 kubelet.go:1382] "Image garbage collection failed multiple times in a row" err="invalid capacity 0 on image filesystem"
Oct 25 17:18:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:18:01.496355724+03:00" level=info msg="Checking image status: registry.altlinux.org/k8s-p10/pause:3.9" id=5395aa5d-ab9c-41ae-940e-d07bce5bfea5 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:18:01 podsec-master rootlesskit.sh[15603]: time="2023-10-25 17:18:01.496544192+03:00" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e5ea918be71e188ac31bdec1ce20c8ab8dcfd7373f6d89525691be2c9227054,RepoTags:[registry.altlinux.org/k8s-p10/pause:3.9],RepoDigests:[registry.altlinux.org/k8s-p10/pause@sha256:60eaff526530c6133f8367ea53d0f78880e437fd9be6008d366c7341c9e3e5a9 registry.altlinux.org/k8s-p10/pause@sha256:f14315ad18ed3dc1672572c3af9f6b28427cf036a43cc00ebac885e919b59548],Size_:753507,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{org.opencontainers.image.base.digest: sha256:d73f1c8561f2df848a2403b7dca50b9664628029c89a82d4fa1ea137c9534738,org.opencontainers.image.base.name: ,},},Pinned:false,},Info:map[string]string{},}" id=5395aa5d-ab9c-41ae-940e-d07bce5bfea5 name=/runtime.v1.ImageService/ImageStatus
Oct 25 17:18:01 podsec-master kubelet.sh[15960]: E1025 17:18:01.559820 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:18:01 podsec-master kubelet.sh[15960]: E1025 17:18:01.907578 411 container_manager_linux.go:515] "Failed to ensure process in container with oom score" err="failed to apply oom score -999 to PID 411: write /proc/411/oom_score_adj: permission denied"
Oct 25 17:18:05 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.226084 delay 0.043629, next query 34s
Oct 25 17:18:05 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.228604 delay 0.000579, next query 30s
Oct 25 17:18:11 podsec-master kubelet.sh[15960]: E1025 17:18:11.560723 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:18:21 podsec-master kubelet.sh[15960]: E1025 17:18:21.485707 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:18:21 podsec-master kubelet.sh[15960]: E1025 17:18:21.561569 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:18:25 podsec-master ntpd[2620]: reply from 176.215.178.239: offset 0.214937 delay 0.003027, next query 7s
Oct 25 17:18:30 podsec-master ntpd[2620]: reply from 194.190.168.1: offset 0.215046 delay 0.003174, next query 33s
Oct 25 17:18:31 podsec-master kubelet.sh[15960]: E1025 17:18:31.563329 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:18:32 podsec-master ntpd[2620]: reply from 176.215.178.239: offset 0.211129 delay 0.003065, next query 9s
Oct 25 17:18:32 podsec-master ntpd[2620]: reply from 91.206.16.3: offset 0.214354 delay 0.033400, next query 31s
Oct 25 17:18:35 podsec-master ntpd[2620]: reply from 10.88.7.1: offset 0.212574 delay 0.000623, next query 34s
Oct 25 17:18:39 podsec-master ntpd[2620]: reply from 162.159.200.123: offset 0.207866 delay 0.043711, next query 34s
Oct 25 17:18:41 podsec-master ntpd[2620]: reply from 176.215.178.239: offset 0.206335 delay 0.002928, next query 6s
Oct 25 17:18:41 podsec-master kubelet.sh[15960]: E1025 17:18:41.486077 411 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifests/kube-flannel.yml: couldn't parse as pod(invalid pod: &core.Namespace{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-flannel\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"kubernetes.io/metadata.name\":\"kube-flannel\", \"pod-security.kubernetes.io/enforce\":\"privileged\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.NamespaceSpec{Finalizers:[]core.FinalizerName(nil)}, Status:core.NamespaceStatus{Phase:\"Active\", Conditions:[]core.NamespaceCondition(nil)}}), please check config file" path="/etc/kubernetes/manifests/kube-flannel.yml"
Oct 25 17:18:41 podsec-master kubelet.sh[15960]: E1025 17:18:41.564779 411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/sda2\"" mountpoint="/var/lib/u7s-admin/.local/share/usernetes/containers/storage/overlay-images"
Oct 25 17:18:47 podsec-master ntpd[2620]: peer 176.215.178.239 now valid
Oct 25 17:18:47 podsec-master ntpd[2620]: reply from 176.215.178.239: offset 0.203130 delay 0.003000, next query 8s
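Note: the other recurring kubelet error above (cri_stats_provider.go:455, roughly every 10 seconds) says the stats provider cannot map the rootless image store under .../containers/storage/overlay-images to filesystem info for the device it resolved (/dev/sda2); that is consistent with the "invalid capacity 0 on image filesystem" garbage-collection error at 17:18:01. Below is a minimal, read-only sketch of how one might check which mount actually backs that path when run as the u7s-admin user; the path is copied from the log, the script itself is illustrative.

    #!/usr/bin/env python3
    # Minimal sketch: show which mount entry backs the path that
    # cri_stats_provider complains about, by matching the path's device
    # number against /proc/self/mountinfo. Read-only; changes nothing.
    import os

    PATH = ("/var/lib/u7s-admin/.local/share/usernetes/"
            "containers/storage/overlay-images")  # path taken from the log

    st = os.stat(PATH)
    majmin = f"{os.major(st.st_dev)}:{os.minor(st.st_dev)}"
    print(f"{PATH} is on device {majmin}")

    with open("/proc/self/mountinfo") as f:
        for line in f:
            fields = line.split()
            if fields[2] == majmin:
                sep = fields.index("-")
                print("mount point:", fields[4])
                print("fstype:     ", fields[sep + 1])
                print("source:     ", fields[sep + 2])
                break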