alirezaghey
Repos: 65 · Followers: 10 · Following: 3

Solutions to a variety of leetcode problems.

3
1

Solutions to the Advent of Code 2021 event

0
0

Solutions and notes to the book "Haskell from First Principles"

0
0

personal reminders for bash

0
0

Events

issue comment
Add an option to skip searching Maven artifacts in case search.maven.org is down

Facing the same issue. I'm also interested in whether trivy caches this information between runs or not.

Created at 1 week ago
create branch
alirezaghey create branch feature/dynamic-master-number
Created at 3 weeks ago

initial microk8s auto rollout

it'll use the default stable channel on snapcraft
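
For context, snap resolves to the default latest/stable channel when no --channel flag is given, so the rollout roughly amounts to something like this (a sketch only, not the actual automation):

# no --channel flag, so snap installs from latest/stable
sudo snap install microk8s --classic
# wait until the node reports ready
sudo microk8s status --wait-ready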

Created at 3 weeks ago

fix webserver bugs

Created at 3 weeks ago

webserver for cluster rollout

Created at 3 weeks ago
create branch
alirezaghey create branch main
Created at 3 weeks ago
create repository
alirezaghey create repository
Created at 3 weeks ago
Created at 1 month ago
issue comment
kubeadm 1.25.0 deployment over yaml pull error coredns:v1.93

This problem still exists as of K8s v1.25.4. One thing to note is that I'm joining v1.25.4 control planes to an originally v1.24.8 control plane. The workaround for me was the following:

crictl pull k8s.gcr.io/coredns/coredns:v1.9.3
ctr --namespace=k8s.io image tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3

I'm not sure if the problem would exist on an originally v1.25.4 control plane if we tried to add v1.26.0 control planes. One thing I'm sure about is that this problem didn't arise when I created pure v1.24 or v1.25 clusters.
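
In case it helps anyone else, a quick way to confirm the retag took effect (assuming containerd as the runtime):

# both the original and the re-tagged name should be listed
ctr --namespace=k8s.io images ls | grep coredns
crictl images | grep coredns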

Created at 1 month ago
started
Created at 1 month ago
started
Created at 1 month ago
Created at 2 months ago
pull request opened
fix mismatch in file names

Fixed small mismatch between the file name in the config file and its reference in the description below it.

Created at 2 months ago

fix mismatch in file names

Created at 2 months ago
issue comment
Cilium CNI with k8s does not work with SELinux in permissive mode

Hi @tormath1

Sorry for the incomplete issue.

$ cat /etc/os-release 
NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
VERSION=3227.2.4
VERSION_ID=3227.2.4
BUILD_ID=2022-10-27-1321
SYSEXT_LEVEL=1.0
PRETTY_NAME="Flatcar Container Linux by Kinvolk 3227.2.4 (Oklo)"
ANSI_COLOR="38;5;75"
HOME_URL="https://flatcar.org/"
BUG_REPORT_URL="https://issues.flatcar.org"
FLATCAR_BOARD="amd64-usr"
CPE_NAME="cpe:2.3:o:flatcar-linux:flatcar_linux:3227.2.4:*:*:*:*:*:*:*"
$ cilium version
cilium-cli: v0.12.4 compiled with go1.19.1 on linux/amd64
cilium image (default): v1.12.2
cilium image (stable): v1.12.3
cilium image (running): v1.12.2

Part of the journalctl -u kubelet output. I think the "invalid argument: unknown" errors are related to SELinux:

...
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: E1103 10:57:39.287131   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.310984   15588 topology_manager.go:205] "Topology Admit Handler"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363545   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-bpf-maps\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363622   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-cilium-cgroup\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363695   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-host-proc-sys-net\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363726   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-cni-path\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363775   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-xtables-lock\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363805   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-clustermesh-secrets\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363848   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-cilium-config-path\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363901   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-hubble-tls\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363932   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-etc-cni-netd\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.363976   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rxc\" (UniqueName: \"kubernetes.io/projected/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-kube-api-access-59rxc\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.364002   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-cilium-run\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.364043   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-hostproc\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.364070   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-host-proc-sys-kernel\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.364098   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fffeec14-b0c2-424e-a8e6-0f1ab5924c32-lib-modules\") pod \"cilium-4dctx\" (UID: \"fffeec14-b0c2-424e-a8e6-0f1ab5924c32\") " pod="kube-system/cilium-4dctx"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.623612   15588 topology_manager.go:205] "Topology Admit Handler"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.665804   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdxk\" (UniqueName: \"kubernetes.io/projected/ab0dc6c1-393c-4882-a555-b412b1d4ca90-kube-api-access-frdxk\") pod \"cilium-operator-bc4d5b54-tlll9\" (UID: \"ab0dc6c1-393c-4882-a555-b412b1d4ca90\") " pod="kube-system/cilium-operator-bc4d5b54-tlll9"
Nov 03 10:57:39 az-k8s-master1.novalocal kubelet[15588]: I1103 10:57:39.666386   15588 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab0dc6c1-393c-4882-a555-b412b1d4ca90-cilium-config-path\") pod \"cilium-operator-bc4d5b54-tlll9\" (UID: \"ab0dc6c1-393c-4882-a555-b412b1d4ca90\") " pod="kube-system/cilium-operator-bc4d5b54-tlll9"
Nov 03 10:57:44 az-k8s-master1.novalocal kubelet[15588]: E1103 10:57:44.298741   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:57:49 az-k8s-master1.novalocal kubelet[15588]: E1103 10:57:49.300708   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:57:54 az-k8s-master1.novalocal kubelet[15588]: E1103 10:57:54.302119   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:57:59 az-k8s-master1.novalocal kubelet[15588]: E1103 10:57:59.304829   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:58:04 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:04.306102   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:58:08 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:08.766804   15588 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f"
Nov 03 10:58:08 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:08.767075   15588 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.2@sha256:986f8b04cfdb35cf714701e58e35da0ee63da2b8a048ab596ccb49de58d5ba36,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Nov 03 10:58:08 az-k8s-master1.novalocal kubelet[15588]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Nov 03 10:58:08 az-k8s-master1.novalocal kubelet[15588]: rm /hostbin/cilium-mount
Nov 03 10:58:08 az-k8s-master1.novalocal kubelet[15588]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-59rxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4dctx_kube-system(fffeec14-b0c2-424e-a8e6-0f1ab5924c32): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Nov 03 10:58:08 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:08.767242   15588 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4dctx" podUID=fffeec14-b0c2-424e-a8e6-0f1ab5924c32
Nov 03 10:58:09 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:09.307343   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: W1103 10:58:11.380170   15588 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfffeec14_b0c2_424e_a8e6_0f1ab5924c32.slice/cri-containerd-b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f.scope WatchSource:0}: task b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f not found: not found
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:11.705806   15588 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e71373372c645ba281a4a1fc5b820a19953f257de59ca8d1b53a4a4a75b3e0e1"
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:11.706747   15588 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.2@sha256:986f8b04cfdb35cf714701e58e35da0ee63da2b8a048ab596ccb49de58d5ba36,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: rm /hostbin/cilium-mount
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-59rxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4dctx_kube-system(fffeec14-b0c2-424e-a8e6-0f1ab5924c32): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:11.706838   15588 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4dctx" podUID=fffeec14-b0c2-424e-a8e6-0f1ab5924c32
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: I1103 10:58:11.793081   15588 scope.go:115] "RemoveContainer" containerID="b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f"
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: I1103 10:58:11.794325   15588 scope.go:115] "RemoveContainer" containerID="b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f"
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:11.797865   15588 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f\": container is already in removing state" containerID="b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f"
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:11.797946   15588 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b9de8946329cacabab9d6a1ed07cb6b33ad0a6b1c7aecf026712ae81d2f3553f": container is already in removing state; Skipping pod "cilium-4dctx_kube-system(fffeec14-b0c2-424e-a8e6-0f1ab5924c32)"
Nov 03 10:58:11 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:11.798419   15588 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-4dctx_kube-system(fffeec14-b0c2-424e-a8e6-0f1ab5924c32)\"" pod="kube-system/cilium-4dctx" podUID=fffeec14-b0c2-424e-a8e6-0f1ab5924c32
Nov 03 10:58:12 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:12.799302   15588 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-4dctx_kube-system(fffeec14-b0c2-424e-a8e6-0f1ab5924c32)\"" pod="kube-system/cilium-4dctx" podUID=fffeec14-b0c2-424e-a8e6-0f1ab5924c32
Nov 03 10:58:14 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:14.308851   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:58:14 az-k8s-master1.novalocal kubelet[15588]: W1103 10:58:14.672607   15588 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfffeec14_b0c2_424e_a8e6_0f1ab5924c32.slice/cri-containerd-e71373372c645ba281a4a1fc5b820a19953f257de59ca8d1b53a4a4a75b3e0e1.scope WatchSource:0}: task e71373372c645ba281a4a1fc5b820a19953f257de59ca8d1b53a4a4a75b3e0e1 not found: not found
Nov 03 10:58:19 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:19.311049   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:58:24 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:24.313193   15588 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 03 10:58:29 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:29.098670   15588 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2a687445463a9dc2e66a1a5e3361e9a4380e6b32d2fe067a91d2c3f288230212"
Nov 03 10:58:29 az-k8s-master1.novalocal kubelet[15588]: E1103 10:58:29.098828   15588 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.2@sha256:986f8b04cfdb35cf714701e58e35da0ee63da2b8a048ab596ccb49de58d5ba36,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
...

Let me know if I can provide any other information. I'm a novice with K8s and Flatcar, so sorry if the information isn't what you expected.

I just saw the update you posted, but I'm posting this anyway for what it's worth :).

Created at 2 months ago
opened issue
Cilium CNI with k8s does not work with SELinux in permissive mode

Description

Cilium CNI with k8s does not work with SELinux in permissive mode.

Impact

You need to disable SELinux for Cilium to work.

Environment and steps to reproduce

  1. Set-up: Latest Flatcar Linux with k8s and the latest Cilium CNI plugin.
  2. Task: Trying to bring up the CNI plugin

Expected behavior

Expected Cilium to run with SELinux in permissive mode.

Additional information

Disabling SELinux does the trick. Probably the necessary policies and labels are not in place. Related to #673
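
For reference, a rough sketch of checking and switching the mode with standard SELinux tooling (assuming the usual /etc/selinux/config layout; Flatcar specifics may differ):

# show the current mode (Enforcing / Permissive / Disabled)
getenforce
# turn enforcement off for the running system
sudo setenforce 0
# persist across reboots
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config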

Created at 2 months ago
issue comment
[RFE/Fix]: Smooth out SELinux rough edges in Flatcar

Note to internet strangers who may bump into this thread: if you try to install k8s with the Cilium CNI on Flatcar, it won't work with the current SELinux implementation, even in permissive mode. Disable SELinux. Cheers!

Created at 2 months ago
pull request opened
update dead link to flatcar documentation

closes #243

Updated the dead link in https://www.flatcar.org/faq that pointed to https://kinvolk.io/docs/flatcar-container-linux so that it now points to https://flatcar-linux.org/docs/latest/

How to use

Click on the link and it leads you to Flatcar's docs :)

Testing done

Manually tested and the new link works.
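
Roughly, the manual check amounts to the new URL answering with a success (or redirect) status, e.g.:

# expect 200 (or a 3xx that curl -L follows) for the new docs URL
curl -sIL -o /dev/null -w '%{http_code}\n' https://flatcar-linux.org/docs/latest/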

Created at 2 months ago

update dead link to flatcar documentation

closes #243

Created at 2 months ago
FAQ link to Flatcar docs is 404

Description

The link on Flatcar's FAQ page is a 404. It points to https://kinvolk.io/docs/flatcar-container-linux, which doesn't exist (anymore).

Impact

Dead link.

Environment and steps to reproduce

Head over to https://www.flatcar.org/faq and click the documentation link under: If the image is immutable, how....
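
The breakage can also be seen from the command line (assuming the old URL still returns a 404 rather than redirecting):

# prints the HTTP status of the dead link
curl -sI -o /dev/null -w '%{http_code}\n' https://kinvolk.io/docs/flatcar-container-linux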

Expected behavior

Working link to the docs.

Solution

Change the link in https://github.com/flatcar/flatcar-website/blame/master/content/faq.md#L35 to point to https://flatcar-linux.org/docs/latest/

Created at 2 months ago