Running Kubernetes 1.19 on Fedora 33 with CRI-O 1.19

swap-on-zram is enabled by default on Fedora 33

First, zram swap has to be dealt with. Fedora 33 ships with zram swap enabled by default, and it reportedly kicks in when memory runs low. I handled it by running dnf remove zram-generator-defaults followed by swapoff -a.
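After removing the generator, it is easy to confirm that no zram swap device remains active. A minimal check, assuming the standard util-linux tools are installed:

```shell
# Check whether any zram swap device is still active.
# (The removal itself: dnf remove zram-generator-defaults && swapoff -a)
if swapon --show --noheadings 2>/dev/null | grep -q zram; then
    echo "zram swap still active"
else
    echo "no zram swap"
fi
```

zramctl (also from util-linux) additionally lists any remaining /dev/zram devices.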


Watch out: Control Group v2 is also the default

This time I decided to run Kubernetes on Fedora 33 with CRI-O. The installation itself is just a matter of following the steps in the guide below.

One difference not covered there: I made sure the kubelet and CRI-O use the same cgroup driver.

# cat /etc/sysconfig/kubelet 
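(The file contents are omitted above.) As a sketch of what aligning both sides on the systemd cgroup driver can look like; the exact flag and key names here are assumptions based on Kubernetes 1.19 and CRI-O conventions, not the author's actual files:

```ini
# /etc/sysconfig/kubelet (assumed example)
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

# /etc/crio/crio.conf, under the [crio.runtime] section (assumed example)
cgroup_manager = "systemd"
```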

With all of that done, you run kubeadm init as usual, and it fails.

# kubeadm init --kubernetes-version 1.19.4 --pod-network-cidr= --control-plane-endpoint=
    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


# strace kubeadm init --kubernetes-version 1.19.4 --pod-network-cidr= --control-plane-endpoint=

connect(7, {sa_family=AF_INET, sin_port=htons(6443), sin_addr=inet_addr("")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3200214296, u64=140070673671448}}) = 0
futex(0xc00005c548, FUTEX_WAKE_PRIVATE, 1) = 1
getsockopt(7, SOL_SOCKET, SO_ERROR, [ECONNREFUSED], [4]) = 0
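The ECONNREFUSED above simply means nothing is listening on the API server port yet. A quick way to check by hand, assuming iproute2's ss is installed:

```shell
# Is anything listening on the kube-apiserver port (6443)?
ss -tln 2>/dev/null | grep -q ':6443' \
    && echo "something is listening on 6443" \
    || echo "nothing listening on 6443"
```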

Running journalctl, there it was in plain text: "this version of runc doesn't work on cgroups v2". You. Again. runC.

# journalctl -xeu kubelet

 ("kube-controller-manager-fedora33no2_kube-system(e95d6f5518631c3f14475cf585810a24)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-fedora33no2_kube-system(e95d6f5518631c3f14475cf585810a24)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-fedora33no2_kube-system(e95d6f5518631c3f14475cf585810a24)\"  failed: rpc error: code = Unknown desc = container create failed: time=\"2020-12-01T08:57:30+09:00\" level=error msg=\"this version of runc doesn't work on cgroups v2\"\n"


Looking at CRI-O's configuration, crun appears to be optional for CRI-O, unlike with Podman; it is not pulled in as an install-time dependency either. The config does mention crun, but the section is commented out. Switching back to Control Group v1 and sticking with runC was an option, but this time I decided to try crun.

# yum install crun
# vi /etc/crio/crio.conf 

# default_runtime is the _name_ of the OCI runtime to be used as the default.
# The name is matched against the runtimes map below. If this value is changed,
# the corresponding existing entry from the runtimes map below will be ignored.
# default_runtime = "runc"
default_runtime = "crun"

# crun is a fast and lightweight fully featured OCI runtime and C library for
# running containers
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"

# systemctl daemon-reload
# systemctl restart crio
# systemctl restart kubelet
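Before re-running kubeadm, it is worth confirming that the crun binary is actually present and reporting its version. A defensive check (no cluster required):

```shell
# Confirm the crun binary exists and report its version.
if command -v crun >/dev/null 2>&1; then
    crun --version | head -n 1
else
    echo "crun not found in PATH"
fi
# For CRI-O itself, "systemctl is-active crio" should print "active".
```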

# strace kubeadm init --kubernetes-version 1.19.4 --pod-network-cidr= --control-plane-endpoint=
Your Kubernetes control-plane has initialized successfully!



# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# kubectl taint nodes --all
node/fedora33no2 untainted

# curl -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  182k  100  182k    0     0   159k      0  0:00:01  0:00:01 --:--:--  159k
# kubectl apply -f canal.yaml

# kubectl get no
fedora33no2   Ready    master   26m   v1.19.4

# kubectl run --generator=run-pod/v1 --tty=true -it hello-k8s --image=fedora:33 -- bash
Flag --generator has been deprecated, has no effect and will be removed in the future.
If you don't see a command prompt, try pressing enter.
[root@hello-k8s /]#