knweiss

Events

issue comment
dkms: Error! Module version 1.4.5a for zzstd.ko.xz

@tonyhutter Interesting. FWIW I was using zfs-2.1.5-1 as a starting point.

Which dkms version were you using? (dkms-3.0.7-1.el8.noarch here.) I assume your dkms status output shows no warning?

I've just verified: kernel-devel was installed as expected:

# rpm -qa --last | grep -E 'kernel.*4.18.0-372'
kernel-devel-4.18.0-372.26.1.el8_6.x86_64     Wed 12 Oct 2022 10:12:07 AM CEST
kernel-headers-4.18.0-372.26.1.el8_6.x86_64   Wed 12 Oct 2022 10:11:57 AM CEST
kernel-4.18.0-372.26.1.el8_6.x86_64           Wed 12 Oct 2022 10:11:55 AM CEST
kernel-tools-4.18.0-372.26.1.el8_6.x86_64     Wed 12 Oct 2022 10:11:45 AM CEST
kernel-tools-libs-4.18.0-372.26.1.el8_6.x86_64 Wed 12 Oct 2022 10:11:07 AM CEST
kernel-modules-4.18.0-372.26.1.el8_6.x86_64   Wed 12 Oct 2022 10:11:02 AM CEST
kernel-core-4.18.0-372.26.1.el8_6.x86_64      Wed 12 Oct 2022 10:10:57 AM CEST
kernel-devel-4.18.0-372.19.1.el8_6.x86_64     Mon 05 Sep 2022 01:36:56 PM CEST
kernel-4.18.0-372.19.1.el8_6.x86_64           Mon 05 Sep 2022 01:36:46 PM CEST
kernel-modules-4.18.0-372.19.1.el8_6.x86_64   Mon 05 Sep 2022 01:36:42 PM CEST
kernel-core-4.18.0-372.19.1.el8_6.x86_64      Mon 05 Sep 2022 01:36:37 PM CEST
kernel-devel-4.18.0-372.9.1.el8.x86_64        Tue 05 Jul 2022 11:41:41 AM CEST
kernel-4.18.0-372.9.1.el8.x86_64              Tue 05 Jul 2022 11:41:20 AM CEST
kernel-modules-4.18.0-372.9.1.el8.x86_64      Tue 05 Jul 2022 11:39:27 AM CEST
kernel-core-4.18.0-372.9.1.el8.x86_64         Tue 05 Jul 2022 11:39:22 AM CEST

Notice that I upgraded the system/kernel with yum update (the zfs repo mirror is disabled by default) and not with yum update kernel as you did. That's probably why you didn't get the new kernel-devel rpm but I did.
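For reference, a quick way to double-check this (nothing more than standard rpm queries, so only a sketch) is to list the installed kernel-devel packages and to query the one matching the running kernel:

# rpm -qa 'kernel-devel*' | sort
# rpm -q kernel-devel-$(uname -r)

If the second query reports the package as not installed, dkms has nothing to build against for that kernel.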

Created 1 month ago
issue comment
YOU DID IT AGAIN: BROKEN FEDORA INSTALL AFTER UPDATE

@tonyhutter FWIW: I've just updated a zfs 2.1.5 Rocky Linux 8.6 system from kernel-4.18.0-372.19.1.el8_6.x86_64 to kernel-4.18.0-372.26.1.el8_6.x86_64 using dkms-3.0.7 (without updating zfs!). I got the same output as you, i.e. it worked fine. The dkms status output is fine, too.

Then, as a 2nd step after reboot, I updated zfs from 2.1.5 to 2.1.6 for kernel-4.18.0-372.26.1.el8_6.x86_64. This time I got the following output during yum update (just for the zfs-2.1.6 rpms!):

zzstd.ko.xz:
Running module version sanity check.
Error! Module version 1.4.5a for zzstd.ko.xz
is not newer than what is already found in kernel 4.18.0-372.26.1.el8_6.x86_64 (1.4.5a).
You may override by specifying --force.
depmod....

and dkms status is now unhappy:

# dkms status
zfs/2.1.6, 4.18.0-372.26.1.el8_6.x86_64, x86_64: installed (WARNING! Diff between built and installed module!)

Also, there are stale symlinks in /lib/modules/4.18.0-372.26.1.el8_6.x86_64/weak-updates/ for all zfs kernel modules, and /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zzstd.ko.xz was not updated (because of the error above).

However, all zfs kernel modules in the extra directory have new timestamps, yet the checksum of extra/zzstd.ko.xz differs from the newly built module (all other checksums match):

# md5sum /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/*
f1e07468523b6280d3d172b7c4956c57  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/icp.ko.xz
12bd261ff332f325dfc5c44211d8b0b4  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/spl.ko.xz
43bda0ea2f5b40b3d72f574a257203f9  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zavl.ko.xz
5f7b28a61920e8446813e19555869994  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zcommon.ko.xz
2b3d5a6968ce279b8581aeb9ecf86342  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zfs.ko.xz
c5403e7f836a4bbfb8adf27b936e24cc  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zlua.ko.xz
aadd99c04e89bb35b4ec7d85e839def7  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/znvpair.ko.xz
8d873e253e9802ca30694bb2918d7601  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zunicode.ko.xz
3c07a450d76c3e4cdd8c830f2ad69582  /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zzstd.ko.xz

# md5sum /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/*
f1e07468523b6280d3d172b7c4956c57  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/icp.ko.xz
12bd261ff332f325dfc5c44211d8b0b4  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/spl.ko.xz
43bda0ea2f5b40b3d72f574a257203f9  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zavl.ko.xz
5f7b28a61920e8446813e19555869994  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zcommon.ko.xz
2b3d5a6968ce279b8581aeb9ecf86342  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zfs.ko.xz
c5403e7f836a4bbfb8adf27b936e24cc  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zlua.ko.xz
aadd99c04e89bb35b4ec7d85e839def7  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/znvpair.ko.xz
8d873e253e9802ca30694bb2918d7601  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zunicode.ko.xz
3bc6370c19c0162fedaaf34cd296ae40  /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zzstd.ko.xz
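For completeness, the stale weak-updates symlinks mentioned above can simply be listed first, to see where they point, before removing anything:

# ls -l /lib/modules/4.18.0-372.26.1.el8_6.x86_64/weak-updates/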

My workaround was to delete the stale weak-updates symlinks, overwrite zzstd.ko.xz in extra with the newly built module from /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zzstd.ko.xz, and then run dracut -f.
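Concretely, that boils down to roughly the following commands (a sketch, with paths for my kernel/zfs versions; the depmod step is my own addition for safety, since I overwrote a module behind dkms's back):

# rm /lib/modules/4.18.0-372.26.1.el8_6.x86_64/weak-updates/{icp,spl,zavl,zcommon,zfs,zlua,znvpair,zunicode,zzstd}.ko.xz
# cp /var/lib/dkms/zfs/2.1.6/4.18.0-372.26.1.el8_6.x86_64/x86_64/module/zzstd.ko.xz /lib/modules/4.18.0-372.26.1.el8_6.x86_64/extra/zzstd.ko.xz
# depmod -a 4.18.0-372.26.1.el8_6.x86_64
# dracut -f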

Now dkms status is happy and zfs 2.1.6 works (after reboot):

# dkms status
zfs/2.1.6, 4.18.0-372.26.1.el8_6.x86_64, x86_64: installed
# zfs version
zfs-2.1.6-1
zfs-kmod-2.1.6-1
Created 1 month ago
issue comment
High load due to ksoftirqd, growing iptables rules

@mogoman FWIW: The k3s system where @firefly-serenity and I see/saw this issue is running on six virtual nodes (Rocky Linux 8.6).

(Since the last, extended update-alternatives change it has been working fine, so far.)

Created 2 months ago
[prometheus-kube-stack] Error: cannot update resource "prometheuses/status"

I had the same error using kube-prometheus-stack-39.11.0 on a k8s system where the kube-prometheus-stack was initially deployed as version 32.2.1 and then updated in several steps up to 39.11.0.

I think the root cause may be that helm is not able to update the CRDs shipped in the kube-prometheus-stack helm chart:

$ ls kube-prometheus-stack-39.11.0/crds/
crd-alertmanagerconfigs.yaml  crd-alertmanagers.yaml  crd-podmonitors.yaml  crd-probes.yaml
crd-prometheuses.yaml  crd-prometheusrules.yaml  crd-servicemonitors.yaml  crd-thanosrulers.yaml
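One way to confirm the mismatch (a hedged check: I'm assuming the prometheus-operator CRDs still carry the operator.prometheus.io/version annotation) is to compare the version recorded on the installed CRD with the one in the chart's bundled file:

$ kubectl get crd prometheuses.monitoring.coreos.com -o yaml | grep operator.prometheus.io/version
$ grep operator.prometheus.io/version kube-prometheus-stack-39.11.0/crds/crd-prometheuses.yaml

If the installed CRD reports an older operator version than the chart's file, helm has indeed left it behind.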

YMMV, but I just decided to update the CRDs manually using kubectl replace -f CRDFILE, then upgraded/re-deployed the kube-prometheus-stack helm chart and restarted the pods (kubectl rollout restart ...). So far the operator's error messages have disappeared.
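In practice that was roughly the following (a sketch of the procedure; the release name kube-prometheus-stack, the monitoring namespace, the chart path and the operator deployment name are placeholders from my setup, so adjust them to yours):

$ for f in kube-prometheus-stack-39.11.0/crds/crd-*.yaml; do kubectl replace -f "$f"; done
$ helm upgrade kube-prometheus-stack ./kube-prometheus-stack-39.11.0 -n monitoring
$ kubectl -n monitoring rollout restart deployment kube-prometheus-stack-operator

Using kubectl replace instead of kubectl apply also sidesteps the annotation-size problems that plain kubectl apply can hit with these large CRDs.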

Created 2 months ago
issue comment
High load due to ksoftirqd, growing iptables rules

Side note: According to "Additional iptables-nft 1.8.0-1.8.3 compatibility problems", iptables versions 1.8.0 to 1.8.3 have known problems and 1.8.4 should be fine:

  • In some cases it was possible to add a rule with iptables -A but then have iptables -C claim that the rule did not exist. (This led to kubelet repeatedly creating more and more copies of the same rule, thinking it had not been created yet.)

iptables 1.8.3 fixed these compatibility problems, but had a slightly different problem, which is that iptables-nft would get stuck in an infinite loop if it couldn't load the kernel nf_tables module.

iptables 1.8.4 and later have no known problems that affect Kubernetes.

However, our tests on Rocky Linux 8.6 indicate that 1.8.4 still has (another) issue in one of its commands.
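To see which iptables version and backend a node actually uses, a plain host-level check is enough (note that k3s also bundles its own iptables binary, so what the k3s processes end up calling may differ from the host packages):

# iptables --version
# rpm -q iptables

On our Rocky Linux 8.6 nodes this reports 1.8.4 with the nf_tables backend.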

Created 2 months ago
issue comment
various issues after update (helm-install-traefik, dns errors)

Which installed k3s version did you try to upgrade?

Created 2 months ago