Critical Linux Kernel 0-Day: Instant Root Shell on All Distros Since 2017

A newly disclosed Linux kernel vulnerability, CVE-2026-31431 (“CopyFail”), gives any unprivileged local user an instant, unconditional root shell on virtually every Linux distribution shipped since 2017 – including Ubuntu, RHEL, Amazon Linux, and SUSE.

Unlike previous high-profile kernel exploits, the proof-of-concept is a 732-byte Python script that uses only the standard library. It exploits a logic flaw in the authencesn crypto code to perform a controlled page-cache write, patching any setuid binary in memory without touching disk, which bypasses file-integrity tools and crosses container boundaries.

Exploitation requires only an unprivileged local shell; no race condition, no kernel offsets, no compiled payload.

Immediate action required. As a stopgap, prevent the algif_aead module from loading via a modprobe.d drop-in (note that rmmod will fail if the module is currently in use; a reboot picks up the blacklist):

echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf && rmmod algif_aead
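To confirm the stopgap took effect, you can check whether the module is still loaded. A minimal sketch, reading /proc/modules directly (standard on Linux; the file lists one loaded module per line, name first):

```python
from pathlib import Path

def module_loaded(name: str, modules_file: str = "/proc/modules") -> bool:
    # Each line of /proc/modules starts with the module name.
    try:
        text = Path(modules_file).read_text()
    except FileNotFoundError:  # non-Linux host, or procfs not mounted
        return False
    return any(line.split()[0] == name for line in text.splitlines() if line.strip())

print("algif_aead loaded:", module_loaded("algif_aead"))
```

The same check works for any module name; `lsmod | grep algif_aead` is the shell equivalent.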

Then prioritize kernel patching to 6.18.22+, 6.19.12+, or 7.0+.
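A first-pass version check against the fixed releases above can be sketched as follows. The thresholds (6.18.22, 6.19.12, 7.0) are taken from the advisory; distro kernels often backport fixes without bumping the upstream version, so a passing or failing result here is a hint, not a verdict:

```python
def is_patched(release: str) -> bool:
    """Compare a `uname -r` style string against the fixed versions above.

    Distro backports may fix the CVE without changing the upstream version,
    so treat this as a first pass, not a definitive answer.
    """
    nums = []
    for part in release.split("-")[0].split("."):
        digits = "".join(ch for ch in part if ch.isdigit())
        if digits:
            nums.append(int(digits))
    v = (tuple(nums) + (0, 0, 0))[:3]
    # Fixed in 6.18.22+ and 6.19.12+ within those series, and in 7.0+.
    for series, first_fixed in (((6, 18), 22), ((6, 19), 12)):
        if v[:2] == series:
            return v[2] >= first_fixed
    # Series not named in the advisory are conservatively treated as unpatched.
    return v >= (7, 0, 0)

import platform
release = platform.release()
print(release, "->", "patched" if is_patched(release) else "check your vendor")
```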

Note that some distributions – including Debian – had not released a patched kernel as of 2026-04-30 03:00 UTC; monitor your vendor’s security advisories closely (Debian tracker).

CI runners, shared hosts, and Kubernetes nodes are highest priority.

Recommending IPSet over firewallcmd-multiport

I run a small setup with fail2ban protecting nginx and SSH. The setup worked fine – fail2ban watches logs, detects attackers, and bans them via firewalld. Nothing exotic.

After accumulating hundreds of bans across my f2b jails, fail2ban-client reload became noticeably slow. The default firewallcmd-multiport action adds one firewall rule per banned IP, so every reload rebuilds hundreds of individual rules.

Enter ipset (suggested by Claude): instead of one rule per IP, it stores all banned IPs in a kernel-level hash table behind a single firewall rule. Lookup is O(1) regardless of how many IPs are in the set. Fail2ban already ships with firewallcmd-ipset.conf, so the switch is just a config change:

# /etc/fail2ban/jail.d/defaults-debian.conf
[DEFAULT]
banaction = firewallcmd-ipset
banaction_allports = firewallcmd-ipset[actiontype=<allports>]

After systemctl restart fail2ban, fail2ban-client reload completed almost instantly, even with 300+ IPs banned across three jails.
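The difference is easy to see in miniature. Matching a packet against N individual rules is a linear scan, while a hash-set membership test is constant time; this toy model (plain Python containers, not the kernel data structures) shows the gap at 300 banned IPs:

```python
import ipaddress
import random
import timeit

random.seed(0)
banned = [str(ipaddress.IPv4Address(random.getrandbits(32))) for _ in range(300)]

rules = list(banned)  # one "rule" per IP, checked in order (firewallcmd-multiport)
ipset = set(banned)   # single hash table holding every banned IP (firewallcmd-ipset)

probe = "192.0.2.1"   # an unbanned address: worst case for the linear scan
linear = timeit.timeit(lambda: probe in rules, number=10_000)
hashed = timeit.timeit(lambda: probe in ipset, number=10_000)
print(f"linear scan: {linear:.4f}s, set lookup: {hashed:.4f}s")
```

The reload speedup has the same shape: rebuilding one rule plus one set is cheap, rebuilding hundreds of rules is not.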

To verify that fail2ban and ipset stay in sync, I run this from my shell profile on every login:

echo "=== Fail2Ban IPSet Status ==="
for jail in $(fail2ban-client status | sed -n 's/,//g;s/.*Jail list://p'); do
    f2b_count=$(fail2ban-client status "$jail" | awk '/Currently banned/{print $NF}')
    ipset_count=$(ipset list "f2b-$jail" 2>/dev/null | awk '/Number of entries/{print $NF}')
    printf "%-30s banned: %-6s ←→  ipset entries: %s\n" "$jail" "$f2b_count" "${ipset_count:-0}"
done

Weird Default Zone Settings by ClouDNS

This site was recently migrated to ClouDNS from DNSMadeEasy. Everything went smoothly, especially with Claude's help writing a migration script that used zone file exports and the ClouDNS API.

However, during post-migration testing, I found a weird behavior with ClouDNS’ Premium plan nameservers: they return 127.0.0.1 as an authoritative answer for domains that are not hosted on their system.

For example, querying pns1.cloudns.net for google.com – a domain that obviously has nothing to do with ClouDNS:

~$ dig google.com @pns1.cloudns.net. +noedns +norec

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> google.com @pns1.cloudns.net. +noedns +norec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7556
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             120     IN      A       127.0.0.1

;; AUTHORITY SECTION:
.                       120     IN      NS      localhost.

;; Query time: 12 msec
;; SERVER: 185.136.96.111#53(pns1.cloudns.net.) (UDP)
;; WHEN: Fri Apr 10 21:32:38 JST 2026
;; MSG SIZE  rcvd: 66

Note the aa (authoritative answer) flag in the response. This server is claiming to be authoritative for google.com, returning 127.0.0.1 with a nonsensical authority section delegating the root zone to localhost.. The correct behavior would be to return REFUSED for domains not configured on the server.
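Checking the aa flag and rcode doesn't require dig; a hand-rolled query over UDP is enough. A minimal sketch using only the standard library (header layout per RFC 1035; the query sets RD=0 to mimic +norec, and the live-network part is commented out):

```python
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    # Header: id, flags (all zero: RD=0, like dig +norec), QDCOUNT=1, rest 0.
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

def parse_flags(resp: bytes) -> dict:
    # Flags word is bytes 2-3; AA is bit 10 (0x0400), RCODE is the low nibble.
    flags = struct.unpack(">H", resp[2:4])[0]
    return {"aa": bool(flags & 0x0400), "rcode": flags & 0x000F}

# Live check (network required); RCODE 5 is REFUSED:
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); s.settimeout(3)
# s.sendto(build_query("google.com"), ("pns1.cloudns.net", 53))
# print(parse_flags(s.recvfrom(512)[0]))
```

An aa=True answer for a domain the server doesn't host is exactly the misbehavior shown above; a well-behaved server returns rcode 5 (REFUSED) with aa unset.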

Weirder still, ClouDNS' free nameservers behave correctly, returning REFUSED for domains they don't host:

~$ dig google.com @ns4.cloudns.net. +noedns +norec

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> google.com @ns4.cloudns.net. +noedns +norec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 64552
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.                    IN      A

;; Query time: 251 msec
;; SERVER: 185.206.180.171#53(ns4.cloudns.net.) (UDP)
;; WHEN: Fri Apr 10 21:34:22 JST 2026
;; MSG SIZE  rcvd: 28

No aa flag, no bogus answer — just a clean REFUSED.

The exception is ns1.cloudns.net, which returns a false authoritative answer pointing to a real, working IP, 185.105.32.123, which serves a catch-all ClouDNS landing page:

~$ dig google.com @ns1.cloudns.net. +noedns +norec

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> google.com @ns1.cloudns.net. +noedns +norec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30787
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             120     IN      A       185.105.32.123

;; AUTHORITY SECTION:
.                       120     IN      NS      localhost.

;; Query time: 272 msec
;; SERVER: 85.159.233.17#53(ns1.cloudns.net.) (UDP)
;; WHEN: Fri Apr 10 21:36:56 JST 2026
;; MSG SIZE  rcvd: 66

Note: As of April 2026, ClouDNS support replied: “At this time, the current behavior of our Premium nameservers is expected and no changes are planned.”