Podman containers won’t start after reboot: complete fix for broken CNI & cgroup.subtree_control errors

On some cloud servers or VPS instances (especially those running under LXC, shipping an incomplete systemd, or missing systemd-logind), Podman can run into several problems after a reboot:

  • Containers refuse to start
  • CNI network corruption leaves iptables chains behind
  • OCI runtime error: writing file /sys/fs/cgroup/cgroup.subtree_control: Invalid argument
  • Containers created by root cannot be managed by rootless users
  • Podman’s automatic cgroupfs fallback doesn’t resolve the failures

This post documents a real case, with a 100% reproducible fix.


🧩 1. Symptoms

After reboot, running:

podman start nginx-proxy-manager

throws:

unable to start container ... writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument

Plus:

error tearing down CNI namespace configuration
iptables CHAIN_USER_DEL failed

As root, podman ps -a shows containers stuck in Created or Exited (0); they never reach Running.

As user ubuntu:

no systemd user session available
Falling back to cgroupfs

Sometimes:

Error validating CNI config file ...
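All three symptoms can be triaged in one pass. The sketch below only checks for the presence of the relevant paths (standard locations assumed; adjust for your distro):

```shell
# Triage sketch: check the three signals behind the failures above.
# check() just reports whether a path exists.
check() {
  if [ -e "$1" ]; then echo "present: $1"; else echo "missing: $1"; fi
}
check /sys/fs/cgroup/cgroup.controllers   # present => host is on cgroup v2
check /etc/cni/net.d                      # leftover CNI configs live here
check "/run/user/$(id -u)/bus"            # missing => no systemd user session
```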

🧩 2. Root cause analysis

Three layers of trouble:


1) Broken CNI leaves iptables debris

Typical error:

CHAIN_USER_DEL failed (Device or resource busy)

Podman tries to clean CNI chains, but they’re still referenced, so the container network cannot be created.
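You can see the debris directly. The ruleset below is a made-up sample (illustrative chain names); on a real host, pipe the output of iptables-save into the same grep:

```shell
# Count CNI-owned chains in a saved ruleset (sample data, not a real host)
rules='-N CNI-abc123
-N CNI-HOSTPORT-DNAT
-A POSTROUTING -j CNI-abc123'
printf '%s\n' "$rules" | grep -c '^-N CNI-'   # each hit is a chain Podman left behind
```

Here the count is 2: two CNI chains still defined, one of them still referenced from POSTROUTING, which is exactly the “Device or resource busy” situation.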


2) crun incompatibility with cgroup2

The key error:

writing file `/sys/fs/cgroup/cgroup.subtree_control`: Invalid argument

This is a known crun issue:

  • On certain kernel + cgroup2 + non-systemd environments
  • crun tries to enable controllers by writing to cgroup.subtree_control
  • The kernel rejects the write with EINVAL (“Invalid argument”)
  • As a result, every container fails to start
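What goes wrong, sketched: a runtime enables controllers for a child cgroup by writing “+controller” tokens into cgroup.subtree_control, and the kernel returns EINVAL if any requested controller isn’t actually available. The simulation below (the temp file is a stand-in, not crun’s code) shows the filtering needed to avoid that:

```shell
# The temp file stands in for /sys/fs/cgroup/cgroup.controllers,
# which lists the controllers the host actually offers.
controllers_file=$(mktemp)
echo "cpuset cpu memory pids" > "$controllers_file"

wanted="cpu memory io pids"   # what the runtime would like to enable
enable=""
for c in $wanted; do
  grep -qw "$c" "$controllers_file" && enable="$enable +$c"
done
# "io" is skipped because this host doesn't offer it; writing "+io"
# for real is what produces the "Invalid argument" in the error message.
echo "would write:${enable}"
rm -f "$controllers_file"
```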

3) Missing systemd session, so Podman cannot use systemd cgroup driver

Podman says:

no systemd user session available
Falling back to cgroupfs

Meaning:

  • VPS may run LXC or lacks full systemd
  • Or root login isn’t via systemd-logind
  • System doesn’t support systemd as cgroup manager

Therefore:

👉 Podman must use cgroupfs, not systemd

But the default configuration kept trying the systemd driver instead of switching, so the errors persisted.
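The decision Podman is making can be expressed as a tiny rule. choose_cgroup_manager below is our own illustrative helper, not Podman internals; it keys off the user’s session bus socket, which systemd-logind normally creates:

```shell
# If the session bus socket exists, the systemd cgroup driver can work;
# otherwise only cgroupfs will.
choose_cgroup_manager() {
  bus="/run/user/$1/bus"
  if [ -S "$bus" ]; then echo "systemd"; else echo "cgroupfs"; fi
}
choose_cgroup_manager "$(id -u)"   # on a logind-less VPS this prints cgroupfs
```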


🧩 3. Full remediation steps

All steps run as root.


Step 1: Install runc (required)

Podman defaults to crun, but crun triggers subtree_control errors.

apt update
apt install -y runc

Verify:

which runc
runc --version

Step 2: Set Podman runtime to runc

Edit:

nano /etc/containers/containers.conf

Configure:

[engine]
runtime = "runc"

Save and exit.


Step 3: Set cgroup manager = cgroupfs

In the same file add/modify:

cgroup_manager = "cgroupfs"

This is critical.

Your host lacks a systemd session, so you must use cgroupfs; otherwise subtree_control errors remain.
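Taken together, Steps 2 and 3 leave an [engine] section like this in /etc/containers/containers.conf:

```toml
[engine]
runtime = "runc"
cgroup_manager = "cgroupfs"
```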


Step 4: Migrate the Podman environment

podman system migrate

It should finish cleanly.


Step 5: Clean broken CNI network

Remove leftover CNI configs:

rm -f /etc/cni/net.d/*.conf
rm -f /etc/cni/net.d/*.conflist

Flush the NAT chains. Note: this flushes all NAT rules, not just Podman’s — if you have custom firewall rules, back them up first with iptables-save so you can restore them:

iptables -t nat -F
iptables -t nat -X

Clear the filter chains (the same backup caveat applies):

iptables -t filter -F
iptables -t filter -X
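If you want the whole cleanup in one reversible pass, a dry-run wrapper is a safe way to review it first. run() here is our own shim — it prints each command instead of executing it; replace its body with "$@" (as root) to apply for real:

```shell
# Dry-run shim: print what would be executed instead of executing it.
run() { echo "+ $*"; }

run iptables-save                 # back up first, e.g.: iptables-save > backup.rules
run iptables -t nat -F
run iptables -t nat -X
run iptables -t filter -F
run iptables -t filter -X
run rm -f /etc/cni/net.d/*.conf /etc/cni/net.d/*.conflist
```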

Step 6: Recreate the default Podman network

podman network create podman

Step 7: Try starting the container

podman start nginx-proxy-manager

If it runs normally, you’re done.


🧩 4. Verification

Check runtime:

podman info | grep -i runtime -A3

Expect:

name: runc

Check cgroup manager:

podman info | grep -i cgroup -A6

Expect:

cgroupManager: cgroupfs
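Both checks can be combined into one script. The sample text below mimics the relevant lines of podman info output; on the host, replace it with the real command:

```shell
# Sample of the two lines we care about (illustrative; use `podman info`)
info='  ociRuntime:
    name: runc
  cgroupManager: cgroupfs'

echo "$info" | grep -q 'name: runc'              && echo "runtime ok"
echo "$info" | grep -q 'cgroupManager: cgroupfs' && echo "cgroup ok"
```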

🧩 5. Why these steps? (Architecture view)

✦ crun crashes on some VPS setups

Because it writes subtree_control, and certain cgroup2 hierarchies reject that.

✦ systemd cgroup driver needs a full systemd session

Many VPS instances lack logind, so:

→ systemd cgroup driver fails
→ You must switch to cgroupfs

✦ CNI often breaks after reboot

Podman’s CNI networking programs iptables rules (on many distros through the nf_tables backend); some systems leave those chains behind across a reboot, so networking can’t recover on its own.


🧩 6. TL;DR

Why containers won’t start:

Problem                         | Description
------------------------------- | -----------------------------------------
crun incompatible with cgroup2  | causes subtree_control errors
No systemd session              | systemd cgroup driver fails
Broken CNI network              | iptables chains block network allocation

Final fix:

✔ Install runc
✔ Set runtime=runc
✔ Set cgroup_manager=cgroupfs
✔ Clean CNI and iptables
✔ Run podman system migrate
✔ Recreate network, start containers

After that, all containers start normally.