Setup
- WUD: 8.2.2 (`getwud/wud:latest`)
- Host: Intel N250 Mini PC, Debian 13 (trixie), kernel 6.12.73, Docker 29.2.1, Compose v5.1.0
- ~83 containers watched via `WUD_WATCHER_LOCAL_WATCHBYDEFAULT=true`
- Router/DNS: Fritz!Box (Docker default DNS → host router, 192.168.178.1)
Relevant config:

```yaml
environment:
  WUD_WATCHER_LOCAL_CRON: "0 */6 * * *"
  WUD_WATCHER_LOCAL_WATCHBYDEFAULT: "true"
  WUD_TRIGGER_DOCKER_UPDATE_AUTO: "true"
  WUD_TRIGGER_DOCKER_UPDATE_THRESHOLD: minor
  WUD_TRIGGER_DOCKER_UPDATE_PRUNE: "true"
```
What happens
Two containers are perpetually flagged as having an update available, but `newTag` is `null` in the WUD API:

```
GET /api/containers
→ { "name": "snapshot-scheduler", "image": "docker:27.2.1-cli", "updateAvailable": true, "newTag": null }
→ { "name": "mongodb-exporter", "image": "percona/mongodb_exporter:0.43", "updateAvailable": true, "newTag": null }
```
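Containers stuck in this state can be filtered out of the API response — a sketch assuming WUD's API on its default port 3000 and `jq` installed:

```shell
# List containers WUD flags as updatable but with no resolvable target tag
curl -s http://localhost:3000/api/containers \
  | jq '.[] | select(.updateAvailable == true and .newTag == null) | {name, image}'
```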
Because `WUD_TRIGGER_DOCKER_UPDATE_AUTO=true`, WUD recreates these containers even though there is nothing to actually update to. Each recreate generates Docker socket events; WUD picks those up and immediately fires another full watcher scan. That scan again finds the same two "updates", recreates the containers again, and the loop repeats.
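The restart storm can be observed directly on the Docker socket — a sketch using the standard `docker events` CLI (container name taken from the example above):

```shell
# Each WUD-triggered recreate emits kill/die/stop/destroy/create/start
# events on the socket — the same event stream WUD itself subscribes to
docker events --since 30m \
  --filter type=container \
  --filter container=snapshot-scheduler \
  --format '{{.Time}} {{.Actor.Attributes.name}} {{.Action}}'
```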
The configured cron (`0 */6 * * *`, every 6 hours) is completely overridden — WUD runs every ~5 minutes indefinitely. There are always two parallel watcher instances per cycle (70 + 73 containers):

```
09:42:26 INFO Cron started (0 */6 * * *)
09:42:44 INFO Cron started (0 */6 * * *)    ← second parallel instance, ~18s later
09:43:00 INFO Cron finished (70 containers watched, 2 available updates)
09:43:19 INFO Cron finished (73 containers watched, 2 available updates)
09:47:16 INFO Cron started (0 */6 * * *)    ← only 4min 16s after last run
09:47:33 INFO Cron started (0 */6 * * *)
09:47:45 INFO Cron finished (70 containers watched, 2 available updates)
09:48:00 INFO Cron finished (73 containers watched, 2 available updates)
09:52:32 INFO Cron started (0 */6 * * *)
...repeating every ~5 minutes for 24h+
```
Side effect: DNS resolver crash
Each cycle makes one DNS lookup to `auth.docker.io` per container — ~143 queries every 5 minutes (~1,700/hour). Docker's default DNS forwards these to the host router. This was enough to repeatedly crash the Fritz!Box's embedded DNS resolver, causing full internet outages every ~5 minutes that lasted several minutes each:

```
[2026-03-22 20:00:18] Internet check failed (3/3)
[2026-03-22 20:06:37] Internet check failed (1/3)
...
[2026-03-22 22:35:06] DSLDown CRITICAL fired
[2026-03-22 22:35:36] InternetDown CRITICAL fired
```
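The query volume is simple arithmetic — a quick sketch (143 = 70 + 73 containers across the two parallel watcher instances):

```shell
# ~143 auth.docker.io lookups per scan cycle, one cycle every ~5 minutes
containers=143
cycles_per_hour=$((60 / 5))    # 12 scan cycles per hour
echo "$((containers * cycles_per_hour)) queries/hour"    # → 1716 queries/hour
```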
Took ~12 hours to diagnose because the surface symptom looked like a flaky ISP connection, not a software loop.
To reproduce
- Set `WUD_TRIGGER_DOCKER_UPDATE_AUTO=true`
- Add any container with a pinned tag that WUD detects as having an update but resolves to `newTag: null` — e.g. `docker:27.2.1-cli` or `percona/mongodb_exporter:0.43`
- Watch the cron interval — it should match your configured schedule, but will loop every few minutes instead
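One way to verify the interval from the WUD logs (assuming the container is named `wud`):

```shell
# With a 6-hour cron there should be ~4 "Cron started" lines per day;
# in the looping state they appear every ~5 minutes, in pairs
docker logs wud 2>&1 | grep 'Cron started' | tail -n 20
```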
Fix applied
- Added `wud.watch=false` to the two offending containers
- Set `WUD_TRIGGER_DOCKER_UPDATE_AUTO=false`
- Added `dns: [1.1.1.1, 8.8.8.8]` to the WUD container as a general safeguard
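As a compose sketch of the mitigations (service and container names illustrative, matching the examples above):

```yaml
services:
  wud:
    image: getwud/wud:latest
    environment:
      WUD_TRIGGER_DOCKER_UPDATE_AUTO: "false"  # stop auto-recreating on phantom updates
    dns:                                       # keep registry lookups off the router's resolver
      - 1.1.1.1
      - 8.8.8.8

  snapshot-scheduler:
    image: docker:27.2.1-cli
    labels:
      - wud.watch=false                        # exclude the offending container from watching
```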
Expected behavior
Setup
getwud/wud:latest)WUD_WATCHER_LOCAL_WATCHBYDEFAULT=true192.168.178.1)Relevant config:
What happens
Two containers are perpetually flagged as having an update available, but
newTagisnullin the WUD API:Because
WUD_TRIGGER_DOCKER_UPDATE_AUTO=true, WUD recreates these containers even though there is nothing to actually update to. The container restarts generate Docker socket events. WUD picks those up and immediately fires another full watcher scan. That scan again finds the same two "updates", recreates them again, and the loop repeats.The configured cron (
0 */6 * * *, every 6 hours) is completely overridden — WUD runs every ~5 minutes indefinitely. There are always two parallel watcher instances per cycle (70 + 73 containers):Side effect: DNS resolver crash
Each cycle makes one DNS lookup to
auth.docker.ioper container — ~143 queries every 5 minutes (~1700/hour). Docker's default DNS forwards these to the host router. This was enough to repeatedly crash the Fritz!Box's embedded DNS resolver, causing full internet outages every ~5 minutes that lasted several minutes each:Took ~12 hours to diagnose because the surface symptom looked like a flaky ISP connection, not a software loop.
To reproduce
WUD_TRIGGER_DOCKER_UPDATE_AUTO=truenewTag: null— e.g.docker:27.2.1-cliorpercona/mongodb_exporter:0.43Fix applied
wud.watch=falseto the two offending containersWUD_TRIGGER_DOCKER_UPDATE_AUTO=falsedns: [1.1.1.1, 8.8.8.8]to WUD container as a general safeguardExpected behavior
When `newTag` is `null`, auto-update should not fire — there is nothing to update to. The watcher should keep running on the configured cron schedule rather than being re-triggered by its own recreate events.