
containers: Cancel async destruction from ContainerClient if a new ContainerClient takes over, and be able to recover container sidecar proxies#6201

Merged
gabivlj merged 2 commits into main from gv/fix-reload-by-cancelling-tasks on Mar 2, 2026

Conversation

@gabivlj
Contributor

@gabivlj gabivlj commented Feb 27, 2026

containers: Fix workerd reloads with egress interception

When workerd reloads while the container is still running, there is a chance
that the container loses connectivity.

This change makes sure that we can recover the proxy sidecar we were
running and listen on the same port as before in workerd.

containers: Cancel async destruction from ContainerClient if a new ContainerClient takes over

The new system introduces a containerCleanupCanceler reference that the
container destructor uses to wrap async calls, so those calls can be
cancelled if a new ContainerClient for the same id takes over.

This fixes issues where we abort a DO container and then call it again,
racing the old destructor's tasks against the new ContainerClient.
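The cancellation pattern the commit describes can be sketched in plain C++. This is not the workerd code — the real implementation uses kj::Canceler and kj::Promise — but a simplified, standalone stand-in with invented names, showing how wrapped destructor tasks become no-ops once a new owner cancels them:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Minimal stand-in for the kj::Canceler idea (not the kj API): tasks wrapped
// by a Canceler become no-ops once cancel() is called, e.g. when a new
// ContainerClient for the same container id takes over.
class Canceler {
 public:
  void cancel() { *alive = false; }

  // Wrap a task so it only runs if this canceler is still live.
  std::function<void()> wrap(std::function<void()> task) {
    return [alive = alive, task = std::move(task)] {
      if (*alive) task();
    };
  }

 private:
  std::shared_ptr<bool> alive = std::make_shared<bool>(true);
};
```

A stale destructor task wrapped by a cancelled Canceler then simply does nothing, instead of racing against the new client's Docker calls.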

@gabivlj gabivlj requested review from a team as code owners February 27, 2026 20:22
Contributor

@ask-bonk ask-bonk bot left a comment


I'm Bonk, and I've done a quick review of your PR.

This PR adds cancellation of stale Docker cleanup tasks during workerd reloads and recovery of sidecar proxy state when a container is still running after restart.

Issues (highest severity first):

  1. parseAs<uint16_t>() can throw in inspectSidecarEgressPort() recovery path — If the sidecar container's args contain a malformed --http-egress-port value, parseAs will throw a kj::Exception. Since this is called from status(), which is the initial health-check RPC, an unhandled parse failure would break status reporting for a container that is otherwise running fine. The recovery path should degrade gracefully to kj::none instead.

  2. status() now performs side effects (starts listeners/sidecars) without joining getRpcTurn() queue — Previously status() was read-only. The new recovery code in status() calls ensureEgressListenerStarted() and ensureSidecarStarted(), both of which mutate state. The atomic exchange guards prevent double-execution, but if status() and start() interleave at co_await points, you could get surprising ordering (e.g., status() starts the listener on the recovered port, then start() tries to start the sidecar with containerSidecarStarted = false overriding the true that status() just stored). This isn't a correctness bug today because the atomic guards protect the critical sections, but it's a fragile arrangement. Consider whether status() should acquire a mutating turn or whether the recovery should be deferred to a separate method that does.
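The "atomic exchange guards" mentioned in issue 2 can be illustrated with a small standalone sketch (the names `ensureStarted`/`startCount` are invented here; `ensureEgressListenerStarted()` and `ensureSidecarStarted()` are the actual workerd methods this loosely mirrors):

```cpp
#include <atomic>

// Sketch of an exchange-based "start at most once" guard. Whichever caller
// wins the atomic exchange performs the one-time side effect; every later
// or concurrent caller observes `true` already stored and does nothing.
struct OnceGuard {
  std::atomic<bool> started{false};
  int startCount = 0;  // stands in for the real side effect (listener/sidecar)

  bool ensureStarted() {
    if (started.exchange(true)) return false;  // already started elsewhere
    ++startCount;  // perform the one-time side effect
    return true;
  }
};
```

This guard prevents double-execution, but — as the review notes — it does not impose any ordering between two different guarded operations that interleave at co_await points.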

@ask-bonk
Contributor

ask-bonk bot commented Feb 27, 2026

Review posted successfully. Here's a summary of what I flagged:

PR #6201 adds two features: (1) cancellation of stale Docker cleanup tasks when a new ContainerClient takes over the same container ID during workerd reloads, and (2) recovery of sidecar proxy state by inspecting running sidecar containers.

Issues found:

  1. Medium: parseAs<uint16_t>() can throw in recovery path (container-client.c++:578) — The inspectSidecarEgressPort() method parses the sidecar's --http-egress-port arg, but if it's malformed, the exception propagates up through status() and breaks the health check. Posted a suggestion to use tryParseAs instead.

  2. Low: status() now has side effects without RPC turn serialization — The recovery logic starts listeners and sidecars from status(), which doesn't join the getRpcTurn() queue. The atomic guards prevent double-execution, but interleaving with start() at co_await points could produce surprising ordering. Noted as a design concern.

The canceler lifetime management and declaration ordering in server.c++ look correct. The removal of setAutoRemove(true) from the sidecar is necessary for the recovery feature. Test changes properly reorder interceptOutboundHttp before start().


@codecov-commenter

codecov-commenter commented Feb 27, 2026

Codecov Report

❌ Patch coverage is 0% with 77 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.55%. Comparing base (f3b58ab) to head (79757a5).
⚠️ Report is 9 commits behind head on main.

Files with missing lines                   Patch %   Lines
src/workerd/server/container-client.c++    0.00%     51 Missing ⚠️
src/workerd/server/server.c++              0.00%     26 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6201      +/-   ##
==========================================
- Coverage   70.59%   70.55%   -0.04%     
==========================================
  Files         413      413              
  Lines      109986   110059      +73     
  Branches    18120    18134      +14     
==========================================
+ Hits        77644    77653       +9     
- Misses      21531    21596      +65     
+ Partials    10811    10810       -1     


@gabivlj gabivlj force-pushed the gv/fix-reload-by-cancelling-tasks branch 4 times, most recently from 017e554 to 8cd69a7 on February 27, 2026 23:03
@gabivlj gabivlj force-pushed the gv/fix-reload-by-cancelling-tasks branch from 8cd69a7 to 79757a5 on February 28, 2026 00:35
@anonrig
Member

anonrig commented Mar 2, 2026

@jasnell any concerns in this pr? i believe running async tasks on destructors would be a bad idea.

@gabivlj gabivlj merged commit 6ae8200 into main Mar 2, 2026
46 of 47 checks passed
@gabivlj gabivlj deleted the gv/fix-reload-by-cancelling-tasks branch March 2, 2026 19:46
Collaborator

@jasnell jasnell left a comment


AI-assisted review (claude-opus-4-6). Findings may contain inaccuracies — please verify.

Findings:

  • [HIGH] Raw pointer cancelerPtr identity check risks dangling access on address reuse
  • [MEDIUM] status() now acquires an RPC turn, changing its concurrency semantics
  • [MEDIUM] Removal of setAutoRemove(true) for sidecar may leak containers on crash
  • [MEDIUM] parseAs<uint16_t>() in inspectSidecarEgressPort() is fatal on a recovery path
  • [LOW] ContainerCleanupState defined in header but only used in server.c++
  • [LOW] Missing destruction-ordering comment for new member fields

Comment on lines +2869 to +2882
auto* cancelerPtr = canceler.get();
containerCleanupState.upsert(kj::str(containerId),
    ContainerCleanupState{.canceler = kj::mv(canceler)},
    [](ContainerCleanupState& existing, ContainerCleanupState&& incoming) {
  existing.canceler = kj::mv(incoming.canceler);
});

// Cleanup callback: invoked from the ContainerClient destructor with the
// joined cleanup promise.
kj::Function<void(kj::Promise<void>)> cleanupCallback =
    [this, containerId = kj::str(containerId), cancelerPtr](
        kj::Promise<void> cleanupPromise) mutable {
  KJ_IF_SOME(state, containerCleanupState.find(containerId)) {
    if (state.canceler.get() != cancelerPtr) {
Collaborator


[HIGH] The cleanup callback captures cancelerPtr (a raw kj::Canceler*) extracted from a kj::Own<kj::Canceler> stored in containerCleanupState. Between creation and destruction, the upsert lambda can replace existing.canceler, destroying the old kj::Own<kj::Canceler> and freeing the memory cancelerPtr points to.

The identity check state.canceler.get() != cancelerPtr is intended to detect this case, but it compares against a potentially freed address. If the allocator reuses the same address for the new Canceler, the check would pass incorrectly and the callback would operate on the wrong canceler. While tcmalloc makes immediate same-size reuse unlikely, this is a correctness issue that relies on allocator behavior rather than language guarantees.

Suggestion: Replace the raw pointer identity check with a generation counter:

struct ContainerCleanupState {
  kj::Own<kj::Canceler> canceler;
  kj::ForkedPromise<void> promise = kj::Promise<void>(kj::READY_NOW).fork();
  uint64_t generation = 0;
};

Then increment generation on each upsert and capture it in the lambda instead of cancelerPtr. The check becomes state.generation != capturedGeneration, which is immune to address reuse.
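A standalone rendition of this suggestion, using std::map in place of kj::HashMap and dropping the promise plumbing (names and helpers here are illustrative, not the workerd API):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Generation counter per container id. Unlike a raw pointer captured from a
// kj::Own, a monotonically increasing integer can never be "reused" by the
// allocator, so a stale callback is always detected.
struct CleanupState {
  uint64_t generation = 0;
};

struct Registry {
  std::map<std::string, CleanupState> states;

  // upsert: bump the generation each time a new client registers this id,
  // and hand the new generation back for the cleanup lambda to capture.
  uint64_t upsert(const std::string& id) {
    return ++states[id].generation;
  }

  // The cleanup callback compares its captured generation against the
  // current one instead of comparing pointer identity.
  bool isCurrent(const std::string& id, uint64_t captured) const {
    auto it = states.find(id);
    return it != states.end() && it->second.generation == captured;
  }
};
```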

Contributor Author


This one is a really good suggestion, addressed here #6221

Comment on lines +802 to +809
// Wait for any pending cleanup from a previous ContainerClient (Docker DELETE).
// If the cleanup was already cancelled via containerCleanupCanceler the .catch_()
// in the destructor resolves it immediately, so this is a no-op in that case.
co_await pendingCleanup.addBranch();

auto [ready, done] = getRpcTurn();
co_await ready;
KJ_DEFER(done->fulfill());
Collaborator


[MEDIUM] status() previously had no getRpcTurn() and could run concurrently with mutating RPCs. Adding the RPC turn serializes it with start(), destroy(), signal(), and setEgressHttp(). This is necessary because status() now does sidecar recovery (starting an egress listener), but it means status polling will block behind any in-progress mutating RPC and vice versa.

Consider acquiring the turn only when the recovery path is actually taken (i.e., when isRunning && containerEgressInterceptorImage != kj::none && sidecar is running). This would keep the common read-only path fast. Either way, a comment documenting the changed concurrency semantics would be helpful.

@@ -701,7 +746,6 @@ kj::Promise<void> ContainerClient::createSidecarContainer(
// Sidecar needs NET_ADMIN capability for iptables/TPROXY
auto capAdd = hostConfig.initCapAdd(1);
capAdd.set(0, "NET_ADMIN");
Collaborator


[MEDIUM] The removed hostConfig.setAutoRemove(true) (previously on the next line) means that if workerd crashes without running the destructor (SIGKILL, OOM, etc.), the sidecar container will be left running indefinitely. Previously Docker would auto-remove it.

This is a deliberate trade-off for recovery (the sidecar must survive workerd restarts), and ensureSidecarStarted() destroys any existing sidecar before creating a fresh one, which covers normal restarts. Worth a comment documenting the trade-off and noting that orphaned sidecars are possible on hard crashes.

Contributor Author


This one is not really truthful: setAutoRemove is for removing the container automatically when it exits. It's not useful in the case of a container crash, as we wouldn't be able to check the logs.

auto args = jsonRoot.getArgs();
for (auto i = 0u; i < args.size(); i++) {
  if (args[i] == "--http-egress-port" && i + 1 < args.size()) {
    co_return kj::str(args[i + 1]).parseAs<uint16_t>();
Collaborator


[MEDIUM] parseAs<uint16_t>() throws on invalid input, but this is called from status() which is now a recovery/best-effort path. If the sidecar has a --http-egress-port flag with a non-numeric value (e.g., a different sidecar image version), crashing the status check seems disproportionate.

I saw the earlier discussion where the author argued a parse failure here indicates something seriously wrong, which is a reasonable position. But consider tryParseAs<uint16_t>() returning kj::none on failure — this would gracefully skip recovery rather than crashing the status RPC. Judgment call.
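A tolerant version of this parse can be sketched in standard C++ with std::from_chars, mirroring the graceful-failure behavior kj's tryParseAs would give (the function name and signature here are invented for illustration; the real method lives in container-client.c++ and returns a kj promise):

```cpp
#include <charconv>
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Scan the sidecar's args for --http-egress-port and return std::nullopt on
// any malformed value, so the recovery path is skipped instead of throwing
// out of the status() health check.
std::optional<uint16_t> inspectSidecarEgressPort(
    const std::vector<std::string>& args) {
  for (size_t i = 0; i + 1 < args.size(); i++) {
    if (args[i] != "--http-egress-port") continue;
    const std::string& v = args[i + 1];
    uint32_t port = 0;
    auto [ptr, ec] = std::from_chars(v.data(), v.data() + v.size(), port);
    // Reject parse errors, trailing garbage, and out-of-range ports.
    if (ec != std::errc() || ptr != v.data() + v.size() || port > 65535) {
      return std::nullopt;  // degrade gracefully: skip recovery
    }
    return static_cast<uint16_t>(port);
  }
  return std::nullopt;
}
```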

// Tracks the canceler and cleanup promise for a Docker container's lifecycle cleanup.
// Useful to await on async calls of a ContainerClient destructor when the new
// one appears before they've been resolved.
struct ContainerCleanupState {
Collaborator


[LOW] ContainerCleanupState is only used by the ActorNamespace inner class in server.c++ — it has no relationship to ContainerClient itself. Placing it in this header creates unnecessary coupling and suggests it's part of the ContainerClient API.

Consider moving it to a local struct inside ActorNamespace in server.c++, or into an anonymous namespace there.

Comment on lines +2935 to +2936
// Per-container cleanup state: canceler + forked cleanup promise.
kj::HashMap<kj::String, ContainerCleanupState> containerCleanupState;
Collaborator


[LOW] The reordering of these fields before actors is correct for destruction safety (cleanup state must outlive actors that reference it via their cleanup callback), following the same pattern as the existing actorStorage/actors ordering. But unlike the actorStorage case (which has an explanatory comment at line 2931), the ordering constraint here isn't documented.

Suggested change
// Per-container cleanup state: canceler + forked cleanup promise.
kj::HashMap<kj::String, ContainerCleanupState> containerCleanupState;
// Per-container cleanup state: canceler + forked cleanup promise.
// Must be declared before `actors` so that cleanup state outlives actor destruction
// (ContainerClient destructors reference containerCleanupState via their cleanup callback).
kj::HashMap<kj::String, ContainerCleanupState> containerCleanupState;
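The ordering constraint rests on a language guarantee: C++ destroys non-static data members in reverse declaration order, so a member declared earlier outlives members declared after it. A minimal sketch (names invented; this is not the workerd `server.c++` code):

```cpp
#include <string>
#include <vector>

// Records the order in which members are destroyed.
static std::vector<std::string> destructionLog;

struct Member {
  std::string name;
  explicit Member(std::string n): name(std::move(n)) {}
  ~Member() { destructionLog.push_back(name); }
};

struct Namespace {
  // Declared first => destroyed last => outlives `actors`, whose destructor
  // may still reference the cleanup state (as the review describes).
  Member containerCleanupState{"containerCleanupState"};
  Member actors{"actors"};
};
```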
