
Caddy does not pick up new client stack deployments without manual restart #179

@Cre-eD

Description

When a new client stack is deployed (e.g., a new environment in an existing project), the Caddy reverse proxy does not automatically route traffic to the new service. Instead, the new domain serves the Caddy "Default page" until Caddy pods are manually restarted.

Steps to Reproduce

  1. Add a new environment (e.g., my-new-env) to an existing client stack client.yaml with a new domain (my-new-env.example.com)
  2. Deploy the client stack via the deploy-client-stack GitHub Action
  3. Verify the deployment succeeds — pod is running (3/3), service is created, configmap has correct simple-container.com/domain annotation
  4. Visit https://my-new-env.example.com/ — returns the Caddy "Default page" instead of proxying to the application

Root Cause

The Caddyfile is generated by the generate-caddyfile init container, which runs only at pod startup. It iterates over services across namespaces that have simple-container.com/caddyfile-entry annotations and builds site blocks from them.

When a new client stack is deployed after the Caddy pods are already running, the new service's configmap and annotations are created in the cluster, but Caddy has no mechanism to detect the change and regenerate its config.
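The generation logic described above can be sketched roughly as follows. This is a minimal illustration, not the actual init container code: the annotation key comes from this issue, but the service list is stubbed with plain dicts instead of a live Kubernetes API call.

```python
# Rough sketch of the generate-caddyfile behavior described above.
# In the real init container the services come from the Kubernetes API;
# here they are stubbed as plain dicts for illustration.

def build_caddyfile(services):
    """Emit one site block per service carrying the caddyfile-entry annotation."""
    blocks = []
    for svc in services:
        entry = svc.get("annotations", {}).get("simple-container.com/caddyfile-entry")
        if entry:  # services without the annotation are skipped entirely
            blocks.append(entry)
    return "\n\n".join(blocks)

# A service deployed *after* this one-shot generation step is simply
# never seen, which is why new client stacks need a Caddy restart.
services = [
    {"name": "my-app-service",
     "annotations": {"simple-container.com/caddyfile-entry":
                     "http://my-new-env.example.com {\n"
                     "  reverse_proxy http://my-app-service"
                     ".my-app-namespace.svc.cluster.local:8000\n"
                     "}"}},
    {"name": "unannotated-service", "annotations": {}},
]
print(build_caddyfile(services))
```

Because the function runs exactly once, at init-container time, the generated config is a snapshot: anything created afterwards is invisible until the next pod start.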

Evidence from logs

Before restart — generate-caddyfile output had no entry for the new domain.

After kubectl rollout restart deployment/caddy-production -n caddy — the init container re-ran and generated:

http://my-new-env.example.com {
  reverse_proxy http://my-app-service.my-app-namespace.svc.cluster.local:8000 {
    header_down Server nginx
    import handle_server_error
  }
  import gzip
  import handle_static
}

Workaround

Manually restart Caddy after deploying a new client stack:

kubectl rollout restart deployment/caddy-production -n caddy
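Until a proper fix lands, the workaround could also be automated as a post-deploy step in the deploy-client-stack action. This is a sketch only; the step name is made up, and it assumes the job already has kubectl configured against the cluster:

```yaml
# Hypothetical post-deploy step for the deploy-client-stack workflow.
- name: Restart Caddy to pick up new caddyfile-entry annotations
  run: |
    kubectl rollout restart deployment/caddy-production -n caddy
    kubectl rollout status deployment/caddy-production -n caddy --timeout=120s
```

The rollout status check makes the workflow fail loudly if the restart itself goes wrong, rather than leaving the new domain silently serving the default page.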

Expected Behavior

When a new client stack is deployed, Caddy should automatically detect the new service and regenerate/reload its config without requiring a manual pod restart. Options could include:

  • A sidecar or controller that watches for new services/configmaps with SC annotations and triggers a Caddy reload
  • A post-deploy hook in the deploy-client-stack action that restarts Caddy when a new service is detected
  • Using Caddy's admin API to dynamically add routes
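The first option, a watcher that triggers a reload, could look roughly like this. It is a minimal polling sketch: list_annotated_services and reload_caddy are hypothetical stand-ins for a Kubernetes API list call and for whatever reload mechanism is chosen (re-running the generator and POSTing the result to Caddy's admin API, for example).

```python
import time

def watch_and_reload(list_annotated_services, reload_caddy,
                     poll_seconds=30, max_polls=None):
    """Poll for services carrying the caddyfile-entry annotation and
    trigger a Caddy reload whenever the set changes.

    list_annotated_services: hypothetical callable returning the current
        set of (namespace, name) pairs carrying the annotation; in a real
        controller this would be a Kubernetes API list or watch.
    reload_caddy: hypothetical callable that regenerates the Caddyfile
        and reloads Caddy.
    max_polls: optional cap on iterations (None means run forever).
    """
    known = list_annotated_services()
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        current = list_annotated_services()
        if current != known:  # a client stack was added or removed
            reload_caddy()
            known = current
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)
```

In practice this would run as a sidecar in the Caddy pod or as a small controller; a Kubernetes watch (rather than polling) would be the idiomatic implementation, but the control flow is the same.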

Environment

  • GKE Autopilot cluster
  • Caddy image: simplecontainer/caddy:latest
  • 2 replicas with init container generate-caddyfile
