Caddy does not pick up new client stack deployments without manual restart
Description
When a new client stack is deployed (e.g., a new environment in an existing project), the Caddy reverse proxy does not automatically route traffic to the new service. Instead, the new domain serves the Caddy "Default page" until Caddy pods are manually restarted.
Steps to Reproduce
- Add a new environment (e.g., `my-new-env`) to an existing client stack `client.yaml` with a new domain (`my-new-env.example.com`)
- Deploy the client stack via the `deploy-client-stack` GitHub Action
- Verify the deployment succeeds: pod is running (3/3), the service is created, and the configmap has the correct `simple-container.com/domain` annotation
- Visit `https://my-new-env.example.com/`; it returns the Caddy "Default page" instead of proxying to the application
Root Cause
The Caddyfile is generated by the `generate-caddyfile` init container, which runs only at pod startup. It iterates over services across namespaces that carry the `simple-container.com/caddyfile-entry` annotation and builds a site block for each.
When a new client stack is deployed after the Caddy pods are already running, the new service's configmap and annotations are created in the cluster, but Caddy has no mechanism to detect the change and regenerate its config.
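The generation step described above can be sketched as a pure function. This is an illustration of the kind of logic the init container performs, not its actual implementation; the function names, the input shape, and the port field are assumptions (the real container reads services via the Kubernetes API):

```python
# Sketch of generate-caddyfile's rendering logic (hypothetical shapes).
# Each input dict stands in for a Kubernetes Service: name, namespace,
# port, and its annotations map.

def render_site_block(domain: str, upstream: str) -> str:
    """Render one Caddyfile site block for an annotated service."""
    return (
        f"http://{domain} {{\n"
        f"    reverse_proxy {upstream} {{\n"
        f"        header_down Server nginx\n"
        f"        import handle_server_error\n"
        f"    }}\n"
        f"    import gzip\n"
        f"    import handle_static\n"
        f"}}\n"
    )

def generate_caddyfile(services: list[dict]) -> str:
    """Concatenate site blocks for services carrying the caddyfile-entry annotation."""
    blocks = []
    for svc in services:
        annotations = svc.get("annotations", {})
        if "simple-container.com/caddyfile-entry" not in annotations:
            continue  # services without the SC annotation are skipped
        domain = annotations["simple-container.com/domain"]
        upstream = (
            f"http://{svc['name']}.{svc['namespace']}.svc.cluster.local:{svc['port']}"
        )
        blocks.append(render_site_block(domain, upstream))
    return "\n".join(blocks)
```

Because this runs once at startup, any service created afterwards is simply never seen, which matches the observed behavior.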
Evidence from logs
Before the restart, the `generate-caddyfile` output had no entry for the new domain.
After `kubectl rollout restart deployment/caddy-production -n caddy`, the init container re-ran and generated:
```
http://my-new-env.example.com {
    reverse_proxy http://my-app-service.my-app-namespace.svc.cluster.local:8000 {
        header_down Server nginx
        import handle_server_error
    }
    import gzip
    import handle_static
}
```
Workaround
Manually restart Caddy after deploying a new client stack:
```
kubectl rollout restart deployment/caddy-production -n caddy
```
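Until a proper fix lands, the restart could be folded into the pipeline itself. A sketch of a final workflow step (the step name is an assumption, and it presumes earlier steps have already configured cluster credentials):

```yaml
# Hypothetical final step in the deploy-client-stack workflow
- name: Restart Caddy to pick up new routes
  run: kubectl rollout restart deployment/caddy-production -n caddy
```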
Expected Behavior
When a new client stack is deployed, Caddy should automatically detect the new service and regenerate/reload its config without requiring a manual pod restart. Options could include:
- A sidecar or controller that watches for new services/configmaps with SC annotations and triggers a Caddy reload
- A post-deploy hook in the `deploy-client-stack` action that restarts Caddy when a new service is detected
- Using Caddy's admin API to dynamically add routes
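The watcher option above boils down to a small decision: track the set of annotated domains and trigger a reload whenever it changes. A minimal sketch of that core logic, with hypothetical function names and input shapes (a real controller would use a Kubernetes watch and then reload Caddy, e.g. via its admin API):

```python
# Core of a hypothetical watch-and-reload sidecar: compare snapshots
# of annotated domains and decide whether Caddy needs a reload.

def annotated_domains(services: list[dict]) -> set[str]:
    """Collect domains from services that carry the SC caddyfile-entry annotation."""
    return {
        svc["annotations"]["simple-container.com/domain"]
        for svc in services
        if "simple-container.com/caddyfile-entry" in svc.get("annotations", {})
    }

def needs_reload(previous: set[str], current: set[str]) -> bool:
    """Reload whenever the set of routed domains changes (added or removed)."""
    return previous != current
```

On `needs_reload(...)` returning true, the sidecar would regenerate the Caddyfile and reload, instead of requiring a full pod restart.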
Environment
- GKE Autopilot cluster
- Caddy image: `simplecontainer/caddy:latest`
- 2 replicas with the `generate-caddyfile` init container