
clean up k8s starter manifests #132

Draft
mfreeman451 wants to merge 1 commit into GoogleCloudPlatform:main from mfreeman451:fix/k8s-generic-manifests

Conversation

@mfreeman451
Contributor

What changed

This replaces the environment-specific hack/k8s deployment snapshot with a generic Kubernetes starter for Scion.

The new base:

  • deploys a single combo pod with the Hub, runtime broker, and web UI
  • uses a simple ClusterIP service plus kubectl port-forward as the default access path
  • keeps local SQLite and template storage on a PVC
  • enables dev auth by default for evaluation so the base can run without preconfigured OAuth
  • loads an optional scion-auth secret for stable sessions and OAuth credentials
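As an illustration of the optional secret, a minimal scion-auth sketch might look like the following; the key names here are assumptions for illustration, not necessarily the keys the deployment actually consumes (check hack/k8s/README.md for the real contract):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: scion-auth
type: Opaque
stringData:
  # Key names below are illustrative assumptions.
  session-secret: "replace-with-a-long-random-string"
  oauth-client-id: "replace-me"
  oauth-client-secret: "replace-me"
```

If the secret is absent, the base still runs because dev auth is enabled by default; the secret only hardens sessions and enables OAuth.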

It also removes the CarverAuto-specific ingress, Gateway API, telemetry, sealed secret, pull secret, template seeding, and network policy assumptions.

Why

The previous manifests were tied to one production deployment and were not suitable for upstream use. They encoded private registry paths, sealed secrets, company-specific hostnames, OTLP mTLS wiring, cluster-specific Gateway resources, and node scheduling details.

That made hack/k8s hard to reuse and hard to document as a supported starting point for other Kubernetes users.

User impact

Users now get a smaller starter that can be applied with kubectl apply -k hack/k8s after setting their image and registry values.

Clusters with custom ingress, TLS, private registries, OAuth, or network policy requirements can layer those concerns on top of the base instead of having to unwind an opinionated deployment snapshot first.
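The layering described above could be done with a Kustomize overlay along these lines; the paths, image name, and registry below are assumptions for illustration:

```yaml
# overlay/kustomization.yaml (illustrative sketch; names are assumptions)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../hack/k8s        # the generic starter base
  - ingress.yaml       # cluster-specific ingress supplied by the user
images:
  - name: scion-server # override the base's placeholder image reference
    newName: registry.example.com/team/scion-server
    newTag: v1.2.3
```

`kubectl apply -k overlay/` then applies the base plus the cluster-specific additions in one pass, without editing the upstream manifests.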

Validation

  • kubectl kustomize hack/k8s
  • kubectl kustomize hack/k8s | kubectl apply --dry-run=client -f -


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a Kubernetes deployment for Scion, including Kustomize manifests, RBAC configurations, and comprehensive documentation. Additionally, it refactors the metadata server to skip iptables interception when running as a non-root user, which is common in hosted Kubernetes environments. Feedback focuses on improving the deployment's reliability by adjusting the image pull policy for the latest tag, ensuring the RBAC role has necessary permissions for secret management, and exposing the broker port in the service definition.

containers:
  - name: scion
    image: scion-server:latest
    imagePullPolicy: IfNotPresent

medium

Using imagePullPolicy: IfNotPresent with the :latest tag is generally discouraged in Kubernetes. The default behavior for :latest is Always, which ensures the node checks for a newer version of the image on every pod start. Explicitly setting it to IfNotPresent can cause the pod to run an outdated version of the image if a new one was pushed to the registry with the same tag. For a starter manifest, it's safer to use Always or omit the field.

          imagePullPolicy: Always

Comment on lines +36 to +38
- list
- delete
---

medium

The secrets resource is missing the get verb. If the runtime manager needs to check for the existence of a secret before creating it, or if it needs to read secret metadata, the get permission is required. Without it, operations that rely on checking state before acting (like idempotency checks) will fail.

      - create
      - get
      - list
      - delete

Comment on lines +9 to +12
- name: http
port: 80
targetPort: http
protocol: TCP

medium

The service only exposes the HTTP port (80). Since the deployment also runs a runtime broker on port 9800, this port should be included in the service definition to allow agents or other cluster components to reach the broker via the service DNS name.

    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: broker
      port: 9800
      targetPort: broker
      protocol: TCP

@mfreeman451 changed the title from "[codex] clean up k8s starter manifests" to "clean up k8s starter manifests" on Apr 11, 2026
@mfreeman451 force-pushed the fix/k8s-generic-manifests branch from 480613f to 1cdf6c3 on April 11, 2026 05:15
@mfreeman451
Contributor Author

Addressed the follow-up cleanup here as well.

Changes:

  • aligned the metadata interception setup with the cleaner guard-clause shape already used in #110
  • extracted the interception setup into configureMetadataInterception(uid int)
  • kept the earlier manifest fixes in place:
    • imagePullPolicy: Always for :latest
    • secrets/get RBAC
    • broker service port 9800
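The guard-clause shape mentioned above might look roughly like this sketch; the function signature matches the description in this comment, but the body, helper names, and messages are assumptions, since the real change lives in the metadata server:

```go
package main

import "fmt"

// configureMetadataInterception decides whether to set up iptables-based
// metadata interception. The guard clause returns early for non-root
// users, which is the common case in hosted Kubernetes, instead of
// attempting iptables changes that would fail without CAP_NET_ADMIN.
// (Sketch only: the real implementation would install iptables rules
// where this version just reports its decision.)
func configureMetadataInterception(uid int) (intercept bool, reason string) {
	if uid != 0 {
		// Non-root: skip interception entirely rather than fail later.
		return false, "running as non-root; skipping iptables interception"
	}
	// Root: proceed with interception setup (elided in this sketch).
	return true, "configuring iptables metadata interception"
}

func main() {
	for _, uid := range []int{0, 1000} {
		ok, why := configureMetadataInterception(uid)
		fmt.Printf("uid=%d intercept=%v (%s)\n", uid, ok, why)
	}
}
```

The point of the shape is that the non-root path exits at the top of the function, so the interception logic below it never needs to be wrapped in a conditional.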

Current head: ecf59882

@mfreeman451 force-pushed the fix/k8s-generic-manifests branch from ecf5988 to d520003 on April 11, 2026 23:15
@mfreeman451
Contributor Author

This branch has been rebuilt and narrowed on top of current main. The current diff is now only the starter Kubernetes manifests under hack/k8s/*.

The earlier metadata-helper cleanup is already effectively in main, so the surviving PR content is just the starter deployment material. The manifest review items remain addressed on the current branch:

  • imagePullPolicy: Always for :latest
  • secrets/get added to RBAC
  • broker service port 9800 exposed

Current branch scope:

  • hack/k8s/.gitignore
  • hack/k8s/README.md
  • hack/k8s/*.yaml starter manifests

Current head: d520003f
