k8s: Use contexts instead of namespaces for flexibility #228
Open
adelbertc wants to merge 3 commits into getnelson:0.11.x from
Conversation
adelbertc added 2 commits on March 27, 2019 12:55
Using the --namespace flag assumes the "current" credentials are valid for that namespace (kubectl does not try to switch contexts when you explicitly specify a namespace, which makes sense). However, a Kubernetes deployment may expect different credentials per namespace, which KUBECONFIG supports; the way Nelson uses --namespace today, it cannot take advantage of that flexibility. This change uses --context instead, explicitly selecting the context (and therefore the token and namespace). Since contexts can be named anything (a context is just a logical name tying together a cluster, namespace, and token), and since each DC is expected to have its own KUBECONFIG, Nelson assumes the context name is the same as the namespace name.
The previous in-cluster behavior relied on the administrative credentials automatically mounted in the Pod (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod) to do deployments from inside the cluster. With the switch from --namespace to --context this no longer works, because in that setup there is no KUBECONFIG file, only the mounted token. Therefore, even if Nelson is deployed in the same cluster, a corresponding kubeconfig must still be mounted and specified. This arguably also makes the semantics less confusing and more consistent.
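To make the flag swap concrete, here is a small, self-contained sketch of the difference in the generated kubectl argument lists. It mirrors the getCronJob diff quoted in the review below, but uses plain String and List types rather than Nelson's NamespaceName, exec, and emptyStdin machinery; the stack name in the example is made up.

```scala
// Standalone sketch of the flag swap described above. Nelson's real code
// builds these argument lists inside its kubectl interpreter and runs them
// through its own exec helper, which is not reproduced here.
object ContextVsNamespaceSketch {
  // Old behaviour: scope the call with --namespace and implicitly trust
  // whatever credentials the current context happens to carry.
  def getJobWithNamespace(namespace: String, stackName: String): List[String] =
    List("kubectl", "get", "job", "-l", s"stackName=$stackName",
         "-n", namespace, "-o", "json")

  // New behaviour: select an entire context (cluster + namespace + token).
  // This only resolves if the kubeconfig defines a context literally named
  // after the Nelson namespace, e.g. "dev".
  def getJobWithContext(namespace: String, stackName: String): List[String] =
    List("kubectl", "get", "job", "-l", s"stackName=$stackName",
         "--context", namespace, "-o", "json")

  def main(args: Array[String]): Unit = {
    println(getJobWithNamespace("dev", "howdy--1-0-1--abcd").mkString(" "))
    println(getJobWithContext("dev", "howdy--1-0-1--abcd").mkString(" "))
  }
}
```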
Force-pushed from 6206051 to a03fb75 (Compare)
Force-pushed from a03fb75 to 36d4b65 (Compare)
timperrett reviewed Mar 28, 2019
  def getCronJob(namespace: NamespaceName, stackName: StackName): IO[JobStatus] =
-   exec(List("kubectl", "get", "job", "-l", s"stackName=${stackName.toString}", "-n", namespace.root.asString, "-o", "json"), emptyStdin)
+   exec(List("kubectl", "get", "job", "-l", s"stackName=${stackName.toString}", "--context", namespace.root.asString, "-o", "json"), emptyStdin)
Member
actually this is very breaking... this assumes that the kubeconfig files people are using are structured in a particular manner, correct? This would break my setup today.
timperrett reviewed Mar 28, 2019
  def readKubernetesInfrastructure(kfg: KConfig): Option[Infrastructure.Kubernetes] = for {
    inCluster <- kfg.lookup[Boolean]("in-cluster")
    mode <- if (inCluster) Some(KubernetesMode.InCluster) else readKubernetesOutClusterParams(kfg)
Member
I think this will also break any existing configuration files in the field.
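To illustrate why existing configuration files would stop loading, here is a minimal, self-contained sketch. It is not Nelson's real KConfig API, and the "kubeconfig" key name is an assumption; the point is only that the reader is an Option for-comprehension, so once it requires a key that deployed config files do not define, the whole datacenter entry evaluates to None.

```scala
// Illustration only: mimics the Option-based reader shape shown above with a
// plain Map instead of Nelson's KConfig. The "kubeconfig" key is hypothetical.
object ConfigBreakSketch {
  type KConfig = Map[String, String]

  final case class Kubernetes(kubeconfigPath: String)

  // After dropping the in-cluster branch, the reader would need some
  // out-of-cluster parameter; old configs that only set "in-cluster" yield None.
  def readKubernetes(kfg: KConfig): Option[Kubernetes] =
    for {
      path <- kfg.get("kubeconfig")
    } yield Kubernetes(path)

  def main(args: Array[String]): Unit = {
    val existing: KConfig = Map("in-cluster" -> "true")                   // config in the field today
    val migrated: KConfig = Map("kubeconfig" -> "/etc/nelson/kubeconfig") // hypothetical migrated config
    println(readKubernetes(existing)) // None: existing deployments break
    println(readKubernetes(migrated)) // Some(Kubernetes(/etc/nelson/kubeconfig))
  }
}
```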
Codecov Report
@@            Coverage Diff             @@
##           0.11.x     #228      +/-   ##
==========================================
+ Coverage   53.08%   55.56%    +2.48%
==========================================
  Files         133      134        +1
  Lines        4591     4373      -218
  Branches      111      112        +1
==========================================
- Hits         2437     2430        -7
+ Misses       2154     1943      -211
Continue to review full report at Codecov.
Member
@adelbertc did you get a chance to think about what to do here? This will break existing deployments
NOTE: This is a breaking change for in-cluster deployments; more info below.

Using the --namespace flag assumes the "current" credentials are valid for that namespace (kubectl does not try to switch contexts when you explicitly specify a namespace, which makes sense). However, a Kubernetes deployment may expect different credentials per namespace, which KUBECONFIG supports; the way Nelson uses --namespace today, it cannot take advantage of that flexibility.

This change uses --context instead, explicitly selecting the context (and therefore the token and namespace). Since contexts can be named anything (a context is just a logical name tying together a cluster, namespace, and token), and since each DC is expected to have its own KUBECONFIG, Nelson assumes the context name is the same as the namespace name.

In addition, this change removes the in/out-cluster distinction in the Kubernetes backend. The previous in-cluster behavior relied on the administrative credentials automatically mounted in the Pod (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod) to do deployments from inside the cluster. With the switch from --namespace to --context this no longer works, because in that setup there is no KUBECONFIG file, only the mounted token. Therefore, even if Nelson is deployed in the same cluster, a corresponding kubeconfig must still be mounted and specified. This arguably also makes the semantics less confusing and more consistent.
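For reference, a minimal sketch of how an in-cluster Nelson could still drive kubectl against a mounted, per-datacenter kubeconfig. The mount path and the use of the KUBECONFIG environment variable are assumptions for illustration; the PR only states that a kubeconfig must be mounted and specified, and this is not Nelson's actual exec implementation.

```scala
import scala.sys.process._

// Hypothetical helper: run kubectl with a per-datacenter kubeconfig and an
// explicit context (named after the Nelson namespace, e.g. "dev").
object InClusterKubeconfigSketch {
  def getJobs(kubeconfigPath: String, context: String, stackName: String): Int = {
    val cmd = Seq("kubectl", "get", "job", "-l", s"stackName=$stackName",
                  "--context", context, "-o", "json")
    // extraEnv entries are added to the child process environment, so kubectl
    // resolves the named context against the mounted kubeconfig file.
    Process(cmd, None, "KUBECONFIG" -> kubeconfigPath).!
  }

  def main(args: Array[String]): Unit = {
    // e.g. a kubeconfig mounted into Nelson's own pod
    val exit = getJobs("/etc/nelson/kubeconfig", "dev", "howdy--1-0-1--abcd")
    println(s"kubectl exited with $exit")
  }
}
```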