Setting up Kubernetes API Access Using Service Account

May 3, 2019 Eshan Sarpotdar

When working with Kubernetes clusters from different distributions such as AKS, GKE, EKS, OpenShift, and ICP, we often need to grant specific privileges to a particular user or user group. To give restricted access to a cluster in such cases, we can use a service account.

Before we begin, it is assumed that you have already created a Kubernetes cluster and configured kubectl to connect to it. This blog post describes how service accounts can be set up and how they behave in a cluster, which is the recommended approach for granting API access in a Kubernetes project.

Follow the procedure below to quickly set up Kubernetes API access using a service account:

  • Use kubectl to create a service account. For example:
$ kubectl create serviceaccount test-user
  • Bind the service account to an appropriate role to grant it privileges. For example, the following binds it to the built-in cluster-admin ClusterRole (for restricted access, bind a narrower role instead):
$ kubectl create clusterrolebinding test-user-binding --clusterrole=cluster-admin --serviceaccount=default:test-user
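  • Equivalently, both objects can be created declaratively from a manifest and applied with kubectl apply. A minimal sketch (the file name sa-and-binding.yaml is only illustrative):
# Service account that will authenticate to the API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-user
  namespace: default
---
# Bind the account to the built-in cluster-admin ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: default
$ kubectl apply -f sa-and-binding.yaml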
  • You now have a service account bound to the appropriate roles. To use this account, you need the token stored in the secret associated with it.
  • First, locate that secret:
$ kubectl get secrets
  • This command returns a list of secrets that looks something like this:
NAME                            TYPE                                  DATA   AGE
default-token-2kf6w             kubernetes.io/service-account-token   3      30d
k8s-nginx-ingress-token-h79rc   kubernetes.io/service-account-token   3      21h
test-user-token-l8mrf           kubernetes.io/service-account-token   3      12m
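  • Alternatively, you can read the secret name directly from the service account object instead of scanning the full list. This relies on Kubernetes auto-creating the token secret and listing it under the account's secrets field, which was the default behavior at the time of writing:
$ kubectl get serviceaccount test-user -o jsonpath='{.secrets[0].name}'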
  • Then use the kubectl describe command to see the token in the secret. For example:
$ kubectl describe secret test-user-token-l8mrf
  • This command returns a description of the secret, which contains the token. For example:
Name:         test-user-token-l8mrf
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=test-user
              kubernetes.io/service-account.uid=aa1c318a-bc3d-11e8-b171-023b9d05d78
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     33605 bytes
namespace:  7 bytes
token:      eyJhb3ciOi . . . [output snipped]
  • You can optionally put the token into an environment variable, which provides a convenient way to access it.
  • Use the following command, replacing test-user-token-l8mrf with the name of your secret.
$ export TOKEN=$(kubectl get secret test-user-token-l8mrf -o=jsonpath="{.data.token}" | base64 -D -i -)

Note:

  • This command uses the macOS base64 syntax. You might have to adjust it for your operating system.
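  • For reference, a Linux (GNU coreutils) variant of the same command would use base64 -d to decode:
$ export TOKEN=$(kubectl get secret test-user-token-l8mrf -o=jsonpath="{.data.token}" | base64 -d)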
  • Test the token.
  • Use cURL to access an API in your cluster, pulling the token from the environment variable you created. In the following command, replace cluster-address with the address of your cluster's API server.
$ curl -H "Authorization: Bearer $TOKEN" https://api.cluster-address/api/v1/pods -k
  • This command returns a list of the pods in your cluster in JSON format:
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/pods",
    "resourceVersion": "4781803"
  },
  "items": [
    { ... [output snipped]
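  • You can also sanity-check the token with kubectl instead of cURL. A quick sketch, where cluster-address is the same placeholder as above and --insecure-skip-tls-verify mirrors curl's -k flag:
$ kubectl --server=https://api.cluster-address --token="$TOKEN" --insecure-skip-tls-verify get pods --all-namespaces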

Results:

  • Now you have a service account with a long-lived token that you can use to authenticate to Kubernetes.
  • Below is the existing kubeconfig on the machine:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZ….[snipped]
    server: https://xx.xxx.xx.xxx
  name: gke_cluster-138506_us-east1-b_gc-test1
contexts:
- context:
    cluster: gke_cluster-138506_us-east1-b_gc-test1
    user: gke_cluster-138506_us-east1-b_gc-test1
  name: gke_cluster-138506_us-east1-b_gc-test1
current-context: gke_cluster-138506_us-east1-b_gc-test1
kind: Config
preferences: {}
users:
- name: gke_cluster-138506_us-east1-b_gc-test1
  user:
    auth-provider:
      config:
        access-token: ya29.c.El_YBjiORx69G_PzzJ...[snipped]
        cmd-args: config config-helper --format=json
        cmd-path: path
        expiry: "2019-03-26T06:42:04Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
  • Replace the user and its token (under contexts and users) so the kubeconfig looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F….[snipped]
    server: https://xxx.xxx.xx.XXX
  name: gke_cluster-138506_us-east1-b_gc-test1
contexts:
- context:
    cluster: gke_cluster-138506_us-east1-b_gc-test1
    user: test-user
  name: gke_cluster-138506_us-east1-b_gc-test1
current-context: gke_cluster-138506_us-east1-b_gc-test1
kind: Config
preferences: {}
users:
- name: test-user
  user:
    token: eyJhb3ciOi . . . [output snipped]
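  • If you prefer not to edit the kubeconfig by hand, the same change can be made with kubectl config commands. A minimal sketch, assuming the context name from the example above and the TOKEN variable exported earlier:
$ kubectl config set-credentials test-user --token="$TOKEN"
$ kubectl config set-context gke_cluster-138506_us-east1-b_gc-test1 --user=test-user
$ kubectl config use-context gke_cluster-138506_us-east1-b_gc-test1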

And that’s a wrap! CloudHedge’s BYOC (Bring Your Own Cluster) feature lets you import any Kubernetes cluster during the migration and deployment of mission-critical apps from legacy to cloud. In my next blog post, I will cover how to bring and deploy your own cluster using Cruize, a CloudHedge product. Contact us at hello@cloudhedge.io to migrate your age-old legacy applications to the cloud.