William Kennedy edited this page Jun 14, 2021 · 10 revisions

Requirements

This project has been configured to run in a Kubernetes (k8s) environment, and a decision was made to use GCP/GKE for that environment. Any k8s environment would work with careful setup of your k8s cluster. This section is not going to cover installing your own k8s environment. Instead, it focuses on using GKE: setting up your cluster there, then deploying and running the project in GCP.

GCP/GKE Installation

To install the Google Cloud SDK follow these instructions. This is required since the gcloud client is needed to perform some operations.

To install the K8s kubectl client follow these instructions.

Google Container Registry

You will need to make sure your GCP account is attached to the Google Container Registry. This is required so the Docker containers can be published to GCP and then installed in the GKE environment.

https://console.cloud.google.com/gcr

Makefile Variables

There are a set of variables you can keep or change depending on how you want to configure your GCP and k8s environment.

# The name of the GCP project. You will not be deleting this project but
# reusing it. It takes over a month for GCP to purge a project name. Pick a
# name that you want for a long time. The containers will use this name
# as well. This is exported so the Docker Compose file can use this variable as well.
export PROJECT = ardan-starter-kit

# The name of the cluster in GKE that all services are deployed under.
CLUSTER = ardan-starter-cluster

# The name of the database in GCP that will be created and managed.
DATABASE = ardan-starter-db

# The zone you want to run your Database and GKE cluster in.
ZONE = us-central1-b
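Because these are ordinary Makefile variables, they can also be overridden per invocation without editing the file. A minimal sketch (the zone and project name below are examples, not values the project requires):

```shell
# Override a single variable for one command:
make gcp-cluster ZONE=us-east1-b

# Or export an override for the whole shell session (PROJECT is already
# exported by the Makefile so Docker Compose can see it):
export PROJECT=my-starter-kit
```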

Commands

Run these commands in the order they are presented.

make gcp-config

This command will configure your terminal environment for using GCP/GKE.

$ make gcp-config

gcloud config set project $(PROJECT)
gcloud config set compute/zone $(ZONE)
gcloud auth configure-docker
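Before moving on, it can be worth confirming the settings took effect. This uses the standard gcloud config command:

```shell
# Show the active gcloud configuration; the project and compute/zone
# values should match the Makefile variables.
gcloud config list
```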

make gcp-project (Optional)

This command only has to be executed one time. It creates the initial project in the GCP environment. The environment variable $(ACCOUNT_ID) is not set in the Makefile for security reasons. It is recommended that you provide it directly on the command line. Follow any directions for setting up billing with the project.

$ make gcp-project

gcloud projects create $(PROJECT)
gcloud beta billing projects link $(PROJECT) --billing-account=$(ACCOUNT_ID)
gcloud services enable container.googleapis.com
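Since $(ACCOUNT_ID) is kept out of the Makefile, one way to supply it is to look it up and pass it for this single invocation. A sketch, assuming you have at least one billing account:

```shell
# List your billing accounts to find the account ID
# (the ID has the form XXXXXX-XXXXXX-XXXXXX):
gcloud beta billing accounts list

# Pass it on the command line for this one run only, so it is never
# written to disk:
make gcp-project ACCOUNT_ID=XXXXXX-XXXXXX-XXXXXX
```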

make gcp-cluster

This command creates a cluster in the project which will contain two Nodes. Within each Node, we will eventually run a Pod that contains the services we are building and managing in this project.

A Node is a worker machine in Kubernetes. A Node may be a VM or physical machine, depending on the cluster. Each Node contains the services necessary to run Pods and is managed by the master components.

A Pod is a group of one or more containers, with shared storage/network, and a specification for how to run the containers. A Pod’s contents are always co-located and co-scheduled, and run in a shared context.

$ make gcp-cluster

gcloud container clusters create $(CLUSTER) --enable-ip-alias --num-nodes=2 --machine-type=n1-standard-2
gcloud compute instances list
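After the cluster comes up, kubectl still needs credentials for it. This is the standard GKE command for fetching them, followed by a quick check that both Nodes registered (cluster name and zone shown are the Makefile defaults):

```shell
# Fetch credentials so kubectl talks to the new cluster:
gcloud container clusters get-credentials ardan-starter-cluster --zone us-central1-b

# Both Nodes should report a Ready status:
kubectl get nodes
```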

make gcp-upload

This command uploads the two service containers we build into the Google Container Registry (GCR). It is from GCR that the containers we want to run in the Pod are referenced.

$ make gcp-upload

docker push gcr.io/$(PROJECT)/sales-api-amd64:1.0
docker push gcr.io/$(PROJECT)/metrics-amd64:1.0
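The push assumes the images were already built locally under the GCR naming scheme (registry host / project / image:tag). A quick sanity check, and a retag sketch in case an image was built under a different local name (the local tag below is hypothetical):

```shell
# Confirm the images exist locally with the expected GCR names:
docker images "gcr.io/ardan-starter-kit/*"

# If an image was built under a different local name, retag it first:
docker tag sales-api-amd64:1.0 gcr.io/ardan-starter-kit/sales-api-amd64:1.0
```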

make gcp-database

This command creates a Postgres database in GCP. It is a bare-bones database using the default network setup. This is important because eventually the Pod needs to connect to this database using a private IP address over the default network. No public IP address is assigned at this time. For outside access, you can enter the GCP console for this database and whitelist your public IP address, or use step 09. It is important that this database and the cluster exist in the same zone.

$ make gcp-database

gcloud beta sql instances create $(DATABASE) --database-version=POSTGRES_9_6 --no-backup --tier=db-f1-micro --zone=$(ZONE) --no-assign-ip --network=default
gcloud sql instances describe $(DATABASE)

make gcp-db-assign-ip (Optional)

This command is optional. It will white-list your public IP address on the database.

$ make gcp-db-assign-ip

gcloud sql instances patch $(DATABASE) --authorized-networks=[$(PUBLIC-IP)/32]
gcloud sql instances describe $(DATABASE)

make gcp-db-private-ip

This command retrieves the private IP address of the database, which is needed to configure the deployment of the services.

$ make gcp-db-private-ip

# IMPORTANT: Make sure you run this command and get the private IP of the DB.
gcloud sql instances describe $(DATABASE)
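The describe output is long; if you only want the address, gcloud's format flag can extract it directly. A sketch, assuming the instance has a single (private) address since no public IP was assigned:

```shell
# Pull just the first IP address out of the describe output:
gcloud sql instances describe ardan-starter-db \
  --format="value(ipAddresses[0].ipAddress)"
```

If you later assign a public IP as well, check the full describe output instead, since the list will then contain more than one address.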

make gcp-services

This command deploys the containers into a single Pod and runs them across the two Nodes.

Before running it, you must edit the YAML files and replace the markers with your project name and the private IP address retrieved by the last command.

$ make gcp-services

# These scripts need to be edited for the PROJECT and PRIVATE_DB_IP markers before running.
kubectl create -f gke-deploy-sales-api.yaml
kubectl expose -f gke-expose-sales-api.yaml --type=LoadBalancer
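If you prefer not to hand-edit the files, the markers can be substituted with sed. A sketch, assuming the marker spellings match the comment above and using an example private IP; check the YAML files for the exact placeholder text before running:

```shell
# Substitute the project name and database private IP into the deployment
# file in place (GNU sed syntax; the IP shown is an example):
sed -i "s/PROJECT/ardan-starter-kit/g; s/PRIVATE_DB_IP/10.0.0.3/g" \
  gke-deploy-sales-api.yaml
```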

Configure Database

You are now ready to test whether the system is operational. You must seed the database just as you do in your development environment. The admin tool has been added to the sales-api container to run the migrate and seed commands.

Get information about the pod. You need the pod name.

kubectl get pods

Replace <POD NAME> with the pod name and run this command to get a shell inside the sales-api container.

kubectl exec -it <POD NAME> --container sales-api  -- /bin/sh

Now run the migrate command to create the database structure, then the seed command to populate it with data.

./admin --db-disable-tls=1 migrate
./admin --db-disable-tls=1 seed
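The same commands can also be run without opening an interactive shell, which is handy for scripting. A sketch using the same kubectl exec form as above, minus the -it flags:

```shell
# Run migrate and seed directly, one exec per command:
kubectl exec <POD NAME> --container sales-api -- ./admin --db-disable-tls=1 migrate
kubectl exec <POD NAME> --container sales-api -- ./admin --db-disable-tls=1 seed
```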

Authenticated Requests

Before any requests can be sent, you must acquire an auth token. Make a request using HTTP Basic auth with the test user's email and password to get the token.

Get the public IP address assigned to the sales-api service, then set the environment variable.

kubectl get services sales-api
export SALES_API_PUBLIC_IP="COPY IP"
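Rather than copying the address by hand, it can be captured directly with kubectl's jsonpath output, assuming the service exposes a standard LoadBalancer ingress:

```shell
# Capture the external IP of the sales-api service in one step:
export SALES_API_PUBLIC_IP=$(kubectl get service sales-api \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SALES_API_PUBLIC_IP
```

Note that the external IP can take a minute or two to be provisioned; if the variable comes back empty, wait and retry.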

With the environment variable set, execute the token endpoint.

curl --user "[email protected]:gophers" http://$SALES_API_PUBLIC_IP:3000/v1/users/token/54bb2165-71e1-41a6-af3e-7da4a0e1e2c1

I suggest putting the resulting token in an environment variable like $TOKEN.

export TOKEN="COPY TOKEN STRING FROM LAST CALL"
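If jq is installed, the token can be captured in one step. The "token" field name is an assumption about the shape of the response JSON; inspect the raw output first if this comes back empty:

```shell
# Request a token and extract it from the JSON response (field name assumed):
export TOKEN=$(curl -s --user "[email protected]:gophers" \
  "http://$SALES_API_PUBLIC_IP:3000/v1/users/token/54bb2165-71e1-41a6-af3e-7da4a0e1e2c1" \
  | jq -r .token)
```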

To make authenticated requests put the token in the Authorization header with the Bearer prefix.

curl -H "Authorization: Bearer ${TOKEN}" http://$SALES_API_PUBLIC_IP:3000/v1/users/1/2