startups and code

A week of DevOps on GCP


I spent most of this week working on Google Cloud Platform (GCP). I have done some extensive work with Azure over the past several years, and have done some light work on AWS. I think it is important to experience something before judging it.

There is a principle which is a bar against all information, which is proof against all arguments, and which cannot fail to keep a man in everlasting ignorance—that principle is contempt prior to investigation.

I have met people who told me they hate Microsoft, and they hate Azure. I have had people tell me that Google is evil, which is really funny since their big claim was "Don't be evil", which was removed this year from their code of conduct (cited). And at the opposite extreme are the Apple fanboys who swear the "next" iPhone is the second coming of Christ. So when I was given the opportunity to work on GCP, I went all-in with it. I learned about Compute Engine, AppEngine, Instance Templates, Instance Groups, Kubernetes Engine, IAM, BigQuery, Storage Buckets, etc...

I probably have a really strong grasp of about 2% of all that is GCP. However, the parts I do know I am growing pretty fond of. I do think it is funny that Google is so forward thinking that they release beta features for clients to use, with no guarantees those features won't later be deprecated. What is even crazier is that you can even get access to alpha features. WHAT?! Why? Oh, because the people who are risky enough to use alpha features are also the same people who are annoyed enough to give great feedback and feature requests. What better QA than several thousand (if not more) people complaining about something they are getting for free?

Ok, on to the GCP world, specifically Compute Engine, AppEngine, and Kubernetes Engine. For the "TLDR" people, the answer is: take the time to dockerize everything and use Kubernetes for any highly-scalable, highly-available application. If you just need a simple blog/website, use AppEngine, or honestly, use Wordpress, Medium, or Github Pages. :-)

I am going to write some step-by-step guides to using GCP in the near future, which may mean I will never do it, but I have every intention of doing it. Today, I want to explain some things I experienced and hopefully help you.

The first thing: if you are using Instance Groups and Instance Templates, then you are doing a manual version of what Kubernetes Engine (K8s, abbreviation explained here) does for you. So let's explain some basics of Compute Engine.

Before you start complaining about how fast I'm moving, I'm assuming you have used some cloud architecture before. You know some basics of virtual servers, ssh, and containers. If you don't, go play on GCP - there are some great tutorials here: https://cloud.google.com/docs/tutorials

Compute Engine is how you spin up virtual servers on GCP, similar to EC2 on AWS. Compute Engine has Instance Templates, which describe the image and configuration an Instance Group uses to spin up additional servers. You can also create a single Compute Engine instance, ssh into that server, and treat it like your own machine.
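
To make that concrete, here is a minimal sketch of the gcloud commands involved; the names (my-template, my-group, my-dev-box), machine type, image, and zone are all placeholder assumptions, not anything specific to my setup.

```sh
# Create a reusable instance template (names, machine type, and image are placeholders)
gcloud compute instance-templates create my-template \
    --machine-type=n1-standard-1 \
    --image-family=debian-9 \
    --image-project=debian-cloud

# Create a managed instance group that keeps 3 servers running from that template
gcloud compute instance-groups managed create my-group \
    --template=my-template \
    --size=3 \
    --zone=us-central1-a

# Or just create a single instance and ssh into it like your own machine
gcloud compute instances create my-dev-box --zone=us-central1-a
gcloud compute ssh my-dev-box --zone=us-central1-a
```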

Kubernetes is a container orchestrator. Most containers are docker images (I said most, because some idiot out there is going to say "I am running a custom linux kernel with a modified version of docker and it isn't docker"; good for you, I don't care). Kubernetes on GCP will generally follow this order (a sketch of the actual commands follows the list):

  1. Create Container (docker build something...)
  2. Push Container to GCP Container Registry (docker/gcloud push something...)
  3. Create a container cluster (gcloud container clusters create something...)
  4. Create a deployment (kubectl run some-deployment name ports and image (from step 2)....)
  5. Wait until pods are running (kubectl get pods --watch)
  6. Expose to the internet (kubectl expose deployment some-deployment some type and port stuff)
  7. Wait until services are live (kubectl get services --watch)
  8. Go to external IP from step 7. TADA
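
Here is a rough sketch of those eight steps as shell commands; PROJECT_ID, my-app, my-cluster, the zone, and the ports are all placeholder assumptions.

```sh
# 1. Build the container image (PROJECT_ID, my-app, and the tag are placeholders)
docker build -t gcr.io/PROJECT_ID/my-app:v1 .

# 2. Push the image to Google Container Registry
gcloud docker -- push gcr.io/PROJECT_ID/my-app:v1

# 3. Create the cluster and point kubectl at it
gcloud container clusters create my-cluster --num-nodes=3 --zone=us-central1-a
gcloud container clusters get-credentials my-cluster --zone=us-central1-a

# 4. Create a deployment from the pushed image
kubectl run my-app --image=gcr.io/PROJECT_ID/my-app:v1 --port=8080

# 5. Wait until the pods are running
kubectl get pods --watch

# 6. Expose the deployment to the internet
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

# 7. Wait for the external IP to show up, then 8. hit it in a browser
kubectl get services --watch
```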

Now you have a new image running on Kubernetes on GCP.  BUT WAIT THERE'S MORE.

In step 6, there are various types of service you can create. Let's say you want a load balancer for your app. Then you would think the type is --type LoadBalancer. And you would be right, unless you want SSL. Then you don't use type LoadBalancer to create a load balancer. WAIT WHAT? Yes, that is not a typo.

To create an SSL-supported load balancer you do NOT create a service of type LoadBalancer.

You create one called an ingress. Why? Because a LoadBalancer service gives you a TCP load balancer, not an HTTP one. An ingress creates an HTTP(S) load balancer, and that is what lets you attach an SSL cert to it.
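
A minimal sketch of what that looks like on GKE, assuming the my-app deployment from earlier and placeholder cert files; the service has to be a NodePort for the GKE ingress controller to route to it, and the secret and ingress names are assumptions.

```sh
# Expose the deployment as a NodePort service (the GKE ingress controller routes to NodePorts)
kubectl expose deployment my-app --type=NodePort --port=80 --target-port=8080

# Store the certificate and key as a TLS secret (cert.pem and key.pem are placeholders)
kubectl create secret tls my-tls-secret --cert=cert.pem --key=key.pem

# Create the ingress, which GKE turns into an HTTP(S) load balancer with the cert attached
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - secretName: my-tls-secret
  backend:
    serviceName: my-app
    servicePort: 80
EOF
```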

After you understand all of that, K8s is actually pretty awesome. Sure, I default to a single Compute Engine instance for a dev environment, because I don't need a dev environment to scale and it will go up and down more than an elevator in Otis's service tower (fun elevator reference from a previous job). :-) But for production, you can spin up K8s pods running across multiple nodes with a Cloud SQL Proxy backend, Redis, and even some fun cron jobs running as endpoints (or images). I can write a separate post about cron jobs on GCP. I probably won't, but I could. LOL
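
For the Cloud SQL Proxy piece, here is a minimal sketch of running the proxy next to an app; the instance connection name and service account key file are placeholders, and in a real K8s pod you would typically run this as a sidecar container instead.

```sh
# Download the Cloud SQL Proxy binary (Linux 64-bit)
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy

# Open a local tunnel to the Cloud SQL instance on port 3306
# (PROJECT:REGION:INSTANCE and service-account.json are placeholders)
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306 \
    -credential_file=service-account.json
```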

Ok, that is all I wanted to get out and document this week about playing with GCP.  It is a great platform and does some amazing things. I'm glad I took the time to dig into K8s and realize how practical it is for production applications.

Thanks for stopping by and go build something amazing.