Moving to the Cloud
November 27, 2021
Finally getting around to writing some more content for this blog. This post will talk about my transition to using cloud services for this site, and some Cloudflare proxies for the stuff still at home. I’ll go into the Why, the What, and the How in detail.
The Why #
Short Version #
I want my hobby programming time to be working on things that I can’t do at work.
Long Version #
At Paycom we run everything on-prem; there are zero cloud deployments going on. Before I switched to an Internal Tools team I wasn’t able to mess with servers or with configuring and managing services, so to expand my skill set (and for fun) I got a couple of Dell rack servers and went to town. However, I now manage a team in our DevOps group, and part of that is managing our GitLab instance. That made GitLab a service I no longer wanted to run at home, so it was one of the first migrations. I still want to run most of my services on Kubernetes since that’s what I’m familiar with, but to change things up a bit I decided to use a managed k8s offering. Even at work I don’t provision my own clusters (yet), so for my “homelab” I decided to handle all the provisioning with Infrastructure-as-Code. The other main reason to move to the cloud was to gain experience with it. I’m always trying to expand my knowledge, and moving to the cloud lets me do that.
Another, smaller reason was cost. I did the math on how much my servers were costing me to run each month and found that moving to the cloud would cost about the same. My homelab was costing me $35/month, while after all is said and done my cloud setup costs about $30/month. I have significantly less hardware to play with now, but I wasn’t consistently using that much of my homelab’s hardware anyway. I do plan on keeping the R510 that I use as a NAS and getting TrueNAS SCALE set up so I can run Jellyfin on it directly, after gathering hardware from the R610s.
The What #
- GitLab - VCS, CI/CD, Container Registry, eventually monitoring. Free
- DigitalOcean - Managed K8s ~$30/month
- Cloudflare - DNS, Firewall, SSL Free
- Pulumi - IaC Free
- external-dns - DNS sync from Ingress Resources Free
- cert-manager - Automatic SSL certs from Ingress Resources Free
- ingress-nginx - K8s Ingress Free
Sticking with GitLab was an obvious choice as I really like their product and they offer a good number of features in their free tier.
I’ve messed with DigitalOcean a little bit in the past and it seemed like a fairly simple cloud setup, and the pricing was pretty good for their managed k8s offering. I could have done things cheaper if I had skipped their k8s offering and just built a cluster using k3s and some droplets, but I decided against it as the cost benefits weren’t quite there.
I had already set up Cloudflare to proxy my homelab setup to improve my security, and decided to keep that here. I use them to manage my DNS and “production” SSL certs.
I decided on Pulumi for my IaC over Terraform simply because I felt like it might be nice to be able to use a programming language instead of HCL.
external-dns, cert-manager, and ingress-nginx are all givens, I think. They let me automatically provision secure sites as needed under my domains.
The How #
GitLab Migration #
First I migrated my required projects to gitlab.com. This is a pretty simple process so I’m not going into detail; the Migration Documentation covers most everything better than I could. Once migrated, however, I had to rework my .gitlab-ci.yml to work on gitlab.com. This was mostly just deleting my existing production deployment process, as that involved rsyncing the static content over to a webserver. I had no desire to open that up to the world.
Containerization of the Site #
Before I could really get down to it I had to containerize my site. Luckily it’s just a static site, so it was super easy. I was using the monachus/hugo image to build my Hugo site, but I found that GitLab actually publishes a container with Hugo built in. I decided to move to that, as I trust GitLab more than some rando. I took advantage of docker multi-stage builds to keep my final image small; all the final image needs is nginx and my static content. I settled on the nginxinc/nginx-unprivileged:stable-alpine image to avoid running the container as root like the base nginx container does. I kept my tags pretty vague intentionally. I’m not using any advanced features of either image, so I figured keeping up to date is more important here. Here is my final Dockerfile. Again, super simple. There are not a lot of moving pieces.
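In spirit it’s a two-stage build along these lines (the exact Hugo builder image path and tag here are illustrative, not necessarily what I pinned):

```dockerfile
# Build stage: a Hugo image published by GitLab (path/tag illustrative)
FROM registry.gitlab.com/pages/hugo/hugo_extended:latest AS build
WORKDIR /src
COPY . .
# Render the static site into /src/public
RUN hugo --minify

# Final stage: unprivileged nginx serving only the generated static files
FROM nginxinc/nginx-unprivileged:stable-alpine
COPY --from=build /src/public /usr/share/nginx/html
```

One wrinkle worth knowing: the unprivileged image listens on 8080 rather than 80, so the chart’s service and ingress need to target that port.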
Next I needed a helm chart. This is pretty easy, as the default chart from helm create deploys an nginx webserver image, which is pretty much what I’m doing. I like to use local helm charts for most of my projects; I think it’s a bit cleaner for more Continuous Delivery style deployments. I’m also a fan of monorepos (not monoliths) for projects. Keeping everything together makes things a bit easier. Once this was done I did some local testing with k3d, which lets me easily create k8s clusters locally using docker as a backend.
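The local test loop was something like this (cluster name, chart name, and image path are all illustrative):

```shell
# Throwaway local cluster backed by docker (k3d wraps k3s-in-docker)
k3d cluster create blog-test

# Scaffold a chart; the default template already deploys an nginx-style image
helm create blog

# Point the chart at the site's image and install it into the local cluster
helm upgrade --install blog ./blog \
  --set image.repository=registry.gitlab.com/osullivan-lab/osullivan.tech \
  --set image.tag=latest
```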
Infrastructure Creation #
Now that I had a container and a helm chart I needed an external cluster that I could deploy to. I looked around at the different cloud providers before landing on DigitalOcean. At my scale they had the best pricing. The only issue was that they didn’t have HA k8s control plane nodes yet; however, this is now an option should I need to scale up that much. Once I decided on DigitalOcean I set out to use Pulumi to provision the cluster and get my other infrastructure services deployed. For the most part Pulumi just wraps existing Terraform providers, which allows it to support most of what Terraform supports out of the box. So with that I got to work using the DigitalOcean provider and provisioned my cluster. This is what I ended up with.
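Condensed down, it’s something like this (region, version, and node size are placeholders rather than my exact values):

```typescript
import * as digitalocean from "@pulumi/digitalocean";
import * as k8s from "@pulumi/kubernetes";

// The managed cluster itself. Region, version, and node size are placeholders.
const doCluster = new digitalocean.KubernetesCluster("osullivan-lab", {
    region: "nyc1",
    version: "1.21.5-do.0",
    nodePool: {
        name: "default",
        size: "s-1vcpu-2gb",
        nodeCount: 2,
    },
});

// The kubeconfig DigitalOcean returns at creation is only valid for a week,
// so once the cluster reports "running", fetch a fresh one from the API.
const kubeconfig = doCluster.status.apply(status => {
    if (status === "running") {
        const clusterDataSource = doCluster.name.apply(name =>
            digitalocean.getKubernetesCluster({ name }));
        return clusterDataSource.kubeConfigs[0].rawConfig;
    }
    return doCluster.kubeConfigs[0].rawConfig;
});

// A provider bound to that kubeconfig lets the rest of the program deploy
// resources (charts, namespaces, etc.) into the cluster.
const provider = new k8s.Provider("do-k8s", { kubeconfig });
```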
This isn’t a guide on how to use Pulumi, so I won’t go into the details, but overall I feel this is pretty straightforward. The only interesting bit is at the end with the doCluster.status.apply(). This is simply a workaround for the fact that the kubeconfig provided by the digitalocean.KubernetesCluster is only valid for a week; this will generate a new one if needed. The provider at the end is there to allow other portions of this script to provision resources in the cluster. You can see the entirety of my Pulumi setup at osullivan-lab/infra.
Most everything else is just a helm chart deployment. I did initially run into issues deploying the ingress-nginx helm chart, as it uses helm hooks for some things. Pulumi has since added support for hooks in the kubernetes.helm.v3.Release resource, and despite being in beta it is working well for me.
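A chart deployment through the Release resource looks roughly like this, using ingress-nginx as the example (values trimmed; `provider` is the cluster-bound k8s provider from the earlier sketch):

```typescript
import * as k8s from "@pulumi/kubernetes";

// ingress-nginx via Pulumi's Release resource, which drives real helm under
// the hood and so handles the chart's hooks correctly.
const ingressNginx = new k8s.helm.v3.Release("ingress-nginx", {
    chart: "ingress-nginx",
    repositoryOpts: {
        repo: "https://kubernetes.github.io/ingress-nginx",
    },
    namespace: "ingress-nginx",
    createNamespace: true,
}, { provider });
```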
I think the other most interesting part of my cluster setup is my use of both the Let’s Encrypt integration in cert-manager and Cloudflare’s Origin CA Issuer. I would like to use just the Origin CA Issuer; however, since it doesn’t have a ClusterIssuer option, I can’t use it in my dynamically provisioned Review App namespaces. For those I deploy a global ClusterIssuer that uses Let’s Encrypt to create certificates, using DNS challenges rather than HTTP for the cert validation. I do deploy the Origin CA Issuer to my production namespaces, as I can provision those ahead of time.
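For reference, a Let’s Encrypt ClusterIssuer with a Cloudflare DNS-01 solver looks roughly like this (the email and secret names are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: me@example.com                  # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # Secret holding a scoped CF token
              key: api-token
```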
Automated Deployments #
With the cluster fully ready I registered it in GitLab as a “GitLab Managed Cluster” with an environment scope of review/*. This allows me to easily deploy Review Apps to the cluster. I currently deploy my review apps exclusively under *.osullivan.xyz to keep things at least a little bit separated. My deployments consist of only 3 jobs: a build job, a deploy prep job, and a deploy job. The build job just builds my container using kaniko to avoid Docker-in-Docker. The deploy prep job just deploys the credentials necessary to pull my container image from the GitLab registry. The deploy job is also super simple: it just runs a helm upgrade --install command and passes in some env-specific values. All in all my deployments are pretty simple. Every push to an MR gets deployed as a review app, and every commit on master gets deployed here on osullivan.tech. You can see the full pipeline in my .gitlab-ci.yml.
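Condensed, the build and deploy jobs look something like this (job names, chart path, and values are illustrative; the $CI_* variables are GitLab’s predefined ones, and the managed-cluster integration injects the kube credentials):

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Credentials so kaniko can push to the GitLab registry
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf '%s:%s' "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build and push without a Docker daemon
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"

deploy_review:
  stage: deploy
  image: alpine/helm:3.7.1
  environment:
    name: review/${CI_COMMIT_REF_SLUG}
    url: https://${CI_COMMIT_REF_SLUG}.osullivan.xyz
  script:
    # KUBE_NAMESPACE comes from the GitLab managed-cluster integration
    - helm upgrade --install "site-${CI_COMMIT_REF_SLUG}" ./chart
      --namespace "${KUBE_NAMESPACE}"
      --set image.tag="${CI_COMMIT_SHORT_SHA}"
      --set ingress.host="${CI_COMMIT_REF_SLUG}.osullivan.xyz"
```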
What is still not in the cloud #
There isn’t much. The only thing not going to the cloud is my Jellyfin server serving content from my NAS; storage is just too expensive to keep raw Blu-ray rips in the cloud. Other than that it’ll just be a reverse proxy to TrueNAS and Jellyfin. I need to figure out a local DNS setup so I can resolve jellyfin.osullivan.tech differently on my LAN, so traffic that should stay local isn’t proxied through Cloudflare.
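A dnsmasq-style override on a local resolver is probably the simplest answer (the IP is a stand-in for wherever Jellyfin ends up living):

```
# Local resolver config: answer this one name with the LAN address instead
# of following the public, Cloudflare-proxied record.
address=/jellyfin.osullivan.tech/192.168.1.50
```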
The Future #
The main things on the infrastructure side are to hook the cluster up to GitLab with the new Kubernetes Agent and use a CI/CD tunnel to run jobs against the cluster, and to get Prometheus going to monitor the cluster. Eventually I should use separate clusters for test and production, but here we are. At a minimum of $30/month per cluster (two $10/month nodes, one $10/month load balancer), I’m just not about that.
For builds I would like to explore podman and buildah some more as other Docker alternatives.
For this site I think the next task will be adding comments support. I’m currently eyeing Remark42, but we’ll see. I also need to be adding more posts to the blog.