
4 posts tagged with "kubernetes"


· One min read
Kas J

Certificates in K3s

In my previous post I mentioned that Traefik provides SSL termination and certificate handling. The thing is, certificates are not a native resource type in the Kubernetes ecosystem the way "Pods" or "Services" are.

cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.

certmanager

Installing cert-manager

Add the custom resource definitions (CRDs) using a manifest from cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml

As with Traefik, I also created a values.yaml file for the Helm installation:

values.yaml
installCRDs: false # Oops didn't realise I could do it here
replicaCount: 1
extraArgs:
  - --dns01-recursive-nameservers=1.1.1.1:53,9.9.9.9:53
  - --dns01-recursive-nameservers-only
podDnsPolicy: None
podDnsConfig:
  nameservers:
    - "1.1.1.1"
    - "9.9.9.9"

Create the namespace, then add and update the Helm repo:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update

Install cert-manager via helm

helm install cert-manager jetstack/cert-manager --namespace cert-manager --values=values.yaml --version v1.11.0

With cert-manager now installed, it was time to get some certificates!
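To give a rough preview of where this goes, getting a certificate boils down to two of those new resource types: an issuer and a certificate. The sketch below is only illustrative - the email, domain and secret names are placeholders, and the Cloudflare DNS-01 solver is just an assumption about which DNS provider is in play:

```yaml
# Hypothetical sketch - names, email and domains are placeholders
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: me@example.com # hypothetical contact email
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging # secret storing the ACME account key
    solvers:
      - dns01:
          cloudflare: # assumes a Cloudflare DNS-01 solver
            apiTokenSecretRef:
              name: cloudflare-token
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls # where the signed cert lands
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  dnsNames:
    - "example.com"
```

The staging ACME server is used here deliberately - Let's Encrypt rate-limits the production endpoint, so it's safer to iterate against staging first.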

· 3 min read
Kas J

Now that I have a load balancer to expose my services externally, I have a couple of options:

  • Expose every service I deploy over MetalLB (i.e. each app gets its own IP address), or
  • Deploy a reverse proxy which intercepts and routes every incoming request to the corresponding backend service.

From the title, you can tell which option I went with. I chose the reverse proxy because:

  • I don't know how many applications I will eventually host
  • I also don't need to think about which application is associated with which IP, configure DNS records, etc.
  • It can also provide SSL termination and can be used with an ACME provider (like Let’s Encrypt) for automatic certificate generation (which I'll cover in a future post)

Installing Traefik

As with MetalLB, there are heaps of reverse proxy options out there, but I went with a popular one: Traefik.

traefik

I wanted to try installing this via Helm this time. Helm allows you to specify custom configuration values via a values.yaml file, so I did that first. I know it's quite long, but I just tweaked the defaults:

values.yaml
globalArguments:
  - "--global.sendanonymoususage=false"
  - "--global.checknewversion=false"

additionalArguments:
  - "--serversTransport.insecureSkipVerify=true"
  - "--log.level=INFO"

deployment:
  enabled: true
  replicas: 1
  annotations: {}
  podAnnotations: {}
  additionalContainers: []
  initContainers: []

ports:
  web:
    redirectTo: websecure
  websecure:
    tls:
      enabled: true

ingressRoute:
  dashboard:
    enabled: false

providers:
  kubernetesCRD:
    enabled: true
    ingressClass: traefik-external
    allowExternalNameServices: true
  kubernetesIngress:
    enabled: true
    allowExternalNameServices: true
    publishedService:
      enabled: false

rbac:
  enabled: true

service:
  enabled: true
  type: LoadBalancer
  annotations: {}
  labels: {}
  spec:
    loadBalancerIP: 192.168.86.100 # this should be an IP in the MetalLB range
  loadBalancerSourceRanges: []
  externalIPs: []

Then I needed to execute these commands to install via helm (after installing helm of course):

Add repo

helm repo add traefik https://helm.traefik.io/traefik

Update repo

helm repo update

Create namespace

kubectl create namespace traefik

Finally install using helm and our custom values file:

helm install --namespace=traefik traefik traefik/traefik --values=values.yaml

Verifying installation

Finally, it was time to check whether the installation succeeded.

traefikverify

What a beautiful sight - it is all working. The main thing I was happy to see was that MetalLB did its job too, assigning the IP 192.168.86.100 to the Traefik service. This means I can now route all incoming requests (regardless of which application) to this IP and Traefik will handle all the routing. This will be done through domain names, which I'll cover in a later post.
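As a rough sketch of what that domain-based routing will look like (the host and service names below are made up), a Traefik IngressRoute matches on the request's Host header and forwards traffic to a backend Service:

```yaml
# Hypothetical example - "whoami" service and domain are placeholders
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure # the TLS entrypoint configured in values.yaml
  routes:
    - match: Host(`whoami.example.com`) # hypothetical domain
      kind: Rule
      services:
        - name: whoami # hypothetical backend Service
          port: 80
```

Every app then shares the one MetalLB-assigned IP, and only the Host rule differs per application.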

· 3 min read
Kas J

So now I have a cluster - woot! My first instinct was to get stuck in and deploy some apps, but I quickly realised there were a couple of other things that needed to be considered. Say I deployed a web server to my cluster using a small nginx container. It's up and running, but how do I access it when it is only routable within my cluster? If you look at the pods' IP addresses, they are all 10.1.x.x, which is not in my home network.

Well, the answer is I need to use a load balancer to expose that web server outside the cluster so that I can see it on my home network. When working in the public cloud, these load balancers are usually provided by the cloud provider, but I need one for my locally hosted environment.

After a bit of research, I found that MetalLB is the go-to solution for this. You give it a "pool" of IP addresses within your home network to allocate to services that you want to expose, and it just does it (much like a DHCP server).

Installing applications

This is the first app I am going to install on my cluster, so it took me a little bit of reading to get to this point. Here are my key takeaways for installing this (and any) app:

  • You can give Kubernetes a manifest, which is basically a YAML file that lets you declaratively specify what you want to install, where to install it from, what configuration you want and how to expose it.
  • Most applications are containerised with Docker and often come with an associated docker-compose file. There is a nifty tool called kompose which takes these docker-compose files and converts them into Kubernetes manifests so they can be deployed to your cluster - I plan on using this a lot.
  • Another popular way of installing applications is Helm. Helm is a package manager, similar to apt (if you're familiar with Ubuntu), which allows you to easily install applications onto your cluster. All you need to do is specify the repo for the application you want to install and it does the rest - I plan on using this a lot too.
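For example, kompose can take a minimal docker-compose file like this hypothetical one and emit Deployment and Service manifests via `kompose convert -f docker-compose.yaml`:

```yaml
# docker-compose.yaml - a hypothetical single-service app
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80" # published ports become a Kubernetes Service
```

Running the convert command in the same directory produces files along the lines of a web Deployment and a web Service, ready for kubectl apply.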

Installing metallb

Installing MetalLB is pretty easy out of the box. You are provided with a manifest to deploy using the kubectl apply command:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml

This installs MetalLB into a new namespace called metallb-system. Namespaces are a Kubernetes construct that gives you a way to organise resources within your cluster. I like to think of them as "folders" in a typical file system. So for MetalLB, all my resources will live in the metallb-system namespace. This allows for easy troubleshooting in the future, as I know where they all live.

Configuring metallb

Once installed, there were some configuration changes that needed to be made. As mentioned earlier, I needed to specify a pool of IP addresses for MetalLB to allocate out. I put this into another YAML file:

/home-lab/cluster-setup/metallb/metallb-ipconfig.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.86.100-192.168.86.110

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
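Once this file is applied with `kubectl apply -f metallb-ipconfig.yaml`, any Service of type LoadBalancer should get an address from the pool. As a minimal sketch (the nginx names here are hypothetical, not something deployed in this post):

```yaml
# Hypothetical test Service - MetalLB should assign it an IP from first-pool
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer # triggers MetalLB allocation
  selector:
    app: nginx # hypothetical pod label
  ports:
    - port: 80
      targetPort: 80
```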

Verifying installation

All done! Here are a few commands to verify my install:

metallbverify

But the real test is whether it will allocate an IP to a service. Let's test it with a reverse proxy service. Stay tuned, as I will cover this in my next post!

· 3 min read
Kas J

So with my hardware set up, it was time to get my software up and running. One of the main objectives of setting up this homelab was to get familiar with Kubernetes, so I needed to get a cluster up and running so I could do more than this:

dilbert

Installing an OS

I needed to get an OS installed on my NUC before anything else. There are plenty of open source options out there, but I stuck with trusty Ubuntu Server 22.04.1 LTS.

K8s vs K3s

Before I got stuck into deploying Kubernetes, I wanted to investigate what options I had for a homelab. It boiled down to two main ones: K8s vs K3s. Both share the same source code, but the key difference for me was that K3s is significantly more lightweight and can be deployed much faster, while still having all the key capabilities of K8s. A number of "production grade" features are excluded from K3s, such as handling of complex applications and integrations with public cloud providers, which I didn't require.

Installing K3s

Installing K3s couldn't be easier out of the box, and it takes no time at all. I simply needed to ssh into my freshly installed Ubuntu server and execute the following command:

curl -sfL https://get.k3s.io | sh - 

and to uninstall it is:

/usr/local/bin/k3s-uninstall.sh

I must admit I ran these commands a LOT, because a number of things are installed by default which I didn't need (yet). I'm not going to get into the various customisable configuration options here, but there is some pretty good documentation for it. After tweaking my configuration, I ended up with the following command to install the cluster I wanted:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--disable traefik --disable servicelb --disable kube-proxy --disable local-storage --cluster-init --tls-san 10.43.0.1" sh -s -
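For anyone who prefers a file over a long environment variable, my understanding is the same flags can live in a config file at /etc/rancher/k3s/config.yaml, which k3s reads on startup. A sketch of what the equivalent might look like (note that kube-proxy has its own dedicated flag rather than living in the disable list):

```yaml
# /etc/rancher/k3s/config.yaml - rough file equivalent of the flags above
write-kubeconfig-mode: "644"
cluster-init: true
tls-san:
  - 10.43.0.1
disable:
  - traefik
  - servicelb
  - local-storage
disable-kube-proxy: true
```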

Accessing my K3s cluster

OK, time to run my first kubectl command. To verify that everything was running properly, I ran:

sudo kubectl get nodes

which shows me my single node in my cluster is up and running:

getnodes

I needed to run it with sudo, which I thought was annoying - I had to fix this (OCD much?). The K3s kubeconfig file is stored at a Rancher location, /etc/rancher/k3s, which is only readable by root - I think this is why I needed to run kubectl with sudo. So I ran the following steps to rectify that:

Create .kube directory in my home directory

sudo mkdir /home/kas/.kube

Copy the kubeconfig file into the newly created directory

sudo cp /etc/rancher/k3s/k3s.yaml /home/kas/.kube/config

Change ownership of the config file so that root wasn't needed

sudo chown kas:kas /home/kas/.kube/config

Set the cluster's server address in the new config file (and hopefully this is the last time I have to use sudo for kubectl)

sudo kubectl config set-cluster default --server=https://192.168.86.41:6443 --kubeconfig /home/kas/.kube/config

I wanted to ensure I could access my cluster from my laptop without having to SSH into my Ubuntu server (k3smaster) every time. To do this, I needed to copy the kubeconfig file across to my laptop using scp.

From my laptop I ran:

scp k3smaster:/home/kas/.kube/config /home/kas/.kube/config

...and I'm laughing:

getnodes