Get a LoadBalancer for your private Kubernetes cluster

In this tutorial, I'll walk through how you can expose a Service of type LoadBalancer in Kubernetes and get a public, routable IP for any service on your local or dev cluster through the new inlets-operator.

The inlets-operator is a Kubernetes controller that automates a network tunnelling tool I released at the beginning of the year, named inlets. Inlets creates a tunnel from a computer behind NAT, a firewall, or a private network to a computer on another network, such as the internet. Think of it as "Ngrok, but Open Source and without limits".


Conceptual diagram of inlets, for the use-case of enabling webhooks from GitHub to a local service
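To give a feel for what the operator automates, here's a rough sketch of running an inlets tunnel by hand, outside of Kubernetes. The flag names below are how I recall the inlets CLI at the time of writing, so treat them as an illustration and check inlets --help for your version:

# On the exit-node (any VPS with a public IP), generate a token and start the server
export TOKEN="$(head -c 16 /dev/urandom | shasum | cut -d' ' -f1)"
inlets server --port=8090 --token="${TOKEN}"

# On the machine behind NAT, point the client at the exit-node
# and forward a local service listening on port 3000
inlets client --remote="ws://EXIT_NODE_IP:8090" \
  --upstream="http://127.0.0.1:3000" \
  --token="${TOKEN}"

Roughly speaking, the inlets-operator automates this for you: it provisions the exit-node, starts the server there, and runs the client inside your cluster for each LoadBalancer Service.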

For comparisons to other tools such as Ngrok and MetalLB, and for more about the use-cases for incoming network connectivity, feel free to check out the GitHub repo and leave a ⭐️: inlets-operator.

Tutorial

First we'll create a local cluster using k3d or KinD, then create a Deployment for Nginx, expose it as a LoadBalancer, and then access it from the Internet.

Pre-reqs

  • DigitalOcean.com or Packet.com account in which the operator will create hosts with public IPs

  • kubectl access to a local cluster created with KinD, Minikube, Docker Desktop, k3d, or whatever your preference is.

Option A - Install your local cluster with k3d

k3d installs Rancher's lightweight k3s distribution and runs it in a Docker container. The advantage over KinD is that it's faster, smaller, and keeps state between reboots.

Note: You'll also need Docker installed to use k3d.

k3d create --server-arg "--no-deploy=traefik"

INFO[0000] Created cluster network with ID 8babe89daae477b2eb14e08754194865a559c6def84b8c78b0055e21d977b430 
INFO[0000] Created docker volume  k3d-k3s-default-images 
INFO[0000] Creating cluster [k3s-default]               
INFO[0000] Creating server using docker.io/rancher/k3s:v0.9.1... 
INFO[0000] Pulling image docker.io/rancher/k3s:v0.9.1... 
INFO[0007] SUCCESS: created cluster [k3s-default]       
INFO[0007] You can now use the cluster with:

Before going any further, switch into the context of the new Kubernetes cluster:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

Note: these instructions were tested with Kubernetes v1.15.4

Option B - Install your local cluster with KinD

KinD has gained popularity amongst the Kubernetes community since it was featured at KubeCon last year.

Note: You'll also need Docker installed to use KinD.

 kind create cluster
Creating cluster "kind" ...
⠊⠁ Ensuring node image (kindest/node:v1.15.3) 🖼 

Before going any further, switch into the context of the new Kubernetes cluster:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")

Create an Access Token

The operator currently works with the Packet and DigitalOcean APIs to provision a host with a public IP.

Log into DigitalOcean.com, then click "API".


Click "Generate New Token".

Copy the value from the UI and run the following to store the key as a Kubernetes secret:

export DO_TOKEN="PASTE_VALUE_HERE" # Update this line

kubectl create secret generic inlets-access-key \
  --from-literal inlets-access-key="${DO_TOKEN}"
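If you'd like to double-check that the secret is in place (its value stays base64-encoded inside the cluster), you can describe it:

kubectl describe secret inlets-access-key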

Deploy the inlets-operator into your cluster

kubectl apply -f https://raw.githubusercontent.com/alexellis/inlets-operator/master/artifacts/crd.yaml

kubectl apply -f https://raw.githubusercontent.com/alexellis/inlets-operator/master/artifacts/operator-rbac.yaml

kubectl apply -f https://raw.githubusercontent.com/alexellis/inlets-operator/master/artifacts/operator-amd64.yaml
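Before creating any tunnels, check that the operator has started. Assuming the Deployment in operator-amd64.yaml is named inlets-operator and runs in the default namespace, the following will wait for the rollout and then stream its logs:

kubectl rollout status deployment/inlets-operator

kubectl logs deployment/inlets-operator -f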

Create a test deployment

If you're an OpenFaaS user, then you could deploy the OpenFaaS gateway now, but let's try Nginx for simplicity:

kubectl run nginx-1 --image=nginx --port=80 --restart=Always

You'll see the Deployment appear, but we run into the classic problem: there's no way to access it from the internet.
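You can confirm that for yourself: the Pod only gets a cluster-internal IP, so there is nothing routable from outside to connect to yet:

kubectl get deployment nginx-1

# The Pod IP shown here is only reachable from inside the cluster
kubectl get pods -o wide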

Expose Nginx as a LoadBalancer

Now if you were using a managed cloud platform such as AWS EKS, GKE, or DigitalOcean Kubernetes, you'd have a public IP address assigned by the platform. We're using a local KinD or k3d cluster, so that simply isn't available.

Fortunately inlets solves this problem.

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
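If you prefer to keep a manifest in version control rather than use kubectl expose, a roughly equivalent Service can be applied inline. The run=nginx-1 selector is an assumption based on the label that this version of kubectl run adds to the Pods:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-1
spec:
  type: LoadBalancer
  selector:
    run: nginx-1
  ports:
  - port: 80
    targetPort: 80
EOF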

Now we see the familiar "Pending" status, but since we've installed the inlets-operator, a VM will be created on DigitalOcean and a tunnel will be established.

kubectl get svc -w

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        2m25s
nginx-1      LoadBalancer   10.104.90.5   <pending>     80:32037/TCP   1s

Keep an eye on the "EXTERNAL-IP" column for your public IP.
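The operator also records its progress in a Tunnel custom resource for each Service. Assuming the CRD applied earlier registers the plural name tunnels, you can watch it with:

kubectl get tunnels -o wide -w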

Access your local cluster service from the Internet

Using the IP in "EXTERNAL-IP" you can now access Nginx:

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        4m34s
nginx-1      LoadBalancer   10.104.90.5   <pending>     80:32037/TCP   2m10s
nginx-1      LoadBalancer   10.104.90.5   206.189.117.254   80:32037/TCP   2m36s
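You can test it from any machine on the internet with curl, substituting your own EXTERNAL-IP for the one shown above:

curl -i http://206.189.117.254/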

In the DigitalOcean dashboard you can see the VM (exit-node) that was provisioned for the tunnel.

Now open that IP in a browser and you'll see the Nginx welcome page.

The exit-nodes created by the inlets-operator on DigitalOcean cost around 5 USD per month, using the cheapest droplet available with 512MB RAM. There may be cheaper options with other providers.

Contributions are welcome to add other hosting providers such as AWS EC2 and Google Cloud.

Video demo

Short on time? Check out my video demo and walk-through:

Get a LoadBalancer for your RPi cluster

You can install Kubernetes on your Raspberry Pi with k3s using my tutorial.

Then follow the instructions for Raspberry Pi: Running inlets-operator on a Raspberry Pi (armhf)

Wrapping up

By using inlets and the new inlets-operator, we can now get a public IP for Kubernetes services behind NAT, firewalls, and private networks.

At 5 USD per month, your private LoadBalancer is a fraction of the cost of a cloud load balancer, which comes in at 15 USD or more per month. I believe the cost comparison is almost irrelevant, though, because it's currently impossible to get a cloud load balancer from AWS or Google Cloud for your local KinD cluster. The inlets-operator changes that.

Does this sound useful to you? Let me know via Twitter @alexellisuk

Fork / star the code - alexellis/inlets-operator

Not a Kubernetes user yet? Try alexellis/inlets with any HTTP server on your laptop.
