
Kubernetes Without the Drama - Part 2: Observability in Action

  • Himanshu Negi
  • 2 days ago
  • 4 min read

Last time, we got our app cruising inside MicroK8s — smooth, predictable, like a well-oiled machine. If you haven’t configured your cluster locally yet, check out Part 1 — Kubernetes Without the Drama – MicroK8s in Action. But here’s the catch: machines don’t talk. When something slows down or crashes, Kubernetes won’t whisper why.


 So, the real question becomes: 

 What’s going on in there right now? 

 Sure, Kubernetes is great at keeping things running, but it doesn’t tell you what’s happening under the hood. It won’t tell you why CPU usage suddenly spiked, which pod decided to take a nap, or when performance started to wobble. 

To make sense of that, we need observability — insight into the internal state of the system while it is running.  Observability combines three things: 

  • Metrics - numerical signals about resource usage and performance 

  • Logs - event and application output with context 

  • Alerting - awareness when conditions cross thresholds we care about 

Together, these help answer: 

  • Is the application healthy? 

  • Is the cluster under load or stress? 

  • When did something change? 

  • What was happening right before it did? 

Our workflow doesn’t change (refer to Part 1): 

  • We still run the cluster inside Multipass 

  • We still use Ingress and hostnames to access services from our Mac 

  • We still avoid unnecessary Kubernetes complexity 

We’re not rebuilding our setup - we’re adding visibility to it. 

In this post, we will:  

  1. Enable the built-in observability stack 

  2. Expose Prometheus, Grafana, and Alertmanager through Ingress 

  3. View live cluster metrics, logs and alerts 

By the end, our cluster won’t just be running — we’ll be able to understand its state. 

 

1. Enable the Observability Stack 

Time to give our cluster some superpowers. MicroK8s comes with an observability bundle - Prometheus for metrics, Grafana for dashboards, Alertmanager for alerts, Loki for logs, and Promtail for log collection. And the best part? It’s all just one command away:

microk8s enable observability

Grab a coffee while it spins up, then check that everything’s alive: 

microk8s kubectl get pods -n observability   


You should see Prometheus, Grafana, Alertmanager, Loki, and Promtail happily running. 
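
If you’d rather not keep re-running that command while the pods come up, kubectl can wait for you (a 5-minute timeout is plenty on most machines):

# Block until every pod in the observability namespace reports Ready
microk8s kubectl wait --for=condition=Ready pods --all -n observability --timeout=300s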

At this point, the monitoring and logging stack is active inside the cluster, but we can’t access the dashboards yet. Just like in the previous post, our cluster is running inside a Multipass VM, so we’ll expose these UIs through Ingress and map them to local hostnames. 

 

2. Expose Prometheus, Grafana, and Alertmanager via Ingress 


We’ll expose Prometheus, Grafana, and Alertmanager using Ingress — the same approach we used for our app earlier. 


Create the Ingress routes: 


# Grafana
microk8s kubectl create ingress grafana \
  --class=public \
  --rule="grafana.local/*=kube-prom-stack-grafana:80" \
  -n observability

# Prometheus
microk8s kubectl create ingress prometheus \
  --class=public \
  --rule="prometheus.local/*=kube-prom-stack-kube-prome-prometheus:9090" \
  -n observability

# Alertmanager
microk8s kubectl create ingress alertmanager \
  --class=public \
  --rule="alert.local/*=kube-prom-stack-kube-prome-alertmanager:9093" \
  -n observability
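
Before we touch /etc/hosts, it’s worth a quick sanity check that the routes exist and point at the right services:

# Confirm the three Ingress routes and their backends
microk8s kubectl get ingress -n observability

# Dig into one route if something doesn't respond later
microk8s kubectl describe ingress grafana -n observability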


Map the hostnames to the VM: 


VM_IP=$(multipass info microk8s-vm | awk '/IPv4/{print $2}') 

echo "$VM_IP grafana.local prometheus.local alert.local" | sudo tee -a /etc/hosts 
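
To confirm the mapping works end to end, hit one of the hostnames from your Mac. A 200 or 302 response means traffic is reaching Grafana through the Ingress:

# Grafana usually answers with a redirect to its login page
curl -s -o /dev/null -w "%{http_code}\n" http://grafana.local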


Access the dashboards in your browser: http://grafana.local, http://prometheus.local, and http://alert.local 

Default Grafana login: 

admin / prom-operator 
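
If those credentials are rejected (the chart can generate its own password), you can read it straight from the Grafana secret. The secret name below assumes the kube-prom-stack release naming used by the MicroK8s addon, the same prefix we saw in the service names above:

# Read the Grafana admin password from the chart-managed secret
microk8s kubectl get secret kube-prom-stack-grafana -n observability \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo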

Sign in, and Grafana greets you with dashboards that reveal your cluster’s heartbeat — metrics, node stats, and pod activity, all in one place. 

 

3. Viewing Live Metrics and Logs 

With the observability stack up and dashboards exposed, it’s time to peek under the hood and watch your cluster in action. 

Open Grafana  → http://grafana.local 

Login:  admin / prom-operator 

From the left sidebar: 

Dashboards → Browse 

Look for a dashboard named: 

Kubernetes / Compute Resources / Namespace 

Select it, then choose the namespace where your application (my-nginx) is running, which is default in our case. 
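
Not sure where the app landed? A quick lookup confirms it (assuming the Deployment from Part 1 is still called my-nginx):

# Find which namespace the my-nginx pods are running in
microk8s kubectl get pods -A | grep my-nginx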

You should now see: 

  • CPU usage for my-nginx 

  • Memory usage 

  • Container restarts 

  • Pod resource patterns 

This is your cluster speaking through metrics. 
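
Those panels are just PromQL underneath. If you’re curious about the raw numbers, you can run a similar query against the Prometheus HTTP API; the expression below is a generic per-pod CPU rate, not necessarily the exact one the dashboard uses:

# Per-pod CPU usage (in cores) for the default namespace over the last 5 minutes
curl -s 'http://prometheus.local/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)'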

 

4. Viewing Logs with Loki 

Logs are already flowing into Loki via Promtail, so there’s no extra setup needed — just dive in and explore.  

In Grafana: 

Explore → (top-left dropdown) → Select "Loki" 

Then run a simple query: 

{app="nginx"} 

You should see: 

  • Access logs from requests 

  • Response codes 

  • Timing and client information 

Send traffic again if you want to watch logs stream live: 

curl http://demo.local 
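
To make the stream livelier, send a small burst instead of a single request:

# Fire 20 quick requests at the demo app so fresh access logs land in Loki
for i in $(seq 1 20); do curl -s -o /dev/null http://demo.local; done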

This gives you both sides of the observability picture: 

  • Metrics show behaviour 

  • Logs show context 

 

5. Viewing Alerts 

Prometheus and Alertmanager already come with a set of Kubernetes alerts — we don’t need to create anything. 

Open Prometheus: 

In the top menu, click:  Alerts 

You’ll see all the built-in alerts the cluster is evaluating. If everything is healthy, they’ll show as green or inactive. If something goes wrong (like a pod repeatedly crashing), the alert will automatically move into firing state. 
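
The same information is available from the terminal if you prefer. Both endpoints below are standard APIs, reachable through the hostnames we mapped earlier:

# Active alerts (pending or firing) as Prometheus sees them
curl -s http://prometheus.local/api/v1/alerts

# Alerts that have reached Alertmanager
curl -s http://alert.local/api/v2/alerts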

 

From Running to Understanding 

You’ve gone from simply deploying workloads to turning blind spots into insights. With metrics, logs, and alerts at your fingertips, you’re no longer guessing when something feels off - you’re diagnosing it. 

Observability isn’t just extra tooling; it’s your window into Kubernetes. Now, when pods misbehave or performance wobbles, you’ll know why - and fix it before the drama escalates. 

So go ahead, open those dashboards, watch the logs stream, and keep an eye on alerts. Your cluster isn’t just running anymore - it’s talking, and you’re listening. 

 
