Monitoring-Stack Deployment To A Kubernetes Cluster — Prometheus | Grafana | AlertManager | Loki + Exporters | Dashboards, and More

In this tutorial, let's see how to deploy this full monitoring stack to a Kubernetes cluster. For this deployment I'm going to use an EKS cluster.

  • You need the kubectl CLI and the Helm v3.x package manager installed and configured to work with your Kubernetes clusters.

Once you have successfully installed and configured kubectl and Helm on your machine, we can add the prometheus-community charts repository. This chart repository includes kube-prometheus-stack, promtail, loki, and more. In this tutorial I'll only mention the tools relevant to this deployment; you can browse the repository itself to find the rest of its charts.


  • Add the prometheus-community charts repo to Helm:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

To deploy the Prometheus stack, copy this values.yaml file and place it in your deployment directory.
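The values.yaml itself isn't reproduced in this extract. As a rough sketch of the kind of persistence settings it would carry (the claim name grafana-pvc is a hypothetical that must match your pvc.yaml; the admin password shown is the chart default used later in this tutorial), an excerpt might look like:

```yaml
# values.yaml (illustrative excerpt only, not the full file)
grafana:
  adminPassword: prom-operator      # chart default; change it for production
  persistence:
    enabled: true
    existingClaim: grafana-pvc      # hypothetical name; must match pvc.yaml
```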

Before that, you need to create this pvc.yaml file, since the Helm deployment provisions a persistent volume to keep the monitoring stack's data durable across pod restarts.
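The pvc.yaml referenced here isn't reproduced in this extract, so here is a minimal sketch of what such a claim could look like (the name grafana-pvc, the gp2 storage class, and the 10Gi size are all assumptions; adjust them to your cluster):

```yaml
# pvc.yaml - minimal sketch; name, size and storage class are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2      # common EBS storage class on EKS
  resources:
    requests:
      storage: 10Gi
```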


Then deploy this PVC to your cluster using the commands below.

kubectl create ns monitoring
kubectl create -f pvc.yaml -n monitoring

After that, follow the steps below to deploy the Prometheus stack to your cluster.


  • Then run this command to perform the deployment:
helm install monitoring-stack prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml

Alright! You have now successfully deployed the Prometheus monitoring stack to your Kubernetes cluster.

Now let's add application logging to this deployment as well.

Application Logging Configuration (Promtail + Loki)

  • To add the Promtail and Loki configuration, first add the Loki charts repository to your machine using Helm:
helm repo add loki https://grafana.github.io/loki/charts
helm repo update
  • Then deploy Loki to your cluster with this command:
helm upgrade --install loki loki/loki -n monitoring
  • After that, install Promtail to scrape logs and ship them to Loki; use the command below.

Promtail is the recommended way to ship your logs to Loki, as its configuration is very similar to Prometheus. This lets you ensure that labels for metrics and logs are equivalent by re-using the same scrape_configs and relabeling configuration. In Grafana, having the same labels lets you pivot from metrics to logs easily by simply switching the data source.
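As an illustration of that shared relabeling idea, a minimal Promtail scrape_configs fragment might look like the following (the app label mapping is an assumption for illustration, not the article's actual config):

```yaml
# Promtail config sketch: same relabeling style as Prometheus,
# so the `app` label on logs matches the one on metrics
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        target_label: app
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
```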

helm upgrade --install promtail loki/promtail --set "loki.serviceName=loki" -n monitoring

Once you have successfully installed Loki and Promtail, you can navigate to Grafana (http://localhost:3000; replace localhost with the IP appropriate to your scenario, for example after port-forwarding the Grafana service with kubectl port-forward).

At the Grafana login prompt, use the chart's default credentials below (change the password after first login):

Username: admin

Password: prom-operator

Configure Data-Sources / Dashboards

  • To configure the data sources, first go to the Settings menu in Grafana, then click the Data sources section as shown below.
  • On the Data sources page you will see the Prometheus data source already added; you need to add a Loki data source as well. Click the Add data source button in the top-right corner as shown below.
  • Then select the Loki data source from the list as shown below.
  • Finally, click the Save & Test button at the bottom of the page; you should see a green status message saying "Data source connected and labels found" as shown below. Otherwise, you will receive an error message.
  • Now let's add dashboards to visualize this data. In Grafana, hover over the "+" icon in the top-left corner and click the "Import" button as shown below.
  • Below I have listed dashboard IDs that visualize this data well. Copy an ID and click the "Load" button as shown below.

  • Pod Memory Table — 11672 (Use data-source as Prometheus)
  • Advanced Node Metrics — 11074 (Use data-source as Prometheus)
  • Application Logs — 13639 (Use data-source as Loki)
  • Kubernetes Cluster Monitoring (via Prometheus) — 3119 (Use data-source as Prometheus)
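As a side note, the Loki data source added through the UI earlier can equivalently be declared in a Grafana provisioning file; here is a sketch, assuming the default service name (loki) and port (3100) from the chart install above:

```yaml
# Grafana data source provisioning sketch (provisioning/datasources/loki.yaml)
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100   # assumes the default Loki service from the chart above
```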

AlertManager Configuration

In this article, I'm only going to show the MS Teams configuration together with AlertManager.

  • As a first step, go to your Teams application and create a team channel, then click the three dots on the right side and go to the Connectors section; you will see a screen similar to the one below.
  • Find the "Incoming Webhook" connector and click the "Configure" button; you will be taken to the page below.
  • Give your webhook a name and, optionally, upload an image. Then click the "Create" button. You will now see a fairly long URL; here is how a sample webhook URL ends, to give you an idea: <ID>/IncomingWebhook/<ID>

Now copy your webhook URL and keep it somewhere secure; we'll need it later.

  • To add the MS Teams configuration, first add the prometheus-msteams charts repository to your machine using Helm:
helm repo add prometheus-msteams https://prometheus-msteams.github.io/prometheus-msteams
helm repo update
  • Before you deploy the MS Teams configuration to your cluster, place this alerts-config.yaml file in the correct location relative to the Helm command below.


# alerts-config.yaml
replicaCount: 1
image:
  tag: v1.5.0

connectors:
  # in alertmanager, this will be used as http://prometheus-msteams:2000/dev
  - dev: <your-teams-webhook-url> # you can change these connector names according to your scenario
  # in alertmanager, this will be used as http://prometheus-msteams:2000/foo
  # - foo: <another-teams-webhook-url>

# extraEnvs is useful for adding extra environment variables such as proxy settings
# extraEnvs:
#   HTTP_PROXY: http://corporateproxy:8080
#   HTTPS_PROXY: http://corporateproxy:8080

# container:
#   additionalArgs:
#     - -debug

# Enable metrics for the Prometheus operator (ServiceMonitor)
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: dev-prometheus-stack # change this accordingly; must match your Prometheus release label
    scrapeInterval: 30s
  • Helm command to deploy the MS Teams configuration:
helm upgrade --install prometheus-msteams --namespace monitoring -f alerts-config.yaml prometheus-msteams/prometheus-msteams
  • Then add the custom notification card file below (custom-card.tmpl) to your directory.


{{ define "teams.card" }}
{
  "@type": "MessageCard",
  "@context": "http://schema.org/extensions",
  "themeColor": "{{- if eq .Status "resolved" -}}2DC72D
                 {{- else if eq .Status "firing" -}}
                    {{- if eq .CommonLabels.severity "critical" -}}8C1A1A
                    {{- else if eq .CommonLabels.severity "warning" -}}FFA500
                    {{- else -}}808080{{- end -}}
                 {{- else -}}808080{{- end -}}",
  "summary": "{{- if eq .CommonAnnotations.summary "" -}}
                  {{- if eq .CommonAnnotations.message "" -}}
                    {{- js .CommonLabels.cluster | reReplaceAll "_" " " | reReplaceAll `\\'` "'" -}}
                  {{- else -}}
                    {{- js .CommonAnnotations.message | reReplaceAll "_" " " | reReplaceAll `\\'` "'" -}}
                  {{- end -}}
              {{- else -}}
                  {{- js .CommonAnnotations.summary | reReplaceAll "_" " " | reReplaceAll `\\'` "'" -}}
              {{- end -}}",
  "title": "Prometheus Alert ({{ .Status }})",
  "sections": [ {{$externalUrl := .ExternalURL}}
    {{- range $index, $alert := .Alerts }}{{- if $index }},{{- end }}
    {
      "activityTitle": "[{{ js $alert.Annotations.description | reReplaceAll "_" " " | reReplaceAll `\\'` "'" }}]({{ $externalUrl }})",
      "facts": [
        {{- range $key, $value := $alert.Annotations }}
        {
          "name": "{{ $key }}",
          "value": "{{ js $value | reReplaceAll "_" " " | reReplaceAll `\\'` "'" }}"
        },
        {{- end -}}
        {{$c := counter}}{{ range $key, $value := $alert.Labels }}{{if call $c}},{{ end }}
        {
          "name": "{{ $key }}",
          "value": "{{ js $value | reReplaceAll "_" " " | reReplaceAll `\\'` "'" }}"
        }
        {{- end }}
      ],
      "markdown": true
    }
    {{- end }}
  ]
}
{{ end }}
  • Alright! Now you can upgrade your deployment with this custom notification card using the command below. (Note that helm --set-file takes a plain file path, not a file:// URL.)
helm upgrade --install prometheus-msteams --namespace monitoring -f alerts-config.yaml --set-file customCardTemplate=custom-card.tmpl prometheus-msteams/prometheus-msteams
  • Now connect to your Kubernetes cluster and list the secrets in the monitoring namespace using the command below.
kubectl get secrets -n monitoring
  • Then edit the Alertmanager secret using the command below.
kubectl edit secret alertmanager-<your-name>-prometheus-stack-kube-alertmanager -n monitoring
  • After that, base64-encode the alertmanager.yaml file below, paste the encoded string as the value of the alertmanager.yaml key in the Alertmanager secret, and save the secret.

You should base64-encode this alertmanager.yaml file (Kubernetes Secret values are base64-encoded, not encrypted).

global:
  resolve_timeout: 5m
route:
  receiver: 'prometheus-msteams'
receivers:
  - name: prometheus-msteams
    webhook_configs:
      - url: "http://prometheus-msteams:2000/dev" # CHANGE THIS ACCORDING TO YOUR CONFIG FILE
        send_resolved: true
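Since Secret values are base64-encoded rather than truly encrypted, the encoding step boils down to a single base64 call. A sketch (the printf only creates a stand-in file for demonstration; run base64 against your real alertmanager.yaml):

```shell
# Stand-in for your real alertmanager.yaml (shortened for the example)
printf 'global:\n  resolve_timeout: 5m\n' > alertmanager.yaml

# Base64-encode it for pasting into the Secret.
# -w0 disables line wrapping (GNU coreutils; on macOS omit the flag).
base64 -w0 alertmanager.yaml
```

To read the current value back out of the secret, pipe it through `base64 -d`.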
  • If you have deployed this configuration successfully, you will receive a sample "Watchdog" notification in your Teams channel, as shown below.
  • Finally, if you want to customize or tune the Prometheus alerts, list the rule objects with the command below, then edit the relevant PrometheusRule resource (kubectl edit prometheusrules <rule-name> -n monitoring) to suit your scenario.
kubectl get prometheusrules -n monitoring

Awesome!! You have successfully deployed the Prometheus monitoring stack to your Kubernetes cluster!! 🎉🎉

If you have any questions; please drop them in the comments section.



DevOps Engineer • AWS Certified • Community Organizer
