EFK (Elasticsearch, FluentD, and Kibana) is a logging solution that helps you store and view the application logs. Prometheus is a monitoring solution that stores time series data, such as the metrics of an application; you can view these metrics using Grafana, which presents the information in graphical dashboards.
This section guides you through deploying EFK and Grafana. It also describes how you can configure Prometheus to store Adeptia Connect metrics.
Deploying EFK
Before you begin to deploy EFK, make sure that you have met the following prerequisites.
Prerequisites
- Kubernetes 1.16+
- Helm 3+
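You can check the installed versions to confirm that these prerequisites are met, for example:
kubectl version
helm version
The kubectl version output should report a server version of 1.16 or later, and helm version should report a 3.x release.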
To deploy EFK, follow the steps given below.
Deploying FluentD
To deploy FluentD, follow the steps given below.
- Run the following command to add the FluentD helm chart from the FluentD helm repository.
- helm repo add fluent https://fluent.github.io/helm-charts
- Update the Helm repository by running the following command.
- helm repo update
- Run the helm install command as shown below to deploy FluentD.
- helm install fluentd fluent/fluentd -n <NAMESPACE>
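Before moving on, you may want to verify the FluentD release. The commands below assume the default chart settings, under which the fluent/fluentd chart typically deploys FluentD as a DaemonSet:
helm status fluentd -n <NAMESPACE>
kubectl get daemonset -n <NAMESPACE>
The release should be in the deployed state, and the FluentD pods should be running on the cluster nodes.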
Deploying Elasticsearch
Prerequisites
- Minimum cluster requirements include the following to run this chart with default settings. All of these settings are configurable (see the sample override after this list).
- Three Kubernetes nodes to respect the default "hard" affinity settings
- 1GB of RAM for the JVM heap
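Because these settings are configurable, you can override the chart defaults for a smaller environment, such as a single-node test cluster. The keys shown below (replicas, minimumMasterNodes, antiAffinity, esJavaOpts, and resources) are values exposed by the elastic/elasticsearch chart; the file name values-dev.yaml and the specific numbers are only an illustration, so confirm them against the chart version you install.
# values-dev.yaml - sample override for a single-node test install (not for production)
replicas: 1
minimumMasterNodes: 1
antiAffinity: "soft"              # relax the default "hard" affinity so a single node is enough
esJavaOpts: "-Xms512m -Xmx512m"   # smaller JVM heap than the 1GB default
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
Pass the file to Helm with the -f flag, for example: helm install elasticsearch elastic/elasticsearch -f values-dev.yaml -n <NAMESPACE>.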
Installation
To deploy Elasticsearch, follow the steps given below.
- Run the following command to add the Elasticsearch helm chart from the Elasticsearch helm repository.
- helm repo add elastic https://helm.elastic.co
- Update the Helm repository by running the following command.
- helm repo update
- Run the helm install command as shown below to deploy Elasticsearch.
- helm install elasticsearch elastic/elasticsearch -n <NAMESPACE>
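After the installation, wait for the Elasticsearch pods to become ready before you install Kibana. With the default chart settings the pods are labeled app=elasticsearch-master and the service is named elasticsearch-master; adjust the names below if you changed these defaults.
kubectl get pods -n <NAMESPACE> -l app=elasticsearch-master
kubectl port-forward svc/elasticsearch-master 9200:9200 -n <NAMESPACE>
curl "http://localhost:9200/_cluster/health?pretty"
All pods should reach the Ready state, and the cluster health status should be green.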
Deploying Kibana
Prerequisites
- Kubernetes >= 1.14
- Helm >= 2.17.0
Installation
To deploy Kibana, follow the steps given below.
- Run the following command to add the Kibana helm chart from the Kibana helm repository.
- helm repo add elastic https://helm.elastic.co
- Update the Helm repository by running the following command.
- helm repo update
- Run the helm install command as shown below to deploy Kibana.
- helm install kibana elastic/kibana -n <NAMESPACE>
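After the Kibana pod is running, you can reach the Kibana UI without exposing it externally by port-forwarding its service. The service name kibana-kibana below is what the elastic/kibana chart creates by default for a release named kibana; adjust it if your release name or chart version differs.
kubectl port-forward svc/kibana-kibana 5601:5601 -n <NAMESPACE>
You can then open http://localhost:5601 in a browser.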
Deploying Grafana
Configuring Prometheus
After you have deployed Prometheus, configure it by following the steps given below.
Create a Namespace & ClusterRole
Step 1: Execute the following command to create a new namespace named monitoring.
kubectl create namespace monitoring
Step 2: Create a file named clusterRole.yaml and copy the following RBAC role into it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups:
- extensions
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: default
namespace: monitoring
Step 3: Create the role using the following command.
kubectl create -f clusterRole.yaml
This command creates the cluster role and binds it with the newly created namespace.
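You can confirm that the role and its binding exist by running the following commands.
kubectl get clusterrole prometheus
kubectl get clusterrolebinding prometheus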
Create a Config Map To Externalize Prometheus Configurations
All configurations for Prometheus are part of the prometheus.yml file, and all the alert rules for Alertmanager are configured in prometheus.rules.
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-server-conf
labels:
name: prometheus-server-conf
namespace: monitoring
data:
prometheus.rules: |-
groups:
- name: devopscube demo alert
rules:
- alert: High Pod Memory
expr: sum(container_memory_usage_bytes) > 1
for: 1m
labels:
severity: slack
annotations:
summary: High Memory Usage
prometheus.yml: |-
global:
scrape_interval: 5s
evaluation_interval: 5s
rule_files:
- /etc/prometheus/prometheus.rules
alerting:
alertmanagers:
- scheme: http
static_configs:
- targets:
- "alertmanager.monitoring.svc:9093"
scrape_configs:
- job_name: 'node-exporter'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_endpoints_name]
regex: 'node-exporter'
action: keep
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: 'kube-state-metrics'
static_configs:
- targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080']
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
Step 1: Create a file called config-map.yaml and copy the Prometheus configuration shown above into it.
Step 2: Execute the following command to create the config map in Kubernetes.
kubectl create -f config-map.yaml
Running this command creates the config map in the monitoring namespace. The config map, containing all the Prometheus scrape configuration and alerting rules, gets mounted to the Prometheus container at the /etc/prometheus location as the prometheus.yml and prometheus.rules files.
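For reference, the sketch below shows how such a config map is typically mounted into a Prometheus container. The Deployment name, labels, and image are placeholders rather than part of the Adeptia Connect setup; only the config map name (prometheus-server-conf), the namespace (monitoring), and the mount path (/etc/prometheus) come from the steps above. Persistent storage is omitted for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment        # hypothetical name, for illustration only
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus     # pin a specific version in practice
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"   # file created from the config map key
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf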
Create Service Monitor
Create a file named servicemonitor.yaml with the content given below, and then run the following command to apply it in your application namespace.
kubectl apply -f servicemonitor.yaml -n <application namespace>
servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: portal
app.kubernetes.io/instance: portal
app.kubernetes.io/part-of: portal
name: portal
namespace: adeptia-ga
spec:
endpoints:
- port: https
path: /prometheus
scheme: https
tlsConfig:
caFile: "/adeptia-cert/ca_file"
certFile: "/adeptia-cert/cert_file"
insecureSkipVerify: true
keyFile: "/adeptia-cert/key_file"
namespaceSelector:
matchNames:
- adeptia-ga
selector:
matchLabels:
app.kubernetes.io/instance: portal
app.kubernetes.io/name: portal
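After applying the file, you can verify that the ServiceMonitor has been created. The commands below assume that the Prometheus Operator custom resource definitions are installed in the cluster (the ServiceMonitor resource requires them) and that your Prometheus instance is configured to select ServiceMonitors in the adeptia-ga namespace.
kubectl get servicemonitor portal -n adeptia-ga
kubectl describe servicemonitor portal -n adeptia-ga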