6. Prometheus high availability with Thanos

1. The Thanos architecture in detail

1.1 What is Thanos?

Thanos is one of the high-availability solutions for Prometheus. It integrates seamlessly with Prometheus and adds several advanced features, covering the needs for long-term storage, unlimited scaling, a global query view, and non-intrusiveness.

1.2 Thanos architecture

The architecture diagram covers several of Thanos's core components (not all of them). A brief introduction to the components shown:

Thanos Sidecar: connects to Prometheus, exposes its data to Thanos Query for querying, and/or uploads it to object storage for long-term retention.

Thanos Query: implements the Prometheus API and provides the global query view; it aggregates the data returned by the Store APIs and returns the final result to the client issuing the query (e.g. Grafana).

Thanos Store Gateway: exposes the data in object storage to Thanos Query.

Thanos Ruler: evaluates monitoring data and fires alerts; it can also compute new metrics (recording rules) and expose them to Thanos Query and/or upload them to object storage for long-term retention.

Thanos Compact: compacts and downsamples the data in object storage, speeding up queries over large time ranges.

Thanos Receiver: receives data from Prometheus's remote-write WAL, exposes it, and/or uploads it to cloud storage.

1.3 Architecture design walkthrough

Query and Sidecar

First, monitoring data can no longer be queried from Prometheus directly, because there are many Prometheus instances and each one only knows about the data it scrapes itself.

Thanos Query implements Prometheus's HTTP API and "understands" PromQL. Clients therefore no longer query Prometheus itself; they query Thanos Query, which fans the query out to the downstream components that hold the data, then aggregates and deduplicates the results before returning them to the client. This is what makes querying a distributed Prometheus setup possible.
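Because Thanos Query speaks the Prometheus HTTP API, existing clients simply point at it instead of at an individual Prometheus. A quick sketch (the NodePort address matches the Query service deployed later in this article; adjust to your environment):

# query Thanos Query exactly as if it were a Prometheus server
curl 'http://<node-ip>:30909/api/v1/query?query=up'

# in Grafana, add an ordinary Prometheus data source whose URL points at Thanos Query,
# e.g. http://thanos-query.monitoring.svc:9090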

So how does Thanos Query reach the scattered downstream data? Thanos defines an internal gRPC interface called the Store API, and other components expose their data to Thanos Query through it. Query itself is therefore completely stateless, which makes high availability and dynamic scaling straightforward.

Where can this scattered data come from?

First, Prometheus stores the data it scrapes on local disk. To use the data scattered across those disks, we can deploy a Sidecar alongside each Prometheus instance. The Sidecar implements the Thanos Store API: when Thanos Query sends it a query, the Sidecar reads the monitoring data from the Prometheus instance it is bound to and returns it to Thanos Query.
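For reference, this is roughly what a hand-written Sidecar invocation looks like (the flags mirror the ones prometheus-operator generates later in this article; the objstore.config-file flag is an assumption and is only needed if the sidecar should also upload blocks to object storage):

# the gRPC address is the Store API endpoint that Thanos Query talks to
thanos sidecar \
  --prometheus.url=http://localhost:9090 \
  --tsdb.path=/prometheus \
  --grpc-address=0.0.0.0:10901 \
  --http-address=0.0.0.0:10902 \
  --objstore.config-file=/etc/thanos/objstore.yml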

Because Thanos Query aggregates and deduplicates data, high availability becomes easy: run several replicas of the same Prometheus (each with a Sidecar) and let Thanos Query fetch from all of them. Even if one Prometheus instance was down for a while, the aggregated and deduplicated result is still complete.

However, disk space is limited, so Prometheus's capacity for storing monitoring data is limited as well. A retention period (15 days by default) or a maximum data size is usually configured so old data is continuously cleaned up and the disk never fills up. As a result we cannot look at older monitoring data, which sometimes makes troubleshooting and statistics harder.
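For reference, local retention is controlled by Prometheus's own flags (with prometheus-operator the same thing is set through the Prometheus CRD's retention field; the values below are only examples):

prometheus \
  --storage.tsdb.path=/prometheus \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=50GB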

For data that must be kept long term but is queried infrequently, the ideal place is object storage.
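All Thanos components that touch object storage share one config format. A minimal sketch for an S3-compatible bucket (the bucket, endpoint and credentials are placeholders; MinIO, Ceph RGW, AWS S3 and similar backends all work):

# objstore.yml
type: S3
config:
  bucket: thanos-metrics
  endpoint: minio.example.com:9000
  access_key: <ACCESS_KEY>
  secret_key: <SECRET_KEY>
  insecure: true   # set for plain-HTTP endpoints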

Store Gateway

So how do we query the monitoring data that has been uploaded to object storage? In theory Thanos Query could read the object storage directly, but that would make its logic very heavy. As we saw, Thanos abstracts the Store API, and any component implementing it can serve as a data source for Thanos Query. Thanos Store Gateway implements the Store API and exposes the object-storage data to Thanos Query. Internally it also optimizes data retrieval: it caches the TSDB index, and it minimizes the number of object-storage requests needed to fetch the required data.
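A minimal sketch of running a Store Gateway against that bucket (paths are placeholders; objstore.yml is the same assumed object-storage config shown above):

thanos store \
  --data-dir=/var/thanos/store \
  --objstore.config-file=/etc/thanos/objstore.yml \
  --grpc-address=0.0.0.0:10901 \
  --http-address=0.0.0.0:10902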

This gives us long-term storage of monitoring data. Since object-storage capacity is effectively unlimited, we can in theory keep data for any length of time, and historical monitoring data becomes queryable, which helps troubleshooting and statistical analysis.

Ruler

There is one more issue: Prometheus does not just store and query the data it scrapes, it can also be configured with rules:

Recording rules continuously compute new metrics from the configured expressions and store them, so later queries can use the precomputed metrics directly; this reduces query-time computation and speeds up queries.

Alerting rules continuously evaluate whether a threshold has been reached and, when it has, notify AlertManager to fire the alert (see the sample rules file below).
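Both kinds of rules share the same rules-file format. A minimal sketch (with prometheus-operator these are wrapped in a PrometheusRule object, and Thanos Ruler loads the same format via --rule-file; the metric and alert names here are made up for illustration):

groups:
- name: example.rules
  rules:
  # recording rule: precompute a new metric
  - record: instance:node_cpu_utilisation:rate5m
    expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
  # alerting rule: evaluate a threshold and notify Alertmanager
  - alert: HighCPUUsage
    expr: instance:node_cpu_utilisation:rate5m > 0.9
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "CPU usage above 90% for 10 minutes"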

Because we deploy Prometheus in a distributed fashion, no single instance holds the complete data set; related series may be spread across several Prometheus instances, and a single Prometheus has no global view of the data. In that situation we cannot rely on Prometheus itself to do this work.

This is where Thanos Ruler shines. It queries Thanos Query for the global data, computes new metrics according to its rules configuration and stores them, exposes that data to Thanos Query through the Store API, and can also upload it to object storage for long-term retention (data uploaded this way is likewise exposed to Thanos Query by Thanos Store Gateway).
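A minimal sketch of a Thanos Ruler invocation under these assumptions (the query/alertmanager addresses and file paths are placeholders based on the service names used later in this article):

thanos rule \
  --query=thanos-query.monitoring.svc:9090 \
  --rule-file=/etc/thanos/rules/*.yaml \
  --alertmanagers.url=http://alertmanager-main.monitoring.svc:9093 \
  --data-dir=/var/thanos/rule \
  --objstore.config-file=/etc/thanos/objstore.yml \
  --grpc-address=0.0.0.0:10901 \
  --http-address=0.0.0.0:10902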

It may look as if Thanos Query and Thanos Ruler query each other, but there is no conflict: Thanos Ruler provides Thanos Query with the newly computed metrics, while Thanos Query provides Thanos Ruler with the global raw metrics it needs to compute them.

At this point the core capabilities of Thanos are in place: a global query view, high availability and long-term data retention, all while staying fully compatible with Prometheus.

What further optimization can we do?

Compact

Since we now have long-term storage, we can also query monitoring data over large time ranges. But when the range is large, the amount of data queried is large too, which makes queries very slow.

When looking at a large time range we usually do not need fine-grained samples; a rough picture is enough. This is where Thanos Compact comes in: it reads data from object storage, compacts and downsamples it, and uploads the result back. Queries over large time ranges can then read only the compacted, downsampled data, drastically reducing the amount of data scanned and speeding up the query.
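A minimal sketch of running the compactor (data dir and retention values are examples; only one compactor should run against a given bucket):

thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=/etc/thanos/objstore.yml \
  --retention.resolution-raw=30d \
  --retention.resolution-5m=180d \
  --retention.resolution-1h=0d \
  --wait   # keep running and compact continuously; 0d means keep forever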

1.4 Sidecar mode vs. Receiver mode

What does the Receiver do? Why do we need it, and how does it differ from the Sidecar?

Both can upload data to object storage for long-term retention; the difference lies in where the most recent data is stored.

Because uploads cannot happen in real time, in Sidecar mode the most recent monitoring data stays on the local Prometheus host, and Query fetches it by calling the Store API of every Sidecar. This becomes a problem: if there are very many Sidecars, or the Sidecars are far away from Query, having Query call all of them on every request consumes a lot of resources and is slow, while most of the time what we look at is precisely the most recent data.

To solve this, the Thanos Receiver component was introduced. It implements Prometheus's remote write API, so every Prometheus instance can push its data to the Receiver in real time and the most recent data is centralized. Thanos Query then no longer has to query all the Sidecars for recent data; it queries the Receiver directly.
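On the Prometheus side, switching to Receiver mode only needs a remote_write entry. A sketch assuming a Receiver service reachable at thanos-receive.monitoring.svc (the port and path follow the Receiver's remote-write defaults; with prometheus-operator this would go into the Prometheus CRD's remoteWrite field):

# prometheus.yml
remote_write:
  - url: http://thanos-receive.monitoring.svc:19291/api/v1/receive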

In addition, Thanos Receiver uploads data to object storage for long-term retention, and that data is, as before, exposed to Thanos Query by Thanos Store Gateway.

You might ask: at large scale, won't the Receiver become a performance bottleneck? The designers considered this: the Receiver implements consistent hashing and supports clustered deployment, so it does not become a bottleneck even at large scale.
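For reference, a clustered Receiver is typically given a hashring file (passed with --receive.hashrings-file) so that incoming series are sharded consistently across replicas; a sketch with placeholder endpoints:

[
  {
    "hashring": "default",
    "endpoints": [
      "thanos-receive-0.thanos-receive.monitoring.svc:10901",
      "thanos-receive-1.thanos-receive.monitoring.svc:10901",
      "thanos-receive-2.thanos-receive.monitoring.svc:10901"
    ]
  }
]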

2. Deploying Thanos

Thanos supports cloud-native deployment and makes full use of Kubernetes's scheduling and dynamic scaling capabilities. According to the official documentation, there are currently three ways to deploy Thanos on Kubernetes:

prometheus-operator: once prometheus-operator is installed in the cluster, Thanos can be deployed simply by creating CRD objects;

community-contributed helm charts: there are quite a few of them, all aiming to deploy Thanos with a single helm command;

kube-thanos: the official Thanos open-source project, containing jsonnet templates and YAML examples for deploying Thanos on Kubernetes.

This article deploys Thanos via prometheus-operator.

2.1 Architecture and environment

root@deploy:~# cat /etc/issue
Ubuntu 20.04.3 LTS \n \l

192.168.1.100 deploy        # node used to deploy and manage k8s
192.168.1.101 devops-master # cluster version v1.18.9
192.168.1.102 devops-node1
192.168.1.103 devops-node2
192.168.1.110 test-master   # cluster version v1.18.9
192.168.1.111 test-node1
192.168.1.112 test-node2
192.168.1.200 nfs-server

For deploying the k8s clusters, see: https://www.cnblogs.com/zhrx/p/15884118.html

2.2 Deploying the NFS server

root@nfs-server:~# apt install nfs-server nfs-common -y

root@nfs-server:~# vim /etc/exports

# /etc/exports: the access control list for filesystems which may be exported

# to NFS clients. See exports(5).

#

# Example for NFSv2 and NFSv3:

# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)

#

# Example for NFSv4:

# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)

# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)

#

/data *(rw,sync,no_root_squash)

root@nfs-server:~# showmount -e

Export list for nfs-server:

/data *

root@nfs-server:~# systemctl start nfs-server.service
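Optionally, run a quick sanity check from any Kubernetes node to confirm the export is reachable before creating the storage class (the mount point below is arbitrary):

apt install nfs-common -y
showmount -e 192.168.1.200
mount -t nfs 192.168.1.200:/data /mnt && umount /mnt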

2.2.1 Creating the NFS storage class

Apply the following in both clusters.

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: zhrx/nfs
            - name: NFS_SERVER
              value: 192.168.1.200
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.200
            path: /data

class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zhrx-nfs-storage
provisioner: zhrx/nfs
reclaimPolicy: Retain

Create the storage class:

kubectl apply -f rbac.yaml

kubectl apply -f deployment.yaml

kubectl apply -f class.yaml
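To confirm the provisioner works, you can create a throw-away PVC against the new storage class and check that it becomes Bound (the PVC name here is made up for the test):

# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zhrx-nfs-storage
  resources:
    requests:
      storage: 100Mi

kubectl apply -f test-pvc.yaml
kubectl get pvc nfs-test-pvc     # STATUS should be Bound
kubectl delete -f test-pvc.yaml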

2.3 Deploying Prometheus and the thanos-sidecar container

Download prometheus-operator: https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.5.0.tar.gz

root@deploy:~/manifest/prometheus-operator# tar xf kube-prometheus-0.5.tar.gz

root@deploy:~/manifest/prometheus-operator# cd kube-prometheus-0.5.0/manifests

The manifests point at the official upstream images by default. The best approach is to pull each image locally and push it to your own Harbor registry for future deployments; if your network can reach the upstream registries you can also deploy directly. I have already pushed the images to my own Harbor registry and updated the manifests to use my registry paths.

Deploy the CRD-related resources:

root@deploy:~/manifest/prometheus-operator# cd kube-prometheus-0.5.0/manifests/setup/

root@deploy:~/manifest/prometheus-operator/kube-prometheus-0.5.0/manifests/setup# ls

0namespace-namespace.yaml prometheus-operator-0prometheusruleCustomResourceDefinition.yaml prometheus-operator-clusterRoleBinding.yaml

prometheus-operator-0alertmanagerCustomResourceDefinition.yaml prometheus-operator-0servicemonitorCustomResourceDefinition.yaml prometheus-operator-deployment.yaml

prometheus-operator-0podmonitorCustomResourceDefinition.yaml prometheus-operator-0thanosrulerCustomResourceDefinition.yaml prometheus-operator-service.yaml

prometheus-operator-0prometheusCustomResourceDefinition.yaml prometheus-operator-clusterRole.yaml prometheus-operator-serviceAccount.yaml

root@deploy:~/manifest/prometheus-operator/kube-prometheus-0.5.0/manifests/setup# k-devops apply -f . # devops cluster

namespace/monitoring created

customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created

clusterrole.rbac.authorization.k8s.io/prometheus-operator created

clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created

deployment.apps/prometheus-operator created

service/prometheus-operator created

serviceaccount/prometheus-operator created

root@deploy:~/manifest/prometheus-operator/kube-prometheus-0.5.0/manifests/setup# k-test apply -f . # test cluster

namespace/monitoring created

customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created

customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created

clusterrole.rbac.authorization.k8s.io/prometheus-operator created

clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created

deployment.apps/prometheus-operator created

service/prometheus-operator created

serviceaccount/prometheus-operator created

Deploy the Prometheus-related pods

Edit prometheus-prometheus.yaml to add the thanos-sidecar container and a PVC template.

Note: when deploying to a different environment, change the externalLabels values accordingly.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  image: harbor.zhrx.com/monitoring/prometheus:v2.15.2
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  externalLabels:
    env: devops                  # change this label when deploying to a different environment
    cluster: devops-idc-cluster  # change this label when deploying to a different environment
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.15.2
  storage:                       # add a PVC template; the storage class points at NFS
    volumeClaimTemplate:
      apiVersion: v1
      kind: PersistentVolumeClaim
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: zhrx-nfs-storage
  thanos:                        # add the thanos-sidecar container
    baseImage: harbor.zhrx.com/monitoring/thanos
    version: v0.20.0
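The manifest above only adds the sidecar for live queries. If the sidecar should also upload blocks to object storage, the Prometheus CRD additionally accepts an objectStorageConfig referencing a secret that contains a Thanos objstore config. A sketch, assuming a secret named thanos-objstore-config with key objstore.yml exists in the monitoring namespace:

  thanos:
    baseImage: harbor.zhrx.com/monitoring/thanos
    version: v0.20.0
    objectStorageConfig:        # optional: enable block upload to object storage
      key: objstore.yml
      name: thanos-objstore-config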

root@deploy:~/manifest/prometheus-operator/kube-prometheus-0.5.0/manifests# k-devops apply -f ./

alertmanager.monitoring.coreos.com/main created

secret/alertmanager-main created

service/alertmanager-main created

serviceaccount/alertmanager-main created

servicemonitor.monitoring.coreos.com/alertmanager created

secret/grafana-datasources created

configmap/grafana-dashboard-apiserver created

configmap/grafana-dashboard-cluster-total created

configmap/grafana-dashboard-controller-manager created

configmap/grafana-dashboard-k8s-resources-cluster created

configmap/grafana-dashboard-k8s-resources-namespace created

configmap/grafana-dashboard-k8s-resources-node created

configmap/grafana-dashboard-k8s-resources-pod created

configmap/grafana-dashboard-k8s-resources-workload created

configmap/grafana-dashboard-k8s-resources-workloads-namespace created

configmap/grafana-dashboard-kubelet created

configmap/grafana-dashboard-namespace-by-pod created

configmap/grafana-dashboard-namespace-by-workload created

configmap/grafana-dashboard-node-cluster-rsrc-use created

configmap/grafana-dashboard-node-rsrc-use created

configmap/grafana-dashboard-nodes created

configmap/grafana-dashboard-persistentvolumesusage created

configmap/grafana-dashboard-pod-total created

configmap/grafana-dashboard-prometheus-remote-write created

configmap/grafana-dashboard-prometheus created

configmap/grafana-dashboard-proxy created

configmap/grafana-dashboard-scheduler created

configmap/grafana-dashboard-statefulset created

configmap/grafana-dashboard-workload-total created

configmap/grafana-dashboards created

deployment.apps/grafana created

service/grafana created

serviceaccount/grafana created

servicemonitor.monitoring.coreos.com/grafana created

clusterrole.rbac.authorization.k8s.io/kube-state-metrics created

clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created

deployment.apps/kube-state-metrics created

service/kube-state-metrics created

serviceaccount/kube-state-metrics created

servicemonitor.monitoring.coreos.com/kube-state-metrics created

clusterrole.rbac.authorization.k8s.io/node-exporter created

clusterrolebinding.rbac.authorization.k8s.io/node-exporter created

daemonset.apps/node-exporter created

service/node-exporter created

serviceaccount/node-exporter created

servicemonitor.monitoring.coreos.com/node-exporter created

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

clusterrole.rbac.authorization.k8s.io/prometheus-adapter created

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created

clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created

clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created

clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created

configmap/adapter-config created

deployment.apps/prometheus-adapter created

rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created

service/prometheus-adapter created

serviceaccount/prometheus-adapter created

clusterrole.rbac.authorization.k8s.io/prometheus-k8s created

clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created

servicemonitor.monitoring.coreos.com/prometheus-operator created

prometheus.monitoring.coreos.com/k8s created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s created

role.rbac.authorization.k8s.io/prometheus-k8s-config created

role.rbac.authorization.k8s.io/prometheus-k8s created

role.rbac.authorization.k8s.io/prometheus-k8s created

role.rbac.authorization.k8s.io/prometheus-k8s created

prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created

service/prometheus-k8s created

serviceaccount/prometheus-k8s created

servicemonitor.monitoring.coreos.com/prometheus created

servicemonitor.monitoring.coreos.com/kube-apiserver created

servicemonitor.monitoring.coreos.com/coredns created

servicemonitor.monitoring.coreos.com/kube-controller-manager created

servicemonitor.monitoring.coreos.com/kube-scheduler created

servicemonitor.monitoring.coreos.com/kubelet created

root@deploy:~/manifest/prometheus-operator/kube-prometheus-0.5.0/manifests# vim prometheus-prometheus.yaml

root@deploy:~/manifest/prometheus-operator/kube-prometheus-0.5.0/manifests# k-test apply -f ./

alertmanager.monitoring.coreos.com/main created

secret/alertmanager-main created

service/alertmanager-main created

serviceaccount/alertmanager-main created

servicemonitor.monitoring.coreos.com/alertmanager created

secret/grafana-datasources created

configmap/grafana-dashboard-apiserver created

configmap/grafana-dashboard-cluster-total created

configmap/grafana-dashboard-controller-manager created

configmap/grafana-dashboard-k8s-resources-cluster created

configmap/grafana-dashboard-k8s-resources-namespace created

configmap/grafana-dashboard-k8s-resources-node created

configmap/grafana-dashboard-k8s-resources-pod created

configmap/grafana-dashboard-k8s-resources-workload created

configmap/grafana-dashboard-k8s-resources-workloads-namespace created

configmap/grafana-dashboard-kubelet created

configmap/grafana-dashboard-namespace-by-pod created

configmap/grafana-dashboard-namespace-by-workload created

configmap/grafana-dashboard-node-cluster-rsrc-use created

configmap/grafana-dashboard-node-rsrc-use created

configmap/grafana-dashboard-nodes created

configmap/grafana-dashboard-persistentvolumesusage created

configmap/grafana-dashboard-pod-total created

configmap/grafana-dashboard-prometheus-remote-write created

configmap/grafana-dashboard-prometheus created

configmap/grafana-dashboard-proxy created

configmap/grafana-dashboard-scheduler created

configmap/grafana-dashboard-statefulset created

configmap/grafana-dashboard-workload-total created

configmap/grafana-dashboards created

deployment.apps/grafana created

service/grafana created

serviceaccount/grafana created

servicemonitor.monitoring.coreos.com/grafana created

clusterrole.rbac.authorization.k8s.io/kube-state-metrics created

clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created

deployment.apps/kube-state-metrics created

service/kube-state-metrics created

serviceaccount/kube-state-metrics created

servicemonitor.monitoring.coreos.com/kube-state-metrics created

clusterrole.rbac.authorization.k8s.io/node-exporter created

clusterrolebinding.rbac.authorization.k8s.io/node-exporter created

daemonset.apps/node-exporter created

service/node-exporter created

serviceaccount/node-exporter created

servicemonitor.monitoring.coreos.com/node-exporter created

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

clusterrole.rbac.authorization.k8s.io/prometheus-adapter created

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created

clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created

clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created

clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created

configmap/adapter-config created

deployment.apps/prometheus-adapter created

rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created

service/prometheus-adapter created

serviceaccount/prometheus-adapter created

clusterrole.rbac.authorization.k8s.io/prometheus-k8s created

clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created

servicemonitor.monitoring.coreos.com/prometheus-operator created

prometheus.monitoring.coreos.com/k8s created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s created

rolebinding.rbac.authorization.k8s.io/prometheus-k8s created

role.rbac.authorization.k8s.io/prometheus-k8s-config created

role.rbac.authorization.k8s.io/prometheus-k8s created

role.rbac.authorization.k8s.io/prometheus-k8s created

role.rbac.authorization.k8s.io/prometheus-k8s created

prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created

service/prometheus-k8s created

serviceaccount/prometheus-k8s created

servicemonitor.monitoring.coreos.com/prometheus created

servicemonitor.monitoring.coreos.com/kube-apiserver created

servicemonitor.monitoring.coreos.com/coredns created

servicemonitor.monitoring.coreos.com/kube-controller-manager created

servicemonitor.monitoring.coreos.com/kube-scheduler created

servicemonitor.monitoring.coreos.com/kubelet created

Verification

# verify the thanos-sidecar container

root@deploy:~# k-devops describe pod prometheus-k8s-0 -n monitoring

.............

thanos-sidecar:

Container ID: docker://7c8b3442ba8f81a5e5828c02e8e4f08b80c416375aea3adab407e9c341ed9f1b

Image: harbor.zhrx.com/monitoring/thanos:v0.20.0

Image ID: docker-pullable://harbor.zhrx.com/monitoring/thanos@sha256:8bcb077ca3c7d14fe242457d15dd3d98860255c21a673930645891138167d196

Ports: 10902/TCP, 10901/TCP

Host Ports: 0/TCP, 0/TCP

Args:

sidecar

--prometheus.url=http://localhost:9090/

--tsdb.path=/prometheus

--grpc-address=[$(POD_IP)]:10901

--http-address=[$(POD_IP)]:10902

State: Running

Started: Fri, 25 Mar 2022 15:42:09 +0800

Ready: True

Restart Count: 0

Environment:

POD_IP: (v1:status.podIP)

Mounts:

/prometheus from prometheus-k8s-db (rw,path="prometheus-db")

/var/run/secrets/kubernetes.io/serviceaccount from prometheus-k8s-token-9h89g (ro)

.............

Expose the thanos-sidecar port

root@deploy:~/manifest/prometheus-operator# vim thanos-sidecar-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus-k8s-nodeport
  namespace: monitoring
spec:
  ports:
  - port: 10901
    targetPort: 10901
    nodePort: 30901
  selector:
    app: prometheus
    prometheus: k8s
  type: NodePort

root@deploy:~/manifest/prometheus-operator# k-devops apply -f thanos-sidecar-nodeport.yaml

service/prometheus-k8s-nodeport created

root@deploy:~/manifest/prometheus-operator# k-test apply -f thanos-sidecar-nodeport.yaml

service/prometheus-k8s-nodeport created

root@deploy:~/manifest/prometheus-operator#

root@deploy:~/manifest/prometheus-operator# k-devops get svc -n monitoring | grep prometheus-k8s-nodeport

prometheus-k8s-nodeport NodePort 10.68.17.73 10901:30901/TCP 25s

2.4 Deploying the thanos-query component

Here I deploy the thanos-query component into the devops cluster.

root@deploy:~/manifest/prometheus-operator# vim thanos-query.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: thanos-query
  namespace: monitoring
  labels:
    app: thanos-query
spec:
  selector:
    matchLabels:
      app: thanos-query
  template:
    metadata:
      labels:
        app: thanos-query
    spec:
      containers:
      - name: thanos
        image: harbor.zhrx.com/monitoring/thanos:v0.20.0
        args:
        - query
        - --log.level=debug
        - --query.replica-label=prometheus_replica  # the replica label configured by prometheus-operator is prometheus_replica
        # Store API endpoints: the thanos-sidecar NodePort of each cluster
        - --store=192.168.1.101:30901
        - --store=192.168.1.110:30901
        ports:
        - name: http
          containerPort: 10902
        - name: grpc
          containerPort: 10901
        livenessProbe:
          httpGet:
            path: /-/healthy
            port: http
          initialDelaySeconds: 10
        readinessProbe:
          httpGet:
            path: /-/healthy
            port: http
          initialDelaySeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: thanos-query
  namespace: monitoring
  labels:
    app: thanos-query
spec:
  ports:
  - port: 9090
    targetPort: http
    name: http
    nodePort: 30909
  selector:
    app: thanos-query
  type: NodePort

root@deploy:~/manifest/prometheus-operator# k-devops apply -f thanos-query.yaml

deployment.apps/thanos-query created

service/thanos-query created

root@deploy:~/manifest/prometheus-operator# k-devops get pod -n monitoring | grep query

thanos-query-f9bc76679-jp297 1/1 Running 0 34s

Access thanos-query at http://<node IP>:30909.

Thanos Query has discovered the thanos-sidecars of both the devops cluster and the test cluster, so we can now query metric data from both clusters.

Metric data from both clusters can be queried.
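With the externalLabels configured earlier (env / cluster), a single PromQL query in the Thanos Query UI now spans both clusters, for example:

# number of targets that are up, per cluster
sum(up) by (cluster)

# the same metric filtered to one environment
up{env="test"}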
