
Appendix 014. Kubernetes Prometheus + Grafana + EFK + Kibana + GlusterFS Integrated Solution

Source: https://www.cnblogs.com/itzgr/p/12529383.html

1 GlusterFS Storage Cluster Deployment

Note: The steps below are abbreviated. For details, see 《附009.Kubernetes永久存储之GlusterFS独立部署》 (Appendix 009: Standalone GlusterFS Deployment for Kubernetes Persistent Storage).

1.1 Architecture Overview

1.2 Planning

Host          IP            Disk    Notes
k8smaster01   172.24.8.71   ——      Kubernetes master node; Heketi host
k8smaster02   172.24.8.72   ——      Kubernetes master node; Heketi host
k8smaster03   172.24.8.73   ——      Kubernetes master node; Heketi host
k8snode01     172.24.8.74   sdb     Kubernetes worker node; GlusterFS node 01
k8snode02     172.24.8.75   sdb     Kubernetes worker node; GlusterFS node 02
k8snode03     172.24.8.76   sdb     Kubernetes worker node; GlusterFS node 03

Tip: This plan uses the raw disks directly (no pre-created partitions or filesystems).

1.3 Install GlusterFS

# yum -y install centos-release-gluster

# yum -y install glusterfs-server

# systemctl start glusterd

# systemctl enable glusterd

Tip: It is recommended to install this on all nodes.
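If the operating host already has root SSH access to the three GlusterFS nodes (an assumption; the original simply runs the commands on each node), the install and service steps can be looped over them, for example:

for host in k8snode01 k8snode02 k8snode03; do
  ssh root@${host} "yum -y install centos-release-gluster && yum -y install glusterfs-server && systemctl start glusterd && systemctl enable glusterd"
done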

1.4 Add the Trusted Pool

[root@k8snode01 ~]# gluster peer probe k8snode02

[root@k8snode01 ~]# gluster peer probe k8snode03

[root@k8snode01 ~]# gluster peer status #check trusted pool status

[root@k8snode01 ~]# gluster pool list #list trusted pool members

Tip: This only needs to be run once, on any one of the GlusterFS nodes.

1.5 Install Heketi

[root@k8smaster01 ~]# yum -y install heketi heketi-client

1.6 Configure Heketi

[root@k8smaster01 ~]# vi /etc/heketi/heketi.json

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin123"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "xianghy"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "warning"
  }
}

 

1.7 Configure Passwordless SSH

[root@k8smaster01 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""

[root@k8smaster01 ~]# chown heketi:heketi /etc/heketi/heketi_key

[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode01

[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode02

[root@k8smaster01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@k8snode03
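To confirm that key-based login works before Heketi relies on it, a harmless command can be run over SSH with the generated key (a quick verification sketch):

[root@k8smaster01 ~]# ssh -i /etc/heketi/heketi_key root@k8snode01 "gluster --version | head -1"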

1.8 Start Heketi

[root@k8smaster01 ~]# systemctl enable heketi.service

[root@k8smaster01 ~]# systemctl start heketi.service

[root@k8smaster01 ~]# systemctl status heketi.service

[root@k8smaster01 ~]# curl http://localhost:8080/hello #test access

1.9 Configure the Heketi Topology

[root@k8smaster01 ~]# vi /etc/heketi/topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode01"
              ],
              "storage": [
                "172.24.8.74"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode02"
              ],
              "storage": [
                "172.24.8.75"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8snode03"
              ],
              "storage": [
                "172.24.8.76"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}

 

[root@k8smaster01 ~]# echo "export HEKETI_CLI_SERVER=http://k8smaster01:8080" >> /etc/profile.d/heketi.sh

[root@k8smaster01 ~]# echo "alias heketi-cli='heketi-cli --user admin --secret admin123'" >> .bashrc

[root@k8smaster01 ~]# source /etc/profile.d/heketi.sh

[root@k8smaster01 ~]# source .bashrc

[root@k8smaster01 ~]# echo $HEKETI_CLI_SERVER

http://k8smaster01:8080

[root@k8smaster01 ~]# heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret admin123 topology load --json=/etc/heketi/topology.json
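After loading, the resulting topology can be reviewed with heketi-cli; the output lists the cluster ID, nodes, and devices Heketi now manages (the cluster ID is needed later for the StorageClass clusterid field):

[root@k8smaster01 ~]# heketi-cli topology info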

1.10 Cluster Management and Testing

[root@k8smaster01 ~]# heketi-cli cluster list #list clusters

[root@k8smaster01 ~]# heketi-cli node list #list nodes

[root@k8smaster01 ~]# heketi-cli volume list #list volumes

[root@k8snode01 ~]# gluster volume info #view from a GlusterFS node
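As an end-to-end test, a small replicated volume can be created through Heketi and removed again (a minimal sketch; <VOLUME_ID> is a placeholder for the Id printed by the create command):

[root@k8smaster01 ~]# heketi-cli volume create --size=1 --replica=3 #create a 1GiB test volume
[root@k8smaster01 ~]# heketi-cli volume delete <VOLUME_ID> #clean up the test volume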

1.11 Create the StorageClass

[root@k8smaster01 study]# vi heketi-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: heketi
data:
  key: YWRtaW4xMjM=
type: kubernetes.io/glusterfs
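The key field holds the Heketi admin password from heketi.json (admin123) in base64; the value can be generated or checked as follows:

[root@k8smaster01 study]# echo -n "admin123" | base64 #yields YWRtaW4xMjM= for data.key
[root@k8smaster01 study]# echo "YWRtaW4xMjM=" | base64 -d #decodes back to admin123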

 

[root@k8smaster01 study]# kubectl create ns heketi

[root@k8smaster01 study]# kubectl create -f heketi-secret.yaml #create the heketi secret

[root@k8smaster01 study]# kubectl get secrets -n heketi

[root@k8smaster01 study]# vim gluster-heketi-storageclass.yaml #create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ghstorageclass
parameters:
  resturl: "http://172.24.8.71:8080"
  clusterid: "ad0f81f75f01d01ebd6a21834a2caa30"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "heketi"
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete

 

[root@k8smaster01 study]# kubectl create -f gluster-heketi-storageclass.yaml

Note: A StorageClass cannot be modified after it is created; to change it, delete it and recreate it.

[root@k8smaster01 heketi]# kubectl get storageclasses #verify

NAME             PROVISIONER               AGE
ghstorageclass   kubernetes.io/glusterfs   85s

[root@k8smaster01 heketi]# kubectl describe storageclasses ghstorageclass
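To verify dynamic provisioning end to end, a test PVC bound to the new StorageClass can be created (a minimal sketch; the PVC name, file name gluster-pvc-test.yaml, and 1Gi size are illustrative, not from the original):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc-test
spec:
  storageClassName: ghstorageclass
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

[root@k8smaster01 heketi]# kubectl create -f gluster-pvc-test.yaml
[root@k8smaster01 heketi]# kubectl get pvc gluster-pvc-test #should reach Bound once Heketi provisions the volume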

 

2 Cluster Monitoring with Metrics

Note: The steps below are abbreviated. For details, see 《049.集群管理-集群监控Metrics》 (049. Cluster Management: Cluster Monitoring with Metrics).

2.1 Enable the Aggregation Layer

The aggregation layer must be enabled. Clusters deployed with kubeadm enable it by default, which can be verified as follows.

[root@k8smaster01 ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
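In a kubeadm-deployed cluster the apiserver manifest normally already carries the aggregation-layer flags (requestheader CA and proxy-client certificate settings); a quick way to check is to grep for them:

[root@k8smaster01 ~]# grep -E "requestheader-|proxy-client-" /etc/kubernetes/manifests/kube-apiserver.yaml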

2.2 Obtain the Deployment Files

[root@k8smaster01 ~]# git clone https://github.com/kubernetes-incubator/metrics-server.git

[root@k8smaster01 ~]# cd metrics-server/deploy/1.8+/

[root@k8smaster01 1.8+]# vi metrics-server-deployment.yaml

……
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6    #change to a China-accessible mirror
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP    #add the command arguments above
……

 

2.3 Deploy

[root@k8smaster01 1.8+]# kubectl apply -f .

[root@k8smaster01 1.8+]# kubectl -n kube-system get pods -l k8s-app=metrics-server

[root@k8smaster01 1.8+]# kubectl -n kube-system logs -l k8s-app=metrics-server -f #optionally follow the deployment logs
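Once the pod is running, metrics-server registers an aggregated API; its availability can be confirmed through the corresponding APIService object:

[root@k8smaster01 1.8+]# kubectl get apiservices v1beta1.metrics.k8s.io #AVAILABLE should report True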

2.4 Verify

[root@k8smaster01 ~]# kubectl top nodes

[root@k8smaster01 ~]# kubectl top pods --all-namespaces

 

3 Prometheus Deployment

Note: The steps below are abbreviated. For details, see 《050.集群管理-Prometheus+Grafana监控方案》 (050. Cluster Management: Prometheus + Grafana Monitoring).

3.1 Obtain the Deployment Files

[root@k8smaster01 ~]# git clone https://github.com/prometheus/prometheus

3.2 Create the Namespace

[root@k8smaster01 ~]# cd prometheus/documentation/examples/

[root@k8smaster01 examples]# vi monitor-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

[root@k8smaster01 examples]# kubectl create -f monitor-namespace.yaml

3.3 Create RBAC

[root@k8smaster01 examples]# vi rbac-setup.yml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring          #only the namespace needs changing
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring          #only the namespace needs changing

[root@k8smaster01 examples]# kubectl create -f rbac-setup.yml
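A quick check that the objects were created as intended (a verification sketch):

[root@k8smaster01 examples]# kubectl -n monitoring get serviceaccount prometheus
[root@k8smaster01 examples]# kubectl get clusterrole prometheus
[root@k8smaster01 examples]# kubectl get clusterrolebinding prometheus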

3.4 Create the Prometheus ConfigMap

[root@k8smaster01 examples]# cat prometheus-kubernetes.yml | grep -v ^$ | grep -v "#" >> prometheus-config.yaml

[root@k8smaster01 examples]# vi prometheus-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring          #change the namespace
data:
  prometheus.yml: |-
    global:
      scrape_interval: 10s
      evaluation_interval: 10s

    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics

      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
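The excerpt ends with the ConfigMap definition; following the pattern of the earlier steps, the next action would be to create it in the cluster (assumed continuation, not shown in the original excerpt):

[root@k8smaster01 examples]# kubectl create -f prometheus-config.yaml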
