
Deploying the kubernetes-dashboard and kube-dns add-ons on a k8s cluster


Notes

It is best to deploy kube-dns first: some multi-service setups resolve their peers directly by hostname, for example Redis master/slave, or the InfluxDB and Grafana components of the heapster monitoring stack.

References

https://greatbsky.github.io/KubernetesDoc/kubernetes1.5.2/cn.html

Cluster installation is covered in:

http://jerrymin.blog.51cto.com/3002256/1898243

Pod installation is covered in:

http://jerrymin.blog.51cto.com/3002256/1900260


I. Deploying the kubernetes-dashboard add-on

1. Map the Google hostnames to a reachable IP, otherwise the dashboard image cannot be pulled

[root@k8s-master ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.17.3.20  k8s-master  hub.yiche.io

172.17.3.7   k8s-node1

172.17.3.8   k8s-node2

10.1.8.2 redis-master

61.91.161.217 google.com

61.91.161.217 gcr.io      # inside mainland China you may need to pin a host entry for gcr.io; without one you must fetch the google-containers images by hand (other docs describe several ways to do that)

61.91.161.217 www.gcr.io

61.91.161.217 console.cloud.google.com

61.91.161.217 storage.googleapis.com
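If you script node setup, these entries can be appended idempotently. A minimal sketch (it writes to a temp file so it can run anywhere; on a real host, point it at /etc/hosts):

```shell
# add_host FILE IP NAME: append "IP NAME" to FILE only if NAME is not already there
add_host() {
  grep -qw "$3" "$1" || printf '%s %s\n' "$2" "$3" >> "$1"
}

hosts=$(mktemp)                          # temp copy for the sketch; use /etc/hosts on a real node
add_host "$hosts" 61.91.161.217 gcr.io
add_host "$hosts" 61.91.161.217 gcr.io   # second call is a no-op
cat "$hosts"                             # the entry appears exactly once
```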


2. Create the pod

[root@k8s-master kubernetes]# wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

[root@k8s-master kubernetes]# kubectl create -f kubernetes-dashboard.yaml 

deployment "kubernetes-dashboard" created

service "kubernetes-dashboard" created

[root@k8s-master kubernetes]# kubectl get service --namespace=kube-system

NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE

kubernetes-dashboard   10.254.173.39   <nodes>       80:31595/TCP   2h

[root@k8s-master kubernetes]#  kubectl get pods --namespace=kube-system -o wide

NAME                                  READY     STATUS    RESTARTS   AGE       IP         NODE

kubernetes-dashboard-47555765-hnvp9   1/1       Running   0          2h        10.1.8.6   k8s-node2


3. Verify

The dashboard is reachable via any node IP plus the NodePort, here http://172.17.3.7:31595 or http://172.17.3.8:31595.
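The NodePort can be extracted from the service listing above with standard text tools. A sketch over the captured line:

```shell
# the service line captured above; PORT(S) is field 4, formatted <port>:<nodePort>/<proto>
line='kubernetes-dashboard   10.254.173.39   <nodes>       80:31595/TCP   2h'

nodeport=$(echo "$line" | awk '{ split($4, a, "[:/]"); print a[2] }')
echo "http://172.17.3.7:${nodeport}"   # -> http://172.17.3.7:31595
```

On a live cluster, `kubectl get svc kubernetes-dashboard --namespace=kube-system -o jsonpath='{.spec.ports[0].nodePort}'` should give the same number more robustly.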


4. Install the heapster add-on for monitoring and traffic graphs

https://github.com/kubernetes/heapster

To run heapster on the k8s cluster you need a sink: either InfluxDB, or Google Cloud Monitoring and Google Cloud Logging.

The links below cover installing them separately; the kube.sh script further down installs the related components in one shot.

InfluxDB installation:

https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb

Google Cloud Monitoring and Logging installation:

https://github.com/kubernetes/heapster/blob/master/docs/google.md


git clone https://github.com/kubernetes/heapster

cd heapster/deploy/

Note: change one setting so heapster can reach the API server and feed the dashboard:

vim kube-config/influxdb/heapster-deployment.yaml 

        - /heapster

        - --source=kubernetes:http://172.17.3.20:8080

        - --sink=influxdb:http://monitoring-influxdb:8086


./kube.sh start

kube.sh is the bundled install script for the monitoring components; once it finishes, the following pods exist:

[root@k8s-master deploy]# kubectl get pods --namespace=kube-system

NAME                                   READY     STATUS    RESTARTS   AGE

heapster-564189836-pj07z               1/1       Running   0          20m

kube-dns-3019842428-tqjg8              3/3       Running   0          20h

kube-dns-autoscaler-2715466192-gs937   1/1       Running   0          18h

kubernetes-dashboard-47555765-hnvp9    1/1       Running   1          1d

monitoring-grafana-3730655072-8j5h3    1/1       Running   0          20m

monitoring-influxdb-957705310-8tx2h    1/1       Running   0          20m



II. Deploying the kube-dns add-on

1. Download the config files

[root@k8s-master ~]# mkdir -p /root/kubernetes

[root@k8s-master kubernetes]# for item in Makefile kubedns-controller.yaml.base kubedns-svc.yaml.base transforms2salt.sed transforms2sed.sed; do

    wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/$item

done

[root@k8s-master kubernetes]# ls

kubedns-controller.yaml.base  kubedns-svc.yaml.base  kubernetes-dashboard.yaml  Makefile  transforms2salt.sed  transforms2sed.sed



2. Set up the local environment

[root@k8s-master kubernetes]# kubectl get svc |grep kubernetes  

kubernetes     10.254.0.1       <none>        443/TCP        6d

[root@k8s-master kubernetes]# DNS_SERVER_IP=10.254.0.10

[root@k8s-master kubernetes]# DNS_DOMAIN=cluster.local
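The two placeholders in transforms2sed.sed can also be filled in from the shell variables just set, instead of editing the file by hand. A sketch against a temp copy; it assumes the file contains the literal strings `$DNS_SERVER_IP` and `$DNS_DOMAIN`, as its pre-edit contents below show:

```shell
DNS_SERVER_IP=10.254.0.10
DNS_DOMAIN=cluster.local

f=$(mktemp)                                   # demo copy; target transforms2sed.sed for real
printf '%s\n' \
  's/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g' \
  's/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g' > "$f"

# substitute the shell values for the literal $-placeholders
sed -i "s|\$DNS_SERVER_IP|$DNS_SERVER_IP|; s|\$DNS_DOMAIN|$DNS_DOMAIN|" "$f"
cat "$f"
```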

Change the two variables in transforms2sed.sed to the actual DNS service IP and search domain. Before the change:

[root@k8s-master kubernetes]# cat transforms2sed.sed 

s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g

s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g

/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d

s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

After the change:

[root@k8s-master kubernetes]# cat transforms2sed.sed 

s/__PILLAR__DNS__SERVER__/10.254.0.10/g

s/__PILLAR__DNS__DOMAIN__/cluster.local/g

/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d

s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

Generate the manifests:

[root@k8s-master kubernetes]# make

sed -f transforms2salt.sed kubedns-controller.yaml.base | sed s/__SOURCE_FILENAME__/kubedns-controller.yaml.base/g > kubedns-controller.yaml.in

sed -f transforms2salt.sed kubedns-svc.yaml.base | sed s/__SOURCE_FILENAME__/kubedns-svc.yaml.base/g > kubedns-svc.yaml.in

sed -f transforms2sed.sed kubedns-controller.yaml.base  | sed s/__SOURCE_FILENAME__/kubedns-controller.yaml.base/g > kubedns-controller.yaml.sed

sed -f transforms2sed.sed kubedns-svc.yaml.base  | sed s/__SOURCE_FILENAME__/kubedns-svc.yaml.base/g > kubedns-svc.yaml.sed
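What make does here can be seen on a miniature base fragment (invented for illustration): the two placeholders are substituted and the federations placeholder line is deleted outright:

```shell
# a miniature .base fragment run through the same substitutions make applies
base=$(mktemp)
cat > "$base" <<'EOF'
clusterIP: __PILLAR__DNS__SERVER__
domain: __PILLAR__DNS__DOMAIN__
federations: __PILLAR__FEDERATIONS__DOMAIN__MAP__
EOF

out=$(sed -e 's/__PILLAR__DNS__SERVER__/10.254.0.10/g' \
          -e 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' \
          -e '/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d' "$base")
echo "$out"       # two lines survive; the federations line is gone
```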



3. Edit the Deployment and Service files

# Now edit kubedns-controller.yaml.sed and comment out (with #) the volume configuration, which avoids two problems:

# 1. The error: error validating "kubedns-controller.yaml.sed.bak": error validating data: found invalid field optional for v1.ConfigMapVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false

# 2. After the containers are created, tail /var/log/messages shows the error: configmaps "kube-dns" not found

[root@k8s-master kubernetes]# cat kubedns-controller.yaml.sed 

# Copyright 2016 The Kubernetes Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.


# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

# in sync with this file.


# Warning: This is a file generated from the base underscore template file: kubedns-controller.yaml.base


apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: kube-dns

  namespace: kube-system

  labels:

    k8s-app: kube-dns

    kubernetes.io/cluster-service: "true"

spec:

  # replicas: not specified here:

  # 1. In order to make Addon Manager do not reconcile this replicas parameter.

  # 2. Default is 1.

  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.

  strategy:

    rollingUpdate:

      maxSurge: 10%

      maxUnavailable: 0

  selector:

    matchLabels:

      k8s-app: kube-dns

  template:

    metadata:

      labels:

        k8s-app: kube-dns

      annotations:

        scheduler.alpha.kubernetes.io/critical-pod: ''

        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'

    spec:

#      volumes:

#      - name: kube-dns-config

#        configMap:

#          name: kube-dns

#          optional: true

      containers:

      - name: kubedns

        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.12.1

        resources:

          # TODO: Set memory limits when we've profiled the container for large

          # clusters, then set request = limit to keep this container in

          # guaranteed class. Currently, this container falls into the

          # "burstable" category so the kubelet doesn't backoff from restarting it.

          limits:

            memory: 170Mi

          requests:

            cpu: 100m

            memory: 70Mi

        livenessProbe:

          httpGet:

            path: /healthcheck/kubedns

            port: 10054

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

        readinessProbe:

          httpGet:

            path: /readiness

            port: 8081

            scheme: HTTP

          # we poll on pod startup for the Kubernetes master service and

          # only setup the /readiness HTTP server once that's available.

          initialDelaySeconds: 3

          timeoutSeconds: 5

        args:

        - --domain=cluster.local.

        - --dns-port=10053

        - --config-dir=/kube-dns-config

        - --v=2

        env:

        - name: PROMETHEUS_PORT

          value: "10055"

        ports:

        - containerPort: 10053

          name: dns-local

          protocol: UDP

        - containerPort: 10053

          name: dns-tcp-local

          protocol: TCP

        - containerPort: 10055

          name: metrics

          protocol: TCP

#        volumeMounts:

#        - name: kube-dns-config

#          mountPath: /kube-dns-config

      - name: dnsmasq

        image: gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.12.1

        livenessProbe:

          httpGet:

            path: /healthcheck/dnsmasq

            port: 10054

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

        args:

        - --cache-size=1000

        - --server=/cluster.local/127.0.0.1#10053

        - --server=/in-addr.arpa/127.0.0.1#10053

        - --server=/ip6.arpa/127.0.0.1#10053

        - --log-facility=-

        ports:

        - containerPort: 53

          name: dns

          protocol: UDP

        - containerPort: 53

          name: dns-tcp

          protocol: TCP

        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details

        resources:

          requests:

            cpu: 150m

            memory: 10Mi

      - name: sidecar

        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.12.1

        livenessProbe:

          httpGet:

            path: /metrics

            port: 10054

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

        args:

        - --v=2

        - --logtostderr

        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A

        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A

        ports:

        - containerPort: 10054

          name: metrics

          protocol: TCP

        resources:

          requests:

            memory: 20Mi

            cpu: 10m

      dnsPolicy: Default  # Don't use cluster DNS.

 

 

[root@k8s-master kubernetes]# cat kubedns-svc.yaml.sed

# Copyright 2016 The Kubernetes Authors.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.


# Warning: This is a file generated from the base underscore template file: kubedns-svc.yaml.base


apiVersion: v1

kind: Service

metadata:

  name: kube-dns

  namespace: kube-system

  labels:

    k8s-app: kube-dns

    kubernetes.io/cluster-service: "true"

    kubernetes.io/name: "KubeDNS"

spec:

  selector:

    k8s-app: kube-dns

  clusterIP: 10.254.0.10

  ports:

  - name: dns

    port: 53

    protocol: UDP

  - name: dns-tcp

    port: 53

    protocol: TCP  


[root@k8s-master kubernetes]# vim /etc/kubernetes/controller-manager  # if this is left at the empty default you may see the error: No API token found for service account "default"

KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"



4. Create the Deployment and Service

To make sure creation succeeds, pre-pull the following images on the node hosts:

docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.12.1

docker pull gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.12.1

docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.12.1
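The three pulls can be generated from one list so they stay in sync with the manifests. This sketch only prints the commands; pipe its output to sh on each node to actually run them:

```shell
# print the docker pull commands for the kube-dns images
registry=gcr.io/google_containers
tag=1.12.1
cmds=$(for name in k8s-dns-kube-dns-amd64 k8s-dns-dnsmasq-amd64 k8s-dns-sidecar-amd64; do
  echo "docker pull ${registry}/${name}:${tag}"
done)
echo "$cmds"
```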

Create:

[root@k8s-master kubernetes]# kubectl create -f kubedns-controller.yaml.sed

deployment "kube-dns" created

[root@k8s-master kubernetes]# kubectl create -f kubedns-svc.yaml.sed

service "kube-dns" created

Verify:

[root@k8s-master kubernetes]#  kubectl get svc --namespace=kube-system

NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE

kube-dns               10.254.0.10     <none>        53/UDP,53/TCP   7m

kubernetes-dashboard   10.254.173.39   <nodes>       80:31595/TCP    5h

[root@k8s-master kubernetes]#  kubectl get pods --namespace=kube-system -o wide

NAME                                  READY     STATUS    RESTARTS   AGE       IP         NODE

kube-dns-3019842428-cds2l             3/3       Running   0          8m        10.1.8.4   k8s-node2

kubernetes-dashboard-47555765-hnvp9   1/1       Running   1          5h        10.1.8.6   k8s-node2

[root@k8s-master kubernetes]# kubectl cluster-info

Kubernetes master is running at http://localhost:8080

KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns


To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


5. Point the nodes at the DNS service

KUBELET_ARGS was previously empty in the kubelet config; add the DNS server settings, then restart the kubelet service.

[root@k8s-node1 ~]# cat /etc/kubernetes/kubelet|grep -Ev "^#|^$"

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_PORT="--port=10250"

KUBELET_HOSTNAME="--hostname-override=k8s-node1"

KUBELET_API_SERVER="--api-servers=http://172.17.3.20:8080"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS="--cluster_dns=10.254.0.10 --cluster_domain=cluster.local"
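On a fleet of nodes the edit can be scripted. A sketch against a temp copy (on a real node, target /etc/kubernetes/kubelet instead, then restart kubelet):

```shell
conf=$(mktemp)                       # temp copy; on a node, target /etc/kubernetes/kubelet
echo 'KUBELET_ARGS=""' > "$conf"     # the default, empty value

# rewrite the KUBELET_ARGS line in place with the cluster DNS settings
sed -i 's|^KUBELET_ARGS=.*|KUBELET_ARGS="--cluster_dns=10.254.0.10 --cluster_domain=cluster.local"|' "$conf"
cat "$conf"
```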

Restart:

[root@k8s-node1 ~]# systemctl restart kubelet && systemctl status kubelet



6. Verify from inside a container

The node1 host running this container had its DNS settings updated, so the container's resolv.conf changed and nslookup can now resolve redis-master.

[root@k8s-master kubernetes]# kubectl get pods -o wide |grep redis

redis-master-qg457             1/1       Running   1          6d        10.1.8.2    k8s-node2

redis-slave-40tjp              1/1       Running   1          5d        10.1.8.3    k8s-node2

redis-slave-q5kj0              1/1       Running   0          20h       10.1.8.9    k8s-node2

redis-slave-r6mg2              1/1       Running   0          59m       10.1.89.4   k8s-node1


[root@k8s-master kubernetes]# kubectl exec -ti redis-slave-r6mg2 -- bash

root@redis-slave-r6mg2:/data# apt-get update

root@redis-slave-r6mg2:/data# apt-get install dnsutils

root@redis-slave-r6mg2:/data# cat /etc/resolv.conf 

search default.svc.cluster.local svc.cluster.local cluster.local

nameserver 10.254.0.10

options ndots:5

root@redis-slave-r6mg2:/data# nslookup redis-master

Server: 10.254.0.10

Address: 10.254.0.10#53


Non-authoritative answer:

Name: redis-master.default.svc.cluster.local

Address: 10.254.6.192

root@redis-slave-r6mg2:/data# ping gcr.io

PING gcr.io (64.233.187.82): 48 data bytes


For comparison, a container on node2, where the DNS settings were not changed:

[root@k8s-master kubernetes]# kubectl exec -ti redis-slave-q5kj0 -- bash

root@redis-slave-q5kj0:/data# apt-get update

root@redis-slave-q5kj0:/data# apt-get install dnsutils

root@redis-slave-q5kj0:/data# cat /etc/resolv.conf 

nameserver 203.196.0.6

nameserver 203.196.1.6

options ndots:5

root@redis-slave-q5kj0:/data# nslookup redis-master

Server: 203.196.0.6

Address: 203.196.0.6#53


** server can't find redis-master: NXDOMAIN

root@redis-slave-q5kj0:/data# ping gcr.io

PING gcr.io (64.233.187.82): 48 data bytes


As you can see, redis-slave needs DNS support to reach redis-master; without it you could instead bind redis-master's container IP in /etc/hosts. Either way, both containers resolve external names and reach the internet without problems.
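The resolv.conf check above can be mechanized. A sketch with a hypothetical helper `has_cluster_dns` that just greps a resolv.conf-style file for the 10.254.0.10 nameserver used here:

```shell
# does this resolv.conf point at the cluster DNS (10.254.0.10)?
has_cluster_dns() {
  grep -q '^nameserver 10\.254\.0\.10$' "$1"
}

rc=$(mktemp)        # sample file mirroring the node1 container's resolv.conf above
printf '%s\n' \
  'search default.svc.cluster.local svc.cluster.local cluster.local' \
  'nameserver 10.254.0.10' \
  'options ndots:5' > "$rc"

has_cluster_dns "$rc" && echo "cluster DNS configured"   # -> cluster DNS configured
```

In a pod, run the same check against /etc/resolv.conf.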



7. Install the dns-horizontal-autoscaler add-on, which scales the DNS deployment with the cluster

[root@k8s-master kubernetes]# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

[root@k8s-master kubernetes]# kubectl create -f dns-horizontal-autoscaler.yaml

deployment "kube-dns-autoscaler" created

[root@k8s-master kubernetes]# cat dns-horizontal-autoscaler.yaml |grep image

        image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0

[root@k8s-master kubernetes]# kubectl get configmap --namespace=kube-system

NAME                  DATA      AGE

kube-dns-autoscaler   1         15m

[root@k8s-master kubernetes]# kubectl get deployment --namespace=kube-system

NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

kube-dns               1         1         1            1           2h

kube-dns-autoscaler    1         1         1            1           16m

kubernetes-dashboard   1         1         1            1           1d
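For reference, cluster-proportional-autoscaler's linear mode sizes kube-dns as replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica)). A sketch of that arithmetic; the parameters 256 and 16 are commonly cited defaults, assumed here rather than read from this cluster's ConfigMap:

```shell
# replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica))
dns_replicas() {  # usage: dns_replicas CORES NODES CORES_PER_REPLICA NODES_PER_REPLICA
  awk -v c="$1" -v n="$2" -v cpr="$3" -v npr="$4" '
    function ceil(x) { return (x == int(x)) ? x : int(x) + 1 }
    BEGIN { a = ceil(c / cpr); b = ceil(n / npr); m = (a > b) ? a : b; print m }'
}

dns_replicas 8 2 256 16      # a small cluster like this one -> 1
dns_replicas 600 40 256 16   # a much larger cluster -> 3
```

This explains why the two-node cluster above runs a single kube-dns replica.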



This article is from the "jerrymin" blog; please keep this attribution: http://jerrymin.blog.51cto.com/3002256/1900508
