Kubernetes 1.4.5 Cluster Deployment
2016/11/16 23:39:58
Environment: CentOS 7
[fu@centos server]$ uname -a
Linux centos 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
1. Initialize the Environment
Disable the firewall:
[root@k8s-master fu]# systemctl stop firewalld
[root@k8s-master fu]# systemctl disable firewalld
1.1 Environment
Node | IP |
---|---|
node-1 | 192.168.44.129 |
node-2 | 192.168.44.131 |
node-3 | 192.168.44.132 |
1.2 Set the hostname
hostnamectl --static set-hostname <hostname>
IP | hostname |
---|---|
192.168.44.129 | k8s-master |
192.168.44.131 | k8s-node-1 |
192.168.44.132 | k8s-node-2 |
master:
[root@centos fu]# hostnamectl --static set-hostname k8s-master
node-1:
[root@centos fu]# hostnamectl --static set-hostname k8s-node-1
node-2:
[root@centos fu]# hostnamectl --static set-hostname k8s-node-2
1.3 Configure /etc/hosts
vi /etc/hosts
Add the following entries to /etc/hosts on each node:
192.168.44.129 k8s-master
192.168.44.131 k8s-node-1
192.168.44.132 k8s-node-2
Alternatively, append them to /etc/hosts directly:
echo '192.168.44.129 k8s-master
192.168.44.131 k8s-node-1
192.168.44.132 k8s-node-2' >> /etc/hosts
1.4 Install kubelet and kubeadm
Add the yum repository (note: run as root):
cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubelet]
name=kubelet
baseurl=http://files.rm-rf.ca/rpms/kubelet/
enabled=1
gpgcheck=0
EOF
Install the packages and start the services:
yum install docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
2. Deploy the Kubernetes Master
2.1 Add the yum repository (skip if already done in the common setup above)
Note: run as root
cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubelet]
name=kubelet
baseurl=http://files.rm-rf.ca/rpms/kubelet/
enabled=1
gpgcheck=0
EOF
Install the Kubernetes dependencies:
yum can pull from many repositories, most of them over the network. Running makecache builds a local metadata cache first, so later install commands search the cache and run faster.
[root@k8s-master fu]# yum makecache
[root@k8s-master fu]# yum install -y socat kubelet kubeadm kubectl kubernetes-cni
2.2 Install Docker
wget -qO- https://get.docker.com/ | sh
If you see:
bash: wget: command not found
install wget first:
[root@centos fu]# yum -y install wget
If Docker is already installed, you can simply start it.
If the daemon is not running, Docker commands will fail:
[root@centos fu]# docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Enable Docker at boot and start it:
systemctl enable docker
systemctl start docker
2.3 Pull the images
The required images live on gcr.io, which may be unreachable from some networks, so pull them from the jicki mirror on Docker Hub, re-tag them as gcr.io/google_containers images, and remove the mirror tags:
images=(kube-proxy-amd64:v1.4.5 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.5 kube-controller-manager-amd64:v1.4.5 kube-apiserver-amd64:v1.4.5 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.4.1)
for imageName in ${images[@]} ; do
docker pull jicki/$imageName
docker tag jicki/$imageName gcr.io/google_containers/$imageName
docker rmi jicki/$imageName
done
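To confirm the re-tagged images are in place, a quick check (not part of the original article; it simply lists the images now tagged under gcr.io):
# list the images tagged as gcr.io/google_containers/...
docker images | grep gcr.io/google_containers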
2.4 Enable and start kubelet
systemctl enable kubelet
systemctl start kubelet
2.5 Create the cluster
kubeadm init --api-advertise-addresses=192.168.44.129 --use-kubernetes-version v1.4.5
If it reports:
Running pre-flight checks
preflight check errors:
/etc/kubernetes is not empty
then clear the leftover directory:
[root@k8s-master kubernetes]# rm -rf manifests/
and run kubeadm init again.
2.6 Record the token
From the log printed by kubeadm init, record the token needed to join nodes to the cluster.
Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:
kubeadm join --token=a46536.cad65192491d2fd9 192.168.44.129
2.7 Check the kubelet status
systemctl status kubelet
2.8 List the cluster nodes:
[root@k8s-master system]# kubectl get nodes
3. Deploy the Kubernetes Nodes
3.1 Install Docker
wget -qO- https://get.docker.com/ | sh
If you see:
bash: wget: command not found
install wget first:
[root@centos fu]# yum -y install wget
Enable Docker at boot and start it:
systemctl enable docker
systemctl start docker
3.2 Pull the images
Pull and re-tag the same set of images as on the master:
images=(kube-proxy-amd64:v1.4.5 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.5 kube-controller-manager-amd64:v1.4.5 kube-apiserver-amd64:v1.4.5 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.4.1)
for imageName in ${images[@]} ; do
docker pull jicki/$imageName
docker tag jicki/$imageName gcr.io/google_containers/$imageName
docker rmi jicki/$imageName
done
3.3 Install and start Kubernetes (skip if already done in the common setup)
As above, configure the yum repository first and build the cache to speed up the installation:
yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
systemctl enable kubelet
systemctl start kubelet
3.4 Join the cluster
On each node, run the join command returned by kubeadm init on your own master:
kubeadm join --token=a46536.cad65192491d2fd9 192.168.44.129
If the join reports an error that a directory is not empty, clear that directory manually (as in 2.5).
If the join still fails, check the firewall and SELinux settings.
After all nodes have joined successfully, check the cluster state on the master with kubectl get nodes.
Nodes may take a while to show up again after a machine reboot.
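A rough sketch of what the node listing might look like at this point (hostnames from this cluster; STATUS and AGE values are illustrative only, and nodes may still show NotReady until the pod network in 4.1 is applied):
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     25m
k8s-node-1   Ready     5m
k8s-node-2   Ready     5m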
4. Configure Kubernetes
4.1 Configure the pod network (Weave Net)
kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
4.2 Check the system service status
# kube-dns only reaches Running once the network has been configured
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-io6ca 1/1 Running 0 22m
kube-system etcd-k8s-master 1/1 Running 0 22m
kube-system kube-apiserver-k8s-master 1/1 Running 0 22m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 20m
kube-system kube-discovery-982812725-rm6ut 1/1 Running 0 22m
kube-system kube-dns-2247936740-htw22 3/3 Running 0 21m
kube-system kube-proxy-amd64-lo0hr 1/1 Running 0 15m
kube-system kube-proxy-amd64-t3qpn 1/1 Running 0 15m
kube-system kube-proxy-amd64-wwj2z 1/1 Running 0 21m
kube-system kube-scheduler-k8s-master 1/1 Running 0 21m
kube-system weave-net-6k3ha 2/2 Running 0 11m
kube-system weave-net-auf0c 2/2 Running 0 11m
kube-system weave-net-bxj6d 2/2 Running 0 11m
4.3 Controlling the cluster from other hosts
# back up the master's config file
/etc/kubernetes/admin.conf
# copy it to another machine and use it as the kubeconfig to control the cluster
kubectl --kubeconfig ./admin.conf get nodes
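A minimal sketch of doing this over SSH (not from the original article; it assumes root SSH access to the master under the hostname configured above):
# copy the kubeconfig from the master to the current machine
scp root@k8s-master:/etc/kubernetes/admin.conf ./admin.conf
# then point kubectl at it
kubectl --kubeconfig ./admin.conf get nodes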
4.4 Configure the dashboard
# download the yaml file; importing it as-is would pull the image from the official registry
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
# edit the yaml file
vi kubernetes-dashboard.yaml
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
change to
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.1
imagePullPolicy: Always
change to
imagePullPolicy: IfNotPresent
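The same two edits can also be applied non-interactively; a minimal sed sketch, assuming the yaml contains exactly the strings shown above:
# switch the image tag to the locally pulled v1.4.1 image
sed -i 's|kubernetes-dashboard-amd64:v1.4.0|kubernetes-dashboard-amd64:v1.4.1|' kubernetes-dashboard.yaml
# use the local image instead of always pulling
sed -i 's|imagePullPolicy: Always|imagePullPolicy: IfNotPresent|' kubernetes-dashboard.yaml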
kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
# check the NodePort, i.e. the externally accessible port
kubectl describe svc kubernetes-dashboard --namespace=kube-system
NodePort: <unset> 31736/TCP
# access the dashboard at http://<node-ip>:<NodePort>, e.g. for this cluster:
http://192.168.44.129:31736
FAQ:
kube-discovery error
failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]
# stop kubelet, remove all running containers, unmount the kubelet tmpfs mounts, and wipe the old state
systemctl stop kubelet;
docker rm -f -v $(docker ps -q);
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
systemctl start kubelet
# then re-initialize the master
kubeadm init
Source: http://www.xf80.com/2016/10/31/kubernetes-update-1.4.5/#section-8
Reference blog: https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/
Kubernetes releases on GitHub: https://github.com/kubernetes/kubernetes/releases