Deployment Node Overview
IP Address        Hostname      Spec (CPU / RAM)
192.168.100.2     k8s-master    2 CPU / 4 GB
192.168.100.3     k8s-node-1    2 CPU / 4 GB
192.168.100.4     k8s-node-2    2 CPU / 4 GB
Software versions: CentOS 7.6, Kubernetes 1.15.0, Docker 18.09.6
Note: throughout this guide, the node(s) a step runs on are marked in square brackets, e.g. [master]; [all] means every node.
1. Initialize the Environment
1.1 Set Hostnames [all]
# on k8s-master
hostnamectl set-hostname k8s-master
# on k8s-node-1
hostnamectl set-hostname k8s-node-1
# on k8s-node-2
hostnamectl set-hostname k8s-node-2
1.2 Edit the hosts File [all]
vim /etc/hosts
# append the following
192.168.100.2 k8s-master
192.168.100.3 k8s-node-1
192.168.100.4 k8s-node-2
1.3 Set Up SSH Key Authentication to All Nodes [master]
ssh-keygen
ssh-copy-id 192.168.100.3
ssh-copy-id 192.168.100.4
1.4 Disable the Firewall and SELinux [all]
systemctl disable firewalld && systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' /etc/selinux/config
1.5 Disable Swap [all]
swapoff -a && sysctl -w vm.swappiness=0
 
vim /etc/fstab
# comment out the following line
#/dev/mapper/centos-swap swap swap defaults 0 0
1.6 Set Kernel Parameters Required by Docker [all]
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf sysctl -p
1.7 Install Docker [all]
# update the system
yum update -y
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker && systemctl enable docker
1.8 Create Installation Directories [all]
Create the main kubernetes directory, which will hold all related files:
mkdir -p /opt/kubernetes/{bin,cfg,ssl} && mkdir /var/lib/etcd
vim /etc/profile
#Kubernetes
export PATH=$PATH:/opt/kubernetes/bin
source /etc/profile
2. Install and Configure CFSSL [master]
CFSSL is used to self-sign the TLS certificates. Download page: http://pkg.cfssl.org/
2.1 Download and Install CFSSL
wget http://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget http://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 cfssl
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
mv cfssljson_linux-amd64 cfssljson
chmod +x cfssl* && mv cfssl* /usr/local/bin/
Create a temporary working directory for the certificate files:
mkdir ssl && cd ssl
2.2 Create the CA Configuration File
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
Notes on ca-config.json: multiple profiles can be defined, each with its own expiry, usages and other parameters; a specific profile is chosen later when signing certificates. "signing" means the certificate can be used to sign other certificates (the generated ca.pem has CA=TRUE); "server auth" means a client can use this CA to verify certificates presented by a server; "client auth" means a server can use this CA to verify certificates presented by a client.
2.3 Create the CA Certificate Signing Request
cat << EOF | tee ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Nanjing",
"ST": "Nanjing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
"CN"
:Common Name,kube-apiserver
从证书中提取该字段作为请求的用户名(User Name)
;浏览器使用该字段验证网站是否合法;"O"
:Organization,kube-apiserver
从证书中提取该字段作为请求用户所属的组(Group)
。
Generate the CA certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
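Before moving on, it can be worth a quick sanity check (not part of the original flow) that the CA files were actually produced and that the certificate really is a CA:
ls ca.pem ca-key.pem ca.csr
# the Basic Constraints section should show CA:TRUE
openssl x509 -in ca.pem -noout -text | grep -A2 "Basic Constraints"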
2.4 Create the Kubernetes Server Certificate Signing Request
Every cluster node IP that will be used must be listed in full:
cat << EOF | tee server-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.100.2",
"192.168.100.3",
"192.168.100.4",
"10.10.10.1",
"k8s-master",
"k8s-node-1",
"k8s-node-2",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "NanJing",
"ST": "NanJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
The hosts field may be left empty; in that case, even adding new nodes to the cluster later does not require regenerating the certificate. If the hosts field is not empty, it must list every IP address or domain name that is authorized to use the certificate. Because this certificate is later used by both the etcd cluster and the Kubernetes master, the list above includes the etcd cluster IPs, the Kubernetes master IPs and the Kubernetes service IP.
Generate the Kubernetes server certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
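If you want to double-check that all the hosts listed above actually made it into the certificate, an optional check like the following can be used:
ls server.pem server-key.pem server.csr
# the Subject Alternative Name line should contain every IP and DNS name from server-csr.json
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"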
2.5 Create the admin Certificate
cat << EOF | tee admin-csr.json
{
"CN": "admin",
"hosts": [ ],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "NanJing",
"ST": "NanJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
kube-apiserver uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods). It ships with a set of predefined RoleBindings; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API. Here O sets the certificate's group to system:masters: when a client presents this certificate to kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.
Generate the admin certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2.6 Create the kube-proxy Certificate
cat << EOF | tee kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [ ],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Nanjing",
"ST": "Nanjing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
CN sets the certificate's user (User) to system:kube-proxy; the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the kube-apiserver proxy-related APIs.
Generate the kube-proxy client certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
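At this point the ssl working directory should contain four certificate/key pairs; a quick optional listing helps catch a missed step before distributing them:
ls *.pem
# expected: admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem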
2.7 Distribute the Certificates and Keys
All certificates have now been generated. Distribute the certificate and key files to the /opt/kubernetes/ssl directory on every machine:
mv *.pem /opt/kubernetes/ssl
scp /opt/kubernetes/ssl/*.pem root@k8s-node-1:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/*.pem root@k8s-node-2:/opt/kubernetes/ssl/
3. Deploy the etcd Cluster [master]
etcd is a highly available key-value store used mainly for shared configuration and service discovery. It is developed and maintained by CoreOS, inspired by ZooKeeper and Doozer, written in Go, and uses the Raft consensus algorithm for log replication to guarantee strong consistency. Download: https://github.com/etcd-io/etcd/releases
Download and install:
mkdir ~/src && cd ~/src
wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
tar zxvf etcd-v3.3.13-linux-amd64.tar.gz
cp etcd-v3.3.13-linux-amd64/etcd* /opt/kubernetes/bin/
3.1 Edit the Configuration File
cat << EOF | tee /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.2:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.2:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.2:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.2:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.2:2380,etcd02=https://192.168.100.3:2380,etcd03=https://192.168.100.4:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
3.2 Create a systemd Unit File to Manage etcd
cat << 'EOF' | tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Explanation of the configuration parameters:
--name: a human-readable node name, default "default"; it must be unique within the cluster, and the hostname can be used.
--data-dir: path where the service stores its data, defaults to ${name}.etcd.
--snapshot-count: number of committed transactions that triggers a snapshot to disk.
--heartbeat-interval: how often the leader sends heartbeats to the followers; default 100ms.
--election-timeout: re-election timeout; if a follower receives no heartbeat within this interval, a new election is triggered; default 1000ms.
--listen-peer-urls: address used for peer communication, e.g. http://ip:2380; separate multiple values with commas. It must be reachable by all nodes, so do not use localhost.
--listen-client-urls: address(es) where the service is exposed to clients, e.g. http://ip:2379,http://127.0.0.1:2379; clients connect here to talk to etcd.
--advertise-client-urls: the client URLs this node advertises; this value is announced to the other cluster members.
--initial-advertise-peer-urls: the peer URLs this node advertises; this value is announced to the other cluster members.
--initial-cluster: information about all nodes in the cluster, in the form node1=http://ip1:2380,node2=http://ip2:2380,... Note that node1 is the name given by that node's --name, and ip1:2380 is the value of its --initial-advertise-peer-urls.
--initial-cluster-state: new when creating a new cluster; existing when joining an existing cluster.
--initial-cluster-token: the cluster token, which must be unique per cluster. This way, even if you recreate a cluster with exactly the same configuration, new cluster and member UUIDs are generated; otherwise multiple clusters could conflict with each other and cause unpredictable errors.
3.3 Distribute the Files to the Other Nodes
scp /opt/kubernetes/bin/etcd* root@k8s-node-1:/opt/kubernetes/bin/
scp /opt/kubernetes/bin/etcd* root@k8s-node-2:/opt/kubernetes/bin/
scp /opt/kubernetes/cfg/etcd root@k8s-node-1:/opt/kubernetes/cfg/
scp /opt/kubernetes/cfg/etcd root@k8s-node-2:/opt/kubernetes/cfg/
scp /opt/kubernetes/ssl/*.pem root@k8s-node-1:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/*.pem root@k8s-node-2:/opt/kubernetes/ssl/
scp /usr/lib/systemd/system/etcd.service root@k8s-node-1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@k8s-node-2:/usr/lib/systemd/system/
After copying to k8s-node-1 and k8s-node-2, the node name and IP addresses in the configuration file must be adjusted:
# run on k8s-node-1
sed -i \
  -e 's|^ETCD_NAME=".*"$|ETCD_NAME="etcd02"|g' \
  -e 's|^ETCD_LISTEN_PEER_URLS=".*"$|ETCD_LISTEN_PEER_URLS="https://192.168.100.3:2380"|g' \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=".*"$|ETCD_LISTEN_CLIENT_URLS="https://192.168.100.3:2379"|g' \
  -e 's|^ETCD_INITIAL_ADVERTISE_PEER_URLS=".*"$|ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.3:2380"|g' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=".*"$|ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.3:2379"|g' \
  /opt/kubernetes/cfg/etcd
# run on k8s-node-2
sed -i \
  -e 's|^ETCD_NAME=".*"$|ETCD_NAME="etcd03"|g' \
  -e 's|^ETCD_LISTEN_PEER_URLS=".*"$|ETCD_LISTEN_PEER_URLS="https://192.168.100.4:2380"|g' \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=".*"$|ETCD_LISTEN_CLIENT_URLS="https://192.168.100.4:2379"|g' \
  -e 's|^ETCD_INITIAL_ADVERTISE_PEER_URLS=".*"$|ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.4:2380"|g' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=".*"$|ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.4:2379"|g' \
  /opt/kubernetes/cfg/etcd
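After running the sed commands it is worth confirming the substitution took effect on each node (optional check):
# run on k8s-node-1 and k8s-node-2; the URLs should show the node's own IP and the name etcd02 / etcd03
grep '^ETCD_' /opt/kubernetes/cfg/etcd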
3.4 Start etcd
systemctl start etcd && systemctl enable etcd
ssh root@192.168.100.3 "systemctl start etcd && systemctl enable etcd"
ssh root@192.168.100.4 "systemctl start etcd && systemctl enable etcd"
If the node members have not been started yet, starting etcd on the master will hang for a while (you may need to press Ctrl+C), because it waits for the other members to join the cluster. It is therefore best to configure the node members first and start them all together. If anything fails, check the logs:
journalctl -xe -u etcd
tail -100 /var/log/messages
3.5 Check the Cluster Status
For convenience, create an alias for the etcdctl command:
vim ~/.bashrc
alias etcdctl='/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints=https://192.168.100.2:2379,https://192.168.100.3:2379,https://192.168.100.4:2379'
source ~/.bashrc
etcdctl cluster-health
# output
member 215f6fa14c83114a is healthy: got healthy result from https://192.168.100.3:2379
member 3179d851e2d8e401 is healthy: got healthy result from https://192.168.100.4:2379
member e5597d27b776a481 is healthy: got healthy result from https://192.168.100.2:2379
cluster is healthy
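Besides cluster-health, the member list can also be checked to see which member is currently the leader (optional, uses the same alias; the member IDs will differ in your environment):
etcdctl member list
# e.g. e5597d27b776a481: name=etcd01 peerURLs=https://192.168.100.2:2380 clientURLs=https://192.168.100.2:2379 isLeader=true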
4. Deploy the Flannel Network
Flannel is one kind of overlay network: it encapsulates the original packet inside another network packet for routing and forwarding. It currently supports UDP, VXLAN, AWS VPC, GCE routes and other forwarding backends.
4.1 Write the Subnet Allocation into etcd for flanneld
Write the subnet range that flanneld will allocate from into the etcd cluster (run on any etcd cluster host):
etcdctl set /kubernetes/network/config '{"Network":"172.17.0.0/16", "Backend":{"Type":"vxlan"}}'
Check the configured network:
etcdctl get /kubernetes/network/config
4.2 Download flanneld
Download: https://github.com/coreos/flannel/releases. flanneld normally runs on the node machines; here it is downloaded on the master and then copied to the nodes:
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
tar zxvf flannel-v0.11.0-linux-amd64.tar.gz
Copy to node1 and node2:
scp flanneld mk-docker-opts.sh root@192.168.100.3:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh root@192.168.100.4:/opt/kubernetes/bin/
4.3 Edit the flanneld Configuration File [node1][node2]
cat << EOF | tee /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.100.2:2379,https://192.168.100.3:2379,https://192.168.100.4:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem -etcd-prefix=/kubernetes/network"
EOF
4.4 Create a systemd Unit File to Manage flanneld [node1][node2]
cat << 'EOF' | tee /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target network-online.target etcd.service
Wants=network-online.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld -ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4.5 Start flanneld on the Node Machines [node1][node2]
systemctl start flanneld
systemctl enable flanneld
4.6 Configure Docker to Start with the Flannel Subnet [node1][node2]
Edit the Docker unit file on the node machines:
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
EnvironmentFile=-/run/flannel/subnet.env # added
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS # modified
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
4.7 Reload the Configuration and Restart [node1][node2]
systemctl daemon-reload
systemctl restart flanneld
systemctl restart docker
Check the subnets registered in etcd:
[root@k8s-master ~]# etcdctl ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.17.96.0-24
/kubernetes/network/subnets/172.17.81.0-24
[root@k8s-master ~]# etcdctl get /kubernetes/network/subnets/172.17.81.0-24
{"PublicIP":"192.168.100.4","BackendType":"vxlan","BackendData":{"VtepMAC":"9e:29:d6:ed:b7:84"}}
[root@k8s-master ~]# etcdctl get /kubernetes/network/subnets/172.17.96.0-24
{"PublicIP":"192.168.100.3","BackendType":"vxlan","BackendData":{"VtepMAC":"fe:82:32:d9:65:49"}}
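A simple way to confirm the overlay network actually works is to start a throwaway container on each node and ping across the two subnets. This is an optional check and assumes the busybox image can be pulled; the target IP below is only an example, substitute the address shown on the other node:
# on k8s-node-1: note the container IP (172.17.96.x in this example)
docker run --rm busybox ip -4 addr show eth0
# on k8s-node-2: ping a container IP that belongs to the other node's subnet
docker run --rm busybox ping -c 3 172.17.96.2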
5. Deploy the kubectl Tool and Create the kubeconfig Files
Package download links can be found at: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md
kubectl is the cluster management tool for Kubernetes; any node with kubectl configured can manage the whole cluster. Here kubectl is installed on the master node. The kubelet kubeconfig files created below carry the kube-apiserver address, certificate, user name and other information.
5.1 Download and Install kubectl [master]
# Option 1: download only the kubectl client package
wget https://dl.k8s.io/v1.15.0/kubernetes-client-linux-amd64.tar.gz
tar zxvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kubectl /opt/kubernetes/bin/
# Option 2: one download that contains kubectl plus the master and node components
wget https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /opt/kubernetes/bin/
5.2 Create the TLS Bootstrapping Token [master]
cd /opt/kubernetes/cfg
head -c 16 /dev/urandom | od -An -t x | tr -d ' ' > token
echo "$(cat token),kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > token.csv
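The resulting token.csv should contain a single line in the format token,user,uid,"group"; a quick optional look confirms the quoting came out right (the token value below is just an example):
cat /opt/kubernetes/cfg/token.csv
# e.g. 0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"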
Create the kubelet bootstrap kubeconfig, which is used for the kubelet's automatic certificate signing: when the kubelet contacts kube-apiserver it authenticates through bootstrap.kubeconfig.
# set cluster parameters; --server must point to the master node IP
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.100.2:6443 --kubeconfig=bootstrap.kubeconfig
# set client credentials
kubectl config set-credentials kubelet-bootstrap --token=$(cat token) --kubeconfig=bootstrap.kubeconfig
# set context parameters
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
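To verify the resulting file, kubectl can print it back (embedded certificate data and the token are redacted); this is just an optional sanity check:
kubectl config view --kubeconfig=bootstrap.kubeconfig
# the cluster server should be https://192.168.100.2:6443 and the user should be kubelet-bootstrap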
5.3 Create the kube-proxy kubeconfig [master]
# set cluster parameters; --server is the master IP
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.100.2:6443 --kubeconfig=kube-proxy.kubeconfig
# set client credentials
kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
# set context parameters
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
5.4 Copy the Generated kubeconfig Files to the Node Machines
scp *.kubeconfig root@192.168.100.3:/opt/kubernetes/cfg/
scp *.kubeconfig root@192.168.100.4:/opt/kubernetes/cfg/
6. Deploy the Master Node
6.1 Download and Extract [master]
The binaries were already downloaded and installed in section 5.1 above, so this step is skipped here.
kube-scheduler and kube-controller-manager are normally deployed on the same machine as kube-apiserver and talk to kube-apiserver over the insecure port.
6.2 Configure and Start kube-apiserver [master]
kube-apiserver is the unified entry point of the cluster and the coordinator of all components. It exposes the HTTP API; every create, update, delete and watch operation on resource objects goes through the API server and is then persisted to etcd.
cat << EOF | tee /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true --v=4 --etcd-servers=https://192.168.100.2:2379,https://192.168.100.3:2379,https://192.168.100.4:2379 --insecure-bind-address=127.0.0.1 --bind-address=192.168.100.2 --insecure-port=8080 --secure-port=6443 --advertise-address=192.168.100.2 --allow-privileged=true --service-cluster-ip-range=10.10.10.0/24 --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-swagger-ui=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/server.pem --etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF
Parameter explanation:
--logtostderr=true                           # log to standard error
--v=4                                        # log verbosity level
--etcd-servers=                              # etcd cluster endpoints
--insecure-bind-address=127.0.0.1            # bind address for the insecure port; not exposed externally
--bind-address=192.168.100.2                 # bind address for the secure (HTTPS) port; exposed externally
--insecure-port=8080                         # insecure port
--secure-port=6443                           # secure port
--advertise-address=192.168.100.2            # advertised address, used for cluster communication
--allow-privileged=true                      # allow privileged containers; such containers get the same kernel access as the host
--service-cluster-ip-range=10.10.10.0/24     # Service cluster IP range; this range must not be routable
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction   # admission control plugins
--authorization-mode=RBAC,Node               # use RBAC (role-based access control) and Node authorization on the secure port; reject unauthorized requests
--kubelet-https=true                         # use HTTPS when talking to kubelets
--enable-bootstrap-token-auth                # enable bootstrap token authentication for certificate bootstrapping
--token-auth-file=/opt/kubernetes/cfg/token.csv   # path to the token authentication file
--service-node-port-range=30000-50000        # NodePort range for Services
--tls-cert-file=/opt/kubernetes/ssl/server.pem            # certificate paths
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem # certificate paths
--client-ca-file=/opt/kubernetes/ssl/ca.pem               # certificate paths
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem # certificate paths
--etcd-cafile=/opt/kubernetes/ssl/ca.pem                  # certificate paths
--etcd-certfile=/opt/kubernetes/ssl/server.pem            # certificate paths
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem         # certificate paths
Create a systemd unit file to manage kube-apiserver:
cat << 'EOF' | tee /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=network.target etcd.service
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Start it:
systemctl daemon-reload
systemctl start kube-apiserver && systemctl enable kube-apiserver
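Since the insecure port is bound to 127.0.0.1:8080, a quick local health check (optional) confirms the API server is up before continuing:
curl http://127.0.0.1:8080/healthz
# expected output: ok
systemctl status kube-apiserver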
6.3 Configure and Start kube-controller-manager [master]
kube-controller-manager handles the routine background tasks of the cluster. Each resource type has a corresponding controller, and the controller manager is responsible for running all of them.
cat << EOF | tee /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.10.10.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF
Parameter explanation:
--logtostderr=true                            # log to standard error
--v=4                                         # log verbosity level
--master=127.0.0.1:8080                       # talk to kube-apiserver over the insecure 8080 port
--leader-elect=true                           # when running multiple masters, elect one active kube-controller-manager instance
--address=127.0.0.1                           # must be 127.0.0.1 because kube-apiserver, scheduler and controller-manager run on the same machine here
--service-cluster-ip-range=10.10.10.0/24      # Service CIDR range; must not be routable between the nodes and must match the kube-apiserver setting
--cluster-name=kubernetes                     # cluster name
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem    # certificate and key used to sign the certificates created for TLS bootstrap
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem # same as above
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem
--root-ca-file=/opt/kubernetes/ssl/ca.pem     # used to verify the kube-apiserver certificate; when set, this CA certificate is placed in each Pod's ServiceAccount
Create a systemd unit file to manage kube-controller-manager:
cat << 'EOF' | tee /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start it:
systemctl daemon-reload
systemctl start kube-controller-manager && systemctl enable kube-controller-manager
6.4 Configure and Start kube-scheduler [master]
kube-scheduler selects a Node for each newly created Pod according to its scheduling algorithm.
cat << EOF | tee /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=http://127.0.0.1:8080 --leader-elect"
EOF
Create the systemd unit file:
cat << 'EOF' | tee /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start it:
systemctl daemon-reload
systemctl start kube-scheduler && systemctl enable kube-scheduler
6.5 Check the Cluster Status [master]
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
7. Deploy the Node Machines
7.1 Download and Extract [master]
The components were already downloaded in section 5.1; here we only need to copy kubelet and kube-proxy to the other nodes:
scp /opt/kubernetes/bin/{kubelet,kube-proxy} root@192.168.100.3:/opt/kubernetes/bin/
scp /opt/kubernetes/bin/{kubelet,kube-proxy} root@192.168.100.4:/opt/kubernetes/bin/
7.2 Configure and Start kubelet [node1][node2]
The kubelet is the master's agent on each Node. It manages the lifecycle of the containers running on the machine: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
When the kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role before the kubelet is allowed to create certificate signing requests, so run the following on the master:
[root@k8s-master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
# output
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
If this command is not run, the kubelet fails with the following error:
failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
Create the configuration file on the node machines:
cat << EOF | tee /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true --v=4 --address=192.168.100.3 --hostname-override=192.168.100.3 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --cluster-dns=10.10.10.2 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:latest"
EOF
Note: the IP addresses in this configuration file must be changed for each node.
Create the systemd unit file:
cat << 'EOF' | tee /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Start it:
systemctl daemon-reload
systemctl start kubelet && systemctl enable kubelet
7.3 Approve the TLS Certificate Signing Requests on the Master [master]
When a kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node only joins the cluster after the request has been approved. Once the kubelet has been deployed on the node machines, perform the approval on the master:
[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-NjK9pqzwPietB2x4uB4UWbza28XvY5LM2AGFm60A_BQ   42m   kubelet-bootstrap   Pending
node-csr-TZyF-I-gAsPEV-nlwj-PGUGnDAe4AfQuToJsYYfU9fg   45s   kubelet-bootstrap   Pending
[root@k8s-master ~]# kubectl get nodes
No resources found.
[root@k8s-master ~]# kubectl certificate approve node-csr-NjK9pqzwPietB2x4uB4UWbza28XvY5LM2AGFm60A_BQ
certificatesigningrequest.certificates.k8s.io/node-csr-NjK9pqzwPietB2x4uB4UWbza28XvY5LM2AGFm60A_BQ approved
[root@k8s-master ~]# kubectl certificate approve node-csr-TZyF-I-gAsPEV-nlwj-PGUGnDAe4AfQuToJsYYfU9fg
certificatesigningrequest.certificates.k8s.io/node-csr-TZyF-I-gAsPEV-nlwj-PGUGnDAe4AfQuToJsYYfU9fg approved
[root@k8s-master ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.100.3   Ready    <none>   26s   v1.15.0
192.168.100.4   Ready    <none>   14s   v1.15.0
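When there are many pending CSRs, approving them one by one is tedious; an optional convenience one-liner approves everything currently listed (use with care, since it approves all requests indiscriminately):
kubectl get csr -o name | xargs kubectl certificate approve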
7.4 Configure and Start kube-proxy [node1][node2]
kube-proxy implements the Pod network proxy on each Node, maintaining the network rules and performing layer-4 load balancing.
Create the configuration file:
cat << EOF | tee /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true --v=4 --bind-address=192.168.100.3 --hostname-override=192.168.100.3 --cluster-cidr=10.10.10.0/24 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
Create the systemd unit file:
cat << 'EOF' | tee /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Start it:
systemctl daemon-reload
systemctl start kube-proxy && systemctl enable kube-proxy
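kube-proxy runs in its default iptables mode here and materializes Services as iptables rules; an optional way to confirm it is working is to look for the KUBE-SERVICES chain on a node:
# the nat table should now contain chains managed by kube-proxy
iptables -t nat -L KUBE-SERVICES -n | head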
7.5 Check the Cluster Status
kubectl get node
kubectl get componentstatus
kubectl get pod -o wide
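As a final optional smoke test (not part of the original steps), a small nginx deployment exposed through a NodePort exercises the scheduler, kubelet, flannel and kube-proxy together:
# create a test deployment and expose it on a NodePort
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc -o wide
# then open http://<any-node-ip>:<nodePort> from the host to verify end-to-end connectivity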
To be continued…