K8s Cluster Deployment in Practice (3)

1. Virtual Machine Preparation

1.1 Number of VMs Required and Their Roles

The cluster uses 5 virtual machines in total: the Kubernetes master and worker nodes, Harbor as the private image registry, and koolshare as the router (a soft router, i.e. a host running the OpenWrt router OS, that connects the cluster to the external network).

https://z3.ax1x.com/2021/05/18/ghFSt1.png

1.2 k8s Cluster Topology

https://z3.ax1x.com/2021/05/18/ghPZKe.md.png

1.3 Per-VM Configuration

  • 2 processors with 2 cores each (4 cores in total)

  • 100 GB disk

  • CentOS 7

  • Kubernetes v1.15.1

2. CentOS System Initialization

2.1 Set the Hostname and Hosts-File Resolution

hostnamectl set-hostname k8s-master01

# Edit the /etc/hosts file; a large distributed system should use DNS resolution instead
192.168.66.10 k8s-master01
192.168.66.20 k8s-node01
192.168.66.21 k8s-node02
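The same three entries must exist on every node. Below is a small helper sketch (assuming the IPs and hostnames above; `add_cluster_hosts` is my own name, and the target file is a parameter only to make the sketch testable) that appends the entries idempotently, so re-running it never duplicates lines:

```shell
# Hypothetical helper: append the cluster's hosts entries only when missing,
# so the snippet can be re-run safely on every node.
add_cluster_hosts() {
  hosts_file="${1:-/etc/hosts}"
  while read -r ip name; do
    # add the entry only if the hostname does not already appear in the file
    grep -q "[[:space:]]$name\$" "$hosts_file" || printf '%s %s\n' "$ip" "$name" >> "$hosts_file"
  done <<'EOF'
192.168.66.10 k8s-master01
192.168.66.20 k8s-node01
192.168.66.21 k8s-node02
EOF
}

# usage on a node (as root): add_cluster_hosts   # writes to /etc/hosts
```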

2.2 Install Dependency Packages

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

2.3 Switch the Firewall to iptables and Set Empty Rules

systemctl stop firewalld && systemctl disable firewalld && yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

2.4 Disable the Swap Partition and SELinux

 swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab 
 setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

2.5 Tune Linux Kernel Parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # forbid using swap space; only allow it when the system hits OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer act
fs.inotify.max_user_instances=8192 
fs.inotify.max_user_watches=1048576 
fs.file-max=52706963 
fs.nr_open=52706963 
net.ipv6.conf.all.disable_ipv6=1 
net.netfilter.nf_conntrack_max=2310720 
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf 
sysctl -p /etc/sysctl.d/kubernetes.conf # load the kernel parameter file
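Once the file is loaded, it can be worth confirming that each parameter actually took effect. A hedged sketch follows (the `check_sysctl_file` helper is my own invention, not part of any tool) that compares every key=value pair in a sysctl conf file against the live values under /proc/sys; it prints nothing when everything matches:

```shell
# Compare each key=value pair in a sysctl conf file against the live kernel
# values; dots in the key map to slashes under /proc/sys. The proc root is a
# parameter so the function can be exercised against a fake tree.
check_sysctl_file() {
  conf="$1"; proc_root="${2:-/proc/sys}"
  sed -e 's/#.*//' -e '/^[[:space:]]*$/d' "$conf" |
  while IFS='=' read -r key val; do
    key=$(printf '%s' "$key" | tr -d '[:space:]')
    val=$(printf '%s' "$val" | tr -d '[:space:]')
    path="$proc_root/$(printf '%s' "$key" | tr '.' '/')"
    if [ ! -f "$path" ]; then
      echo "missing: $key"
    elif [ "$(cat "$path")" != "$val" ]; then
      echo "differs: $key (want $val, have $(cat "$path"))"
    fi
  done
}

# usage: check_sysctl_file /etc/sysctl.d/kubernetes.conf
```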

2.6 Adjust the System Time Zone

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart the services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

2.7 Stop Unneeded Services to Save Resources

# Stop and disable the mail service
systemctl stop postfix && systemctl disable postfix

2.8 Configure rsyslogd and systemd-journald

# Directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk space used: 10G
SystemMaxUse=10G
# Maximum size of a single log file: 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF 
systemctl restart systemd-journald

2.9 Upgrade the System Kernel to 4.4

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm 
# After installation, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Set the machine to boot from the new kernel
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'

3. Kubeadm Deployment and Installation

3.1 Prerequisites for Enabling IPVS in kube-proxy

modprobe br_netfilter 
cat > /etc/sysconfig/modules/ipvs.modules <<EOF 
#!/bin/bash 
modprobe -- ip_vs 
modprobe -- ip_vs_rr 
modprobe -- ip_vs_wrr 
modprobe -- ip_vs_sh 
modprobe -- nf_conntrack_ipv4 
EOF 
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

3.2 Install Docker

# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the docker-ce yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
# Create the /etc/docker directory
mkdir /etc/docker
# Configure the daemon
cat > /etc/docker/daemon.json <<EOF 
{ 
	"exec-opts": ["native.cgroupdriver=systemd"], 
	"log-driver": "json-file", 
	"log-opts": { "max-size": "100m" } 
}
EOF 
mkdir -p /etc/systemd/system/docker.service.d 
# Reload systemd and restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

3.3 Install Kubeadm (Master and Worker Setup)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes] 
name=Kubernetes 
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=0 
repo_gpgcheck=0 
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1 
systemctl enable kubelet.service

3.4 Script to Load Images Offline

# 1. Upload the image archive to each node
# 2. With many images, use a script to batch-load them
#!/bin/bash

ls /root/kubeadm-basic.images > /tmp/image-list.txt

cd /root/kubeadm-basic.images

for i in $(cat /tmp/image-list.txt)
do
	docker load -i $i
done

rm -f /tmp/image-list.txt

3.5 Initialize the Master Node

# Dump kubeadm's default configuration into kubeadm-config.yaml
kubeadm config print init-defaults > kubeadm-config.yaml

# Settings that need to be changed:
localAPIEndpoint:
  advertiseAddress: 192.168.66.10
kubernetesVersion: v1.15.1
networking:
  # Must match the Pod network in the Flannel configuration (this is Flannel's default Pod subnet)
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12

# Append this document to switch kube-proxy to IPVS scheduling
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

# Later kubeadm versions rename --experimental-upload-certs to --upload-certs
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

3.6 Join the Master Node and the Remaining Worker Nodes

Run the commands indicated in the kubeadm installation log:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm-init.log will then print "Then you can join any number of worker nodes by running the following on each as root"; at that point, run the join command shown in the log:

kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:32ca46553b3134cd10cdfb414548338651b7262ce2f806add81b08dd92b25aa5
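The bootstrap token in this command expires after 24 hours by default. `kubeadm token create --print-join-command` prints a fresh, complete join command; alternatively, the CA certificate hash can be recomputed by hand with the OpenSSL pipeline from the kubeadm documentation, sketched here as a function (`ca_cert_hash` is my own name; the certificate path is a parameter only so the sketch is testable):

```shell
# Recompute the value for --discovery-token-ca-cert-hash from the cluster CA
# certificate (on the master this lives at /etc/kubernetes/pki/ca.crt).
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* /sha256:/'
}

# usage on the master: ca_cert_hash   # prints sha256:<64 hex digits>
```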

3.7 Deploy the Network with Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

3.8 The Network After a Successful Deployment

https://z3.ax1x.com/2021/05/18/ghpTMQ.md.png

4. Build an Enterprise-Grade Private Docker Registry

4.1 Install Harbor

  • Install the underlying environment (Python, Docker, Docker Compose)
  • Download the installation package
  • Configure the required parameters (host domain name, certificate paths)
  • Generate the HTTPS certificate
  • Run the installation script (it loads the images)
  • Test (log in and access)

4.2 Generate the HTTPS Certificate

# Create the directory for the certificates
mkdir /data/cert

# Generate the certificate
openssl genrsa -des3 -out server.key 2048
openssl req -new -key server.key -out server.csr
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt


chmod -R 777 /data/cert
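Before handing the files to Harbor, it is worth checking that server.key and server.crt actually belong together. A minimal sketch follows (the `key_matches_cert` helper is my own) that compares the RSA modulus of the key with that of the certificate:

```shell
# A key and the certificate issued from it share the same RSA modulus;
# comparing the two moduli catches a mixed-up key/cert pair early.
key_matches_cert() {
  key_mod=$(openssl rsa -noout -modulus -in "$1")
  crt_mod=$(openssl x509 -noout -modulus -in "$2")
  [ "$key_mod" = "$crt_mod" ]
}

# usage: key_matches_cert /data/cert/server.key /data/cert/server.crt && echo "pair OK"
```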