Online High-Availability Deployment

Online High-Availability Deployment of Kubernetes

This chapter describes how to use kubeadm to deploy a highly available Kubernetes 1.30+ cluster online, suitable for production environments.

Deployment Overview

A highly available cluster runs multiple Master nodes behind a load balancer, so the cluster keeps serving even when any single node fails.

Architecture Diagram

                    ┌──────────────┐
                    │Load Balancer │
                    │  (HAProxy)   │
                    │VIP: x.x.x.100│
                    └──────┬───────┘
                           │
       ┌───────────────────┼───────────────────┐
       │                   │                   │
┌──────▼──────┐    ┌──────▼──────┐    ┌──────▼──────┐
│  Master 1   │    │  Master 2   │    │  Master 3   │
│  ┌────────┐ │    │  ┌────────┐ │    │  ┌────────┐ │
│  │ API    │ │    │  │ API    │ │    │  │ API    │ │
│  │ etcd   │◄├────┤  │ etcd   │◄├────┤  │ etcd   │ │
│  │ Sched  │ │    │  │ Sched  │ │    │  │ Sched  │ │
│  │ Ctrl   │ │    │  │ Ctrl   │ │    │  │ Ctrl   │ │
│  └────────┘ │    │  └────────┘ │    │  └────────┘ │
└─────────────┘    └─────────────┘    └─────────────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
       ┌───────────────────┼───────────────────┐
       │                   │                   │
┌──────▼──────┐    ┌──────▼──────┐    ┌──────▼──────┐
│  Worker 1   │    │  Worker 2   │    │  Worker 3   │
│  ┌────────┐ │    │  ┌────────┐ │    │  ┌────────┐ │
│  │ kubelet│ │    │  │ kubelet│ │    │  │ kubelet│ │
│  │  Pods  │ │    │  │  Pods  │ │    │  │  Pods  │ │
│  └────────┘ │    │  └────────┘ │    │  └────────┘ │
└─────────────┘    └─────────────┘    └─────────────┘

Features

  • ✅ High availability (99.9%+ SLA)
  • ✅ Any single node failure does not interrupt service
  • ✅ Horizontal scalability
  • ✅ Production-grade reliability
  • ❌ Complex to configure
  • ❌ Higher cost (at least 8 servers)

Suitable Scenarios

  • 🏢 Production environments
  • 💼 Business-critical enterprise workloads
  • 📈 Workloads that require an SLA
  • 🌍 Multi-region deployments

1. Environment Requirements

1.1 Server Planning

Node type       Count  CPU      Memory  Disk        Example IPs
Load balancer   2      2 cores  4 GB    50 GB       192.168.1.100-101
Master node     3      4 cores  8 GB    100 GB SSD  192.168.1.10-12
Worker node     3+     8 cores  16 GB   200 GB SSD  192.168.1.20-22

Minimum layout (small production)

  • 2 load balancers + 3 Masters + 3 Workers = 8 servers

Recommended layout (medium production)

  • 2 load balancers + 5 Masters + 5 Workers = 12 servers

1.2 Network Planning

Network plan:
  Node network: 192.168.1.0/24
  VIP (virtual IP): 192.168.1.100
  
  Master nodes:
    - 192.168.1.10 (master-1)
    - 192.168.1.11 (master-2)
    - 192.168.1.12 (master-3)
  
  Worker nodes:
    - 192.168.1.20 (worker-1)
    - 192.168.1.21 (worker-2)
    - 192.168.1.22 (worker-3)
  
  Pod network: 10.244.0.0/16
  Service network: 10.96.0.0/12

1.3 Software Requirements

  • Operating system: Ubuntu 22.04 / CentOS 8+ / RHEL 8+
  • Kernel: >= 4.19
  • Container runtime: containerd 1.7+ / CRI-O 1.30+
  • Load balancer: HAProxy 2.0+ / Nginx
  • HA tooling: Keepalived 2.0+
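
A quick sanity check on each host catches missing prerequisites before installation begins. A minimal sketch (assuming systemd-based hosts):

# Kernel version (should be >= 4.19)
uname -r

# CPU and memory
nproc
free -h

# Swap should be empty/absent; the br_netfilter module must be available
swapon --show
modinfo br_netfilter | head -n 1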

2. Deploying the Load Balancers

2.1 Installing HAProxy and Keepalived

Run the following on both load balancer nodes:

# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y haproxy keepalived

# CentOS/RHEL
sudo yum install -y haproxy keepalived

2.2 Configuring HAProxy

Create the same configuration on both load balancers:

# Back up the original configuration
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

# Create the new configuration
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

# Kubernetes API Server load balancing
frontend kubernetes-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-apiserver

backend kubernetes-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    server master-1 192.168.1.10:6443 check fall 3 rise 2
    server master-2 192.168.1.11:6443 check fall 3 rise 2
    server master-3 192.168.1.12:6443 check fall 3 rise 2

# HAProxy stats page (optional)
listen stats
    bind *:8080
    mode http
    stats enable
    stats uri /stats
    stats refresh 30s
    stats realm HAProxy\ Statistics
    stats auth admin:admin123
EOF

# Start HAProxy
sudo systemctl enable haproxy
sudo systemctl restart haproxy
sudo systemctl status haproxy

2.3 Configuring Keepalived

Primary load balancer (192.168.1.100)

cat <<EOF | sudo tee /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    # Use a negative weight so a failed HAProxy drops this node's priority below the backup and the VIP fails over
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    
    virtual_ipaddress {
        192.168.1.100
    }
    
    track_script {
        check_haproxy
    }
}
EOF

# Start Keepalived
sudo systemctl enable keepalived
sudo systemctl restart keepalived
sudo systemctl status keepalived

Backup load balancer (192.168.1.101)

cat <<EOF | sudo tee /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    # Use a negative weight so a failed HAProxy drops this node's priority and the VIP can move
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    
    virtual_ipaddress {
        192.168.1.100
    }
    
    track_script {
        check_haproxy
    }
}
EOF

# Start Keepalived
sudo systemctl enable keepalived
sudo systemctl restart keepalived
sudo systemctl status keepalived

2.4 Verifying the Load Balancer

# Check that the VIP is active
ip addr show eth0

# Test HAProxy
curl -k https://192.168.1.100:6443/healthz
# Expected: the request fails (connection reset / empty reply), since no API Server is running behind HAProxy yet

# HAProxy stats page
# Open in a browser: http://192.168.1.100:8080/stats
# Username: admin / Password: admin123
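
It is also worth confirming that the VIP actually fails over before building the cluster on top of it. A minimal sketch (assuming the first command runs on the primary load balancer):

# Stop HAProxy on the primary; the check script fails and Keepalived gives up the VIP
sudo systemctl stop haproxy

# On the backup load balancer the VIP should appear within a few seconds
ip addr show eth0 | grep 192.168.1.100

# Restore the primary
sudo systemctl start haproxy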

3. Preparing All Nodes

Run the system preparation steps on every Master and Worker node:

3.1 Node Preparation Script

#!/bin/bash
# Run on every node

# 1. Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# 2. Stop the firewall
sudo systemctl stop firewalld || sudo ufw disable
sudo systemctl disable firewalld || true

# 3. Put SELinux into permissive mode (CentOS/RHEL)
sudo setenforce 0 || true
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config || true

# 4. Load required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# 5. Configure kernel parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# 6. Install containerd (the steps below assume Ubuntu/Debian; use yum/dnf on CentOS/RHEL)
# containerd.io is published in the Docker apt repository, which has to be added first
sudo apt-get update
sudo apt-get install -y ca-certificates curl gpg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y containerd.io

# 7. Configure containerd to use the systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

# 8. Install the Kubernetes components from the official v1.30 package repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable kubelet

echo "✅ Node preparation complete"
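
A quick check that the preparation actually took effect (a sketch; adjust for your distribution):

# Kernel modules loaded and sysctls applied
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables

# containerd running and kubeadm installed
sudo systemctl is-active containerd
kubeadm version -o short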

4. Initializing the First Master Node

4.1 Creating the Configuration File

On the first Master node (192.168.1.10):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
controlPlaneEndpoint: "192.168.1.100:6443"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.244.0.0/16"
  dnsDomain: "cluster.local"
apiServer:
  certSANs:
  - "192.168.1.100"
  - "192.168.1.10"
  - "192.168.1.11"
  - "192.168.1.12"
  - "master-1"
  - "master-2"
  - "master-3"
  extraArgs:
    authorization-mode: "Node,RBAC"
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
etcd:
  local:
    dataDir: "/var/lib/etcd"
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
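
Optionally, the control-plane images can be pulled ahead of time using the same configuration file; this shortens the init step and surfaces registry or proxy problems early:

# Pre-pull the control-plane images (optional)
sudo kubeadm config images pull --config kubeadm-config.yaml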

4.2 Initializing the Cluster

# Initialize the first Master
sudo kubeadm init \
  --config=kubeadm-config.yaml \
  --upload-certs | tee kubeadm-init.log

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# ⚠️ Important: save the join commands and the certificate-key from the output

Example output

Your Kubernetes control-plane has initialized successfully!

# Join command for Master nodes
kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxx \
  --control-plane --certificate-key yyyyy

# Join command for Worker nodes
kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxx

5. Joining the Remaining Master Nodes

Run on the second and third Master nodes:

# Use the Master join command printed during initialization
sudo kubeadm join 192.168.1.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxx \
  --control-plane \
  --certificate-key yyyyy \
  --apiserver-advertise-address=<current node IP>

# Configure kubectl (optional)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the Master nodes

# Run on any Master node
kubectl get nodes

# Example output
NAME       STATUS     ROLES           AGE   VERSION
master-1   NotReady   control-plane   5m    v1.30.0
master-2   NotReady   control-plane   3m    v1.30.0
master-3   NotReady   control-plane   2m    v1.30.0
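
Each control-plane node should also be running its own API Server, controller-manager, scheduler, and etcd static pods; a quick way to confirm:

# Expect one kube-apiserver / kube-controller-manager / kube-scheduler / etcd pod per Master
kubectl get pods -n kube-system -o wide | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'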

6. Installing the Network Plugin

Run on the first Master node:

# Install Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

# Wait for the network plugin to come up
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=600s

# Verify node status
kubectl get nodes

# Example output (all nodes should now be Ready)
NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   8m    v1.30.0
master-2   Ready    control-plane   6m    v1.30.0
master-3   Ready    control-plane   5m    v1.30.0
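
If nodes stay NotReady, the Calico pods themselves usually point at the problem; the label below is the one used by the upstream Calico manifest:

# All calico-node pods should be Running and Ready
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
kubectl logs -n kube-system -l k8s-app=calico-node --tail=20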

7. Joining the Worker Nodes

Run on every Worker node:

# Use the Worker join command printed during initialization
sudo kubeadm join 192.168.1.100:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxx

If the token has expired, generate a new one

# Run on any Master node
kubeadm token create --print-join-command

Verify cluster status

# List all nodes
kubectl get nodes -o wide

# Example output
NAME       STATUS   ROLES           AGE   VERSION
master-1   Ready    control-plane   15m   v1.30.0
master-2   Ready    control-plane   13m   v1.30.0
master-3   Ready    control-plane   12m   v1.30.0
worker-1   Ready    <none>          5m    v1.30.0
worker-2   Ready    <none>          5m    v1.30.0
worker-3   Ready    <none>          5m    v1.30.0

8. Verifying High Availability

8.1 Verifying the etcd Cluster

# Run on any Master node (requires etcdctl on the host; alternatively exec into one of the etcd pods)
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

# The output should list 3 etcd members
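
To see which member is currently the leader and whether all members agree, the same certificates can be used against all three endpoints (a sketch using the IPs from the network plan above):

sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status --write-out=table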

8.2 Verifying API Server High Availability

# Test access through the VIP
kubectl --server=https://192.168.1.100:6443 get nodes

# Simulate an API Server failure on master-1 (SSH to master-1).
# Stopping kubelet alone leaves the kube-apiserver static pod running, so instead
# temporarily move its manifest out of the static pod directory:
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/

# From another node, the cluster should still be reachable through the VIP
kubectl get nodes

# Restore the API Server on master-1
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/

8.3 Verifying Load Balancing

# Hit the API Server repeatedly and watch how requests are distributed
for i in {1..10}; do
  kubectl get nodes --server=https://192.168.1.100:6443
done

# Check the HAProxy statistics
# Open in a browser: http://192.168.1.100:8080/stats

9. Deploying a Test Application

# Create a Deployment
kubectl create deployment nginx --image=nginx:1.25 --replicas=6

# Check how the Pods are spread across nodes
kubectl get pods -o wide

# Expose the service
kubectl expose deployment nginx --port=80 --type=NodePort

# Test the service
kubectl get svc nginx
curl http://<any node IP>:<NodePort>

# Test resilience: drain one Worker node
# Its Pods are rescheduled onto the remaining nodes
kubectl drain worker-1 --ignore-daemonsets
kubectl get pods -o wide

# Bring the node back
kubectl uncordon worker-1

10. Installing Production Components

10.1 Metrics Server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Test
kubectl top nodes
kubectl top pods -A
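
On kubeadm clusters the metrics-server Pod often stays unready because it cannot verify the kubelets' self-signed serving certificates. A common workaround (at the cost of skipping kubelet TLS verification) is to add the --kubelet-insecure-tls flag:

kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'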

10.2 Ingress Nginx Controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml

# Verify
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
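
To confirm that the controller actually routes traffic, a small Ingress for the nginx Service from section 9 can be applied (the hostname is only an example, and that Service is assumed to still exist):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF

# Send a request through the ingress-nginx NodePort with a matching Host header
curl -H "Host: nginx.example.com" http://<any node IP>:<ingress NodePort>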

10.3 Storage Class (Optional)

# Example: using NFS as the storage backend
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/deployment.yaml
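
Note that the stock deployment.yaml has to be edited to point at your NFS server address and export path, and a StorageClass is needed before PVCs can use the provisioner; the deploy directory of that repository also contains a ready-made class.yaml (a sketch, assuming that file layout is unchanged):

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/class.yaml
kubectl get storageclass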

11. Monitoring and Logging

11.1 Deploying the Prometheus Stack

# Add the Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set prometheus.prometheusSpec.replicas=2 \
  --set alertmanager.alertmanagerSpec.replicas=3 \
  --set grafana.replicas=1

# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Open in a browser: http://localhost:3000
# Username: admin / Password: prom-operator

11.2 Deploying the EFK Logging Stack (Optional)

See the Monitoring and Logging chapter.

12. Backup and Restore

12.1 Backing Up etcd

# Create the backup script
cat <<'EOF' > /usr/local/bin/backup-etcd.sh
#!/bin/bash
BACKUP_DIR="/backup/etcd"
DATE=$(date +%Y%m%d-%H%M%S)

mkdir -p ${BACKUP_DIR}

ETCDCTL_API=3 etcdctl snapshot save ${BACKUP_DIR}/etcd-snapshot-${DATE}.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Keep only the last 7 days of backups
find ${BACKUP_DIR} -name "etcd-snapshot-*.db" -mtime +7 -delete

echo "✅ Backup completed: etcd-snapshot-${DATE}.db"
EOF

chmod +x /usr/local/bin/backup-etcd.sh

# Schedule a daily run at 02:00 (append to the existing crontab rather than overwriting it)
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/backup-etcd.sh") | crontab -
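
If etcdctl is not installed on the Master host, the snapshot can also be taken from inside the etcd static pod (the pod name is assumed to follow the kubeadm etcd-<node-name> convention):

kubectl -n kube-system exec etcd-master-1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/lib/etcd/etcd-snapshot-manual.db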

12.2 Restoring etcd

# On every Master node, move the API Server and etcd static pod manifests away
# (with kubeadm, both run as static pods rather than systemd services)
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/

# Restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd/etcd-snapshot-xxx.db \
  --data-dir=/var/lib/etcd-restore
# Note: for a multi-member cluster, restore the snapshot on each member with its own
# --name, --initial-cluster and --initial-advertise-peer-urls flags (see the etcd documentation)

# Swap in the restored data directory
sudo mv /var/lib/etcd /var/lib/etcd.bak
sudo mv /var/lib/etcd-restore /var/lib/etcd

# Move the manifests back; kubelet restarts etcd and the API Server
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/

13. Troubleshooting Common Issues

13.1 A Master Node Fails to Join

# Check whether the certificate-key has expired (it is only valid for 2 hours)
# Regenerate it
sudo kubeadm init phase upload-certs --upload-certs

# Re-run the join command with the new certificate-key

13.2 The VIP Is Unreachable

# Check Keepalived status
sudo systemctl status keepalived
sudo journalctl -u keepalived -f

# Check the VIP
ip addr show

# Check HAProxy
sudo systemctl status haproxy
sudo journalctl -u haproxy -f

13.3 The etcd Cluster Is Unhealthy

# Check etcd health
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health

# View etcd logs (under kubeadm, etcd runs as a static pod, not a systemd unit)
kubectl logs -n kube-system etcd-master-1 -f

14. Production Best Practices

14.1 Node Labels and Taints

# Add labels to nodes
kubectl label nodes worker-1 node-role.kubernetes.io/worker=worker
kubectl label nodes worker-1 workload=compute-intensive

# Taint the Master nodes (kubeadm applies this taint by default)
kubectl taint nodes master-1 node-role.kubernetes.io/control-plane:NoSchedule
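
Labels only matter once workloads select them; for example, the nginx Deployment from section 9 could be pinned to the labeled nodes with a nodeSelector (a sketch, assuming that Deployment still exists):

kubectl patch deployment nginx -p '{"spec":{"template":{"spec":{"nodeSelector":{"workload":"compute-intensive"}}}}}'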

14.2 Resource Quotas

# Set a resource quota on a namespace
kubectl create namespace production

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    persistentvolumeclaims: "50"
EOF
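
A quota on requests/limits only admits Pods whose containers declare them, so it is common to pair the ResourceQuota with a LimitRange that supplies defaults (a sketch with illustrative values):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limits
  namespace: production
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: "1"
      memory: 1Gi
EOF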

14.3 Pod Security Standards

# Enforce the restricted Pod Security Standard on the namespace
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted

14.4 Audit Logging

Enable auditing in the kubeadm configuration:

apiServer:
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log
    audit-log-maxage: "30"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-policy-file: /etc/kubernetes/audit-policy.yaml
  extraVolumes:
  - name: audit-log
    hostPath: /var/log/kubernetes
    mountPath: /var/log/kubernetes
  - name: audit-policy
    hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml

Summary

This chapter covered online high-availability deployment:

High-availability architecture: 3 Masters + 3 Workers + 2 load balancers
Load balancing: HAProxy + Keepalived
etcd cluster: 3 or 5 members, Raft consensus
Failover: automatic switchover, no single point of failure
Monitoring and backup: Prometheus + etcd snapshots
Production best practices: resource quotas, security policies, audit logging

Deployment time: 1-2 hours
Availability: 99.9%+ SLA
Minimum footprint: 8 servers (2 LB + 3 Master + 3 Worker)

Next Steps