Online Single-Node Deployment
Kubernetes Online Single-Node Deployment
This chapter explains how to use kubeadm to deploy a single-node Kubernetes 1.30+ cluster online. It is suited to learning, development, and test environments.
Deployment Overview
A single-node cluster places the Master and Worker roles on the same server; it is the simplest and fastest way to deploy.
Architecture diagram:
┌─────────────────────────────┐
│ Master + Worker │
│ ┌─────────────────────┐ │
│ │ Control Plane │ │
│ │ - API Server │ │
│ │ - etcd │ │
│ │ - Scheduler │ │
│ │ - Controller Mgr │ │
│ └─────────────────────┘ │
│ ┌─────────────────────┐ │
│ │ Worker Components │ │
│ │ - kubelet │ │
│ │ - Container Runtime│ │
│ │ - kube-proxy │ │
│ └─────────────────────┘ │
└─────────────────────────────┘
Characteristics:
- ✅ Simple and fast to deploy (10-15 minutes)
- ✅ Low resource footprint (minimum 2 cores / 4GB)
- ✅ Good for learning and testing
- ❌ No high-availability guarantee
- ❌ Single point of failure
Suitable scenarios:
- 🎓 Learning Kubernetes on your own
- 💻 Local development and debugging
- 🧪 Feature testing and validation
- 📚 Training and demos
1. Environment Requirements
1.1 Hardware Requirements
| Tier | CPU | Memory | Disk | Use case |
|---|---|---|---|---|
| Minimum | 2 cores | 4GB | 20GB | Learning |
| Recommended | 4 cores | 8GB | 50GB | Development |
| High-end | 8 cores | 16GB | 100GB SSD | Testing |
1.2 Software Requirements
- OS: Ubuntu 22.04 / CentOS 8+ / RHEL 8+
- Kernel version: >= 4.19
- Container runtime: containerd 1.7+ / CRI-O 1.30+
1.3 Network Requirements
- swap disabled
- Unique hostname and MAC address
- Required ports open (6443, 2379-2380, 10250-10259)
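Before moving on, the commands below give a quick, read-only way to confirm the requirements in 1.1-1.3 on the target host (a minimal sketch; adjust the expected values to the tier you chose):
# Quick prerequisite checks (read-only)
nproc                                  # CPU cores (2+ for the minimum tier)
free -h                                # memory (4GB+)
df -h /                                # free disk space (20GB+)
uname -r                               # kernel version (>= 4.19)
swapon --show                          # should print nothing once swap is disabled (section 2.2)
hostname                               # must be unique in your environment
sudo ss -lntp | grep -E ':(6443|2379|2380|1025[0-9])\s' || echo "required ports are free"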
2. System Preparation
2.1 Set the Hostname
# Set the hostname
sudo hostnamectl set-hostname k8s-single
# Add it to /etc/hosts
echo "127.0.0.1 k8s-single" | sudo tee -a /etc/hosts
2.2 Disable swap
# Disable swap for the current session
sudo swapoff -a
# Disable swap permanently
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Verify
free -h | grep -i swap
2.3 Turn Off the Firewall
# Ubuntu/Debian
sudo ufw disable
# CentOS/RHEL
sudo systemctl stop firewalld
sudo systemctl disable firewalld
For development environments, disabling the firewall keeps the setup simple. For production, configure allow rules instead (a sketch follows).
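If you prefer to keep the firewall enabled (for example on a shared host), a minimal sketch of allow rules covering the ports from 1.3 plus the NodePort range is shown below; add whatever your CNI plugin needs (VXLAN, BGP, etc.):
# Ubuntu/Debian (ufw)
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd
sudo ufw allow 10250:10259/tcp   # kubelet, controller-manager, scheduler
sudo ufw allow 30000:32767/tcp   # NodePort Services
# CentOS/RHEL (firewalld)
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10259/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload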
2.4 Disable SELinux (CentOS/RHEL)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
2.5 Load Kernel Modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Verify
lsmod | grep br_netfilter
lsmod | grep overlay
2.6 Configure Kernel Parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# Verify
sysctl net.bridge.bridge-nf-call-iptables \
net.bridge.bridge-nf-call-ip6tables \
net.ipv4.ip_forward
3. Install the Container Runtime
3.1 Install containerd (Recommended)
Ubuntu/Debian:
# 1. Install dependencies
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
# 2. Add the Docker GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# 3. Add the Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# 4. Install containerd
sudo apt-get update
sudo apt-get install -y containerd.io
# 5. Generate the default configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# 6. Use the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# 7. Configure a registry mirror (optional, mainly for users in mainland China)
sudo mkdir -p /etc/containerd/certs.d/docker.io
cat <<EOF | sudo tee /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"
[host."https://registry.cn-hangzhou.aliyuncs.com"]
capabilities = ["pull", "resolve"]
EOF
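# Note: containerd only reads hosts.toml when config_path in config.toml points at the certs.d
# directory; the default config generated in step 5 leaves it empty. One way to set it
# (an assumption based on the default config layout; check your config.toml before running):
sudo sed -i 's|config_path = ""|config_path = "/etc/containerd/certs.d"|' /etc/containerd/config.toml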
# 8. Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
# 9. Verify
sudo systemctl status containerd
CentOS/RHEL:
# 1. Configure the Docker repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# 2. Install containerd
sudo yum install -y containerd.io
# 3-8. Follow the same configuration steps as in the Ubuntu/Debian section above
4. Install the Kubernetes Components
4.1 Install kubeadm, kubelet, and kubectl
Ubuntu/Debian (official repository):
# 1. Add the Kubernetes apt repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \
https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
# 2. Install
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# 3. Hold the versions to prevent accidental upgrades
sudo apt-mark hold kubelet kubeadm kubectl
# 4. Verify
kubeadm version
kubelet --version
kubectl version --client
Mirror for mainland China (Alibaba Cloud):
# Add the Alibaba Cloud mirror
# Note: this legacy kubernetes-xenial repository mirrors the retired Google-hosted packages and
# may not provide v1.30; prefer the pkgs.k8s.io repository above when it is reachable.
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | \
sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
CentOS/RHEL:
# Add the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable and start kubelet
sudo systemctl enable --now kubelet
5. Initialize the Single-Node Cluster
5.1 Basic Initialization
# Initialize the cluster
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.30.0
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# ⚠️ Important: remove the control-plane taint so Pods can be scheduled on this node
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Explanation:
- By default the control-plane (Master) node carries a taint that prevents ordinary Pods from being scheduled on it.
- A single-node cluster must remove that taint, otherwise application Pods have nowhere to run; you can verify the change as shown below.
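A quick way to confirm the taint is gone (the example output is illustrative):
# Inspect the node's taints
kubectl describe node k8s-single | grep Taints
# Before removal: Taints: node-role.kubernetes.io/control-plane:NoSchedule
# After removal:  Taints: <none>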
5.2 Initialize Using a Domestic Image Mirror
sudo kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.30.0
# The remaining steps are the same as in 5.1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
5.3 Initialization with a Custom Configuration File
# Create the configuration file
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.10
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.244.0.0/16"
  dnsDomain: "cluster.local"
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# Initialize using the configuration file
sudo kubeadm init --config=kubeadm-config.yaml
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Remove the taint
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
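Alternatively, kubeadm can print a default configuration to use as a starting point instead of writing the file by hand:
# Print kubeadm's default init configuration and edit it as needed
kubeadm config print init-defaults > kubeadm-config.yaml
# Include kubelet defaults as well:
# kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm-config.yaml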
6. Install a Network Plugin (CNI)
6.1 Calico (Recommended)
# Install Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# Wait for the Pods to become Ready
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s
# Check status
kubectl get pods -n kube-system | grep calico
6.2 Flannel (Lightweight)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Verify
kubectl get pods -n kube-flannel
6.3 Cilium (High Performance)
# Install the Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all \
https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
# Install Cilium
cilium install --version 1.15.0
# Verify
cilium status --wait
7. Verify the Cluster
7.1 Check Node Status
# List the nodes
kubectl get nodes
# Example output
NAME STATUS ROLES AGE VERSION
k8s-single Ready control-plane 5m v1.30.0
# Show more detail
kubectl get nodes -o wide
7.2 Check System Pods
# List all system Pods
kubectl get pods --all-namespaces
# Example output (every Pod should be Running)
NAMESPACE NAME READY STATUS RESTARTS
kube-system calico-node-xxxxx 1/1 Running 0
kube-system coredns-xxxxx 1/1 Running 0
kube-system coredns-xxxxx 1/1 Running 0
kube-system etcd-k8s-single 1/1 Running 0
kube-system kube-apiserver-k8s-single 1/1 Running 0
kube-system kube-controller-manager-k8s-single 1/1 Running 0
kube-system kube-proxy-xxxxx 1/1 Running 0
kube-system kube-scheduler-k8s-single 1/1 Running 0
7.3 Deploy a Test Application
# Create a Deployment
kubectl create deployment nginx --image=nginx:1.25 --replicas=2
# List the Pods
kubectl get pods -o wide
# Expose the Deployment as a Service
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the Service
kubectl get svc nginx
# Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.96.123.45 <none> 80:31234/TCP 10s
# Access test (node IP + NodePort)
curl http://192.168.1.10:31234
# Clean up
kubectl delete deployment nginx
kubectl delete service nginx
7.4 Test DNS Resolution
# Run a one-off test Pod
kubectl run test-dns --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
# Example output
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
8. Install Common Add-ons
8.1 Metrics Server (Resource Monitoring)
# Install Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Work around the kubelet TLS verification issue (development environments only)
kubectl patch deployment metrics-server -n kube-system --type='json' \
-p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
# Wait for it to become Ready
kubectl wait --for=condition=Ready pod -l k8s-app=metrics-server -n kube-system --timeout=300s
# Test
kubectl top nodes
kubectl top pods -A
8.2 Kubernetes Dashboard
# Install the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Create an admin user
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Get an access token
kubectl -n kubernetes-dashboard create token admin-user
# Start the proxy
kubectl proxy
# Access URL (open in a browser)
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
8.3 Ingress Nginx (Optional)
# Install the Ingress Nginx Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml
# Verify
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
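As a quick functional test of the controller, the sketch below reuses the nginx Deployment from 7.3 behind an Ingress; the host name demo.local and the NodePort placeholder are assumptions to replace with your own values:
# Create a backend and an Ingress routing to it
kubectl create deployment nginx --image=nginx:1.25
kubectl expose deployment nginx --port=80
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
spec:
  ingressClassName: nginx
  rules:
  - host: demo.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
# Find the controller's HTTP NodePort, then test with a Host header (node IP as in section 7.3)
kubectl get svc -n ingress-nginx ingress-nginx-controller
curl -H "Host: demo.local" http://192.168.1.10:<http-nodeport>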
9. Common Troubleshooting
9.1 Node NotReady
# Check the kubelet logs
journalctl -u kubelet -f
# Common causes:
# 1. Network plugin not installed
kubectl get pods -n kube-system | grep -E "calico|flannel"
# 2. Container runtime problems
sudo systemctl status containerd
# 3. Taint not removed
kubectl describe node | grep Taints
9.2 Pods Fail to Start
# Describe the Pod
kubectl describe pod <pod-name>
# Check the Pod logs
kubectl logs <pod-name>
# Common causes:
# 1. Image pull failure
kubectl get events --field-selector type=Warning
# 2. Insufficient resources
kubectl top nodes
kubectl top pods -A
9.3 Service Not Reachable
# Check the Service
kubectl get svc
kubectl describe svc <service-name>
# Check the Endpoints
kubectl get endpoints <service-name>
# Test Pod-to-Pod networking
kubectl run test-1 --image=busybox --rm -it -- ping <pod-ip>
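To test at the Service level instead of Pod-to-Pod, a throwaway curl Pod can call the Service by its cluster DNS name (replace the placeholders with your Service's name and namespace):
# Test the Service from inside the cluster via DNS
kubectl run test-curl --image=curlimages/curl --rm -it --restart=Never -- \
  curl -sS http://<service-name>.<namespace>.svc.cluster.local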
10. Cluster Management
10.1 Viewing Logs
# kubelet logs
journalctl -u kubelet -f
# Pod logs
kubectl logs -f <pod-name>
# View events
kubectl get events --all-namespaces --sort-by='.lastTimestamp'
10.2 Back Up Configuration
# Back up the admin kubeconfig
cp ~/.kube/config ~/.kube/config.backup
# Export all cluster resources as YAML (note: this is a resource export, not a true etcd backup; see the snapshot sketch below)
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml
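For a real etcd backup, take a snapshot with etcdctl. A minimal sketch, assuming etcdctl is installed on the host and the cluster uses kubeadm's default certificate paths:
# Snapshot etcd using the kubeadm-managed certificates
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key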
10.3 Reset the Cluster
# Reset kubeadm
sudo kubeadm reset
# Clean up CNI network configuration
sudo rm -rf /etc/cni/net.d
sudo rm -rf ~/.kube/config
# Flush iptables rules
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
11. Performance Tuning
11.1 Resource Limits
# Set resource limits for system components
kubectl edit deployment coredns -n kube-system
# Add resource limits under the container spec
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
11.2 Adjust Replica Counts
# CoreDNS replicas (1-2 is enough on a single node)
kubectl scale deployment coredns -n kube-system --replicas=1
Summary
This chapter covered online single-node deployment:
✅ Fast deployment: finished in 10-15 minutes
✅ Low resource requirements: minimum 2 cores / 4GB
✅ Good for learning and development: full functionality, easy to experiment with
✅ Network plugins: Calico / Flannel / Cilium
✅ Common add-ons: Metrics Server, Dashboard, Ingress
✅ Troubleshooting: step-by-step diagnostics for common problems
Notes:
- ⚠️ A single node provides no high availability
- ⚠️ Not suitable for production environments
- ⚠️ Remember to remove the control-plane (Master) taint
Next steps:
- Learn about online high-availability deployment
- Look into offline deployment options
- Start the Kubernetes basics tutorial