Requirements


1. Operating System

RKE runs on almost any Linux operating system with Docker installed, but Ubuntu 16.04 is recommended, because most RKE development and testing is done on Ubuntu 16.04.

1.1. Some operating systems have limitations and specific requirements:

  • SSH user - The SSH user used for node access must be a member of the docker group:

    usermod -aG docker <user_name>

    See Manage Docker as a non-root user to learn how to configure access to Docker without using the root user.

  • Swap disabled on worker nodes

  • The following kernel modules need to be present. This can be checked using one of the following methods:

    • modprobe module_name

    • lsmod | grep module_name

    • grep module_name /lib/modules/$(uname -r)/modules.builtin, if it is a built-in module

      Module name
      br_netfilter
      ip6_udp_tunnel
      ip_set
      ip_set_hash_ip
      ip_set_hash_net
      iptable_filter
      iptable_nat
      iptable_mangle
      iptable_raw
      nf_conntrack_netlink
      nf_conntrack
      nf_conntrack_ipv4
      nf_defrag_ipv4
      nf_nat
      nf_nat_ipv4
      nf_nat_masquerade_ipv4
      nfnetlink
      udp_tunnel
      veth
      vxlan
      x_tables
      xt_addrtype
      xt_conntrack
      xt_comment
      xt_mark
      xt_multiport
      xt_nat
      xt_recent
      xt_set
      xt_statistic
      xt_tcpudp
    • Module check script

      # Modules required by RKE, matching the table above.
      module_list='br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp'

      for module in $module_list; do
            # grep -w avoids substring matches, e.g. nf_nat matching nf_nat_ipv4
            if ! lsmod | grep -qw "$module"; then
                  echo "module $module is not present"
            fi
      done

  • The following sysctl setting must be applied (see the combined sketch after this list):

    net.bridge.bridge-nf-call-iptables=1
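
The checks above can be combined into a small preparation step. Below is a minimal sketch, reusing the module list from the table; /etc/sysctl.d/90-rke.conf is an illustrative file name, and note that lsmod does not list built-in modules, so those may be loaded by modprobe unnecessarily (which is harmless):

    # Load any module from the table above that lsmod does not report as loaded.
    module_list='br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp'
    for module in $module_list; do
          lsmod | grep -qw "$module" || modprobe "$module"
    done

    # Apply the sysctl setting immediately and persist it across reboots
    # (/etc/sysctl.d/90-rke.conf is an illustrative file name).
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    echo 'net.bridge.bridge-nf-call-iptables=1' > /etc/sysctl.d/90-rke.conf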

1.2. Red Hat Enterprise Linux (RHEL) / Oracle Enterprise Linux (OEL) / CentOS

If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux, or CentOS, you cannot use the root user as the SSH user due to Bugzilla 1527565. Follow the instructions below to set up Docker correctly, depending on how you installed Docker on the node.

  • Using docker-ce

    To check whether docker-ce or docker-ee is installed, you can query the installed package:

    rpm -q docker-ce

  • Using the Docker package maintained by RHEL/CentOS

    • If you are using the Docker package supplied by Red Hat/CentOS, the package name is docker. You can check the installed package by executing:

      rpm -q docker

    • If you are using the Docker package supplied by Red Hat/CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json with the following contents:

      {
          "group": "dockerroot"
      }

    • Restart Docker after editing or creating the file. After the restart, you can check the group permissions of the Docker socket (/var/run/docker.sock), which should show the group (dockerroot):

      srw-rw----. 1 root dockerroot 0 Jul  4 09:57 /var/run/docker.sock

    • Add the SSH user you want to use to this group; this cannot be the root user:

      usermod -aG dockerroot <user_name>

    • To verify that the user is configured correctly, log out of the node, log back in as the SSH user, and run docker ps (a consolidated sketch of these steps follows this list):

      ssh <user_name>@node
      $ docker ps
      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
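
Taken together, the steps above can be scripted. A rough sketch, assuming Docker is managed by systemd and using <user_name> as a placeholder; note that it overwrites any existing /etc/docker/daemon.json:

    # Configure the dockerroot group for the Docker daemon (overwrites daemon.json).
    cat > /etc/docker/daemon.json <<'EOF'
    {
        "group": "dockerroot"
    }
    EOF
    systemctl restart docker

    # Check the socket group, then add the non-root SSH user to it.
    ls -l /var/run/docker.sock
    usermod -aG dockerroot <user_name>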

1.3. Red Hat Atomic

Before trying to use RKE with Red Hat Atomic nodes, a couple of updates to the operating system are required for RKE to work properly.

  • OpenSSH version

    By default, Atomic ships with OpenSSH 6.4, which does not support SSH tunneling, a core RKE requirement. OpenSSH must be upgraded.

  • Creating a Docker group

    By default, Atomic does not come with a Docker group. You can update the ownership of the Docker socket to enable the specific user that will launch RKE (a verification sketch follows this list):

    chown <user> /var/run/docker.sock
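
Before running RKE against an Atomic node, it may help to confirm both fixes took effect; a quick check, assuming the SSH client and server come from the same OpenSSH package:

    ssh -V                        # should report OpenSSH 7.0 or newer
    ls -l /var/run/docker.sock    # should be owned by the SSH user set above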

2. Software

  • Docker

    Each version of Kubernetes supports different Docker versions.

    Kubernetes version    Supported Docker versions
    v1.13.x               RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2
    v1.12.x               RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2
    v1.11.x               RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2

    You can either follow the Docker installation instructions or use one of Rancher's install scripts to install Docker. For RHEL, see How to install Docker on Red Hat Enterprise Linux 7.

    Docker version    Install script
    18.09.2           curl https://releases.rancher.com/install-docker/18.09.2.sh | sh
    18.06.2           curl https://releases.rancher.com/install-docker/18.06.2.sh | sh
    17.03.2           curl https://releases.rancher.com/install-docker/17.03.2.sh | sh

    Confirm the installed Docker version:

    docker version --format '{{.Server.Version}}'
    17.03.2-ce
  • OpenSSH 7.0+

    OpenSSH must be installed on each node (a quick combined version check follows).
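
Both software requirements can be checked together on each node. A minimal sketch; the version patterns are illustrative and should be adjusted to the table above:

    # Warn if the Docker server version is not one of the supported releases.
    docker version --format '{{.Server.Version}}' \
          | grep -qE '^(1\.13|17\.03|18\.06|18\.09)' || echo "unsupported Docker version"

    # Warn if OpenSSH is older than 7.0 (ssh -V prints to stderr).
    ssh -V 2>&1 | grep -qE 'OpenSSH_([7-9]|[1-9][0-9])' || echo "OpenSSH 7.0+ required"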

3. Ports

RKE node:

Node that runs the rke commands

3.1. RKE node - Outbound rules

Protocol | Port | Source | Destination | Description
TCP | 22 | RKE node | Any node configured in Cluster Configuration File | SSH provisioning of node by RKE
TCP | 6443 | RKE node | controlplane nodes | Kubernetes apiserver

etcd nodes:

Nodes with the role etcd

3.2. etcd nodes - Inbound rules

Protocol | Port | Source | Description
TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates)
TCP | 2379 | etcd nodes, controlplane nodes | etcd client requests
TCP | 2380 | etcd nodes, controlplane nodes | etcd peer communication
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | etcd node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10250 | controlplane nodes | kubelet

3.3. etcd nodes - Outbound rules

Protocol | Port | Destination | Description
TCP | 443 | Rancher nodes | Rancher agent
TCP | 2379 | etcd nodes | etcd client requests
TCP | 2380 | etcd nodes | etcd peer communication
TCP | 6443 | controlplane nodes | Kubernetes apiserver
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | etcd node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe

controlplane nodes:

Nodes with the role controlplane

3.4. controlplane nodes - Inbound rules

Protocol | Port | Source | Description
TCP | 80 | Any that consumes Ingress services | Ingress controller (HTTP)
TCP | 443 | Any that consumes Ingress services | Ingress controller (HTTPS)
TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates)
TCP | 6443 | etcd nodes, controlplane nodes, worker nodes | Kubernetes apiserver
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10250 | controlplane nodes | kubelet
TCP | 10254 | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe
TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range

3.5. controlplane nodes - Outbound rules

Protocol | Port | Destination | Description
TCP | 443 | Rancher nodes | Rancher agent
TCP | 2379 | etcd nodes | etcd client requests
TCP | 2380 | etcd nodes | etcd peer communication
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10250 | etcd nodes, controlplane nodes, worker nodes | kubelet
TCP | 10254 | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe

worker nodes:

Nodes with the role worker

3.6. worker nodes - Inbound rules

Protocol | Port | Source | Description
TCP | 22 | Any network that you want to be able to remotely access this node from (Linux worker nodes only) | Remote access over SSH
TCP | 3389 | Any network that you want to be able to remotely access this node from (Windows worker nodes only) | Remote access over RDP
TCP | 80 | Any that consumes Ingress services | Ingress controller (HTTP)
TCP | 443 | Any that consumes Ingress services | Ingress controller (HTTPS)
TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates)
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | worker node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10250 | controlplane nodes | kubelet
TCP | 10254 | worker node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe
TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range

3.7. worker nodes - Outbound rules

Protocol | Port | Destination | Description
TCP | 443 | Rancher nodes | Rancher agent
TCP | 6443 | controlplane nodes | Kubernetes apiserver
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | worker node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10254 | worker node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe

3.8. Information on local node traffic

Kubernetes healthchecks (livenessProbe and readinessProbe) are executed on the host itself. On most nodes, this is allowed by default. When you have applied strict host firewall (i.e. iptables) policies on the node, or when you are using nodes that have multiple interfaces (multihomed), this traffic gets blocked. In this case, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (i.e. AWS or OpenStack), in your security group configuration. Keep in mind that when you use a security group as source or destination in your security group configuration, this only applies to the private interface of the nodes/instances.
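
For example, on a node with a restrictive iptables INPUT policy, the healthcheck ports can be allowed explicitly for the node's own traffic. A sketch in the style of section 3.9, where 10.0.0.5 is a placeholder for the node's own IP:

# Allow this node to reach its own Canal/Flannel and Ingress controller
# healthcheck ports (10.0.0.5 stands in for the node's own address).
iptables -A INPUT -s 10.0.0.5 -p tcp --dport 9099 -j ACCEPT
iptables -A INPUT -s 10.0.0.5 -p tcp --dport 10254 -j ACCEPT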

If you are using an external firewall, make sure you have this port (TCP/6443) opened between the machine you are using to run rke and the nodes that you are going to use in the cluster.
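
One way to confirm this before provisioning is a simple reachability test from the machine running rke. A sketch using nc, with <node_ip> as a placeholder for a node address:

# Check SSH and Kubernetes apiserver reachability from the RKE machine.
nc -zv <node_ip> 22
nc -zv <node_ip> 6443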

3.9. Opening port TCP/6443 using iptables

# Open TCP/6443 for all
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT

# Open TCP/6443 for one specific IP
iptables -A INPUT -p tcp -s your_ip_here --dport 6443 -j ACCEPT

3.10. Opening port TCP/6443 using firewalld

# Open TCP/6443 for all
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --reload

# Open TCP/6443 for one specific IP
firewall-cmd --permanent --zone=public --add-rich-rule='
  rule family="ipv4"
  source address="your_ip_here/32"
  port protocol="tcp" port="6443" accept'
firewall-cmd --reload

4. SSH Server Configuration

Your SSH server system-wide configuration file, located at /etc/ssh/sshd_config, must include this line that allows TCP forwarding:

AllowTcpForwarding yes
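
After changing the file, you can verify the effective setting and restart the SSH daemon. A sketch assuming systemd; the service may be named ssh rather than sshd on some distributions:

# Show the effective value, then restart the daemon (run as root).
sshd -T | grep -i allowtcpforwarding
systemctl restart sshd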