== Large Clusters ==
* A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
** No more than 100 pods per node
** No more than 5,000 nodes
** No more than 150,000 total pods
** No more than 300,000 total containers
* Consider resource quotas
** Compute instances
** CPUs
** Storage volumes
** In-use IP addresses
** Packet filtering rule sets
** Number of load balancers
** Network subnets
** Log streams
* A large cluster needs a control plane with sufficient compute and other resources.
* Store Event objects in a separate, dedicated etcd instance.
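The per-node and cluster-wide limits above interact: the effective pod ceiling is whichever limit binds first. A rough planning sketch (the node count here is an illustrative figure, not from this page):

```shell
# Derive the effective pod ceiling from the documented limits.
nodes=200                 # illustrative planned node count
max_pods_per_node=100     # documented per-node pod limit
cluster_pod_limit=150000  # documented total-pod limit

pod_capacity=$(( nodes * max_pods_per_node ))
# The cluster-wide limit caps the per-node product.
if [ "$pod_capacity" -gt "$cluster_pod_limit" ]; then
  pod_capacity=$cluster_pod_limit
fi
echo "planned nodes: $nodes, effective pod ceiling: $pod_capacity"
```

At 200 nodes the per-node limit binds; past 1,500 nodes the 150,000 total-pod limit would bind instead.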
== RKE ==
A high-availability RKE cluster requires:
* '''Three Linux nodes''', typically virtual machines, in an infrastructure provider such as Amazon EC2, Google Compute Engine, or vSphere.
: These nodes must be in the same region/data center. You may place these servers in separate availability zones.
: Rancher server data is stored in an etcd database that runs on all three nodes.
:* etcd is a distributed, reliable key-value store for the most critical data of a distributed system, with a focus on being simple, secure, fast, and reliable.
:: The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster.
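The odd-number rule follows from quorum arithmetic: a majority is floor(n/2) + 1 members, so adding a fourth node raises the quorum without tolerating any extra failure. A quick illustration:

```shell
# etcd needs a quorum (majority) of members to elect a leader.
# quorum = floor(n/2) + 1; failures tolerated = n - quorum.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum failures_tolerated=$tolerated"
done
```

Note that 3 and 4 members both tolerate only one failure, which is why even member counts add cost but no fault tolerance.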
: These nodes must meet the general installation requirements for OS, container runtime, hardware, and networking.
: {| class="wikitable" style="text-align: center;"
! Deployment Size || Clusters || Nodes || vCPUs || RAM
|-
! Small
| Up to 150 || Up to 1500 || 2 || 8 GB
|-
! Medium
| Up to 300 || Up to 3000 || 4 || 16 GB
|-
! Large
| Up to 500 || Up to 5000 || 8 || 32 GB
|-
! X-Large
| Up to 1000 || Up to 10,000 || 16 || 64 GB
|-
! XX-Large
| Up to 2000 || Up to 20,000 || 32 || 128 GB
|}
: Contact Rancher for more than 2000 clusters and/or 20,000 nodes.
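The sizing table can be read as a simple lookup; a sketch keyed on downstream cluster count only (the 400 is a made-up planning figure — in practice check the node column as well):

```shell
# Hypothetical helper: pick a deployment size from the table above
# by downstream cluster count.
clusters=400
if   [ "$clusters" -le 150 ];  then size=Small;    vcpus=2;  ram="8 GB"
elif [ "$clusters" -le 300 ];  then size=Medium;   vcpus=4;  ram="16 GB"
elif [ "$clusters" -le 500 ];  then size=Large;    vcpus=8;  ram="32 GB"
elif [ "$clusters" -le 1000 ]; then size=X-Large;  vcpus=16; ram="64 GB"
elif [ "$clusters" -le 2000 ]; then size=XX-Large; vcpus=32; ram="128 GB"
else size="contact Rancher"; vcpus=0; ram="n/a"
fi
echo "$clusters clusters -> $size ($vcpus vCPUs, $ram RAM)"
```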
* '''A load balancer''' to direct front-end traffic to the three nodes.
: The RKE tool will deploy an NGINX Ingress controller.
: This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
:* A layer-4 load balancer
:: Install NGINX; the '''stream''' module is required.
:: <code>/etc/nginx/nginx.conf</code>
 worker_processes 4;
 worker_rlimit_nofile 40000;
 
 events {
     worker_connections 8192;
 }
 
 stream {
     upstream rancher_servers_http {
         least_conn;
         server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
         server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
         server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
     }
     server {
         listen 80;
         proxy_pass rancher_servers_http;
     }
 
     upstream rancher_servers_https {
         least_conn;
         server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
         server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
         server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
     }
     server {
         listen 443;
         proxy_pass rancher_servers_https;
     }
 }
 
 docker run -d --restart=unless-stopped \
   -p 80:80 -p 443:443 \
   -v /etc/nginx.conf:/etc/nginx/nginx.conf \
   nginx:1.14
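Once the container is running, a quick smoke test confirms the balancer forwards TCP traffic (a sketch with placeholders — <code>&lt;LB_IP&gt;</code> and the hostname must be substituted, and the <code>/ping</code> health path is an assumption about the Rancher deployment behind it):

```shell
# Check that the layer-4 balancer answers on port 80 (placeholder IP).
curl -s -o /dev/null -w "%{http_code}\n" http://<LB_IP>/

# After Rancher is installed behind the balancer, its health endpoint
# should respond over HTTPS (-k skips verification of the self-signed cert):
curl -k https://rancher.my.org/ping
```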
:* A layer-7 load balancer
* '''A DNS record''' to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
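Before installing Rancher, it is worth confirming the record resolves to the load balancer (a sketch; <code>rancher.my.org</code> is the example hostname used later on this page):

```shell
# Resolve the Rancher server URL; the answer should be the load balancer's IP.
dig +short rancher.my.org

# Alternative if dig is not available:
nslookup rancher.my.org
```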
== RancherD ==
{| class="wikitable" style="text-align: center;"
! Deployment Size || Clusters || Nodes || vCPUs || RAM
|-
! Small
| Up to 5 || Up to 50 || 2 || 5 GB
|-
! Medium
| Up to 15 || Up to 200 || 3 || 9 GB
|}

== Worker ==
=== Linux ===
* Requirements
** 1 CPU core
** 1 GB memory
* Install the Required CLI Tools
: kubectl - the Kubernetes command-line tool.
: helm - package management for Kubernetes.
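A quick way to confirm both tools are installed and on the PATH before proceeding:

```shell
# Print client versions; a "command not found" error means the tool
# still needs to be installed.
kubectl version --client
helm version
```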
* Add the Helm Chart Repository
 helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
* Create a Namespace for Rancher
 kubectl create namespace cattle-system
* Choose your SSL Configuration
{| class="wikitable"
! Configuration || Helm Chart Option || Requires cert-manager
|-
| Rancher Generated Certificates (Default) || ingress.tls.source=rancher || align="center" | yes
|-
| Let's Encrypt || ingress.tls.source=letsEncrypt || align="center" | yes
|-
| Certificates from Files || ingress.tls.source=secret || align="center" | no
|}
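For the "Certificates from Files" option, the certificate must be loaded into a Kubernetes secret before installing Rancher. A sketch (the secret name <code>tls-rancher-ingress</code> follows Rancher's documented convention; the file paths are placeholders):

```shell
# Load your certificate and key into the secret Rancher's ingress uses
# (tls.crt / tls.key are placeholder paths to your own files).
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key

# Then point the chart at it during install:
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=secret
```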
* Install cert-manager (if required)
 # Install the CustomResourceDefinition resources separately
 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
 
 # **Important:**
 # If you are running Kubernetes v1.15 or below, you
 # will need to add the `--validate=false` flag to your
 # kubectl apply command, or else you will receive a
 # validation error relating to the
 # x-kubernetes-preserve-unknown-fields field in
 # cert-manager's CustomResourceDefinition resources.
 # This is a benign error and occurs due to the way kubectl
 # performs resource validation.
 
 # Create the namespace for cert-manager
 kubectl create namespace cert-manager
 
 # Add the Jetstack Helm repository
 helm repo add jetstack https://charts.jetstack.io
 
 # Update your local Helm chart repository cache
 helm repo update
 
 # Install the cert-manager Helm chart
 helm install \
   cert-manager jetstack/cert-manager \
   --namespace cert-manager \
   --version v1.0.4
 
 kubectl get pods --namespace cert-manager
 NAME                                       READY   STATUS    RESTARTS   AGE
 cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
 cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
 cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
* Install Rancher with Helm and Your Chosen Certificate Option
: Rancher Generated Certificates (Default)
 helm install rancher rancher-stable/rancher \
   --namespace cattle-system \
   --set hostname=rancher.my.org
 
 kubectl -n cattle-system rollout status deploy/rancher
 Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
 deployment "rancher" successfully rolled out
:* HTTP Proxy
:* Private Docker Image Registry
:* TLS Termination on an External Load Balancer
* Verify that the Rancher Server is Successfully Deployed
 kubectl -n cattle-system rollout status deploy/rancher
 Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
 deployment "rancher" successfully rolled out
 
 kubectl -n cattle-system get deploy rancher
 NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 rancher   3         3         3            3           3m
* Save Your Options
: Make sure you save the <code>--set</code> options you used.
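If the original flags are ever lost, they can be recovered from the installed release itself:

```shell
# Print the user-supplied values of the rancher release, i.e. the
# --set options that were passed at install time.
helm get values rancher -n cattle-system
```

These values are needed again for every <code>helm upgrade</code> of Rancher.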
* Finishing Up
* Optional Next Steps
: Enable the Enterprise Cluster Manager.

=== Windows ===
* Docker Engine - Enterprise Edition (EE)
* Requirements
** 2 CPU cores
** 5 GB memory
** 50 GB disk space
* Rancher only supports Windows using Flannel as the network provider.
: Two network options: Host Gateway (L2bridge) and VXLAN (Overlay, the default)
:* Host Gateway (L2bridge):
:: It is best to use the same Layer 2 network for all nodes; otherwise, you need to configure routing rules for them.
:: You may need to disable private IP address checks.
:* VXLAN (Overlay):
:: The KB4489899 hotfix must be installed.
* Minimum Architecture Requirements
{| class="wikitable"
! Operating System || Kubernetes Cluster Role || Purpose
|-
| Linux || Control plane, etcd, worker || Manage the Kubernetes cluster
|-
| Linux || Worker || Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster
|-
| Windows Server core 1809 or above || Worker || Run your Windows containers
|}
__NOTOC__
[[Category:Kubernetes]]
[[Category:Rancher]]
''Latest revision as of 16:20, 28 January 2021''