Rancher Requirements
RKE
Requirements for installing Rancher on a high-availability RKE cluster:
- Three Linux nodes, typically virtual machines, in an infrastructure provider such as Amazon’s EC2, Google Compute Engine, or vSphere.
- These nodes must be in the same region/data center. You may place these servers in separate availability zones.
- Rancher server data is stored in an etcd database that runs on all three nodes.
- etcd is a distributed, reliable key-value store for the most critical data of a distributed system, with a focus on being simple, secure, fast, and reliable.
- The etcd database requires an odd number of nodes so that a majority of the cluster is always available to elect a leader; the quorum sketch after the sizing table below illustrates this.
- The nodes must also meet the general installation requirements for OS, container runtime, hardware, and networking.
Deployment Size   Clusters      Nodes          vCPUs   RAM
Small             Up to 150     Up to 1500     2       8 GB
Medium            Up to 300     Up to 3000     4       16 GB
Large             Up to 500     Up to 5000     8       32 GB
X-Large           Up to 1000    Up to 10,000   16      64 GB
XX-Large          Up to 2000    Up to 20,000   32      128 GB
- Contact Rancher for more than 2000 clusters and/or 20,000 nodes.
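As a minimal illustration of the quorum arithmetic behind the odd-node rule above, the Python sketch below (illustrative only, no etcd dependency) computes the majority size and fault tolerance for a given member count; it shows why three nodes tolerate one failure while a fourth node adds no extra tolerance.

# Sketch: etcd quorum arithmetic (illustrative, not part of any Rancher tooling).
def quorum(members: int) -> int:
    # Smallest majority of a cluster with `members` voting nodes.
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    # Number of nodes that can fail while a leader can still be elected.
    return members - quorum(members)

for n in (1, 2, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
# 3 nodes: quorum=2, tolerates 1 failure(s)
# 4 nodes: quorum=3, tolerates 1 failure(s)  <- an even member count adds nothing extra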
- A load balancer to direct front-end traffic to the three nodes.
  - The RKE tool will deploy an NGINX Ingress controller; a minimal cluster.yml sketch of this setup appears after this list.
  - This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
  - The load balancer can be either:
    - A layer-4 load balancer
    - A layer-7 load balancer
- A DNS record to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
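To make these requirements concrete, here is a minimal sketch that writes an RKE cluster.yml for the three-node layout described above. It is only a sketch under assumptions: the IP addresses, the ubuntu SSH user, and the file name are placeholders rather than values from this page, and a real configuration should follow the RKE documentation for your environment.

# Sketch: generate a minimal RKE cluster.yml for a three-node HA cluster.
# All addresses and the SSH user are placeholders (assumptions for illustration).
import yaml  # PyYAML

NODE_ADDRESSES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # placeholder node IPs

cluster = {
    "nodes": [
        {
            "address": addr,
            "user": "ubuntu",  # placeholder SSH user
            # Every node carries all three roles, giving etcd its three members
            # and putting the NGINX Ingress controller on each worker.
            "role": ["controlplane", "etcd", "worker"],
        }
        for addr in NODE_ADDRESSES
    ],
    # RKE deploys the NGINX Ingress controller; it listens on ports 80 and 443
    # of the worker nodes, behind the external load balancer.
    "ingress": {"provider": "nginx"},
}

with open("cluster.yml", "w") as f:
    yaml.safe_dump(cluster, f, sort_keys=False)

With a file like this, rke up brings up the three nodes; the load balancer then forwards ports 80 and 443 to the worker nodes, and the DNS record for the Rancher server URL points at that load balancer.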