Rancher Requirements
Revision as of 22:06, 27 January 2021 (Wed)
RKE
high-availability RKE cluster
- Three Linux nodes, typically virtual machines, in an infrastructure provider such as Amazon’s EC2, Google Compute Engine, or vSphere.
- These nodes must be in the same region/data center. You may place these servers in separate availability zones.
- Rancher server data is stored in an etcd database that runs on all three nodes.
- etcd is a distributed, reliable key-value store for the most critical data of a distributed system, with a focus on being simple, secure, fast, and reliable.
- The etcd database requires an odd number of nodes so that a leader can always be elected by a majority of the cluster.
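The odd-number rule follows from quorum arithmetic: a cluster of n members needs floor(n/2) + 1 votes to elect a leader. A quick sketch of the math:

```shell
# Quorum for an etcd cluster of n members is floor(n/2) + 1 votes.
# Note that 4 nodes tolerate no more failures than 3: quorum rises
# from 2 to 3, so fault tolerance stays at 1 either way.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

This is why three nodes is the minimum highly available configuration: it is the smallest cluster that survives the loss of one member.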
- Each node must meet the general installation requirements for OS, container runtime, hardware, and networking.
  Deployment Size | Clusters    | Nodes        | vCPUs | RAM
  Small           | Up to 150   | Up to 1,500  | 2     | 8 GB
  Medium          | Up to 300   | Up to 3,000  | 4     | 16 GB
  Large           | Up to 500   | Up to 5,000  | 8     | 32 GB
  X-Large         | Up to 1,000 | Up to 10,000 | 16    | 64 GB
  XX-Large        | Up to 2,000 | Up to 20,000 | 32    | 128 GB
- Contact Rancher for more than 2000 clusters and/or 20,000 nodes.
- A load balancer to direct front-end traffic to the three nodes.
- The RKE tool will deploy an NGINX Ingress controller.
- This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
- A layer-4 load balancer
- Install NGINX; the stream module is required.
- nginx.conf:

  worker_processes 4;
  worker_rlimit_nofile 40000;

  events {
      worker_connections 8192;
  }

  stream {
      upstream rancher_servers_http {
          least_conn;
          server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
          server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
          server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
      }
      server {
          listen 80;
          proxy_pass rancher_servers_http;
      }

      upstream rancher_servers_https {
          least_conn;
          server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
          server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
          server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
      }
      server {
          listen 443;
          proxy_pass rancher_servers_https;
      }
  }
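Before loading this configuration, it is worth confirming that the installed nginx actually includes the stream module, since builds without it will reject the stream block. A minimal check, assuming nginx is installed and on PATH:

```shell
# Check the compile-time options of the installed nginx for stream
# support, then validate the configuration file syntax. Skips
# gracefully when nginx is not installed.
if command -v nginx >/dev/null 2>&1; then
  if nginx -V 2>&1 | grep -q -- 'with-stream'; then
    echo "stream module present"
  else
    echo "stream module missing; reinstall nginx with stream support" >&2
  fi
  nginx -t    # parse the config and report syntax errors without reloading
else
  echo "nginx not found on PATH" >&2
fi
```

On distributions that build stream as a dynamic module, the module package must also be installed and loaded via load_module.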
- A layer-7 load balancer
- A DNS record to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
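Once the record is in place, resolution and reachability can be sanity-checked from any downstream node. A sketch, using rancher.example.com as a stand-in for the real hostname:

```shell
# Placeholder hostname; substitute the real Rancher server URL.
RANCHER_HOST=rancher.example.com

# Resolve the hostname: the answer should be the load balancer address.
dig +short "$RANCHER_HOST" || echo "lookup failed" >&2

# Probe Rancher's health endpoint through the load balancer; a healthy
# server replies "pong".
curl -fsSk "https://$RANCHER_HOST/ping" || echo "not reachable yet" >&2
```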