
How do I configure a load balancer in front of MATLAB Production Server?

11 views (last 30 days)
How do I configure a load balancer (for example, nginx) to distribute requests evenly to multiple MATLAB Production Server instances?

Accepted Answer

Nicole Bonfatti on 30 Jun 2022
Edited: Nicole Bonfatti on 14 Jul 2022
For most uses, this is very straightforward. Put the MATLAB Production Server instance URLs into an upstream block in your nginx config, and then reference that upstream block as the proxy_pass argument. Nginx will then distribute requests in round-robin fashion. Similar capabilities exist in other load balancers as well.
upstream prodserverpool {
    # Pool of MATLAB Production Server instances; nginx distributes
    # requests across them round-robin by default.
    server prodserver:9910;
    server prodserver2:9910;
}

server {
    listen 9910;
    listen [::]:9910;
    server_name myserver;

    location / {
        # Forward every request to the pool defined above.
        proxy_pass http://prodserverpool;
    }
}
However, if you want to support asynchronous requests, there's a bit more to consider. You will need to ensure the backend that each client communicates with stays the same across requests. This is called "session stickiness", and there are several ways to implement this in nginx.
  1. Using the IP hash. This is easy to implement, but if many of your clients appear to come from the same IP address (for example, because they are behind a NAT), it could lead to one backend instance being overloaded. To implement this, just add ip_hash; to your upstream block.
  2. Using a session cookie (requires the commercial version of nginx). This requires your client to store the session cookie and include it with every subsequent request.
  3. Using the generic hash directive with a URL parameter. Since asynchronous requests already include the client URL parameter, I recommend using that.
upstream prodserverpool {
    # 1. Using the IP Hash.
    #ip_hash;

    # 2. Using a session cookie
    #sticky cookie srv_id expires=1h domain=.example.com path=/;

    # 3. Using the generic hash object with the client URL parameter
    hash $arg_client;

    server prodserver:9910;
    server prodserver2:9910;
}

server {
    listen 9910;
    listen [::]:9910;
    server_name myserver;

    location / {
        proxy_pass http://prodserverpool;
    }
}
Other load balancers may implement this slightly differently, but the concept is the same.
Next, for either the session cookie or the URL parameter hash method, you'll need to ensure your client code is written to support it: retain the session cookie returned from the first request (cookie method), or keep sending a consistent client URL parameter with every request to the load balancer (hash method).
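As an illustration, here is a minimal Python sketch of a client that works with either approach: a requests.Session keeps any session cookie the load balancer sets, and a fixed client URL parameter keeps the generic hash method routing this client to the same backend. The archive and function names are hypothetical, and the mode=async parameter and JSON body format are my reading of the MATLAB Production Server RESTful API, so verify them against the API documentation for your release.
import uuid

import requests

# Address of the load balancer from the nginx config above, not an individual
# MATLAB Production Server instance.
BALANCER = "http://myserver:9910"

# A Session retains any cookie set by the load balancer, which is what the
# sticky-cookie method relies on.
session = requests.Session()

# For the hash method, pick one client id and reuse it on every request so
# that hash $arg_client; always maps this client to the same backend.
client_id = uuid.uuid4().hex

def call_async(path, payload):
    # The mode=async and client query parameters and the JSON body format are
    # assumptions based on the MATLAB Production Server RESTful API.
    response = session.post(
        f"{BALANCER}{path}",
        params={"mode": "async", "client": client_id},
        json=payload,
    )
    response.raise_for_status()
    return response.json()

# Hypothetical archive and function names, for illustration only.
print(call_async("/myArchive/myFunction", {"nargout": 1, "rhs": [42]}))
The important part is simply that both the Session's cookie jar and client_id live for the lifetime of the client, so every request it makes lands on the same MATLAB Production Server instance.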
For more information on using nginx as a load balancer, see the nginx load balancer documentation.
If you are using Amazon AWS Elastic Load Balancer, you need to use a cookie to set affinity. See the ELB sticky sessions documentation.
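If you manage your AWS resources programmatically, the following boto3 sketch shows how that cookie-based stickiness can be enabled on an Application Load Balancer target group that fronts the MATLAB Production Server instances; the region, target group ARN, and duration are placeholders, and the same setting is available in the AWS console as described in the documentation above.
import boto3

# Placeholder region and target group ARN; substitute your own values.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_target_group_attributes(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/prodserverpool/0123456789abcdef"
    ),
    Attributes=[
        # Let the load balancer generate and manage the affinity cookie.
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        # Keep routing a given client to the same backend for one hour.
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)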
If you are using Azure Load Balancer, it supports source IP affinity. See the Azure Load Balancer distribution mode documentation.
If you are using our Kubernetes Reference Architecture, it uses nginx for ingress, but (as of the time this was written) only supports cookie-based affinity. See the nginx ingress sticky sessions documentation.

Additional Answers (0)

Categories

Find more on Installation in Help Center and File Exchange

Release

R2022a
