Commit b4d54c3d authored by Ge Jun

translate docs on AutoConcurrencyLimiter in server.md to English

parent ce16cdf2
......@@ -55,7 +55,7 @@ You can use it to:
* [Debug server issues](docs/cn/server_debugging.md)
* [Server push](docs/en/server_push.md)
* [Avalanche](docs/cn/avalanche.md)
* [AutoConcurrencyLimiter](docs/cn/auto_concurrency_limiter.md)
* [Auto ConcurrencyLimiter](docs/cn/auto_concurrency_limiter.md)
* [Media Server](https://github.com/brpc/media-server)
* [json2pb](docs/cn/json2pb.md)
* [Builtin Services](docs/en/builtin_service.md)
......
......@@ -22,7 +22,7 @@
Currently only method-level adaptive concurrency limiting is supported. To enable adaptive limiting for a method, just set its max concurrency to "auto".
```c++
// Set auto concurrency limiter for all method
// Set auto concurrency limiter for all methods
brpc::ServerOptions options;
options.method_max_concurrency = "auto";
......
......@@ -579,9 +579,9 @@ QPS is a per-second metric and cannot control instantaneous bursts of traffic well, whereas
### Calculate max concurrency
max concurrency = peak QPS * average latency ([little's law](https://en.wikipedia.org/wiki/Little%27s_law))
max concurrency = peak QPS * no-load latency ([little's law](https://en.wikipedia.org/wiki/Little%27s_law))
Peak QPS and average latency refer to the maximum QPS the server can reach, and its average latency at that point, provided that requests are not severely backlogged (latencies are still acceptable). Services are usually stress-tested before going online; multiplying the measured QPS and latency generally gives the max concurrency of the service.
Peak QPS refers to the maximum QPS the server can reach; no-load latency refers to the server's average latency when requests are not severely backlogged. Services are usually stress-tested before going online; multiplying the measured QPS and latency generally gives the max concurrency of the service.
### Limit server-level concurrency
......@@ -609,21 +609,19 @@ server.MaxConcurrencyOf("example.EchoService.Echo") = "10"; // You can also ass
Note: there is no service-level max_concurrency.
### Use the adaptive concurrency limiting algorithm
In real production environments, the max concurrency is not necessarily fixed. In that case the adaptive concurrency limiting algorithm can be used. Adaptive limiting works at the method level. To use the algorithm, just set the method's max concurrency to "auto":
In real production environments, the max concurrency is not necessarily fixed, and stress-testing and setting the max concurrency of each service before every deployment is also tedious. In that case the adaptive concurrency limiting algorithm can be used.
Adaptive limiting works at the method level. To use the algorithm, just set the method's max concurrency to "auto":
```c++
// Set auto concurrency limiter for all method
// Set auto concurrency limiter for all methods
brpc::ServerOptions options;
options.method_max_concurrency = "auto";
// Set auto concurrency limiter for specific method
server.MaxConcurrencyOf("example.EchoService.Echo") = "auto";
```
Prerequisites for the adaptive limiting algorithm to work properly (a hedged client-side sketch follows this list):
1. Retries are enabled on the client
2. The server has multiple nodes, so that when one node reports overload the client can retry against other nodes
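As an illustration of these prerequisites only, here is a minimal client-side sketch; the server addresses, load balancer choice, and retry count below are assumptions, not part of this doc. With retries enabled and several nodes behind one channel, a request rejected by an overloaded node can be retried on another one.
```c++
#include <brpc/channel.h>

// A hedged sketch of the client-side prerequisites:
brpc::ChannelOptions copts;
copts.max_retry = 3;  // prerequisite 1: retries are enabled on the client
brpc::Channel channel;
// prerequisite 2: more than one server node behind the channel, so a retry
// can land on a different node when one node reports overload
channel.Init("list://10.0.0.1:8000,10.0.0.2:8000", "rr", &copts);
```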
For more details on adaptive concurrency limiting, see [here](https://github.com/brpc/brpc/blob/master/docs/cn/auto_concurrency_limiter.md)
For more details on adaptive concurrency limiting, see [here](auto_concurrency_limiter.md)
## pthread mode
......
......@@ -580,9 +580,11 @@ In addition, when a server has stable latencies, limiting concurrency has simila
### Calculate max concurrency
MaxConcurrency = PeakQPS * AverageLatency ([little's law](https://en.wikipedia.org/wiki/Little%27s_law))
max_concurrency = peak_qps * noload_latency ([little's law](https://en.wikipedia.org/wiki/Little%27s_law))
PeakQPS and AverageLatency are the queries-per-second and latency measured on a server pushed to its limit, provided that requests are not delayed severely (latencies are still acceptable). Most services run performance tests before going online; multiplying the two metrics gives the max concurrency of the service.
peak_qps is the maximum QPS that the server can reach.
noload_latency is the average latency measured on a server that is not pushed to its limit (latencies are still acceptable).
peak_qps and noload_latency can be measured in pre-online performance tests; multiplying them gives the max_concurrency.
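As a purely illustrative example (the numbers below are made up, not measurements from any real service): if a stress test yields peak_qps = 5000 and noload_latency = 0.04s, Little's law gives max_concurrency = 5000 * 0.04 = 200, which could then be assigned to the method:
```c++
// A minimal sketch with made-up stress-test results:
//   peak_qps = 5000, noload_latency = 0.04s
//   max_concurrency = peak_qps * noload_latency = 5000 * 0.04 = 200
server.MaxConcurrencyOf("example.EchoService.Echo") = 200;
```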
### Limit server-level concurrency
......@@ -606,6 +608,20 @@ When method-level and server-level max_concurrency are both set, framework check
NOTE: No service-level max_concurrency.
### AutoConcurrencyLimiter
max_concurrency may change over time, and measuring and setting max_concurrency for every service before each deployment is troublesome and often impractical.
AutoConcurrencyLimiter addresses this issue by limiting concurrency for methods adaptively. To use the algorithm, set the max_concurrency of the method to "auto".
```c++
// Set auto concurrency limiter for all methods
brpc::ServerOptions options;
options.method_max_concurrency = "auto";
// Set auto concurrency limiter for specific method
server.MaxConcurrencyOf("example.EchoService.Echo") = "auto";
```
Read [this](../cn/auto_concurrency_limiter.md) to learn more about the algorithm.
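When the limiter kicks in, the server rejects the extra requests rather than queuing them, and brpc reports this to the caller with the ELIMIT error code. Below is a hedged sketch of client-side handling; the stub, request and response come from the echo example and are assumptions here.
```c++
// Hedged sketch: a request rejected by the (auto) concurrency limiter fails
// with brpc::ELIMIT on the client side; the channel's retry policy (if any)
// may send it to another node.
brpc::Controller cntl;
example::EchoResponse response;
stub.Echo(&cntl, &request, &response, NULL);
if (cntl.Failed() && cntl.ErrorCode() == brpc::ELIMIT) {
    // Overloaded: back off or rely on retries to reach a less loaded node.
}
```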
## pthread mode
User code (client-side done, server-side CallMethod) runs in bthreads with a 1MB stack size by default. But some code cannot run in bthreads, namely:
......