Commit 14600ccb authored by gejun

Reviewed vars.md

parent c38679c2
[English version](../en/vars.md)
[bvar](https://github.com/brpc/brpc/tree/master/src/bvar/) is a library of counters for multi-threaded programs, convenient for recording and viewing all kinds of metrics in user code. It uses thread-local storage to reduce cache bouncing, adds almost no overhead to the program compared with UbMonitor (a legacy counting library inside Baidu), and is faster than heavily contended atomic operations. brpc has bvar built in: [/vars](http://brpc.baidu.com:8765/vars) lists all exposed bvars and [/vars/VARNAME](http://brpc.baidu.com:8765/vars/rpc_socket_count) shows a single bvar; see [bvar](bvar.md) for how to add counters. brpc uses bvar extensively to provide statistics. When you need to count and display values in a multi-threaded environment, bvar should be the first thing that comes to mind. However, bvar cannot replace every counter: it essentially moves the contention from writes to reads, which have to combine the data written by all threads and are therefore inevitably slower. When both reads and writes are frequent, or you have to make decisions based on the latest value, you should not use bvar.
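For a concrete picture of how such a counter is defined and exposed, here is a minimal sketch assuming the usual `bvar::Adder` interface described in [bvar](bvar.md); the counter name below is made up for illustration:

```c++
#include <bvar/bvar.h>

// Writes go to thread-local storage and are nearly free; a read combines the
// per-thread data. The name passed to the constructor exposes the counter, so
// it becomes visible at /vars/my_request_count.
bvar::Adder<int> g_request_count("my_request_count");

void on_request() {
    g_request_count << 1;  // count one request, callable from any thread
}
```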
## Query methods
[/vars](http://brpc.baidu.com:8765/vars) : list all exposed bvars
[/vars/NAME](http://brpc.baidu.com:8765/vars/rpc_socket_count) : show the bvar named NAME
[/vars/NAME1,NAME2,NAME3](http://brpc.baidu.com:8765/vars/pid;process_cpu_usage;rpc_controller_count) : show the bvars named NAME1, NAME2 or NAME3
[/vars/foo*,b$r](http://brpc.baidu.com:8765/vars/rpc_server*_count;iobuf_blo$k_*) : show the bvars whose names match a wildcard pattern. Note that $ is used instead of ? to match a single character, because ? is a reserved character in URLs.
The following animation demonstrates the filtering feature. You can copy and paste a URL containing a filter expression to others; when they open it they will see the same set of counters. (The values may change as the program runs.)
![img](../images/vars_1.gif)
There is a search box in the upper-left corner of /vars that speeds up locating specific bvars. Type part of a bvar name and the framework appends * to do a fuzzy match. Separate different names with commas, semicolons or spaces.
![img](../images/vars_2.gif)
@@ -46,34 +48,40 @@ bthread_worker_usage : 1.01056
## Calculate and view percentiles
The x% percentile is the value ranked at position N * x% after sorting the N values collected within a time window. For example, given 1000 values in a window sorted in ascending order, the value at position 500 (1000 * 50%) is the 50% percentile (the median), the value at position 990 (1000 * 99%) is the 99% percentile, and the value at position 999 is the 99.9% percentile. Percentiles describe the distribution of values more accurately than averages and matter a lot for understanding system behavior. Industrial-grade services usually require an SLA of 99.97% or higher (Baidu's requirement for second-tier systems; first-tier systems require at least 99.99%). Even when the average looks fine, a bad long tail can noticeably drag down or break the SLA. Percentiles help analyze the long-tail area.
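The definition above translates directly into a few lines of code. This is a simplified sketch of the "sort and index" definition, not necessarily how bvar computes percentiles internally:

```c++
#include <algorithm>
#include <cstdint>
#include <vector>

// Returns the x-percentile (0 < x < 1) of the samples collected in a window:
// sort them and take the value at position N * x.
int64_t percentile(std::vector<int64_t> samples, double x) {
    if (samples.empty()) {
        return 0;
    }
    std::sort(samples.begin(), samples.end());
    size_t rank = static_cast<size_t>(samples.size() * x);
    if (rank >= samples.size()) {
        rank = samples.size() - 1;  // clamp when x is very close to 1
    }
    return samples[rank];
}
// percentile(latencies, 0.5)   -> median
// percentile(latencies, 0.999) -> 99.9% percentile
```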
Percentiles can be plotted as a CDF curve or as curves over time.
**The figure below is the CDF of percentiles.** The X axis is the ratio (ranked position / total number) and the Y axis is the corresponding percentile. For example, the Y value at X=50% is the 50% percentile. If the required performance metric is "99.9% of requests finish within xx milliseconds", look at the value at 99.9%.
![img](../images/vars_4.png)
Why is it called a [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function)? When a Y value y is chosen, the corresponding X value means "the proportion of values <= y". Since the values generally come from random sampling, the X value can also be read as "the probability that a value is <= y", i.e. P(value <= y), which is exactly the definition of a CDF.
The derivative of the CDF is the [probability density function](https://en.wikipedia.org/wiki/Probability_density_function). If we split the Y axis of the CDF into many small segments, compute the difference between the X values at the two ends of each segment, and use that difference as a new X axis, we obtain the PDF curve, which looks like a normal distribution rotated 90 degrees clockwise. But the density around the median is usually very high and stands out in a PDF, which flattens the long tail and makes it hard to read, so most system measurements are presented as CDF curves rather than PDF curves.
A few simple rules for judging a CDF curve:
- The flatter the better. A horizontal line would be ideal, meaning all values are equal with no waiting, congestion or pauses. That is of course impossible.
- The smaller the area between 99% and 100%, the better. Beyond 99% is where the long tail gathers, which has a significant impact on the SLA of most systems.
A CDF that rises slowly and has a small long-tail area is a decent curve.
**The figure below shows percentiles changing over time.** It contains four curves; the X axis is time, and from top to bottom the curves correspond to the 99.9%, 99%, 90% and 50% percentiles, drawn in lighter and lighter colors (from orange to ochre).
![img](../images/vars_5.png)
Hover the mouse over the curves to read the value at a data point; the figure above reads "the 99% percentile 39 seconds ago was 330 **microseconds**". The figure does not include the 99.99% curve, because the 99.99% percentile is usually much larger than the lower percentiles and would squash the other curves and make them hard to read. You can click the bvar ending with "\_latency\_9999" to view the 99.99% curve separately. Curves over time reveal the trend of percentiles, which is very useful for analyzing performance changes of a system.
Services in brpc record latency distributions automatically; users do not need to add anything themselves. As shown below:
![img](../images/vars_6.png)
You can use bvar::LatencyRecorder to record the latency of any code, like this (see [bvar-c++](bvar_c++.md) for more detailed usage):
```c++
#include <bvar/bvar.h>
...
bvar::LatencyRecorder g_latency_recorder("client"); // expose this recorder
...
@@ -90,4 +98,4 @@ void foo() {
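// NOTE: the remainder of this block is collapsed in the diff above. Below is a
// minimal, hypothetical sketch of the usual recording pattern, not the author's
// exact code; `my_latency_us` is a made-up variable measured by the caller, in
// microseconds.
void foo() {
    // ... application logic that measures my_latency_us ...
    g_latency_recorder << my_latency_us;  // feed one latency sample
}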
## Non-brpc server
If your program is only a brpc client or does not use brpc at all, and you still want to see the dynamic curves, read [this page](dummy_server.md).
[中文版](../cn/vars.md)
[bvar](https://github.com/brpc/brpc/tree/master/src/bvar/) is a set of counters to record and view miscellaneous statistics conveniently in multi-threaded applications. The implementation reduces cache bouncing by storing data in thread-local storage (TLS), making it much faster than UbMonitor (a legacy counting library inside Baidu) and even than atomic operations in highly contended scenarios. brpc integrates bvar by default: all exposed bvars in a server are accessible through [/vars](http://brpc.baidu.com:8765/vars), and a single bvar is addressable by [/vars/VARNAME](http://brpc.baidu.com:8765/vars/rpc_socket_count). Read [bvar](bvar.md) to learn how to add bvars to your program. brpc uses bvar extensively to expose internal status. If you are looking for a utility to collect and display metrics of your application, consider bvar in the first place. However, bvar cannot replace all counters: essentially it moves the contention from writes to reads, which need to combine the data written by all threads and become much slower than an ordinary read. If reads and writes on the counter are both frequent, or decisions need to be made based on the latest values, you should not use bvar.
## Query methods
[/vars](http://brpc.baidu.com:8765/vars) : List all exposed bvars
[/vars/NAME](http://brpc.baidu.com:8765/vars/rpc_socket_count) : List the bvar whose name is `NAME`
[/vars/NAME1,NAME2,NAME3](http://brpc.baidu.com:8765/vars/pid;process_cpu_usage;rpc_controller_count) : List bvars whose names are `NAME1`, `NAME2` or `NAME3`
[/vars/foo*,b$r](http://brpc.baidu.com:8765/vars/rpc_server*_count;iobuf_blo$k_*) : List bvars whose names match the given wildcard patterns. Note that `$` matches a single character instead of `?`, which is a reserved character in URLs.
The following animation shows how to find bvars with wildcard patterns. You can copy and paste the URL to others, who will see the same bvars that you see. (The values may change over time.)
![img](../images/vars_1.gif)
There's a search box in the upper-left corner of the /vars page, in which you can type parts of names to locate bvars. Different names are separated by `,`, `;` or spaces.
![img](../images/vars_2.gif)
/vars is accessible from the terminal as well:
```shell
$ curl brpc.baidu.com:8765/vars/bthread*
@@ -38,42 +40,48 @@ bthread_num_workers : 24
bthread_worker_usage : 1.01056
```
## View historical trends
Clicking on most of the numerical bvars shows their historical trends. Each clickable bvar records values in the recent *60 seconds, 60 minutes, 24 hours and 30 days*, which are *174* numbers in total. 1000 clickable bvars take roughly 1MB of memory.
![img](../images/vars_3.gif)
## Calculate and view percentiles
x-ile (short for x-th percentile) is the value ranked at the N * x%-th position among a group of N ordered values. E.g. given 1000 values inside a time window, sort them in ascending order first: the 500th value (1000 * 50%) in the ordered list is the 50-ile (a.k.a. the median), the 990th (1000 * 99%) value is the 99-ile, and the 999th value is the 99.9-ile. Percentiles give more information about the latency distribution than average latencies and are helpful for understanding the behavior of a system. Industrial-grade services often require the SLA to be no less than 99.97% (the requirement for second-tier services inside Baidu; >= 99.99% for first-tier services). Even if a system has good average latencies, a bad long-tail area may significantly lower or break the SLA. Percentiles do help analyze the long-tail area.
Percentiles can be plotted as a CDF curve or as percentiles-over-time curves.
**The following diagram plots percentiles as a CDF**, where the X-axis is the ratio (ranked position / total number) and the Y-axis is the corresponding percentile. E.g. the Y-axis value corresponding to X=50% is the 50-ile. If a system requires that "99.9% of requests need to be processed within xx milliseconds", you should check the value at 99.9%.
![img](../images/vars_4.png)
Why do we call it a [CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function)? When a Y value y is chosen, the corresponding X means "the percentage of values <= y". Since the values are sampled randomly, the X can be viewed as "the probability of a value <= y", or P(value <= y), which is just the definition of a CDF.
The derivative of the CDF is the [PDF](https://en.wikipedia.org/wiki/Probability_density_function). If we divide the Y-axis of the CDF into many small segments, calculate the difference between the X-axis values at both ends of each segment, and use the difference as a new X-axis, a PDF curve would be plotted, just like a normal distribution rotated 90 degrees clockwise. However, the density around the median is often much higher than elsewhere in a PDF, which makes the long-tail area very flat and hard to read. As a result, systems prefer showing distributions as CDF curves rather than PDF curves.
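Written compactly with standard notation (X stands for the sampled value, e.g. latency; these symbols are standard probability notation and do not appear in bvar itself):

```latex
% CDF: the ratio (probability) of samples whose value is at most y
F(y) = P(X \le y)

% PDF: the derivative of the CDF
f(y) = \frac{\mathrm{d}F(y)}{\mathrm{d}y}

% The p-percentile (e.g. p = 0.999) is where the CDF first reaches p
\mathrm{percentile}(p) = F^{-1}(p) = \min\{\, y : F(y) \ge p \,\}
```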
Here are two simple rules to check whether a CDF curve is good or not:
- The flatter the better. A horizontal line is an ideal CDF curve, meaning that there are no waits, congestions or pauses, which is very unlikely in practice.
- The area between 99% and 100% should be as small as possible: the right side of 99% is the long-tail area, which has a significant impact on SLA.
A CDF that ascends slowly and has a small long-tail area is good in practice.
**The following diagram plots percentiles over time** and contains four curves. The X-axis is time; from top to bottom the curves are the 99.9%, 99%, 90% and 50% percentiles respectively, plotted in lighter and lighter colors (from orange to yellow).
![img](../images/vars_5.png)
Hovering the mouse over the curves shows the values at the corresponding time. The tooltip in the above diagram means "the 99% percentile of latency 39 seconds ago was 330 **microseconds**". The diagram does not include the 99.99-ile curve, which is usually significantly higher than the others and would make them hard to read. You may click bvars ending with "\_latency\_9999" to read the 99.99-ile curve separately. This diagram shows how percentiles change over time, which is helpful for analyzing performance regressions of a system.
brpc calculates latency distributions of services automatically; users do not need to add anything manually. The metrics look like this:
![img](../images/vars_6.png)
`bvar::LatencyRecorder` is able to record latency distributions of any code, as depicted below (check out [bvar-c++](bvar_c++.md) for details):
```c++
#include <bvar/bvar.h>
...
bvar::LatencyRecorder g_latency_recorder("client"); // expose this recorder
...
@@ -84,10 +92,10 @@ void foo() {
}
```
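Besides the web UI, the aggregated values can also be read from the recorder directly in code. The sketch below assumes the accessors commonly available on `bvar::LatencyRecorder` (such as `latency()`, `latency_percentile()`, `qps()` and `count()`); check bvar's headers for the exact interface:

```c++
#include <cstdio>
#include <bvar/bvar.h>

// The recorder defined in the snippet above; redeclared here only to keep
// this sketch self-contained.
extern bvar::LatencyRecorder g_latency_recorder;

void dump_latency_stats() {
    // Average latency over a recent window, in the unit of the fed samples
    // (microseconds here).
    printf("avg latency: %lld us\n", (long long)g_latency_recorder.latency());
    // 99.9% percentile of latency in the recent window.
    printf("99.9%% latency: %lld us\n",
           (long long)g_latency_recorder.latency_percentile(0.999));
    // Queries per second and the total number of recorded samples.
    printf("qps: %lld, count: %lld\n",
           (long long)g_latency_recorder.qps(),
           (long long)g_latency_recorder.count());
}
```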
If a brpc server has already been started in the application, values like `client_latency` and `client_latency_cdf` can be viewed from `/vars` as follows. Click them to see (dynamically updated) curves:
![img](../images/vars_7.png)
## Non-brpc server
If your program only uses the brpc client or does not use brpc at all, and you still want to view the curves, check [here](../cn/dummy_server.md).