Commit 4eb69978 authored by gejun

reviewed memcache_client.md

parent c462f86a
[English version](../en/memcache_client.md)
[memcached](http://memcached.org/) is a widely used caching service. To let users access memcached more quickly and make full use of bthread's concurrency, brpc supports the memcache protocol directly. Example: [example/memcache_c++](https://github.com/brpc/brpc/tree/master/example/memcache_c++/)
**NOTE**: brpc only supports the binary protocol of memcache. memcached offered only the textual protocol before 1.3, but there is little point in supporting it today. If your memcached is older than 1.3, upgrade it.
Advantages over [libmemcached](http://libmemcached.org/libMemcached.html) (the official client):
- Thread safety. Users do not need to build a separate client for each thread.
- Support synchronous, asynchronous and semi-synchronous accesses, as well as combined accesses such as [ParallelChannel](combo_channel.md).
- Support multiple [connection types](client.md#连接方式). Support timeout, backup request, cancellation, tracing, built-in services and other benefits provided by brpc.
- Have explicit requests and responses, which libmemcached lacks: received messages cannot be matched to sent messages directly, so users have to do extra work that is not easy to get right.
The current implementation makes full use of RPC's concurrency and avoids copies as much as possible. A single client can easily push a memcached instance (version 1.4.15) on the same machine to its limit: 90,000 QPS over a single connection, 330,000 QPS over multiple connections. In most cases, the brpc client can exploit memcached's full performance.
# Accessing a single memcached
Create a `Channel` that accesses memcached:

```c++
#include <brpc/memcache.h>
#include <brpc/channel.h>
brpc::Channel channel;
brpc::ChannelOptions options;
options.protocol = brpc::PROTOCOL_MEMCACHE;
if (channel.Init("0.0.0.0:11211", &options) != 0) {  // 11211 is the default port of memcached
    LOG(FATAL) << "Fail to init channel to memcached";
    return -1;
}
...
```
Notes on the above code (a combined sketch follows the list):
- The request type must be MemcacheRequest and the response type must be MemcacheResponse, otherwise CallMethod will fail. No stub is needed; call channel.CallMethod directly with method set to NULL.
- Call request.XXX() to add an operation, where XXX=Set in this example. Multiple operations added to one request are sent to memcached together (often called pipeline mode).
- Call response.PopXXX() to pop the operation results in order, where XXX=Set in this example. It returns true on success and false on failure, in which case response.LastError() returns the error message. XXX must match the operations in the request one by one, otherwise the pop fails. In this example, calling PopGet would fail with the error message "not a GET response".
- Pop results are independent from the RPC result. Even if "a value cannot be set into memcached", the RPC may still succeed. RPC failures mean broken connections, timeouts and the like. If your business logic counts only successful operations as success, check both the RPC result and the PopXXX result.
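Putting these notes together, here is a minimal sketch of a complete SET call (`channel` is assumed to be initialized as above; the key, value and expiration are illustrative):

```c++
#include <brpc/memcache.h>
#include <brpc/channel.h>

brpc::MemcacheRequest request;
brpc::MemcacheResponse response;
brpc::Controller cntl;
// Queue one SET operation: flags=0, expire in 10s, cas=0 (ignored).
if (!request.Set("hello", "world", 0, 10, 0)) {
    LOG(ERROR) << "Fail to add SET operation";
    return -1;
}
// No stub is needed; pass NULL as the method.
channel.CallMethod(NULL, &cntl, &request, &response, NULL /*done*/);
if (cntl.Failed()) {
    // RPC-level failure: broken connection, timeout, etc.
    LOG(ERROR) << "Fail to access memcached, " << cntl.ErrorText();
    return -1;
}
if (!response.PopSet(NULL)) {
    // Operation-level failure, independent from the RPC result.
    LOG(ERROR) << "Fail to SET, " << response.LastError();
    return -1;
}
```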
Currently supported request operations:
```c++
bool Set(const Slice& key, const Slice& value, uint32_t flags, uint32_t exptime, uint64_t cas_value);
...
bool Version();
```

And the corresponding reply operations:

```c++
// Call LastError() of the response to check the error text when any following operation fails.
...
bool PopVersion(std::string* version);
```
# Accessing a memcached cluster
Create a channel that uses the c_md5 load balancing algorithm and it will be able to access a memcached cluster mounted under the corresponding naming service. Note that each MemcacheRequest should contain only one operation, or all of its operations must use the same key. Under the current implementation, multiple operations in one request are always sent to the same server; if their keys are distributed across different servers, the results are going to be wrong. In that case you have to split the request into multiple ones, each containing a single operation.
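For example, such a channel might be initialized as below (a sketch; the `file://` naming service path is only an illustration, any naming service supported by brpc works):

```c++
#include <brpc/memcache.h>
#include <brpc/channel.h>

brpc::ChannelOptions options;
options.protocol = brpc::PROTOCOL_MEMCACHE;
brpc::Channel channel;
// "c_md5" is consistent hashing: requests carrying the same key go to the same server.
if (channel.Init("file://memcache_servers" /*illustrative naming service*/,
                 "c_md5" /*load balancer*/, &options) != 0) {
    LOG(FATAL) << "Fail to init channel to the memcached cluster";
    return -1;
}
```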
Alternatively, you can keep using the common [twemproxy](https://github.com/twitter/twemproxy) approach. Although it requires deploying extra proxies and adds latency, the client can still access the cluster as if it were a single node.
[中文版](../cn/memcache_client.md)

[memcached](http://memcached.org/) is a common caching service. In order to access memcached more conveniently and make full use of bthread's concurrency, brpc directly supports the memcached protocol. Check [example/memcache_c++](https://github.com/brpc/brpc/tree/master/example/memcache_c++/) for an example.
**NOTE**: brpc only supports the binary protocol of memcache. There is little benefit in supporting the textual protocol, which has been superseded since memcached 1.3. If your memcached is older than 1.3, upgrade to a newer version.

The advantages compared to [libmemcached](http://libmemcached.org/libMemcached.html) (the official client):
- Thread safety. No need to set up a separate client for each thread.
- Support synchronous, asynchronous and semi-synchronous accesses. Combined accesses such as [ParallelChannel](combo_channel.md) can be used to define access patterns declaratively.
- Support various [connection types](client.md#connection-type). Support timeout, backup request, cancellation, tracing, built-in services, and other benefits offered by brpc.
- Have the concept of requests and responses while libmemcached doesn't. Users have to do extra bookkeeping to associate received messages with sent messages, which is not trivial.
The current implementation takes full advantage of the RPC concurrency mechanism and avoids copying as much as possible. A single client can easily push a memcached instance (version 1.4.15) on the same machine to its limit: 90,000 QPS over a single connection, 330,000 QPS over multiple connections. In most cases, brpc is able to make full use of memcached's capabilities.
# Request a memcached server
Create a `Channel` for accessing memcached:
```c++
#include <brpc/memcache.h>
#include <brpc/channel.h>
brpc::Channel channel;
brpc::ChannelOptions options;
options.protocol = brpc::PROTOCOL_MEMCACHE;
if (channel.Init("0.0.0.0:11211", &options) != 0) { // 11211 is the default port for memcached
    LOG(FATAL) << "Fail to init channel to memcached";
    return -1;
}
...
```
The following example sets data into memcached:
```c++
// Set key="hello" value="world" flags=0xdeadbeef, expire in 10s, and ignore cas
...
if (!response.PopSet(NULL)) {
    LOG(FATAL) << "Fail to SET memcached, " << response.LastError();
    return -1;
}
...
```
Notes on the above code (a pipelining sketch follows the list):
- The request type must be `MemcacheRequest` and the response type must be `MemcacheResponse`, otherwise `CallMethod` fails. No `stub` is needed; just call `channel.CallMethod` with `method` set to NULL.
- Call `request.XXX()` to add an operation, where `XXX` is `Set` in this example. Multiple operations inside a request are sent to a memcached server together (often referred to as "pipeline mode").
- Call `response.PopXXX()` to pop the result of an operation from the response, where `XXX` is `Set` in this example. It returns true on success and false otherwise, in which case use `response.LastError()` to get the error message. `XXX` must match the corresponding operation in the request, otherwise the pop is rejected. In the above example, a `PopGet` would fail with the error message "not a GET response".
- Results of `Pop` are independent from the RPC result. Even if "a value cannot be put into memcached", the RPC may still be successful. RPC failures mean things like broken connections or timeouts. If the business logic requires the memcache operations to be successful, test both the RPC result and `PopXXX`.
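As a sketch of the pipeline mode described above (key names and expiration values are illustrative), several operations can be queued into one request and must be popped in the same order:

```c++
brpc::MemcacheRequest request;
brpc::MemcacheResponse response;
brpc::Controller cntl;
// Queue several operations; they are sent to memcached together (pipeline mode).
request.Set("key1", "value1", 0 /*flags*/, 60 /*expire in 60s*/, 0 /*ignore cas*/);
request.Touch("key1", 120 /*new expiration*/);
request.Version();
channel.CallMethod(NULL, &cntl, &request, &response, NULL /*done*/);
if (!cntl.Failed()) {
    // Pop results in the same order as the operations were added.
    if (!response.PopSet(NULL)) {
        LOG(ERROR) << "SET failed, " << response.LastError();
    }
    if (!response.PopTouch()) {
        LOG(ERROR) << "TOUCH failed, " << response.LastError();
    }
    std::string version;
    if (response.PopVersion(&version)) {
        LOG(INFO) << "memcached version=" << version;
    }
}
```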
Currently supported operations:
```c++
bool Set(const Slice& key, const Slice& value, uint32_t flags, uint32_t exptime, uint64_t cas_value);
...
bool Touch(const Slice& key, uint32_t exptime);
bool Version();
```
Corresponding operations in replies:
```c++
// Call LastError() of the response to check the error text when any following operation fails.
...
bool PopTouch();
bool PopVersion(std::string* version);
```
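All of the operations above can also be issued asynchronously by passing a `done` callback to `CallMethod`, as mentioned in the advantages section. A minimal sketch (the callback name and the ownership pattern are illustrative; `channel` is assumed to be initialized as above):

```c++
#include <memory>
#include <brpc/memcache.h>
#include <brpc/channel.h>

static void OnSetDone(brpc::Controller* cntl, brpc::MemcacheResponse* response) {
    // Take ownership of the heap-allocated controller and response.
    std::unique_ptr<brpc::Controller> cntl_guard(cntl);
    std::unique_ptr<brpc::MemcacheResponse> response_guard(response);
    if (cntl->Failed()) {
        LOG(ERROR) << "Fail to access memcached, " << cntl->ErrorText();
        return;
    }
    if (!response->PopSet(NULL)) {
        LOG(ERROR) << "Fail to SET, " << response->LastError();
    }
}

void AsyncSet(brpc::Channel& channel) {
    brpc::MemcacheRequest request;
    request.Set("hello", "world", 0 /*flags*/, 10 /*expire in 10s*/, 0 /*ignore cas*/);
    brpc::Controller* cntl = new brpc::Controller;
    brpc::MemcacheResponse* response = new brpc::MemcacheResponse;
    // The request can stay on the stack: it is serialized before CallMethod returns.
    channel.CallMethod(NULL, cntl, &request, response,
                       brpc::NewCallback(OnSetDone, cntl, response));
}
```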
# Request a memcached cluster
Create a `Channel` that uses `c_md5` as the load balancing algorithm to access a memcached cluster mounted under a naming service. Note that each `MemcacheRequest` should contain only one operation, or all of its operations must use the same key. Under the current implementation, multiple operations inside a single request are always sent to the same server; if the corresponding keys are located on different servers, the results are going to be wrong. In that case you have to divide the request into multiple ones, each containing a single operation.
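For example, writing several keys that may live on different servers can be sketched as one request per key (the helper function and key handling are illustrative; asynchronous calls or [ParallelChannel](combo_channel.md) can be used to parallelize the requests):

```c++
#include <map>
#include <string>
#include <brpc/memcache.h>
#include <brpc/channel.h>

// `channel` is a cluster channel initialized with the "c_md5" load balancer.
void SetAll(brpc::Channel& channel, const std::map<std::string, std::string>& kvs) {
    for (const auto& kv : kvs) {
        // One operation per request, so that each key is routed to its own server.
        brpc::MemcacheRequest request;
        brpc::MemcacheResponse response;
        brpc::Controller cntl;
        request.Set(kv.first, kv.second, 0 /*flags*/, 60 /*expire in 60s*/, 0 /*ignore cas*/);
        channel.CallMethod(NULL, &cntl, &request, &response, NULL /*done*/);
        if (cntl.Failed() || !response.PopSet(NULL)) {
            LOG(ERROR) << "Fail to SET " << kv.first;
        }
    }
}
```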
Another choice is the common [twemproxy](https://github.com/twitter/twemproxy) solution, which lets clients access the cluster just as if it were a single server, although it requires deploying proxies and adds latency.