Commit 147062b2 authored by gejun

Add self-built-deps section in getting_started.md

parent f94605a1
@@ -81,7 +81,7 @@ brpc pays special attentions to development and maintenance efficency, you can [
### Better latency and throughput
Although almost all RPC implementations claim that they're "high-performant", the numbers are probably just numbers. Being really high-performant in different scenarios is difficult. To unify communication infra inside Baidu, brpc goes much deeper into performance than other implementations.
* Reading and parsing requests from different clients is fully parallelized, and users don't need to distinguish between "IO-threads" and "Processing-threads". Other implementations probably have "IO-threads" and "Processing-threads" and hash file descriptors (fd) into IO-threads. When an IO-thread handles one of its fds, other fds in the thread can't be handled. If a message is large, other fds are significantly delayed. Although different IO-threads run in parallel, you won't have many IO-threads since they generally don't have much to do except reading/parsing from fds. If you have 10 IO-threads, one fd may affect 10% of all fds, which is unacceptable for industrial online services (requiring 99.99% availability). The problem gets worse when fds are distributed unevenly across IO-threads (unfortunately common), or when the service is multi-tenant (common in cloud services). In brpc, reading from different fds is parallelized and even processing different messages from one fd is parallelized as well. Parsing a large message does not block other messages from the same fd, not to mention other fds. More details can be found [here](docs/cn/io.md#收消息).
* Writing into one fd or multiple fds is highly concurrent. When multiple threads write into the same fd (common for multiplexed connections), the first thread writes in-place directly and the other threads submit their write requests in a [wait-free](http://en.wikipedia.org/wiki/Non-blocking_algorithm#Wait-freedom) manner. A single fd can take 5,000,000 16-byte messages per second written by a couple of highly-contended threads. More details can be found [here](docs/cn/io.md#发消息).
...
@@ -3,11 +3,20 @@
brpc prefers static linking if possible, so that deps don't have to be installed on every
machine running the code.
brpc depends on the following packages:
* [gflags](https://github.com/gflags/gflags): Extensively used to specify global options.
* [protobuf](https://github.com/google/protobuf): needless to say, pb is a must-have dep.
* [leveldb](https://github.com/google/leveldb): required by [/rpcz](rpcz.md) to record RPCs for tracing.
## Ubuntu/LinuxMint/WSL
### Prepare deps
install common deps: `git g++ make libssl-dev`

install [gflags](https://github.com/gflags/gflags), [protobuf](https://github.com/google/protobuf), [leveldb](https://github.com/google/leveldb), including: `libgflags-dev libprotobuf-dev libprotoc-dev protobuf-compiler libleveldb-dev`. If you need to statically link leveldb, install `libsnappy-dev` as well.
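On a typical Ubuntu/Debian box the two steps above boil down to, for example (package names taken from the list; drop `libsnappy-dev` if you don't statically link leveldb):
```
$ sudo apt-get install -y git g++ make libssl-dev
$ sudo apt-get install -y libgflags-dev libprotobuf-dev libprotoc-dev protobuf-compiler libleveldb-dev libsnappy-dev
```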
### Compile brpc
git clone brpc, cd into the repo and run
```
$ sh config_brpc.sh --headers=/usr/include --libs=/usr/lib
$ make
@@ -22,10 +31,9 @@ $ make
$ ./echo_server &
$ ./echo_client
```
Examples link brpc statically; if you need to link the shared version, `make clean` and `LINK_SO=1 make`.
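That is, to rebuild an example against the shared library:
```
$ make clean
$ LINK_SO=1 make
```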
To run examples with cpu/heap profilers, install `libgoogle-perftools-dev` and re-run `config_brpc.sh` before compiling
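For example, on Ubuntu (re-using the `config_brpc.sh` invocation from above):
```
$ sudo apt-get install -y libgoogle-perftools-dev
$ sh config_brpc.sh --headers=/usr/include --libs=/usr/lib
$ make
```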
### compile tests
Install gmock and gtest; use the gtest embedded in gmock and don't install libgtest-dev
@@ -39,13 +47,17 @@ $ sudo mv gtest/include/gtest /usr/include/
```
Rerun config_brpc.sh and run make in test/
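That is, assuming the default system paths used earlier:
```
$ sh config_brpc.sh --headers=/usr/include --libs=/usr/lib
$ cd test
$ make
```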
## Fedora/CentOS
### Prepare deps
install common deps: `git g++ make openssl-devel`
install [gflags](https://github.com/gflags/gflags), [protobuf](https://github.com/google/protobuf), [leveldb](https://github.com/google/leveldb), including: `gflags-devel protobuf-devel protobuf-compiler leveldb-devel`.
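For example, with yum (the g++ compiler is packaged as `gcc-c++` here; on CentOS the gflags and leveldb packages may come from the EPEL repository):
```
$ sudo yum install -y git gcc-c++ make openssl-devel
$ sudo yum install -y gflags-devel protobuf-devel protobuf-compiler leveldb-devel
```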
### Compile brpc
git clone brpc, cd into the repo and run
```
$ sh config_brpc.sh --headers=/usr/include --libs=/usr/lib64
@@ -61,10 +73,42 @@ $ make
$ ./echo_server &
$ ./echo_client
```
Examples link brpc statically; if you need to link the shared version, `make clean` and `LINK_SO=1 make`.
To run examples with cpu/heap profilers, install `gperftools-devel` and re-run `config_brpc.sh` before compiling
## Linux with self-built deps
### Prepare deps
brpc builds both static and shared libs by default, so it needs the static and shared libs of its deps to be built as well.
Take [gflags](https://github.com/gflags/gflags) as an example: it does not build a shared lib by default, so you need to pass options to `cmake` to change the behavior, e.g. `cmake . -DBUILD_SHARED_LIBS=1 -DBUILD_STATIC_LIBS=1`, then `make`.
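Put together, building gflags could look like this (cloning into `gflags_dev` to match the path used in the next step):
```
$ git clone https://github.com/gflags/gflags.git gflags_dev
$ cd gflags_dev
$ cmake . -DBUILD_SHARED_LIBS=1 -DBUILD_STATIC_LIBS=1
$ make
```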
### Compile brpc
Continuing with the gflags example, let `../gflags_dev` be where you cloned gflags.
git clone brpc. cd into the repo and run
```
$ sh config_brpc.sh --headers="../gflags_dev /usr/include" --libs="../gflags_dev /usr/lib64"
$ make
```
to change compiler to clang, add `--cxx=clang++ --cc=clang`.
Here we pass multiple paths to `--headers` and `--libs` so that the script searches multiple places. You can also group all deps and brpc into one directory, then pass that directory to --headers/--libs, which search all subdirectories recursively and find the necessary files.
```
$ ls my_dev
gflags_dev protobuf_dev leveldb_dev brpc_dev
$ cd brpc_dev
$ sh config_brpc.sh --headers=.. --libs=..
$ make
```
Note: don't put ~ (tilde) in the paths passed to --headers/--libs; it's not expanded.
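If your deps live under your home directory, spell the path out or let the shell expand `$HOME` before `config_brpc.sh` sees it (the `my_dev` paths below are just an illustration):
```
$ sh config_brpc.sh --headers="$HOME/my_dev /usr/include" --libs="$HOME/my_dev /usr/lib64"   # works
$ sh config_brpc.sh --headers="~/my_dev /usr/include" --libs="~/my_dev /usr/lib64"           # ~ stays literal, won't resolve
```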
# Supported deps
...