
# Protocol Buffers Benchmarks

This directory contains benchmarking schemas and data sets that you can use to
test a variety of performance scenarios against your protobuf language runtime.
If you are looking for performance numbers for the officially supported
languages, see [here](
https://github.com/protocolbuffers/protobuf/blob/master/docs/performance.md).

## Prerequisites

First, follow the instructions in the root directory's README to build your
language's protobuf runtime, then:

### CPP

You need to install [cmake](https://cmake.org/) before building the benchmark.

We use [google/benchmark](https://github.com/google/benchmark) as the
benchmark framework for C++; it is built automatically when building the C++
benchmark.

C++ protobuf performance can be improved by linking with the
[tcmalloc library](https://gperftools.github.io/gperftools/tcmalloc.html). To
use tcmalloc, you need to build
[gperftools](https://github.com/gperftools/gperftools) to generate the
libtcmalloc.so library.

### Java

We use Maven to build the Java benchmarks, just as for the Java protobuf
runtime; no other tools need to be installed. We use
[google/caliper](https://github.com/google/caliper) as the benchmark framework,
which Maven includes automatically.

### Python

We use the Python C++ API to test the C++-generated-code version of the Python
protobuf runtime; it is also a prerequisite for the Python protobuf C++
implementation. You need to install the correct version of the Python C++
extension package before running the benchmarks for the C++-backed versions of
Python protobuf. For example, under Ubuntu you need to run:

```
$ sudo apt-get install python-dev
$ sudo apt-get install python3-dev
```
You also need to make sure `pkg-config` is installed.

### Go

Go protobufs are maintained at [github.com/golang/protobuf](
http://github.com/golang/protobuf). If you have not done so already, you need
to install the Go toolchain and the protoc-gen-go plugin for protoc.

To install protoc-gen-go, run:

```
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ export PATH=$PATH:$(go env GOPATH)/bin
```

The first command installs `protoc-gen-go` into the `bin` directory in your local `GOPATH`.
The second command adds the `bin` directory to your `PATH` so that `protoc` can locate the plugin later.

### PHP

The PHP benchmark's requirements are the same as the PHP protobuf's
requirements. The benchmark will automatically include the PHP protobuf source
and build the C extension if required.

### Node.js

The Node.js benchmark requires [node](https://nodejs.org/en/) (higher than V6)
and the [npm](https://www.npmjs.com/) package manager. The benchmark uses the
[benchmark](https://www.npmjs.com/package/benchmark) framework, which does not
need to be installed manually. The other prerequisite,
[protobuf js](https://github.com/protocolbuffers/protobuf/tree/master/js),
does not need to be installed manually either.

### C#
The C# benchmark code is built as part of the main Google.Protobuf
solution. It requires the .NET Core SDK, and depends on
[BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet), which
will be downloaded automatically.

### Big data

Some optional large test data sets are not included in the directory
initially; run the following command to download them:

```
$ ./download_data.sh
```

After doing this, the big data files will be automatically generated in the
benchmark directory.

## Run instructions

To run the benchmarks against all of the data sets:

### Java:

```
$ make java
```

### CPP:

```
$ make cpp
```

To link with tcmalloc:

```
$ env LD_PRELOAD={path to libtcmalloc.so} make cpp
```

### Python:

There are three versions of the Python protobuf implementation: pure Python,
C++ reflection, and C++ generated code. To run the benchmarks for each
version:

#### Pure Python:

```
$ make python-pure-python
```

#### CPP reflection:

```
$ make python-cpp-reflection
```

#### CPP generated code:

```
$ make python-cpp-generated-code
```

### Go
```
$ make go
```


### PHP
There are two versions of the PHP protobuf implementation: pure PHP and PHP
with the C extension. To run the benchmarks for each version:
#### Pure PHP
```
$ make php
```
#### PHP with C extension
```
$ make php_c
```

### Node.js
```
$ make js
```

To run a specific dataset or run with specific options:

### Java:

```
$ make java-benchmark
$ ./java-benchmark $(specific generated dataset file name) [$(caliper options)]
```

### CPP:

```
$ make cpp-benchmark
$ ./cpp-benchmark $(specific generated dataset file name) [$(benchmark options)]
```

### Python:

For the Python benchmarks, the `--json` option outputs the result in JSON
format.

#### Pure Python:

```
$ make python-pure-python-benchmark
$ ./python-pure-python-benchmark [--json] $(specific generated dataset file name)
```

#### CPP reflection:

```
$ make python-cpp-reflection-benchmark
$ ./python-cpp-reflection-benchmark [--json] $(specific generated dataset file name)
```

#### CPP generated code:

```
$ make python-cpp-generated-code-benchmark
$ ./python-cpp-generated-code-benchmark [--json] $(specific generated dataset file name)
```

### Go:
```
$ make go-benchmark
$ ./go-benchmark $(specific generated dataset file name) [go testing options]
```

### PHP
#### Pure PHP
```
$ make php-benchmark
$ ./php-benchmark $(specific generated dataset file name)
```
#### PHP with C extension
```
$ make php-c-benchmark
$ ./php-c-benchmark $(specific generated dataset file name)
```

### Node.js
```
$ make js-benchmark
$ ./js-benchmark $(specific generated dataset file name)
```

### C#
From `csharp/src/Google.Protobuf.Benchmarks`, run:

```
$ dotnet run -c Release
```

We intend to add support for this within the makefile in due course.

## Benchmark datasets

Each data set follows the format defined in `benchmarks.proto`:

1. `name` is the benchmark data set's name.
2. `message_name` is the full name of the benchmark's message type (including
   package and message name).
3. `payload` is the list of raw data payloads.

The schema for the datasets is described in `benchmarks.proto`.
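
As an illustration of that layout, the sketch below hand-decodes the
length-delimited fields of a serialized dataset message, assuming the field
numbers `name` = 1, `message_name` = 2, and `payload` = 3 as in
`benchmarks.proto`. The helpers are hypothetical and only for illustration;
real code should use the generated `benchmarks.proto` classes instead.

```python
def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def parse_dataset(buf):
    """Walk the message; all three fields are length-delimited (wire type 2)."""
    name, message_name, payloads = "", "", []
    pos = 0
    while pos < len(buf):
        tag, pos = read_varint(buf, pos)
        field_number, wire_type = tag >> 3, tag & 0x07
        assert wire_type == 2
        length, pos = read_varint(buf, pos)
        data = bytes(buf[pos:pos + length])
        pos += length
        if field_number == 1:      # name
            name = data.decode("utf-8")
        elif field_number == 2:    # message_name
            message_name = data.decode("utf-8")
        elif field_number == 3:    # payload (repeated)
            payloads.append(data)
    return name, message_name, payloads

def encode_field(field_number, data):
    """Encode one length-delimited field (assumes tag and length fit one byte)."""
    return bytes([field_number << 3 | 2, len(data)]) + data

# A hand-built dataset: one payload that is itself a serialized message.
buf = (encode_field(1, b"demo_dataset")
       + encode_field(2, b"benchmarks.SomeMessage")
       + encode_field(3, b"\x08\x01"))
print(parse_dataset(buf))
# -> ('demo_dataset', 'benchmarks.SomeMessage', [b'\x08\x01'])
```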

Benchmarks will likely want to run several tests against each data set (parse,
serialize, possibly JSON, possibly using different APIs, etc.).
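
Such a run can be pictured as a timing loop over those operations. The sketch
below is a minimal illustration only, not how the real harnesses work;
`FakeMessage` is a stand-in for a generated message class and is not part of
this repository.

```python
import time

class FakeMessage:
    """Stand-in for a generated protobuf message class (illustration only)."""
    def __init__(self):
        self._data = b""
    def ParseFromString(self, data):
        self._data = data
    def SerializeToString(self):
        return self._data

def time_op(op, iterations):
    """Return the average nanoseconds per call of op()."""
    start = time.perf_counter()
    for _ in range(iterations):
        op()
    return (time.perf_counter() - start) / iterations * 1e9

def run_benchmarks(message_class, payloads, iterations=1000):
    """Time parse and serialize for every payload in a data set."""
    results = []
    for payload in payloads:
        msg = message_class()
        parse_ns = time_op(lambda: msg.ParseFromString(payload), iterations)
        serialize_ns = time_op(msg.SerializeToString, iterations)
        results.append((parse_ns, serialize_ns))
    return results

results = run_benchmarks(FakeMessage, [b"\x08\x01", b"\x12\x03abc"])
for parse_ns, serialize_ns in results:
    print(f"parse: {parse_ns:.0f} ns/op, serialize: {serialize_ns:.0f} ns/op")
```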

We would like to add more data sets.  In general we will favor data sets
that make the overall suite diverse without being too large or having
too many similar tests.  Ideally everyone can run through the entire
suite without the test run getting too long.