2. Install build dependencies using the `install_dependencies.sh` script in the project root folder:
```sh
chmod +x install_dependencies.sh
./install_dependencies.sh
```
3. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.04.12237](https://github.com/intel/compute-runtime/releases/tag/19.04.12237) before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Compute Runtime for OpenCL™ Driver.
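For example, to configure a build with the GPU plugin disabled, pass the option at the CMake step (a sketch; the build directory and any other options are illustrative):
```sh
cmake -DENABLE_CLDNN=OFF ..
```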
4. Create a build folder:
```sh
mkdir build && cd build
```
...
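A typical configure-and-build sequence from the build folder looks like this (a sketch, assuming a Release build with default options):
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
```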
You can use the following additional build options:
- If the CMake-based build script cannot find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper:
  1. Install all additional packages listed in the `/inference-engine/ie_bridges/python/requirements.txt` file:
     ```sh
     pip install -r requirements.txt
     ```
  2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
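     For example, assuming a Python 3.7 installation (the variable values are illustrative and vary by system):
     ```sh
     cmake -DENABLE_PYTHON=ON \
           -DPYTHON_EXECUTABLE=`which python3.7` \
           -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
           -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 ..
     ```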
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2
> **NOTE**: These steps are only required if you want to perform inference on Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started](https://software.intel.com/en-us/neural-compute-stick/get-started).
...
For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the Movidius™ VSC driver:
1. Go to the `<DLDT_ROOT_DIR>/inference-engine/thirdparty/movidius/MovidiusDriver` directory, where the `DLDT_ROOT_DIR` is the directory to which the DLDT repository was cloned.
2. Right-click the `Movidius_VSC_Device.inf` file and choose **Install** from the pop-up menu.
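Alternatively, you can install the driver from an elevated command prompt with the standard Windows `pnputil` tool (a sketch; it assumes the command is run from the directory containing the `.inf` file):
```sh
pnputil -i -a Movidius_VSC_Device.inf
```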
You have installed the driver for your Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.
The resulting model contains statistics that allow you to infer it in INT8 precision. To measure performance, you can use the [Benchmark App](./inference-engine/tools/benchmark_tool/README.md).
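For example, a throughput measurement of the calibrated model on CPU might look like this (a sketch; the script invocation, model path, and target device are illustrative):
```sh
python3 benchmark_app.py -m <path_to_model>/calibrated_model.xml -d CPU
```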
### Calibrate the Model
During the calibration process, the model is adjusted for efficient quantization and minimal accuracy drop on the calibration dataset. The Calibration Tool produces a calibrated model that is executed in low-precision 8-bit quantized mode after it is loaded into the CPU plugin.
...
To run the Calibration Tool in the simplified mode, use the following command:
...