
# nGraph Compiler Stack (Beta)


## Quick start

To begin using nGraph with popular frameworks to accelerate deep learning workloads on CPU for inference, please refer to the links below.

| Framework / Version | Installation guide | Notes |
| --- | --- | --- |
| TensorFlow* 1.12 | Pip package or build from source | 17 validated workloads |
| MXNet* 1.4 | Enable the module or compile from source | 17 validated workloads |
| ONNX 1.3 | Pip package | 13 functional workloads with DenseNet-121, Inception-v1, ResNet-50, Inception-v2, ShuffleNet, SqueezeNet, VGG-19, and 7 more |
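For the pip-package routes above, the bridges have shipped as separate PyPI packages. The package names below are assumptions based on the bridges' own install guides; verify them against the linked guides before use:

```shell
# nGraph bridge for TensorFlow (package name assumed; see the TensorFlow
# installation guide linked above for the current name and version pins).
pip install ngraph-tensorflow-bridge

# nGraph ONNX importer (package name assumed; see the ONNX install guide).
pip install ngraph-onnx
```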

Frameworks using the nGraph Compiler stack to execute workloads have shown a 3X to 45X performance boost compared to native framework implementations. We've also seen performance boosts on workloads that are not on the list of validated workloads, thanks to nGraph's powerful subgraph pattern matching and to our collaborative efforts in the DL community, such as the nGraph-ONNX adaptable APIs and nGraph for PyTorch developers.

Additional work is also underway via PlaidML, which will enable running deep learning compute with GPU acceleration and add support for macOS. See our Architecture and features for what the stack looks like today, and watch our Release Notes for recent changes.

## What is nGraph Compiler?

nGraph Compiler aims to accelerate developing and deploying AI workloads using any deep learning framework with a variety of hardware targets. We strongly believe in providing freedom, performance, and ease-of-use to AI developers.

The diagram below shows what deep learning frameworks and hardware targets we support. More details on these current and future plans are in the ecosystem section.

nGraph ecosystem

While the ecosystem shown above is all functioning, we have validated performance metrics for deep learning inference on CPU processors such as Intel® Xeon®. Please refer to the Beta release notes to learn more. The Gold release is targeted for April 2019; it will feature broader workload coverage, including support for quantized graphs, and more detail on our advanced support for int8.

Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends. If you wish to contribute to the project, please don't hesitate to ask questions in GitHub issues after reviewing our contribution guide below.

## How to contribute

We welcome community contributions to nGraph. If you have an idea of how to improve it:

- See the contribution guide for code formatting and style guidelines.
- Share your proposal via GitHub issues.
- Ensure you can build the product and run all the examples with your patch.
- In the case of a larger feature, create a test.
- Submit a pull request.
- Make sure your PR passes all CI tests. Note: our Travis-CI service runs only on a CPU backend on Linux. We will run additional tests in other environments.
- We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. When accepted, your pull request will be merged to the repository.

## nGraph Compiler Stack

| Backend | Current support | Future support |
| --- | --- | --- |
| Intel® Architecture CPU | yes | yes |
| Intel® Nervana™ Neural Network Processor (NNP) | yes | yes |
| Intel® Movidius™ Myriad™ 2 VPUs | coming soon | yes |
| Intel® Architecture GPUs | via PlaidML | yes |
| AMD* GPUs | via PlaidML | yes |
| NVIDIA* GPUs | via PlaidML | some |
| Field Programmable Gate Arrays (FPGA) | no | yes |