nGraph Library

Welcome to the open-source repository for the Intel® nGraph™ Library. Our code base provides a compiler and runtime suite of tools (APIs) designed to give developers maximum flexibility in their software design, allowing them to create or customize a scalable solution using any framework while avoiding the device-level hardware lock-in common among AI vendors. A neural network model compiled with nGraph can run on any of our currently supported backends, and it will be able to run on any backends we support in the future with minimal disruption to your model. With nGraph, you can co-evolve your software and hardware's capabilities to stay at the forefront of your industry.

The nGraph Compiler is Intel's graph compiler for Artificial Neural Networks. Documentation in this repo describes how you can program any framework to run training and inference computations on a variety of backends, including Intel® Architecture Processors (CPUs), Intel® Nervana™ Neural Network Processors (NNPs), cuDNN-compatible graphics cards (GPUs), custom VPUs such as Movidius, and many others. The default CPU backend also provides an interactive Interpreter mode that can be used to zero in on a DL model and create custom nGraph optimizations to further accelerate training or inference in whatever scenario you need.

nGraph provides a C++ API for framework developers and a Python API that can run inference on models imported from ONNX.

nGraph ecosystem

| Framework   | Bridge available? | ONNX support? |
|-------------|-------------------|---------------|
| neon        | yes               | yes           |
| MXNet*      | yes               | yes           |
| TensorFlow* | yes               | yes           |
| PyTorch*    | not yet           | yes           |
| Chainer*    | not yet           | yes           |
| CNTK*       | not yet           | yes           |
| Caffe2*     | not yet           | yes           |

Documentation

See our install docs for how to get started.

For this early release, we provide framework integration guides to compile MXNet and TensorFlow-based projects. If you already have a trained model, we've put together a getting started guide for how to import a deep learning model and start working with the nGraph APIs.

Support

Please submit your questions, feature requests, and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to nGraph. If you have an idea for how to improve the library:

  • See the contrib guide for code formatting and style guidelines.
  • Share your proposal via GitHub issues.
  • Ensure you can build the product and run all the examples with your patch.
  • In the case of a larger feature, create a test.
  • Submit a pull request.
  • Make sure your PR passes all CI tests. Note: our Travis-CI service runs only on a CPU backend on Linux; we will run additional tests in other environments.
  • We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. When accepted, your pull request will be merged to the repository.