
nGraph Library

Welcome to the open-source repository for the Intel® nGraph Library. Our code base provides a compiler and runtime suite of tools (APIs) designed to give developers maximum flexibility for their software design, allowing them to create or customize a scalable solution using any framework while also avoiding the device-level hardware lock-in that is so common with many AI vendors. A neural network model compiled with nGraph can run on any of our currently-supported backends, and it will be able to run on any backends we support in the future with minimal disruption to your model. With nGraph, you can co-evolve your software and hardware's capabilities to stay at the forefront of your industry.

nGraph ecosystem

The nGraph Compiler is Intel's graph compiler for Artificial Neural Networks. Documentation in this repo describes how you can program any framework to run training and inference computations on a variety of backends, including Intel® Architecture Processors (CPUs), Intel® Nervana™ Neural Network Processors (NNPs), cuDNN-compatible graphics cards (GPUs), custom VPUs like Movidius, and many others. The default CPU backend also provides an interactive Interpreter mode that can be used to zero in on a DL model and create custom nGraph optimizations that can be used to further accelerate training or inference, in whatever scenario you need.

nGraph provides both a C++ API for framework developers and a Python API which can run inference on models imported from ONNX.

See the Release Notes for recent changes.

| Framework     | Bridge available? | ONNX support? |
|---------------|-------------------|---------------|
| TensorFlow*   | yes               | yes           |
| MXNet*        | yes               | yes           |
| PaddlePaddle  | yes               | yes           |
| PyTorch*      | no                | yes           |
| Chainer*      | no                | yes           |
| CNTK*         | no                | yes           |
| Caffe2*       | no                | yes           |
| Backend                                        | Current support | Future support |
|------------------------------------------------|-----------------|----------------|
| Intel® Architecture CPU                        | yes             | yes            |
| Intel® Nervana™ Neural Network Processor (NNP) | yes             | yes            |
| Intel Movidius™ Myriad™ 2 VPUs                 | coming soon     | yes            |
| Intel® Architecture GPUs                       | via PlaidML     | yes            |
| AMD* GPUs                                      | via PlaidML     | yes            |
| NVIDIA* GPUs                                   | via PlaidML     | some           |
| Field Programmable Gate Arrays (FPGA)          | no              | yes            |

Documentation

See our install docs for how to get started.

For this early release, we provide framework integration guides to compile MXNet- and TensorFlow-based projects. If you already have a trained model, we've put together a getting-started guide that shows how to import a deep learning model and start working with the nGraph APIs.

Support

Please submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to nGraph. If you have an idea of how to improve the library:

  • See the contrib guide for code formatting and style guidelines.
  • Share your proposal via GitHub issues.
  • Ensure you can build the product and run all the examples with your patch.
  • In the case of a larger feature, create a test.
  • Submit a pull request.
  • Make sure your PR passes all CI tests. Note: our Travis-CI service runs only on a CPU backend on Linux. We will run additional tests in other environments.
  • We will review your contribution and, if any additional fixes or modifications are necessary, may provide feedback to guide you. When accepted, your pull request will be merged to the repository.