Commit 1d5f047a authored by L.S. Cook, committed by Scott Cyphers

Leona/10112018 (#1806)

* initial rough draft

* import changes md to release notes for readability

* import changes md to release notes for readability

* remove draft

* update README with link to Release Notes

* release notes link

* fix graphic placement

* Wording change on ONNX supported frameworks
parent 3d4f98e2
# nGraph Library [![Build Status][build-status-badge]][build-status]
Welcome to the open-source repository for the Intel® nGraph™ Library. Our code
Welcome to the open-source repository for the **Intel® nGraph Library**. Our code
base provides a Compiler and runtime suite of tools (APIs) designed to give
developers maximum flexibility for their software design, allowing them to
create or customize a scalable solution using any framework while also avoiding
......@@ -11,7 +10,9 @@ backends, and it will be able to run on any backends we support in the future
with minimal disruption to your model. With nGraph, you can co-evolve your
software and hardware's capabilities to stay at the forefront of your industry.
The nGraph Compiler is Intel's graph compiler for Artificial Neural Networks.
![nGraph ecosystem][ngraph-ecosystem]
The **nGraph Compiler** is Intel's graph compiler for Artificial Neural Networks.
Documentation in this repo describes how you can program any framework
to run training and inference computations on a variety of Backends including
Intel® Architecture Processors (CPUs), Intel® Nervana™ Neural Network Processors
......@@ -24,18 +25,29 @@ whatever scenario you need.
nGraph provides both a C++ API for framework developers and a Python API which
can run inference on models imported from ONNX.
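As a rough sketch of the Python path (the `ngraph_onnx` importer call and its return
structure shown here are assumptions and may differ between releases; see the
[how to import] guide for the supported workflow):

```python
# Hypothetical sketch: load an ONNX model and run inference on the CPU backend.
# The importer's module path and return format may vary by ngraph-onnx release.
import numpy as np
import onnx
import ngraph as ng
from ngraph_onnx.onnx_importer.importer import import_onnx_model

models = import_onnx_model(onnx.load('model.onnx'))   # one entry per model output
model = models[0]

runtime = ng.runtime(backend_name='CPU')               # select any supported backend
infer = runtime.computation(model['output'], *model['inputs'])
result = infer(np.random.rand(1, 3, 224, 224).astype(np.float32))  # input shape depends on the model
```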
![nGraph ecosystem][ngraph-ecosystem]
See the [Release Notes] for recent changes.
| Framework | bridge available? | ONNX support? |
|----------------|-------------------|----------------|
| TensorFlow* | yes | yes |
| MXNet* | yes | yes |
| PaddlePaddle | yes | yes |
| PyTorch* | no | yes |
| Chainer* | no | yes |
| CNTK* | no | yes |
| Caffe2* | no | yes |
|Framework | bridge available? | ONNX support? |
|------------|-------------------|----------------|
| neon | yes | yes |
| MXNet* | yes | yes |
| TensorFlow*| yes | yes |
| PyTorch* | not yet | yes |
| Chainer* | not yet | yes |
| CNTK* | not yet | yes |
| Caffe2* | not yet | yes |
| Backend | current support | future support |
|-----------------------------------------------|-------------------|----------------|
| Intel® Architecture CPU | yes | yes |
| Intel® Nervana™ Neural Network Processor (NNP)| yes | yes |
| Intel [Movidius™ Myriad™ 2] VPUs | coming soon | yes |
| Intel® Architecture GPUs | via PlaidML | yes |
| AMD* GPUs | via PlaidML | yes |
| NVIDIA* GPUs | via PlaidML | some |
| Field Programmable Gate Arrays (FPGA) | no | yes |
## Documentation
......@@ -72,12 +84,13 @@ to improve the Library:
[install]: http://ngraph.nervanasys.com/docs/latest/buildlb.html
[framework integration guides]: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
[release notes]: http://ngraph.nervanasys.com/docs/latest/project/release-notes.html
[Github issues]: https://github.com/NervanaSystems/ngraph/issues
[contrib guide]: http://ngraph.nervanasys.com/docs/latest/project/code-contributor-README.html
[pull request]: https://github.com/NervanaSystems/ngraph/pulls
[how to import]: http://ngraph.nervanasys.com/docs/latest/howto/import.html
[ngraph-ecosystem]: doc/sphinx/source/graphics/ngraph-ecosystem.png "nGraph Ecosystem"
[ngraph-ecosystem]: doc/sphinx/source/graphics/599px-Intel-ngraph-ecosystem.png "nGraph Ecosystem"
[build-status]: https://travis-ci.org/NervanaSystems/ngraph/branches
[build-status-badge]: https://travis-ci.org/NervanaSystems/ngraph.svg?branch=master
[develop-without-lockin]: doc/sphinx/source/graphics/develop-without-lockin.png "Develop on any part of the stack without lock-in"
[Movidius]:https://www.movidius.com/solutions/vision-processing-unit
[Movidius™ Myriad™ 2]:https://www.movidius.com/solutions/vision-processing-unit
......@@ -30,8 +30,8 @@ software engineers, and others with the means to make their work :ref:`portable`
:abbr:`Machine Learning (ML)` hardware available today: optimized Deep Learning
computation devices.
.. figure:: graphics/ngraph-ecosystem.png
:width: 650px
.. figure:: graphics/599px-Intel-ngraph-ecosystem.png
:width: 599px
.. _portable:
......@@ -66,16 +66,15 @@ Python-based API. See the `ngraph onnx companion tool`_ to get started.
.. csv-table::
:header: "Framework", "Bridge Code Available?", "ONNX Support?"
:header: "Framework", "Bridge Available?", "ONNX Support?"
:widths: 27, 10, 10
TensorFlow, Yes, Yes
MXNet, Yes, Yes
PaddlePaddle, Coming Soon, Yes
neon, none needed, Yes
PyTorch, Coming Soon, Yes
CNTK, Not yet, Yes
Other, Not yet, Doable
PyTorch, No, Yes
CNTK, No, Yes
Other, Custom, Custom
.. _deployable:
......@@ -104,29 +103,30 @@ model to run on a variety of backends:
.. csv-table::
:header: "Backend", "Current nGraph support", "Future nGraph support"
:header: "Backend", "Current support", "Future nGraph support"
:widths: 35, 10, 10
Intel® Architecture Processors (CPUs), Yes, Yes
Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
NVIDIA\* CUDA (GPUs), Yes, Some
Intel® Nervana™ Neural Network Processor (NNPs), Yes, Yes
AMD\* GPUs, via PlaidML, Yes
NVIDIA\* GPUs, via PlaidML, Some
Intel® Architecture GPUs, Yes, Yes
:abbr:`Field Programmable Gate Arrays (FPGA)` (FPGAs), Coming soon, Yes
`Movidius`_, Not yet, Yes
Intel Movidius™ Myriad™ 2 (VPU), Coming soon, Yes
Other, Not yet, Ask
The value we're offering to the developer community is empowerment: we are
confident that Intel® Architecture already provides the best computational
resources available for the breadth of ML/DL tasks. We welcome ideas and
`contributions`_ from the community.
resources available for the breadth of ML/DL tasks. We welcome ideas and
`contributions`_ from the community.
Further project details can be found on our :doc:`project/about` page, or see
our :doc:`buildlb` guide for how to get started.
our :doc:`buildlb` guide for how to get started.
.. note:: The library code is under active development as we're continually
.. note:: The Library code is under active development as we're continually
adding support for more kinds of DL models and ops, framework compiler
optimizations, and backends.
optimizations, and backends.
=======
......@@ -152,7 +152,6 @@ Contents
project/index.rst
Indices and tables
==================
......@@ -160,8 +159,7 @@ Indices and tables
* :ref:`genindex`
.. _ONNX: http://onnx.ai
.. _ONNX: http://onnx.ai
.. _ngraph onnx companion tool: https://github.com/NervanaSystems/ngraph-onnx
.. _Movidius: https://www.movidius.com/
.. _contributions: https://github.com/NervanaSystems/ngraph#how-to-contribute
.. _contributions: https://github.com/NervanaSystems/ngraph#how-to-contribute
\ No newline at end of file
......@@ -5,7 +5,7 @@ Overview
Welcome to the documentation site for |InG|, an open-source C++ Compiler,
Library, and runtime suite for running training and inference on
Library, and runtime suite for Deep Learning frameworks running training and inference on
:abbr:`Deep Neural Network (DNN)` models. nGraph is framework-neutral and can be
targeted for programming and deploying :abbr:`Deep Learning (DL)` applications
on the most modern compute and edge devices.
......@@ -22,8 +22,8 @@ Features
Develop without lock-in
~~~~~~~~~~~~~~~~~~~~~~~
.. figure:: ../graphics/develop-without-lockin.png
:width: 650px
.. figure:: ../graphics/599px-Intel-ngraph-ecosystem.png
:width: 599px
Being able to increase training performance or reduce inference latency by simply
......@@ -34,47 +34,6 @@ developers working with nGraph. Our commitment to bake flexibility into our
ecosystem ensures developers' freedom to design user-facing APIs for various
hardware deployments directly into their frameworks.
.. figure:: ../graphics/ngraph-ecosystem.png
:width: 585px
nGraph currently supports :doc:`three popular <../framework-integration-guides>`
frameworks for :abbr:`Deep Learning (DL)` models through what we call
a :term:`bridge` that can be integrated during the framework's build time.
For developers working with other frameworks (even those not listed above),
we've created a :doc:`How to Guide <../howto/index>` so you can learn how to
create custom bridge code that can be used to
:doc:`compile and run <../howto/execute>` a training model.
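As an illustrative sketch only (it uses the Python wheel's ``ngraph`` namespace;
the walk-through in :doc:`../howto/execute` is the authoritative reference),
building and running a small computation might look like:

.. code-block:: python

   # Hypothetical sketch: build a tiny graph and execute it on the CPU backend.
   import numpy as np
   import ngraph as ng

   A = ng.parameter(shape=[2, 2], dtype=np.float32, name='A')
   B = ng.parameter(shape=[2, 2], dtype=np.float32, name='B')
   model = (A + B) * A                       # element-wise add, then multiply

   runtime = ng.runtime(backend_name='CPU')  # pick any supported backend
   compute = runtime.computation(model, A, B)
   print(compute(np.ones((2, 2), np.float32),
                 np.full((2, 2), 2.0, np.float32)))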
Additionally, nGraph Library supports the `ONNX`_ format. Developers who
already have a "trained" model can use nGraph to bypass much of the
framework-based complexity and :doc:`../howto/import` to test or run it
on targeted and efficient backends with our user-friendly ``ngraph_api``.
With nGraph, data scientists can focus on data science rather than worrying
about how to adapt models to train and run efficiently on different devices.
Be sure to add the ``-DNGRAPH_ONNX_IMPORT_ENABLE=ON`` option when running `cmake`
to build the Library.
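For example, enabling the ONNX importer in an out-of-tree build might look like
the following (directory layout and make invocation are illustrative only):

.. code-block:: console

   $ mkdir build && cd build
   $ cmake .. -DNGRAPH_ONNX_IMPORT_ENABLE=ON
   $ make -j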
Supported platforms
--------------------
* Intel® Architecture Processors (CPUs),
* Intel® Nervana™ Neural Network Processor™ (NNPs), and
* NVIDIA\* CUDA (GPUs).
We built the first-generation of the Intel Nervana™ NNP family of processors
last year to show that the nGraph Library can be used to train a
:abbr:`Neural Network (NN)` more quickly. The more advanced the silicon, the
more powerful a lightweight library can be. So while we do currently support
traditional GPUs, they are not advanced silicon, and trying to scale workloads
using traditional GPU libraries is clunky and brittle with bottlenecks. Iteration
from an already-trained NN model to one that can also perform inference
computations is immensely simplified. Read more about these compute-friendly
options on the documentation for :doc:`../fusion/index`.
.. note:: The library code is under active development as we're continually
adding support for more kinds of DL models and ops, framework compiler
optimizations, and backends.
......
......@@ -15,13 +15,13 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------
nGraph Library docs
====================
nGraph Library docs
===================
Read this for changes affecting anything in ``ngraph/doc``
----------------------------------------------------------
For updates to the Intel® nGraph Library ``/doc`` repo, please submit a PR with
For updates to the Intel® nGraph Library ``/doc`` repo, please submit a PR with
any changes or ideas you'd like integrated. This helps us maintain trackability
with respect to additions or feature requests.
......@@ -127,7 +127,7 @@ To build documentation locally, run:
.. code-block:: console
$ pip3 install [-I] Sphinx==1.6.5 [--user]
$ pip3 install [-I] Sphinx==1.7.5 [--user]
$ pip3 install [-I] breathe numpy [--user]
$ cd doc/sphinx/
$ make html
......@@ -150,11 +150,16 @@ To build documentation in a python3 virtualenv, run:
Then point your browser at ``localhost:8000``.
.. note:: For docs built in a virtual env, the latest Sphinx release may break the
   documentation build; try pinning a specific version of Sphinx.
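A minimal sketch of that workaround, assuming a ``python3`` virtualenv is already
active and using the Sphinx version shown above:

.. code-block:: console

   (venv) $ pip install Sphinx==1.7.5 breathe numpy
   (venv) $ cd doc/sphinx/ && make html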
For tips on writing reStructuredText-formatted documentation, see the `sphinx`_
stable reST documentation.
.. _ngraph repo: https://github.com/NervanaSystems/ngraph-cpp/
.. _documentation repo: https://github.com/NervanaSystems/private-ngraph/tree/master/doc
.. _ngraph repo: https://github.com/NervanaSystems/ngraph/
.. _documentation repo: https://github.com/NervanaSystems/ngraph/tree/master/doc
.. _sphinx: http://www.sphinx-doc.org/en/stable/rest.html
.. _wiki: https://github.com/NervanaSystems/ngraph/wiki/
.. _breathe: https://breathe.readthedocs.io/en/latest/
......
......@@ -3,5 +3,12 @@
Release Notes
#############
This is the |version| of release.
This is release |release|.
API Changes
===========
.. literalinclude:: ../../../../changes.md