Commit 972033c9 authored by L.S. Cook, committed by Scott Cyphers

[v0.1.0] Cleanup docs (#660)

* Cleanup docs

* Fix typo on syntax

* Update about page with Tunde's v2 of ngraph-ecosystem graphic

* Incorporating feedback from PR reviews

* Add more prereqs to table

* Cleanup neon install documentation

* fix doc build warnings

* Make sure Tunde's v3 graphic links properly

* Add detail about ONNX imports to README as requested

* Fix the broken merge
parent a9252dc1
......@@ -5,20 +5,23 @@ project enables modern compute platforms to run and train Deep Neural Network
(DNN) models. It is framework-neutral and supports a variety of backends
used by Deep Learning (DL) frameworks.
The nGraph library translates a framework’s representation of computations into
an Intermediate Representation (IR) designed to promote computational efficiency
on target hardware. Initially-supported backends include Intel Architecture CPUs,
the Intel® Nervana Neural Network Processor™ (NNP), and NVIDIA\* GPUs.
Currently-supported compiler optimizations include efficient memory management
and data layout abstraction.
![nGraph ecosystem][ngraph-ecosystem]
## Documentation
See our [install] docs for how to get started.
For this early release, we provide [framework integration guides] to compile
MXNet and TensorFlow-based projects.
MXNet and TensorFlow-based projects. If you already have a trained model, we've
put together a getting started guide for [how to import] a deep learning model
and start working with the nGraph APIs.
[how to import]: http://ngraph.nervanasys.com/index.html/howto/import.html
## Support
Please submit your questions, feature requests and bug reports via [GitHub issues].
......@@ -37,5 +40,4 @@ We welcome community contributions to nGraph. If you have an idea how to improve
[install]: http://ngraph.nervanasys.com/docs/latest/install.html
[framework integration guides]: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
[Github issues]: https://github.com/NervanaSystems/ngraph/issues
[pull request]: https://github.com/NervanaSystems/ngraph/pulls
[ngraph-ecosystem]: http://ngraph.nervanasys.com/index.html/_images/ngraph-ecosystem.png "nGraph Ecosystem"
......@@ -18,12 +18,12 @@ import onnx
onnx_protobuf = onnx.load('/path/to/model/cntk_ResNet20_CIFAR10/model.onnx')
# Convert ONNX model to an ngraph model
# Convert a serialized ONNX model to an ngraph model
from ngraph_onnx.onnx_importer.importer import import_onnx_model
ng_model = import_onnx_model(onnx_protobuf)[0]
# Using an ngraph runtime (CPU backend), create a callable computation
# Using ngraph_api, create a callable computation object
import ngraph_api as ng
runtime = ng.runtime(manager_name='CPU')
resnet = runtime.computation(ng_model['output'], *ng_model['inputs'])
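# --- Hedged continuation sketch; not part of the original example file ---
# A random array stands in for a real preprocessed CIFAR10 image
# (the shape and dtype here are assumptions: 1 x 3 x 32 x 32, float32).
import numpy as np
picture = np.random.rand(1, 3, 32, 32).astype(np.float32)
print(resnet(picture))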
......
......@@ -1836,13 +1836,19 @@ div[class^='highlight'] td.code {
width: 100%;
}
code, p.caption, caption-text {
code, p.caption {
font-family: Inconsolata, sans, monospace;
color: #A79992;
font-size: 0.99em;
line-height: 1.39em;
}
caption-text {
font-family: 'RobotoSlab', Lato, monospace;
}
.code-block-caption {
font-variant: small-caps;
font-size: 0.88em;
......
......@@ -4,9 +4,61 @@
Framework Integration Guides
#############################
* :ref:`neon_intg`
* :ref:`mxnet_intg`
* :ref:`tensorflow_intg`
.. _neon_intg:
Neon |trade|
============
Use ``neon`` as a frontend
---------------------------
``neon`` is an open source Deep Learning framework specifically designed to be
powered by |InG| backends.
.. important:: The numbered instructions below pick up where the
   :doc:`install` instructions left off; they presume that your system already
   has the library installed at the default location, ``$HOME/ngraph_dist``.
   If the |nGl| code has not yet been installed on your system, you can follow
   the instructions in the `ngraph-neon python README`_ to install everything
   at once. If the |nGl| code is already installed, continue with the steps below.
#. Set ``NGRAPH_CPP_BUILD_PATH`` and ``LD_LIBRARY_PATH`` to the location
where you built the nGraph libraries:
.. code-block:: bash
export NGRAPH_CPP_BUILD_PATH=$HOME/ngraph_dist/
export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib/
#. Install the dependency for the ``neon`` framework:
.. code-block:: console
$ sudo apt-get install python3-pip
#. (Optional) Activate a virtualenv if you like working with virtualenvs,
then go to the ``python`` subdirectory of the ``ngraph`` repo and install
the Python bindings:
.. code-block:: console
$ python3 -m venv .venv
$ . .venv/bin/activate
(venv)$ cd ngraph/python
(venv)$ pip install -U .
#. See `this file`_ for details about how to run the unit tests; a quick
sketch follows below. To start working with models, see the
`ngraph-neon repo's README`_.
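As a quick sanity check, one way to invoke the tests; the exact test
directory is an assumption here, and `this file`_ has the authoritative steps:

.. code-block:: console

(venv)$ pip install pytest
(venv)$ pytest test/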
.. _mxnet_intg:
Compile MXNet\* with ``libngraph``
......@@ -46,8 +98,8 @@ Compile MXNet\* with ``libngraph``
$ cd ngraph-mxnet && git checkout ngraph-integration-dev
#. Edit the ``make/config.mk`` file from the repo we just checked out to set
the ``USE_NGRAPH`` option (line ``80``) to true with `1` and set the :envvar:`NGRAPH_DIR`
(line ``81``) to point to the installation location target where the |nGl|
the ``USE_NGRAPH`` option (line ``100``) to true with ``1`` and set the :envvar:`NGRAPH_DIR`
(line ``101``) to point to the location where the |nGl| library
was installed:
.. code-block:: bash
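# Hedged reconstruction of the setting described in the step above
USE_NGRAPH = 1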
......@@ -56,7 +108,7 @@ Compile MXNet\* with ``libngraph``
NGRAPH_DIR = $(HOME)/ngraph_dist
#. Ensure that settings in the config file are disabled for ``USE_MKL2017``
(line ``93``) and ``USE_NNPACK`` (line ``100``).
(line ``113``) and ``USE_NNPACK`` (line ``120``).
.. code-block:: bash
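# A hedged sketch of the expected values; confirm the exact
# line numbers in your checkout of make/config.mk
USE_MKL2017 = 0
USE_NNPACK = 0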
......@@ -214,8 +266,12 @@ your cloned version of `ngraph-tensorflow`_:
$ python mnist_softmax_ngraph.py
.. _this file: https://github.com/NervanaSystems/ngraph/blob/master/python/README.md
.. _MXNet: http://mxnet.incubator.apache.org
.. _bazel version 0.5.4: https://github.com/bazelbuild/bazel/releases/tag/0.5.4
.. _1.3 installation guide: https://www.tensorflow.org/versions/r1.3/install/install_sources#prepare_environment_for_linux
.. _ngraph-tensorflow: https://github.com/NervanaSystems/ngraph-tensorflow
.. _/examples/mnist: https://github.com/NervanaSystems/ngraph-tensorflow/tree/develop/tensorflow/compiler/plugin/ngraph/examples/mnist
.. _ngraph-neon python README: https://github.com/NervanaSystems/ngraph/blob/master/python/README.md
.. _ngraph-neon repo's README: https://github.com/NervanaSystems/ngraph-neon/blob/master/README.md
\ No newline at end of file
.. execute-cmp.rst
######################
Execute a Computation
Execute a computation
######################
This section explains how to manually perform the steps that would normally be
performed by a framework :term:`bridge` to execute a computation. Intel® nGraph++
performed by a framework :term:`bridge` to execute a computation. The nGraph
library is targeted toward automatic construction; it is far easier for a
processing unit (GPU, CPU, or an `Intel Nervana NNP`_) to run a computation than
it is for a user to map out how that computation happens. Unfortunately, things
......
......@@ -6,20 +6,20 @@ Import a model
:ref:`from_onnx`
.. That can be the first page data scientists find when they are simply trying
.. to run a trained model; they DO NOT need to do a system install of the Intel
.. nGraph++ bridges; they can use our Python APIs to run a trained model.
..
.. TODO Make sure that this is the first page data scientists find when they
.. are simply trying to run a trained model; they DO NOT need to do a system
.. install of the Intel nGraph++ bridges; they can use our Python APIs to run
.. a trained model.
Intel nGraph APIs can be used to import a model that has been *exported* from
a Deep Learning framework. The export produces a file with a serialized model
that can be loaded and passed to one of the nGraph backends for execution.
The Intel nGraph APIs can be used to run inference on a model that has been
*exported* from a Deep Learning framework. An export produces a file with
a serialized model that can be loaded and passed to one of the nGraph
backends.
.. _from_onnx:
Importing models from ONNX
===========================
Importing a model from ONNX
============================
The most widely supported :term:`export` format available today is `ONNX`_.
Models that have been serialized to ONNX are easy to identify; they are
......@@ -30,27 +30,27 @@ usually named ``<some_model>.onnx`` or ``<some_model>.onnx.pb``. These
.. important:: If you landed on this page and you already have an ``.onnx``
or ``.onnx.pb`` formatted file, you should be able to run the inference
without needing to dig into anything from the "Frameworks" sections. You
will, however, need to have completed the steps described in
will, however, need to have completed the steps outlined in
our :doc:`../install` guide.
To demonstrate this functionality, we'll use an `already serialized CIFAR10`_
model trained via ResNet20. Remember that this model *has already been trained* to
a degree deemed well enough by a developer, and then exported from a framework
such as Caffe2, PyTorch or CNTK. We are simply going to build an nGraph
representation of the model, execute it, and produce some outputs.
To demonstrate functionality, we'll use an already serialized CIFAR10 model
trained via ResNet20. Remember that this model has already been trained and
exported from a framework such as Caffe2, PyTorch or CNTK; we are simply going
to build an nGraph representation of the model, execute it, and produce some
outputs.
Installing ``ngraph_onnx``
--------------------------
==========================
In order to use ONNX models, you will also need the companion tool ``ngraph_onnx``.
``ngraph_onnx`` requires Python 3.5 or higher.
To use ONNX models with ngraph, you will also need the companion tool
``ngraph_onnx``. ``ngraph_onnx`` requires Python 3.4 or higher.
This code assumes that you already followed the default instructions from the
:doc:`../install` guide; ``ngraph_dist`` was installed to ``$HOME/ngraph_dist``
and the `ngraph` repo was cloned to ``/opt/libraries/``
#. First set the environment variables to where we built the nGraph++ libraries;
This code assumes that you followed the default instructions from the
:doc:`../install` guide and that your version of ``ngraph_dist`` can be found
at ``$HOME/ngraph_dist``:
#. First set the environment variables to where we built the nGraph++ libraries:
.. code-block:: bash
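# Same default locations as in the install guide
export NGRAPH_CPP_BUILD_PATH=$HOME/ngraph_dist/
export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib/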
......@@ -72,21 +72,20 @@ In order to use ONNX models, you will also need the companion tool ``ngraph_onnx
$ cd /opt/libraries/ngraph
$ git checkout python_binding
Switched to branch 'python_binding'
Your branch is up-to-date with 'origin/python_binding'.
#. Recursively update the submodule and install the Python dependencies.
#. Recursively update the submodule and install the Python dependencies:
.. code-block:: console
$ git submodule update --init --recursive
$ cd python
$ pip3 install -r requirements.txt
$ pip3 install .
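A quick smoke test at this point, assuming the bindings install under the
``ngraph_api`` package name used later in this guide:

.. code-block:: console

$ python3 -c "import ngraph_api"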
#. Finally, clone the ``ngraph-onnx`` repo and use :command:`pip` to
install the Python dependencies for this tool; if you set up your
original nGraph library installation under a ``libraries`` directory
as recommended, it's a good idea to clone this repo there, too.
#. Finally, clone the ``ngraph-onnx`` repo and use :command:`pip` to install the
Python dependencies for this tool; if you set up your original nGraph library
installation under a ``libraries`` directory as recommended, it's a good
idea to clone this repo there, as well.
.. code-block:: console
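# Hypothetical commands; the URL is assumed from the NervanaSystems org used elsewhere
$ cd /opt/libraries
$ git clone https://github.com/NervanaSystems/ngraph-onnx
$ cd ngraph-onnx
$ pip3 install -r requirements.txt
$ pip3 install .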
......@@ -98,17 +97,20 @@ In order to use ONNX models, you will also need the companion tool ``ngraph_onnx
Importing a serialized model
-----------------------------
=============================
.. Now we can import any model that has been serialized by ONNX,
run Python code locally to interact with that model, create and
load objects, and run inference.
With the dependencies added, we can now import a model that has
been serialized by ONNX, interact locally with the model by running
Python code, create and load objects, and run inference.
These instructions demonstrate how to run ResNet on an
`already serialized CIFAR10`_ model trained via ResNet20.
This section assumes that you have your own ONNX model. With this
example model from Microsoft\*'s Deep Learning framework, `CNTK`_,
we can outline the procedure to show how to run ResNet on a model
that has been trained on the CIFAR10 data set and serialized with
ONNX.
Import ONNX and load an ONNX file from disk
Enable ONNX and load an ONNX file from disk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
......@@ -117,15 +119,15 @@ Import ONNX and load an ONNX file from disk
Convert an ONNX model to an ngraph model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 22-23
Create a callable computation object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using ngraph_api, create a callable computation object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
......@@ -144,7 +146,7 @@ Run ResNet inference on picture
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 36
:lines: 36-37
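The ten values the computation returns for this model are per-class CIFAR10
scores. As a hedged illustration (reusing ``resnet`` and an input ``picture``
from the steps above), the predicted label is the index of the largest score:

.. code-block:: python

import numpy as np

outputs = resnet(picture)            # forward pass from the previous step
predicted = int(np.argmax(outputs))  # CIFAR10 class index, 0-9
print(predicted)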
Put it all together
......@@ -153,18 +155,18 @@ Put it all together
.. literalinclude:: ../../../examples/onnx_example.py
:language: python
:lines: 17-37
:caption: "Code to run inference on a CIFAR10 trained model"
:caption: "Demo sample code to run inference with nGraph"
If you tested the ``.onnx`` file used in the example above, the outputs
should look something like:
Outputs will vary greatly, depending on your model; for
demonstration purposes, the output will look something like:
.. code-block:: python
Attempting to write a float64 value to a <Type: 'float32'> tensor. Will attempt type conversion.
array([[ 1.3120822 , -1.6729498 , 4.2079573 , 1.4012246 , -3.5463796 ,
2.343378 , 1.7799224 , -1.6155218 , 0.07770489, -4.2944083 ]],
dtype=float32)
array([[ 1.312082 , -1.6729496, 4.2079577, 1.4012241, -3.5463796,
2.3433776, 1.7799224, -1.6155214, 0.0777044, -4.2944093]],
dtype=float32)
......@@ -175,8 +177,8 @@ should look something like:
.. Importing models from XLA
--------------------------
.. Importing models serialized with XLA
-------------------------------------
.. if you work on XLA you can add this instruction here.
......@@ -187,5 +189,4 @@ should look something like:
.. _ONNX: http://onnx.ai
.. _tutorials from ONNX: https://github.com/onnx/tutorials
.. _already serialized CIFAR10: https://github.com/NervanaSystems/ngraph-onnx-val/tree/master/models
.. _CNTK: https://www.microsoft.com/en-us/cognitive-toolkit/features/model-gallery/
\ No newline at end of file
......@@ -20,30 +20,34 @@ resources, it can either:
* Provide a serialized model that can be imported to run on one of the nGraph
backends.
.. note:: This section is aimed at intermediate-level developers working with
the nGraph++ library. It assumes a developer has understanding of the concepts
in the previous sections. It does not assume knowledge of any particular
frontend framework.
.. note:: This section is aimed at intermediate-level developers. It assumes an
understanding of the concepts in the previous sections. It does not assume
knowledge of any particular frontend framework.
Since our primary audience is developers who are pushing the boundaries of deep
learning systems, we go beyond the use of deep learning primitives, and include
APIs and documentation for developers who want the ability to write programs
that use custom backends. For example, we know that GPU resources can be useful
backends for *some* kinds of algorithmic operations while they impose inherent
limitations and slow down others.
limitations or slow down others.
One of our goals with the nGraph++ library is to enable developers with tools to
build programs that quickly access and process data a breadth of edge and network
devices. Furthermore, we want them to be able to make use of the best kind of
computational resources for the kind of data they are processing, after it has
been gathered.
quickly build programs that access and process data from a breadth of edge and
networked devices. This might mean bringing compute resources closer to edge
devices, or it might mean programmatically adjusting a model or the compute
resources it requires, at an unknown or arbitrary time after it has been deemed
to be trained well enough.
To get started, we've provided a basic example for how to :doc:`execute` a
computation with an nGraph backend; this is analogous to a framework bridge.
This section is under development; it will eventually be populated with more
articles geared toward data scientists, algorithm designers, framework developers,
backend engineers, and others. We welcome ideas and contributions from the
community.
For data scientists or algorithm developers who are trying to extract specifics
about the state of a model at a certain node, or who want to optimize a model
at a more granular level, we provide an example for how to :doc:`import` a
model and run inference after it has been exported from a DL framework.
This section is under development; we'll continually populate it with more
articles geared toward data scientists, algorithm designers, framework developers,
backend engineers, and others. We welcome ideas and contributions from the
community.
......@@ -13,19 +13,22 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------
######################
Intel nGraph++ library
######################
#####################
Intel nGraph library
#####################
Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train :abbr:`Deep Neural Network (DNN)`
models. It is framework-neutral and supports a variety of backends used by
:abbr:`Deep Learning (DL)` frameworks.
.. figure:: graphics/ngraph-hub.png
.. figure:: ../graphics/ngraph-ecosystem.png
:width: 585px
For this early release, we've provided :doc:`framework-integration-guides` to
compile and run MXNet\* and TensorFlow\*-based projects.
compile and run MXNet\* and TensorFlow\*-based projects. If you already have
a trained model, we've got a section on how to :doc:`howto/import` that model
and start working with the nGraph APIs.
.. note:: The library code is under active development as we're continually
adding support for more ops, more frameworks, and more backends.
......
......@@ -16,7 +16,7 @@ with the following packages and prerequisites:
:escape: ~
CentOS 7.4 64-bit, GCC 4.8, CMake 3.2, supported, ``patch diffutils zlib1g-dev libtinfo-dev``
Ubuntu 16.04 (LTS) 64-bit, Clang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 git zlib1g libtinfo-dev``
Ubuntu 16.04 (LTS) 64-bit, Clang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 git curl zlib1g zlib1g-dev libtinfo-dev``
Clear Linux\* OS for Intel Architecture, Clang 5.0.1, CMake 3.10.2, experimental, bundles ``machine-learning-basic dev-utils python3-basic python-basic-dev``
Other configurations may work, but should be considered experimental with
......@@ -32,9 +32,6 @@ flags when building. (**Do NOT** supply the ``-DNGRAPH_USE_PREBUILT_LLVM``
flag in this case, because the prebuilt tarball supplied on llvm.org is not
compatible with a gcc 4.8-based build.)
Support for macOS is limited; see the `macOS development`_ section at the end
of this page for details.
Installation Steps
==================
......@@ -43,6 +40,8 @@ The CMake procedure installs ``ngraph_dist`` to the installing user's ``$HOME``
directory as the default location. See the :file:`CMakeLists.txt` file for
details about how to change or customize the install location.
The process documented here will work on Ubuntu 16.04 (LTS).
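If you'd rather install somewhere else, a sketch using the standard CMake
variable; the :file:`CMakeLists.txt` file remains the authoritative reference,
and the path shown is illustrative:

.. code-block:: console

$ cmake .. -DCMAKE_INSTALL_PREFIX=/opt/libraries/ngraph_dist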
#. (Optional) Create something like ``/opt/libraries`` and (with sudo),
give ownership of that directory to your user. Creating such a placeholder
can be useful if you'd like to have a local reference for APIs and
......@@ -92,12 +91,12 @@ details about how to change or customize the install location.
the `website docs`_ locally. The low-level API docs with inheritance and
collaboration diagrams can be found inside the ``/docs/doxygen/`` directory.
.. _macos_development:
macOS development
-----------------
macOS\* development
--------------------
.. note:: The macOS\* platform is not officially supported.
.. note:: Although we do not offer support for the macOS\* platform, some
configurations and features may work.
The repository includes two scripts (``maint/check-code-format.sh`` and
``maint/apply-code-format.sh``) that are used respectively to check adherence
......@@ -146,9 +145,9 @@ Intel nGraph++ enabled backend
For the former case, in this early |release| release, the :doc:`framework-integration-guides`
can help you get started with training a model on a supported framework.
* :doc:`neon<framework-integration-guides>` framework,
* :doc:`MXNet<framework-integration-guides>` framework,
* :doc:`TensorFlow<framework-integration-guides>` framework, and
* neon™ `frontend framework`_.
For the latter case, if you've followed a tutorial from `ONNX`_, and you have an
exported, serialized model, you can skip the section on frameworks and go directly
......
......@@ -5,11 +5,12 @@ About
Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train
:abbr:`Deep Neural Network (DNN)`models. It is framework-neutral and supports
:abbr:`Deep Neural Network (DNN)` models. It is framework-neutral and supports
a variety of backends used by :abbr:`Deep Learning (DL)` frameworks.
.. figure:: ../graphics/ngraph-hub.png
.. figure:: graphics/ngraph-ecosystem.png
:width: 585px
The nGraph library translates a framework’s representation of computations into
an :abbr:`Intermediate Representation (IR)` designed to promote computational
efficiency on target hardware. Initially-supported backends include Intel
......
......@@ -3,5 +3,5 @@
Release Notes
#############
This is the |release| release.
This is the |version| release.