Commit a2ab7b50 authored by L.S. Cook, committed by Scott Cyphers

Editing so far for review and feedback (#813)

* WIP editing so far for review and feedback

* Add missing env var export for the new neon install process

* Add modified venv setup for TF

* More edits for FW integration and landing page

* Revise from PR feedback

* More PR feedback and editing for clarity

* Minor rewording, clearer explanation

* Final pass edit

* More editing
parent 24afb41e
@@ -116,11 +116,6 @@ html_static_path = ['../ngraph_theme/static']

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ["../"]
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
@@ -189,8 +184,6 @@ texinfo_documents = [
    'Miscellaneous'),
]
html_add_permalinks = "true"
breathe_projects = {
    "ngraph": "../../doxygen/xml",
}
@@ -58,6 +58,7 @@ system that already has an ``ngraph_dist`` installed.

(frameworks)$ cd /opt/libraries/ngraph/python
(frameworks)$ git clone --recursive -b allow-nonconstructible-holders https://github.com/jagerman/pybind11.git
(frameworks)$ export PYBIND_HEADERS_PATH=/opt/libraries/ngraph/python/pybind11
(frameworks)$ pip install -U .

#. Finally we're ready to install the `neon` integration:
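
   As a sketch only (the repository URL and install commands here are assumed for illustration, not taken from this guide; see the `ngraph-neon python README`_ for the authoritative steps), this step typically looks like:

   .. code-block:: console

      (frameworks)$ git clone https://github.com/NervanaSystems/ngraph-neon.git
      (frameworks)$ cd ngraph-neon
      (frameworks)$ pip install -U .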
@@ -169,161 +170,15 @@ Compile MXNet with nGraph

TensorFlow\*
=============

This section describes how to install TensorFlow\* with the bridge code
needed to be able to access nGraph backends. Note that you **do not**
need to have already installed nGraph for this procedure to work.

See the `ngraph tensorflow bridge README`_ for how to install the
nGraph-TensorFlow bridge.
Bridge TensorFlow/XLA to nGraph
-------------------------------
#. Prepare your system with the TensorFlow prerequisite, the build tool
   ``bazel``. These instructions were tested with `bazel version`_ 0.11.0.
.. code-block:: console
$ wget https://github.com/bazelbuild/bazel/releases/download/0.11.0/bazel-0.11.0-installer-linux-x86_64.sh
$ chmod +x bazel-0.11.0-installer-linux-x86_64.sh
$ ./bazel-0.11.0-installer-linux-x86_64.sh --user
#. Add the ``bin`` path that bazel just created to your ``~/.bashrc`` file,
   then source it so that you can call bazel from the user installation we
   just set up:
.. code-block:: bash
export PATH=$PATH:~/bin
.. code-block:: console
$ source ~/.bashrc
#. Ensure that all the other TensorFlow dependencies are installed, as per the
TensorFlow `installation guide`_:
.. important:: CUDA is not needed.
#. After TensorFlow's dependencies are installed, clone the source of the
`ngraph-tensorflow`_ repo to your machine; this is the required fork for
this integration. Many users may prefer to use a Python virtual env from
here forward:
.. code-block:: console
$ python3 -m venv frameworks
$ cd frameworks
$ . bin/activate
$ git clone git@github.com:NervanaSystems/ngraph-tensorflow.git
$ cd ngraph-tensorflow
$ git checkout ngraph-tensorflow-preview-0
#. Now run :command:`./configure` and choose `y` when prompted to build TensorFlow
with XLA :abbr:`Just In Time (JIT)` support.
.. code-block:: console
:emphasize-lines: 6-7
. . .
Do you wish to build TensorFlow with Apache Kafka Platform support? [y/N]: n
No Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]: y
XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]:
No GDR support will be enabled for TensorFlow.
. . .
#. Build and install the pip package:
.. code-block:: console
$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install -U /tmp/tensorflow_pkg/tensorflow-1.*whl
.. note:: The actual name of the Python wheel file may vary; it will track the
   official TensorFlow version, as the ngraph-tensorflow repository is
   synchronized frequently with the upstream TensorFlow repository.
#. Now clone the ``ngraph-tensorflow-bridge`` repo one level above -- in the
parent directory of the ngraph-tensorflow repo cloned in step 4:
.. code-block:: console
$ cd ..
$ git clone https://github.com/NervanaSystems/ngraph-tensorflow-bridge.git
$ cd ngraph-tensorflow-bridge
#. Finally, build and install ``ngraph-tensorflow-bridge``:
.. code-block:: console
$ mkdir build
$ cd build
$ cmake ../
$ make install
This final step automatically downloads the necessary version of ngraph and its
dependencies. The resulting plugin `DSO`_ named ``libngraph_plugin.so`` gets copied
to the following directory inside the TensorFlow installation directory:
::
<Python site-packages>/tensorflow/plugins
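
To confirm the plugin landed where expected, one quick check (a sketch; it
assumes TensorFlow is importable from the active Python environment):

.. code-block:: console

   $ ls "$(python -c 'import os, tensorflow as tf; print(os.path.dirname(tf.__file__))')/plugins"
   libngraph_plugin.so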
Once the build and installation steps are complete, you can start experimenting with
coding for nGraph.
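
For instance, a minimal smoke test might look like the following sketch
(assuming the bridge has registered the ``NGRAPH`` device, as in the example
below; this snippet is illustrative and not part of the repository):

.. code-block:: python

   import tensorflow as tf

   # Place the computation on the nGraph device provided by the plugin.
   with tf.device('/device:NGRAPH:0'):
       a = tf.constant([1.0, 2.0])
       b = tf.constant([3.0, 4.0])
       c = a + b

   # TensorFlow 1.x session API
   with tf.Session() as sess:
       print(sess.run(c))  # expect [4. 6.]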
Run MNIST Softmax with the activated bridge
----------------------------------------------
To see everything working together, you can run the MNIST Softmax example with
the now-activated bridge to nGraph. The script named ``mnist_softmax_ngraph.py``
can be found under the ``ngraph-tensorflow-bridge/test`` directory. It was
modified from the example explained in the TensorFlow\* tutorial; the following
changes were made from the original script:
.. code-block:: python
def main(_):
    with tf.device('/device:NGRAPH:0'):
        run_mnist(_)

def run_mnist(_):
    # Import data
    mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
    ...
To test everything together, set the configuration options:
.. code-block:: bash
export OMP_NUM_THREADS=4
export KMP_AFFINITY=granularity=fine,scatter
And run the script as follows from within the `/test`_ directory of
your cloned version of `ngraph-tensorflow-bridge`_:
.. code-block:: console
$ python mnist_softmax_ngraph.py
.. note:: The number of threads specified in ``OMP_NUM_THREADS`` should be
   chosen as a function of the number of CPU cores available on your system.
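
For example, one way to match the thread count to the available cores on a
Linux system (assuming ``nproc`` from coreutils is available):

.. code-block:: console

   $ export OMP_NUM_THREADS=$(nproc)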
.. _MXNet: http://mxnet.incubator.apache.org
.. _bazel version: https://github.com/bazelbuild/bazel/releases/tag/0.11.0
.. _DSO: http://csweb.cs.wfu.edu/%7Etorgerse/Kokua/More_SGI/007-2360-010/sgi_html/ch03.html
.. _installation guide: https://www.tensorflow.org/install/install_sources#prepare_environment_for_linux
.. _ngraph-tensorflow: https://github.com/NervanaSystems/ngraph-tensorflow
.. _ngraph-tensorflow-bridge: https://github.com/NervanaSystems/ngraph-tensorflow-bridge
.. _/test: https://github.com/NervanaSystems/ngraph-tensorflow-bridge/tree/master/test
.. _ngraph-neon python README: https://github.com/NervanaSystems/ngraph/blob/master/python/README.md
.. _ngraph neon repo's README: https://github.com/NervanaSystems/ngraph-neon/blob/master/README.md
.. _neon docs: https://github.com/NervanaSystems/neon/tree/master/doc
.. _being the fastest: https://github.com/soumith/convnet-benchmarks/
.. _for training CNN-based models with GPUs: https://www.microway.com/hpc-tech-tips/deep-learning-frameworks-survey-tensorflow-torch-theano-caffe-neon-ibm-machine-learning-stack/
.. _ngraph tensorflow bridge README: https://github.com/NervanaSystems/ngraph-tensorflow-bridge
@@ -4,15 +4,16 @@

Derive a trainable model
#########################

Documentation in this section describes one of the ways to derive and run a
trainable model from an inference model.

We can derive a trainable model from any graph that has been constructed with
weight-based updates. For this example named ``mnist_mlp.cpp``, we start with
a hand-designed inference model and convert it to a model that can be trained
with nGraph.

Additionally, and to provide a more complete walk-through that *also* trains the
model, our example includes the use of a simple data loader for uncompressed
MNIST data.
* :ref:`model_overview`

@@ -24,27 +25,46 @@ MNIST data.

- :ref:`update`
.. _understanding_ml_ecosystem:
Understanding the ML ecosystem
===============================
In a :abbr:`Machine Learning (ML)` ecosystem, it makes sense to take advantage
of automation and abstraction as much as possible. As such, nGraph was designed
to integrate with graph construction endpoints (AKA *ops*) handed down to it
from a framework. Our graph-construction API, therefore, needs to operate at a
fundamentally lower level than a typical framework's API. For this reason,
writing a model directly in nGraph would be somewhat akin to programming in
assembly language: not impossible, but not exactly the easiest thing for humans
to do.
.. _model_overview:

Model overview
===============
Due to the lower-level nature of the graph-construction API, the example we've
selected to document here is a relatively simple model: a fully-connected
topology with one hidden layer followed by ``Softmax``.
Remember that in nGraph, the graph is stateless; values for the weights must
be provided as parameters along with the normal inputs. Starting with the graph
for inference, we will use it to create a graph for training. The training
function will return tensors for the updated weights.

.. note:: This example illustrates how to convert an inference model into one
   that can be trained. Depending on the framework, bridge code may do something
   similar, or the framework might do this operation itself. Here we do the
   conversion with nGraph because the computation for training a model is
   significantly larger than for inference, and doing the conversion manually
   is tedious and error-prone.
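
To make the idea concrete, here is a rough sketch (not the ``mnist_mlp.cpp``
code; the shapes, the learning rate, and the explicitly-passed gradient are
all assumptions for illustration) of a training-step function whose outputs
include the updated weight tensor:

.. code-block:: cpp

   #include <memory>
   #include <vector>
   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   std::shared_ptr<Function> make_training_step()
   {
       // Inputs: data X, current weight W, and a gradient supplied by the
       // caller (a real training graph would derive it with autodiff).
       auto X = std::make_shared<op::Parameter>(element::f32, Shape{1, 4});
       auto W = std::make_shared<op::Parameter>(element::f32, Shape{4, 1});
       auto grad = std::make_shared<op::Parameter>(element::f32, Shape{4, 1});

       auto Y = std::make_shared<op::Dot>(X, W); // the inference output

       // W_next = W - rate * grad; returned as an additional result so the
       // caller can feed it back in as W on the next step.
       auto rate = op::Constant::create(element::f32, Shape{4, 1},
                                        std::vector<float>(4, 0.1f));
       auto W_next = std::make_shared<op::Subtract>(
           W, std::make_shared<op::Multiply>(rate, grad));

       return std::make_shared<Function>(NodeVector{Y, W_next},
                                         op::ParameterVector{X, W, grad});
   }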
.. _code_structure:

Code structure
==============
@@ -6,15 +6,10 @@ Import a model

:ref:`from_onnx`
nGraph APIs can be used to run inference on a model that has been *exported*
from a Deep Learning framework. An export produces a file with a serialized
model that can be loaded and passed to one of the nGraph backends.
.. _from_onnx:

@@ -40,60 +35,59 @@ to build an nGraph representation of the model, execute it, and produce some
outputs.
Installing ``ngraph_onnx`` with nGraph from scratch
====================================================

To use ONNX models with nGraph, you will also need the companion tool
``ngraph_onnx``, which requires Python 3.4 or higher. If nGraph has not
yet been installed to your system, you can follow these steps to install
everything at once; if an `ngraph_dist` is already installed on your system,
skip ahead to the next section, :ref:`install_ngonnx`.

#. Prepare to install the nGraph library by building a Python3 wheel.

   .. code-block:: console

      # apt update
      # apt install python3 python3-pip python3-dev
      # apt install build-essential cmake curl clang-3.9 git zlib1g zlib1g-dev libtinfo-dev
      $ git clone https://github.com/NervanaSystems/ngraph.git
      $ cd ngraph/python
      $ ./build_python3_wheel.sh

#. After the Python3 binary wheel file (``ngraph-*.whl``) is prepared, install
   with :command:`pip3`, or :command:`pip` in a virtual environment.

   .. code-block:: console

      (your_venv) $ pip install -U build/dist/ngraph-0.1.0-cp35-cp35m-linux_x86_64.whl

#. Confirm ngraph is properly installed through a Python interpreter:

   .. code-block:: console

      (your_venv) $ python3

   .. code-block:: python

      import ngraph as ng
      ng.abs([[1, 2, 3], [4, 5, 6]])
      <Abs: 'Abs_1' ([2, 3])>

   If you don't see any errors, ngraph should be installed correctly.

.. _install_ngonnx:

Installing ngraph-onnx
-----------------------

Install the ``ngraph-onnx`` companion tool using pip:

.. code-block:: console

   (your_venv) $ pip install git+https://github.com/NervanaSystems/ngraph-onnx/
Importing a serialized model

@@ -111,7 +105,7 @@ ONNX.
Enable ONNX and load an ONNX file from disk
--------------------------------------------

.. literalinclude:: ../../../examples/onnx_example.py
   :language: python
@@ -119,15 +113,34 @@ Enable ONNX and load an ONNX file from disk

Convert an ONNX model to an ngraph model
-------------------------------------------

.. literalinclude:: ../../../examples/onnx_example.py
   :language: python
   :lines: 22-23
The importer returns a list of ngraph models for every ONNX graph
output:

.. code-block:: python

   print(ng_models)
   [{
       'name': 'Plus5475_Output_0',
       'output': <Add: 'Add_1972' ([1, 10])>,
       'inputs': [<Parameter: 'Parameter_1104' ([1, 3, 32, 32], float)>]
   }]

The ``output`` field contains the ngraph node corresponding to the output node
in the imported ONNX computational graph. The ``inputs`` list contains all
input parameters for the computation which generates the output.
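
As a small sketch of working with this structure (the variable names below
come from the printed output above; the snippet is illustrative), you can
unpack the first model before building a computation:

.. code-block:: python

   model = ng_models[0]
   name = model['name']          # name of the ONNX graph output
   ng_output = model['output']   # ngraph node producing that output
   ng_inputs = model['inputs']   # ngraph Parameter nodes feeding it

   print(name, ng_output, ng_inputs)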
Using ngraph_api, create a callable computation object
-------------------------------------------------------

.. literalinclude:: ../../../examples/onnx_example.py
   :language: python
@@ -135,14 +148,14 @@ Using ngraph_api, create a callable computation object

Load or create an image
------------------------

.. literalinclude:: ../../../examples/onnx_example.py
   :language: python
   :lines: 32-33
Run ResNet inference on picture
---------------------------------

.. literalinclude:: ../../../examples/onnx_example.py
   :language: python
@@ -173,7 +186,7 @@ demonstration purposes, the code will look something like:

.. Importing models from NNVM
   ---------------------------

.. if you work on NNVM you can add this instruction here.

@@ -186,7 +199,7 @@ demonstration purposes, the code will look something like:

.. etc, eof
.. _ngraph-onnx: https://github.com/NervanaSystems/ngraph-onnx#ngraph
.. _ONNX: http://onnx.ai
.. _tutorials from ONNX: https://github.com/onnx/tutorials
.. _CNTK: https://www.microsoft.com/en-us/cognitive-toolkit/features/model-gallery/
\ No newline at end of file
@@ -5,11 +5,11 @@ Make a stateful computation
###########################

In this section, we show how to make a stateful computation from
nGraph's stateless operations. The basic idea is that any computation
with side-effects can be factored into a stateless function that
transforms the old state into the new state.
An example from C++
===================

Let's start with a simple C++ example, a function ``count`` that
@@ -18,11 +18,12 @@ returns how many times it has already been called:

.. literalinclude:: ../../../examples/update.cpp
   :language: cpp
   :lines: 20-24
   :caption: update.cpp
The static variable ``counter`` provides state for this function. The
state is initialized to 0. Every time ``count`` is called, the current
value of ``counter`` is returned and ``counter`` is incremented. To
convert this to use a stateless function, define a function that
takes the current value of ``counter`` as an argument and returns the
updated value.
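
A minimal sketch of that stateless version (an illustration, not the
``update.cpp`` source; the caller owns the state and threads it through
each call):

.. code-block:: cpp

   #include <iostream>
   #include <utility>

   // Takes the current counter value; returns the value to report and
   // the updated counter.
   std::pair<int, int> count(int counter)
   {
       return {counter, counter + 1};
   }

   int main()
   {
       int counter = 0;
       for (int i = 0; i < 3; ++i)
       {
           auto result = count(counter);
           std::cout << result.first << "\n"; // prints 0, 1, 2
           counter = result.second;           // caller carries the state
       }
   }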
@@ -39,10 +40,11 @@ To use this version of counting,

Update in nGraph
================
In working with nGraph-based construction of graphs, updating takes
the same approach. During training, we include all the weights as
arguments to the training function and return the updated weights
along with any other results. For more complex forms of training,
such as those using momentum, we would add the momentum tensors
as additional arguments and include their updated values as additional
results. A simple case is illustrated in the documentation for how
to :doc:`derive-for-training`.
@@ -13,58 +13,120 @@

.. limitations under the License.
.. ---------------------------------------------------------------------------
.. This documentation is available online at
.. http://ngraph.nervanasys.com/docs/latest/

########
nGraph
########

Welcome to the nGraph documentation site. nGraph is an open-source C++ library
and runtime / compiler suite for :abbr:`Deep Learning (DL)` ecosystems. Our goal
is to empower algorithm designers, data scientists, framework architects,
software engineers, and others with the means to make their work :ref:`portable`,
:ref:`adaptable`, and :ref:`deployable` across the most modern
:abbr:`Machine Learning (ML)` hardware available today: optimized Deep Learning
computation devices.
.. figure:: graphics/ngraph-ecosystem.png
   :width: 650px
.. _portable:

Portable
========
One of nGraph's key features is **framework neutrality**. While we currently
support :doc:`three popular <framework-integration-guides>` frameworks with
pre-optimized deployment runtimes for training :abbr:`Deep Neural Network (DNN)`
models, you are not limited to these when choosing among frontends. Architects
of any framework (even those not listed above) can use our documentation for how
to :doc:`compile and run <howto/execute>` a training model and design or tweak
a framework to bridge directly to the nGraph compiler. With a *portable* model
at the core of your :abbr:`DL (Deep Learning)` ecosystem, it's no longer
necessary to bring large datasets to the model for training; you can take your
model -- in whole, or in part -- to where the data lives and save potentially
significant or quantifiable machine resources.
.. _adaptable:

Adaptable
=========
We've recently begun support for the `ONNX`_ format. Developers who already have
a "trained" :abbr:`DNN (Deep Neural Network)` model can use nGraph to bypass
significant framework-based complexity and :doc:`import it <howto/import>`
to test or run on targeted and efficient backends with our user-friendly
Python-based API. See the `ngraph onnx companion tool`_ to get started.
.. csv-table::
   :header: "Framework", "Bridge Code Available?", "ONNX Support?"
   :widths: 27, 10, 10

   TensorFlow, Yes, Yes
   MXNet, Yes, Yes
   neon, none needed, Yes
   PyTorch, Not yet, Yes
   CNTK, Not yet, Yes
   Other, Not yet, Doable
.. _deployable:
Deployable
==========
It's no secret that the :abbr:`DL (Deep Learning)` ecosystem is evolving
rapidly. Benchmarking comparisons can be blown steeply out of proportion by
subtle tweaks to batch or latency numbers here and there. Where traditional
GPU-based training excels, inference can lag and vice versa. Sometimes what we
care about is not "speed at training a large dataset" but rather latency
compiling a complex multi-layer algorithm locally, and then outputting back to
an edge network, where it can be analyzed by an already-trained model.
Indeed, when choosing among topologies, it is important to not lose sight of
the ultimate deployability and machine-runtime demands of your component in
the larger ecosystem. It doesn't make sense to use a heavy-duty backhoe to
plant a flower bulb. Furthermore, if you are trying to develop an entirely
new genre of modeling for a :abbr:`DNN (Deep Neural Network)` component, it
may be especially beneficial to consider ahead of time how portable and
mobile you want that model to be within the rapidly-changing ecosystem.
With nGraph, any modern CPU can be used to design, write, test, and deploy
a training or inference model. You can then adapt and update that same core
model to run on a variety of backends:
.. csv-table::
   :header: "Backend", "Current nGraph support", "Future nGraph support"
   :widths: 35, 10, 10

   Intel® Architecture Processors (CPUs), Yes, Yes
   Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
   NVIDIA\* CUDA (GPUs), Yes, Some
   :abbr:`Field Programmable Gate Arrays (FPGA)` (FPGAs), Coming soon, Yes
   `Movidius`_, Not yet, Yes
   Other, Not yet, Ask
The value we're offering to the developer community is empowerment: we are
confident that Intel® Architecture already provides the best computational
resources available for the breadth of ML/DL tasks. We welcome ideas and
`contributions`_ from the community.
Further project details can be found on our :doc:`project/about` page, or see
our :doc:`install` guide for how to get started.
.. note:: The library code is under active development as we're continually
   adding support for more kinds of DL models and ops, framework compiler
   optimizations, and backends.
=======

@@ -91,3 +153,7 @@ Indices and tables

* :ref:`genindex`
.. _ONNX: http://onnx.ai
.. _ngraph onnx companion tool: https://github.com/NervanaSystems/ngraph-onnx
.. _Movidius: https://www.movidius.com/
.. _contributions: https://github.com/NervanaSystems/ngraph#how-to-contribute