Commit f8926a7b authored by L.S. Cook, committed by Robert Kimball

Upstream for versioning (#1309)

* update framework docs

* revise docs with new MXNet bridge code instructions

parent 87b5758d
@@ -8,101 +8,42 @@ Integrate Supported Frameworks
* :ref:`tensorflow_intg`
* :ref:`neon_intg`

A framework is "supported" when there is a framework :term:`bridge` that can
be cloned from one of our GitHub repos and built to connect to a supported
backend with nGraph, all the while maintaining the framework's programmatic
or user interface. Current bridge-enabled frameworks include TensorFlow* and
MXNet*.

Once connected via the bridge, the framework can run and train a deep
learning model with various workloads on various backends, using nGraph
Compiler as an optimizing compiler available through the framework.

.. _mxnet_intg:

MXNet\* bridge
==============

Compile MXNet with nGraph
--------------------------
.. important:: As of version |version|, these instructions presume that your
   system already has the Library installed to the default location, as
   outlined in our :doc:`install` documentation. If the |nGl| code has not yet
   been installed on your system, please go back and install it, then return
   here to finish compiling MXNet with ``libngraph``.

#. Set ``LD_LIBRARY_PATH`` to the location where we built the nGraph
   libraries:

   .. code-block:: bash

      export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib/
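
   To confirm the runtime library is where the bridge build expects it (a
   quick check, assuming the default install location from the install
   guide), look for ``libngraph`` here:

   .. code-block:: console

      $ ls $HOME/ngraph_dist/lib/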
#. Add the `MXNet`_ prerequisites to your system, if the system doesn't have them
already. These requirements are Ubuntu\*-specific.
   .. code-block:: console

      $ sudo apt-get install -y libopencv-dev curl libatlas-base-dev python \
        python-pip python-dev python-opencv graphviz python-scipy python-sklearn \
        libopenblas-dev
#. Clone the ``ngraph-mxnet`` repository recursively:
.. code-block:: console
$ git clone --recursive git@github.com:NervanaSystems/ngraph-mxnet.git
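
   The remaining steps are run from the top of the checkout (the directory
   created by the clone):

   .. code-block:: console

      $ cd ngraph-mxnet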
#. Edit the ``make/config.mk`` file in the repo we just cloned: set the
   ``USE_NGRAPH`` option (line ``100``) to ``1``, and set :envvar:`NGRAPH_DIR`
   (line ``101``) to point to the installation location of ``ngraph_dist``:
.. code-block:: bash
USE_NGRAPH = 1
NGRAPH_DIR = $(HOME)/ngraph_dist
#. Ensure that ``USE_MKL2017`` (line ``113``) and ``USE_NNPACK``
   (line ``120``) are disabled in the config file:
.. code-block:: bash
# whether use MKL2017 library
USE_MKL2017 = 0
# whether use MKL2017 experimental feature for high performance
# Prerequisite USE_MKL2017=1
USE_MKL2017_EXPERIMENTAL = 0
# whether use NNPACK library
USE_NNPACK = 0
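
   To apply the settings from steps 4 and 5 non-interactively, something like
   the following sketch should work; it matches the options by name rather
   than by line number, since line numbers can drift between releases:

   .. code-block:: console

      $ sed -i -e 's|^USE_NGRAPH.*|USE_NGRAPH = 1|' \
               -e 's|^NGRAPH_DIR.*|NGRAPH_DIR = $(HOME)/ngraph_dist|' \
               -e 's|^USE_MKL2017 .*|USE_MKL2017 = 0|' \
               -e 's|^USE_MKL2017_EXPERIMENTAL.*|USE_MKL2017_EXPERIMENTAL = 0|' \
               -e 's|^USE_NNPACK.*|USE_NNPACK = 0|' make/config.mk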
#. Finally, compile MXNet with |InG|:
.. code-block:: console
$ make -j $(nproc)
#. After successfully running ``make``, install the Python integration packages
that your MXNet build needs to run a training example.
   .. code-block:: console

      $ cd python && pip install -e . && cd ../
#. Confirm a successful integration by running the MNIST training example:
.. code-block:: console
$ python example/image-classification/train_mnist.py
#. See the README on the `nGraph-MXNet`_ Integration for how to enable the
   bridge.

#. (Optional) For experimental or alternative approaches to distributed
   training methodologies, including data parallel training, see the
   MXNet-relevant sections of the :doc:`distr/index` and
   :doc:`How to <howto/index>` articles on :doc:`howto/distribute-train`.

.. _tensorflow_intg:

TensorFlow\* bridge
===================

See the `ngraph tensorflow bridge README`_ for how to install the `DSO`_ for the
nGraph-TensorFlow bridge.
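
A typical first step is to get the bridge source (a sketch; the README above
is the authoritative, up-to-date reference for the full build):

.. code-block:: console

   $ git clone https://github.com/NervanaSystems/ngraph-tf.git
   $ cd ngraph-tf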
@@ -176,12 +117,12 @@ system that already has an ``ngraph_dist`` installed.
and :doc:`How to <howto/index>` articles on :doc:`howto/distribute-train`.
.. _nGraph-MXNet: https://github.com/NervanaSystems/ngraph-mxnet/blob/master/NGRAPH_README.md
.. _MXNet: http://mxnet.incubator.apache.org
.. _DSO: http://csweb.cs.wfu.edu/%7Etorgerse/Kokua/More_SGI/007-2360-010/sgi_html/ch03.html
.. _ngraph-neon python README: https://github.com/NervanaSystems/ngraph/blob/master/python/README.md
.. _ngraph neon repo's README: https://github.com/NervanaSystems/ngraph-neon/blob/master/README.md
.. _neon docs: https://github.com/NervanaSystems/neon/tree/master/doc
.. _being the fastest: https://github.com/soumith/convnet-benchmarks
.. _for training CNN-based models with GPUs: https://www.microway.com/hpc-tech-tips/deep-learning-frameworks-survey-tensorflow-torch-theano-caffe-neon-ibm-machine-learning-stack
.. _ngraph tensorflow bridge README: https://github.com/NervanaSystems/ngraph-tf
.. frameworks/generic.rst
Working with generic frameworks
###############################
An engineer may want to work with a deep learning framework that does not yet
have bridge code written. For non-supported or “generic” frameworks, it is
expected that engineers will use the nGraph library to create custom bridge code,
and/or to design and document a user interface (UI) with specific runtime
options for whatever custom use case they need.
The two primary tasks that can be accomplished in the “bridge code” space of
the nGraph Abstraction layer are: (1) compiling a dataflow graph and (2)
executing a pre-compiled graph. See the :doc:`../framework-integration-guides`
for how we have built bridges with other frameworks. For more in-depth help
with writing graph optimizations and bridge code, we provide articles on how
to :doc:`../fusion/index` and how to programmatically :doc:`../howto/execute`
a computation that can target various compute resources using nGraph, once a
framework provides some inputs to be computed.
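
Bridge code is ordinarily compiled and linked against ``libngraph``. As a
rough sketch (assuming the default ``ngraph_dist`` install location used
elsewhere in these docs, and a hypothetical ``my_bridge.cpp`` source file):

.. code-block:: console

   $ g++ my_bridge.cpp -std=c++11 \
     -I $HOME/ngraph_dist/include \
     -L $HOME/ngraph_dist/lib -lngraph \
     -Wl,-rpath,$HOME/ngraph_dist/lib -o my_bridge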
Activate nGraph |trade| on generic frameworks
=============================================
@@ -141,11 +141,11 @@ Contents
install.rst
graph-basics.rst
howto/index.rst
ops/index.rst
framework-integration-guides.rst
frameworks/index.rst
fusion/index.rst
programmable/index.rst
distr/index.rst
python_api/index.rst
@@ -44,10 +44,63 @@ Alphabetical list of Core ``ops``
Not currently a comprehensive list.
.. tabularcolumns:: column spec
.. hlist::
:columns: 3
* :doc:`abs`
* :doc:`acos`
* :doc:`add`
* :doc:`allreduce`
* :doc:`and`
* :doc:`asin`
* :doc:`atan`
* :doc:`avg_pool`
* :doc:`avg_pool_backprop`
* :doc:`batch_norm`
* :doc:`broadcast`
* :doc:`ceiling`
* :doc:`concat`
* :doc:`constant`
* :doc:`convert`
* :doc:`convolution`
* :doc:`cos`
* :doc:`cosh`
* :doc:`divide`
* :doc:`dot`
* :doc:`equal`
* :doc:`exp`
* :doc:`floor`
* :doc:`function_call`
* :doc:`get_output_element`
* :doc:`greater`
* :doc:`greater_eq`
* :doc:`less`
* :doc:`less_eq`
* :doc:`log`
* :doc:`max`
* :doc:`maximum`
* :doc:`max_pool`
* :doc:`min`
* :doc:`minimum`
* :doc:`multiply`
* :doc:`negative`
* :doc:`not`
* :doc:`not_equal`
* :doc:`one_hot`
* :doc:`or`
* :doc:`pad`
* :doc:`parameter`
* :doc:`power`
* :doc:`product`
* :doc:`relu`
* :doc:`sigmoid`
* :doc:`softmax`
* :doc:`tanh`
.. toctree::
:maxdepth: 1
:hidden:
abs.rst
acos.rst