Commit e2322b66 authored by Leona C

Previously-tested MXNet detail may not be current

parent 177372c3
@@ -21,9 +21,9 @@ from a framework on a CPU, GPU, or ASIC; it can also be used with an
*Interpreter* mode, which is primarily intended for testing, to analyze a
program, or to help a framework developer customize targeted solutions.

nGraph also provides a way to use the advanced tensor compiler PlaidML
as a backend; you can learn more about this backend and how to build it
from source in our documentation: :ref:`ngraph_plaidml_backend`.

.. csv-table::
   :header: "Backend", "Current nGraph support", "Future nGraph support"
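The idea behind offering interchangeable backends (CPU, GPU, Interpreter, PlaidML) behind one graph API can be illustrated with a toy registry. All names below are hypothetical; this is a conceptual sketch, not nGraph's actual API:

```python
# Toy illustration of a pluggable-backend registry, loosely modeled on the
# idea of selecting a reference "INTERPRETER" backend at runtime.
# Hypothetical names throughout -- nGraph's real API differs.

BACKENDS = {}

def register_backend(name):
    """Decorator that records a backend class under a string name."""
    def wrap(cls):
        BACKENDS[name] = cls
        return cls
    return wrap

@register_backend("INTERPRETER")
class InterpreterBackend:
    """Reference backend: easy to inspect, not optimized."""
    def run(self, graph, inputs):
        # Naive evaluation: apply each op, in order, to every input value.
        values = inputs
        for op in graph:
            values = [op(v) for v in values]
        return values

def create_backend(name):
    """Look up a backend by name, mirroring a Backend::create-style call."""
    try:
        return BACKENDS[name]()
    except KeyError:
        raise ValueError(f"no backend registered under {name!r}")

# A "graph" here is just an ordered list of ops.
graph = [lambda x: x * 2, lambda x: x + 1]
backend = create_backend("INTERPRETER")
print(backend.run(graph, [3, 4]))  # [7, 9]
```

The registry pattern is why a testing-oriented Interpreter backend and an optimizing hardware backend can execute the same program without the framework bridge changing.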
@@ -31,7 +31,6 @@ program, or to help a framework developer customize targeted solutions.
   Intel® Architecture Processors (CPUs), Yes, Yes
   Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
   AMD\* GPUs, Yes, Some
@@ -10,7 +10,7 @@ workloads:

* :ref:`tensorflow_valid`
* :ref:`mxnet_valid`
* :ref:`onnx_valid`
* :ref:`testing_latency`

.. _tensorflow_valid:
.. project/contribution-guide.rst:
.. _contribution_guide:
##################
Contribution guide

@@ -261,5 +264,8 @@ it is automatically enforced and reduces merge conflicts.
To contribute documentation for your code, please see the :doc:`doc-contributor-README`.
.. include:: doc-contributor-README.rst
.. _Apache 2: https://www.apache.org/licenses/LICENSE-2.0
.. _repo wiki: https://github.com/NervanaSystems/ngraph/wiki
.. project/extras/testing_latency.rst:
.. _testing_latency:
Testing latency
===============
.. important:: This tutorial was tested with previous versions. It is not
   officially supported in the latest nGraph Compiler stack |version|, but
   some configuration options may still work.
Many open-source DL frameworks provide a layer where experts in data science
can make use of optimizations contributed by machine learning engineers. Having
a common API benefits both: it simplifies deployment and makes it easier for ML
engineers working on advanced deep learning hardware to bring highly-optimized
performance to a wide range of models, especially in inference.

One DL framework with advancing efforts on graph optimizations is Apache
MXNet\*, where `Intel has contributed efforts showing`_ how to work with our
@@ -17,7 +24,7 @@ nGraph Compiler stack as an `experimental backend`_. Our approach provides
more optimizations **than would be available to the MXNet framework alone**, for
reasons outlined in our `introduction`_ documentation. Note that the
MXNet bridge requires trained models only; it does not support distributed
training.
@@ -62,7 +69,7 @@ install MXNet to the virtual environment:

Now we're ready to use nGraph to run any model on a CPU backend. Building MXNet
with nGraph automatically enables nGraph on your model scripts, and you
shouldn't need to do anything special. If you run into trouble, you can disable
nGraph by setting

.. code-block:: console
@@ -81,14 +88,14 @@ Note that the nGraph-MXNet bridge supports static graphs only (dynamic graphs
are in the works); so for this example, we begin by converting the gluon model
into a static graph. Also note that any model with a saved checkpoint can be
considered a "static graph" in nGraph. For this example, we'll presume that the
model is pre-trained.
.. literalinclude:: ../../../../examples/subgraph_snippets/mxnet-gluon-example.py
   :language: python
   :lines: 17-32
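The gluon-to-static conversion described above amounts to tracing: run the imperative model once on a symbolic input, record the operations it applies, and keep that record as a static graph. The following toy tracer illustrates the concept only; the names are hypothetical and this is not the MXNet or nGraph API:

```python
# Toy sketch of "hybridizing" a dynamic model into a static graph:
# record the ops applied to a symbolic input, then replay them later
# without re-running any Python control flow. Hypothetical names only.

class SymbolicInput:
    """Stand-in tensor that records every op applied to it."""
    def __init__(self):
        self.ops = []  # recorded (name, fn) pairs

    def apply(self, name, fn):
        self.ops.append((name, fn))
        return self

def dynamic_model(x):
    # "Imperative" model code; each call is recorded when x is symbolic.
    x = x.apply("scale", lambda v: v * 0.5)
    x = x.apply("shift", lambda v: v + 1.0)
    return x

def trace(model):
    """Run the model once on a symbolic input; return the static op list."""
    sym = SymbolicInput()
    model(sym)
    return [fn for _, fn in sym.ops]

def execute(static_graph, value):
    """Replay the recorded graph on a concrete value."""
    for fn in static_graph:
        value = fn(value)
    return value

graph = trace(dynamic_model)
print(execute(graph, 4.0))  # 3.0
```

This is also why a model with a saved checkpoint can be treated as a static graph: the recorded op sequence, like a checkpoint, fixes the computation independently of the Python code that produced it.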
To load the model into nGraph, we simply bind the symbol into an Executor.
.. literalinclude:: ../../../../examples/subgraph_snippets/mxnet-gluon-example.py
   :language: python