Commit 18d20b7b authored by Leona C, committed by Scott Cyphers

Fix version module and add latest release notes; update some doc indexes and menus (#3714)

* Release notes 0.26.0-rc.3 and revise bridges overview

* Fix version module and add latest release notes; update some doc menus

* Let release notes be discoverable

* Documentation master index

* Documentation index pg

* Update ops for version

* Update illustration

* Remove old illustration

* Add note for 0-rc.4

* Add section on NGRAPH_PROFILE_PASS to backend API

* rc-0.5

* Final preview 0.26 release notes

* Note changes on Sum validation

* Fix link syntax that breaks on Sphinx 3.0

* Revert file that triggers unnecessary review

* Typo fix

* PR feedback

* Consistency on section title capitalizations

* Update link

* Section title consistency
parent b7a87656
...@@ -84,9 +84,9 @@ ...@@ -84,9 +84,9 @@
<body> <body>
<div id="menu-float" class="menu-float"> <div id="menu-float" class="menu-float">
<a href="https://www.ngraph.ai">Home</a> <a href="https://www.ngraph.ai" target="_blank">Home</a>
<a href="https://www.youtube.com/embed/C9S0nmNS8bQ">Video</a> <a href="https://www.youtube.com/embed/C9S0nmNS8bQ" target="_blank">Video</a>
<a href="https://www.ngraph.ai/ecosystem">Ecosystem</a> <a href="https://www.ngraph.ai/ecosystem" target="_blank">Ecosystem</a>
<a href="https://ngraph.nervanasys.com/docs/latest">Docs</a> <a href="https://ngraph.nervanasys.com/docs/latest">Docs</a>
<a href="https://www.ngraph.ai/tutorials">Tutorials</a> <a href="https://www.ngraph.ai/tutorials">Tutorials</a>
<a href="https://ngraph.slack.com/"><img src="https://cdn.brandfolder.io/5H442O3W/as/pl546j-7le8zk-5h439l/Slack_Mark_Monochrome_White.png?width=35&height=35"></a> <a href="https://ngraph.slack.com/"><img src="https://cdn.brandfolder.io/5H442O3W/as/pl546j-7le8zk-5h439l/Slack_Mark_Monochrome_White.png?width=35&height=35"></a>
......
...@@ -10,13 +10,12 @@ ...@@ -10,13 +10,12 @@
<dd><!-- Until our https://docs.ngraph.ai/ publishing is set up, we link to GitHub --> <dd><!-- Until our https://docs.ngraph.ai/ publishing is set up, we link to GitHub -->
<ul> <ul>
<!-- <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26">0.26</a></li> --> <!-- <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26">0.26</a></li> -->
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0">0.26.0</a></li> <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0-rc.3">Prerelease 0.26</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0">0.25.1</a></li> <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.1-rc.4">0.25.1</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.0">0.25.0</a></li> <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.0">0.25.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.24.0">0.24.0</a></li> <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.24.0">0.24.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.23.0">0.23.0</a></li> <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.2-rc.0">0.22.2</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.0">0.22.0</a></li> <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.1">0.22.1</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.21.0">0.21.0</a></li>
</ul></dd> </ul></dd>
</dl> </dl>
<dl> <dl>
......
...@@ -1645,6 +1645,7 @@ h1 { ...@@ -1645,6 +1645,7 @@ h1 {
h2 { h2 {
font-size: 133%; font-size: 133%;
text-decoration: underline 4px dotted #D3D3D3; text-decoration: underline 4px dotted #D3D3D3;
margin-top: -2px;
} }
h2, .rst-content .toctree-wrapper p.caption { h2, .rst-content .toctree-wrapper p.caption {
......
...@@ -51,7 +51,7 @@ How to use? ...@@ -51,7 +51,7 @@ How to use?
#. A single iteration of the executable is executed by calling the ``call`` #. A single iteration of the executable is executed by calling the ``call``
method on the ``Executable`` object. method on the ``Executable`` object.
.. figure:: ../graphics/execution-interface.png .. figure:: ../graphics/ExecutionInterfaceRunGraphs.png
:width: 650px :width: 650px
The execution interface for nGraph The execution interface for nGraph
...@@ -69,6 +69,13 @@ interface; each backend implements the following five functions: ...@@ -69,6 +69,13 @@ interface; each backend implements the following five functions:
* And, finally, the ``call()`` method is used to invoke an nGraph function * And, finally, the ``call()`` method is used to invoke an nGraph function
against a particular set of tensors. against a particular set of tensors.
How to display nGraph passes executed at runtime
------------------------------------------------
One easy way to get info about passes is to set the environment variable
:envvar:`NGRAPH_PROFILE_PASS_ENABLE` to ``1``. With this set, the pass
manager will dump the name and execution time of each pass.
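The note above can be sketched as a shell snippet; ``your_app`` is a hypothetical stand-in for any binary built against nGraph:

```shell
# Enable pass profiling for this shell session; any nGraph-backed
# program started afterwards will dump each pass name and its time.
export NGRAPH_PROFILE_PASS_ENABLE=1
# ./your_app   # hypothetical binary built against nGraph
echo "$NGRAPH_PROFILE_PASS_ENABLE"
```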
.. _ngraph_bridge: .. _ngraph_bridge:
......
.. howto/index: .. core/constructing-graphs/index.rst:
Constructing graphs .. _constructing_graphs:
Constructing Graphs
=================== ===================
.. toctree:: .. toctree::
......
.. fusion/index.rst: .. fusion/index.rst:
Pattern matcher Pattern Matcher
############### ###############
.. toctree:: .. toctree::
......
.. fusion/overview.rst .. core/fusion/overview.rst
.. _fusion_overview:
Overview: Optimize graphs with nGraph Compiler fusions Overview: Optimize graphs with nGraph Compiler fusions
------------------------------------------------------- -------------------------------------------------------
......
.. core/overview.rst: .. core/overview.rst:
Overview Basic concepts
======== ==============
.. figure:: ../graphics/whole-stack.png .. figure:: ../graphics/nGraphCompilerstack.png
:alt: The whole stack :alt: The whole stack
The whole nGraph Compiler stack The whole nGraph Compiler stack
...@@ -13,7 +13,7 @@ The nGraph Compiler stack consists of bridges, core, and backends. We'll examine ...@@ -13,7 +13,7 @@ The nGraph Compiler stack consists of bridges, core, and backends. We'll examine
each of these briefly to get started. each of these briefly to get started.
A framework bridge interfaces with the "frontend" Core API. A framework bridge A framework bridge interfaces with the "frontend" Core API. A framework bridge
is a component that sits between a framework like TensorFlow or MXNet, and the is a component that sits between a framework like TensorFlow or PaddlePaddle, and the
nGraph Core frontend API. A framework bridge does two things: first, it nGraph Core frontend API. A framework bridge does two things: first, it
translates a framework's operations into graphs in nGraph’s in-memory translates a framework's operations into graphs in nGraph’s in-memory
:abbr:`Intermediate Representation (IR)`. Second, it executes the nGraph IR :abbr:`Intermediate Representation (IR)`. Second, it executes the nGraph IR
...@@ -24,8 +24,9 @@ are some common patterns: a fairly typical example for a graph-based framework ...@@ -24,8 +24,9 @@ are some common patterns: a fairly typical example for a graph-based framework
is illustrated here, and consists of basically two phases: a **clustering** is illustrated here, and consists of basically two phases: a **clustering**
phase and a **translation** phase. phase and a **translation** phase.
.. figure:: ../graphics/translation-flow-to-ng-fofx.png .. figure:: ../graphics/overview-translation-flow.svg
:alt: The whole stack :width: 725px
:alt: Translation flow to an nGraph function graph
Translation flow to an nGraph function Translation flow to an nGraph function
......
.. fusion/passes-that-use-matcher.rst: .. core/passes/passes-that-use-matcher.rst:
Passes that use Matcher Passes that use Matcher
......
.. core/passes: .. core/passes/passes.rst:
Compiler passes .. _core_compiler_passes:
Compiler Passes
=============== ===============
.. toctree:: .. toctree::
...@@ -11,9 +13,8 @@ Compiler passes ...@@ -11,9 +13,8 @@ Compiler passes
passes-that-use-matcher.rst passes-that-use-matcher.rst
Basic concepts
Overview --------------
--------
*Generic graph optimization passes* *Generic graph optimization passes*
......
:orphan:
.. frameworks/fw_overview:
.. _fw_overview:
Overview
========
A framework is "supported" with a framework :term:`bridge` that can be written or
cloned and used to connect to nGraph device backends while maintaining the
framework's programmatic or user interface. There is a bridge for the
`TensorFlow framework`_. We also have a :doc:`paddle_integ` bridge. Intel
previously contributed work to an MXNet bridge; however, support for this
bridge is no longer active.
`ONNX`_ on its own is not a framework; however, it can be used with nGraph's
:doc:`../python_api/index` to import and execute ONNX models.
.. figure:: ../graphics/overview-framework-bridges.svg
:width: 960px
:alt: Framework bridge to a graph construction
Framework bridge to nGraph
Once connected via the bridge, the framework can then run and train a deep
learning model with various workloads on various backends using nGraph Compiler
as an optimizing compiler available through the framework.
While a :abbr:`Deep Learning (DL)` :term:`framework` is ultimately meant for
end use by data scientists, or for deployment in cloud container environments,
nGraph Core ops and the nGraph C++ Library are designed for framework builders
themselves. We invite anyone working on new and novel frameworks or neural
network designs to explore our highly-modularized stack of components that
can be implemented or integrated in countless ways.
Please read this section if you are considering incorporating components from
the nGraph Compiler stack in your framework or neural network design. Contents
here are also useful if you are working on something built-from-scratch, or on
an existing framework that is less widely-supported than the popular frameworks
like TensorFlow and PyTorch.
.. figure:: ../graphics/overview-translation-flow.svg
:width: 725px
:alt: Translation flow to nGraph function graph
.. _TensorFlow framework: https://github.com/tensorflow/ngraph-bridge/README.md
.. _ONNX: http://onnx.ai/
.. _tune the workload to extract best performance: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations
.. _a few small: https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi
.. frameworks/index.rst .. frameworks/index.rst
Working with Frameworks Working with Frameworks
======================= #######################
.. include:: overview.rst
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
getting_started.rst overview.rst
quickstart.rst
onnx_integ.rst onnx_integ.rst
paddle_integ.rst paddle_integ.rst
tensorflow_connect.rst tensorflow_connect.rst
generic_configs.rst other.rst
.. frameworks/onnx_integ.rst: .. frameworks/onnx_integ.rst:
ONNX Support ONNX overview
============ =============
nGraph is able to import and execute ONNX models. Models are converted to nGraph is able to import and execute ONNX models. Models are converted to
nGraph's :abbr:`Intermediate Representation (IR)` and converted to ``Function`` nGraph's :abbr:`Intermediate Representation (IR)` and converted to ``Function``
...@@ -78,7 +78,7 @@ data: ...@@ -78,7 +78,7 @@ data:
Find more information about nGraph and ONNX in the Find more information about nGraph and ONNX in the
`nGraph ONNX`_ GitHub\* repository. `nGraph ONNX`_ GitHub repository.
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx .. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
......
.. frameworks/generic_configs.rst: .. frameworks/other.rst:
.. _generic_configs: .. _fw_other:
Integrating new frameworks Integrating other frameworks
========================== ============================
This section details some of the *configuration options* and some of the This section details some of the *configuration options* and some of the
*environment variables* that can be used to tune for optimal performance when *environment variables* that can be used to tune for optimal performance when
......
.. frameworks/overview.rst .. frameworks/overview.rst
.. _fw_overview: Basic concepts
==============
Overview .. figure:: ../graphics/overview-framework-bridges.svg
======== :width: 960px
:alt: Bridge to nGraph graph construction API
A framework is "supported" with a framework :term:`bridge` that can be written or A framework bridge connects to the nGraph graph construction API
cloned and used to connect to nGraph device backends while maintaining the
framework's programmatic or user interface. A `bridge currently exists`_ for the
TensorFlow framework. We also have a bridge to do :doc:`paddle_integ`. Intel
previously contributed work to an MXNet bridge; however, support for this
bridge is no longer active.
`ONNX`_ on its own is not a framework; however, it can be used with nGraph's To understand how a data science :term:`framework` (:doc:`TensorFlow <tensorflow_connect>`,
:doc:`../python_api/index` to import and execute ONNX models. PyTorch, :doc:`paddle_integ`, and others) can unlock acceleration available in
the nGraph Compiler, it helps to familiarize yourself with some basic concepts.
.. figure:: ../graphics/overview-framework-bridges.svg We use the term :term:`bridge` to describe code that connects to any nGraph
:width: 960px device backend(s) while maintaining the framework's programmatic or user
:alt: JiT compiling of a computation interface. We have a `bridge for the TensorFlow framework`_. We also have a
:doc:`paddle_integ` bridge. Intel previously :doc:`contributed work to an MXNet bridge <../project/extras/testing_latency>`;
however, support for the MXNet bridge is no longer active.
`ONNX`_ on its own is not a framework; it can be used with nGraph's
:doc:`../python_api/index` to import and execute ONNX models.
:abbr:`Just-in-Time (JiT)` Compiling for computation. nGraph `Core` Because it is framework agnostic (providing opportunities to optimize at the
components are colored in blue. graph level), nGraph can do the heavy lifting required by many popular
:doc:`workloads <validated/list>` without any additional effort of the framework user.
Optimizations that were previously available only after careful integration of
a kernel or hardware-specific library are exposed via the
:doc:`Core graph construction API <../core/constructing-graphs/index>`.
Once connected via the bridge, the framework can then run and train a deep The illustration above shows how this works.
learning model with various workloads on various backends using nGraph Compiler
as an optimizing compiler available through the framework.
While a :abbr:`Deep Learning (DL)` :term:`framework` is ultimately meant for While a :abbr:`Deep Learning (DL)` framework is ultimately meant for end-use by
end use by data scientists, or for deployment in cloud container environments, data scientists, or for deployment in cloud container environments, nGraph's
nGraph Core ops and the nGraph C++ Library are designed for framework builders :doc:`Core ops <../core/overview>` are designed for framework builders themselves.
themselves. We invite anyone working on new and novel frameworks or neural We invite anyone working on new and novel frameworks or neural network designs
network designs to explore our highly-modularized stack of components that to explore our highly-modularized stack of components.
can be implemented or integrated in countless ways.
Please read this section if you are considering incorporating components from Please read the :doc:`other` section for other framework-agnostic
the nGraph Compiler stack in your framework or neural network design. Contents configurations available to users of the nGraph Compiler stack.
here are also useful if you are working on something built-from-scratch, or on
an existing framework that is less widely-supported than the popular frameworks
like TensorFlow and PyTorch.
.. figure:: ../graphics/overview-translation-flow.svg .. figure:: ../graphics/overview-translation-flow.svg
:width: 725px :width: 725px
:alt: Translation flow to nGraph function graph :alt: Translation flow to an nGraph function graph
.. _bridge currently exists: https://github.com/tensorflow/ngraph-bridge/README.md .. _bridge for the TensorFlow framework: https://github.com/tensorflow/ngraph-bridge/README.md
.. _ONNX: http://onnx.ai/ .. _ONNX: http://onnx.ai/
.. _tune the workload to extract best performance: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations .. _tune the workload to extract best performance: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations
.. _a few small: https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi .. _a few small: https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi
\ No newline at end of file
.. frameworks/paddle_integ.rst: .. frameworks/paddle_integ.rst:
PaddlePaddle PaddlePaddle integration
============ ========================
nGraph PaddlePaddle integration overview
----------------------------------------
PaddlePaddle is an open source deep learning framework developed by Baidu. It PaddlePaddle is an open source deep learning framework developed by Baidu. It
aims to enable performant large-scale distributed computation for deep learning. aims to enable performant large-scale distributed computation for deep learning.
...@@ -67,7 +64,7 @@ is organized in the following file structure: ...@@ -67,7 +64,7 @@ is organized in the following file structure:
.. _figure-paddle-dir: .. _figure-paddle-dir:
.. figure:: ../graphics/paddlepaddle_directory.png .. figure:: ../graphics/PaddlePaddleDir.svg
:width: 555px :width: 555px
:alt: :alt:
...@@ -135,8 +132,8 @@ and nGraph bridges are provided below: ...@@ -135,8 +132,8 @@ and nGraph bridges are provided below:
are managed through a map. are managed through a map.
- SetOutputNode: sets the constructed node to the map. - SetOutputNode: sets the constructed node to the map.
- Related code : - Related code :
+ ``Paddle/fluid/operators/ngraph/ngraph_bridge.h`` `link to ngraph_bridge header code`_ - ``Paddle/fluid/operators/ngraph/ngraph_bridge.h`` `link to ngraph_bridge header code`_
+ ``Paddle/fluid/operators/ngraph/ngraph_bridge.cc`` `link to ngraph_bridge cpp code`_ - ``Paddle/fluid/operators/ngraph/ngraph_bridge.cc`` `link to ngraph_bridge cpp code`_
nGraph compilation control and trigger method nGraph compilation control and trigger method
---------------------------------------------- ----------------------------------------------
...@@ -149,8 +146,9 @@ nGraph compilation control and trigger method ...@@ -149,8 +146,9 @@ nGraph compilation control and trigger method
#. **Trigger Control** -- ``FLAGS_use_ngraph`` triggers nGraph. If this option #. **Trigger Control** -- ``FLAGS_use_ngraph`` triggers nGraph. If this option
is set to ``true``, nGraph will be triggered by the PaddlePaddle executor is set to ``true``, nGraph will be triggered by the PaddlePaddle executor
to convert and execute the supported subgraph. `Examples are provided`_ under to convert and execute the supported subgraph. Demos are provided under
``paddle/benchmark/fluid/ngraph``. ``paddle/fluid/train/demo`` (see `train_demo`_) and ``paddle/fluid/train/imdb_demo``
(see `imdb_demo`_).
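As a minimal sketch of the trigger control described above (assuming PaddlePaddle's usual convention of reading ``FLAGS_*`` gflags from the environment; ``demo_trainer`` is a hypothetical binary name):

```shell
# Ask the PaddlePaddle executor to hand supported subgraphs to nGraph.
export FLAGS_use_ngraph=true
# ./demo_trainer   # hypothetical trainer; with the flag set to true,
#                  # nGraph converts and executes the supported subgraph
echo "$FLAGS_use_ngraph"
```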
.. _link to ngraph_engine_op header code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_engine_op.h .. _link to ngraph_engine_op header code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_engine_op.h
...@@ -160,5 +158,5 @@ nGraph compilation control and trigger method ...@@ -160,5 +158,5 @@ nGraph compilation control and trigger method
.. _located in the ngraph ops: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators/ngraph/ops .. _located in the ngraph ops: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators/ngraph/ops
.. _link to ngraph_bridge header code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_bridge.h .. _link to ngraph_bridge header code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_bridge.h
.. _link to ngraph_bridge cpp code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_bridge.cc .. _link to ngraph_bridge cpp code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_bridge.cc
.. _Examples are provided: https://github.com/PaddlePaddle/Paddle/tree/develop/benchmark/fluid .. _train_demo: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/train/demo
.. _imdb_demo: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/train/imdb_demo
.. frameworks/getting_started.rst .. frameworks/quickstart.rst
Getting Started .. _fw_quickstart:
###############
Quick start
===========
No matter what your level of experience with :abbr:`Deep Learning (DL)` systems No matter what your level of experience with :abbr:`Deep Learning (DL)` systems
may be, nGraph provides a path to start working with the DL stack. Let's begin may be, nGraph provides a path to start working with the DL stack. Let's begin
with the easiest and most straightforward options. with the easiest and most straightforward options.
.. figure:: ../graphics/translation-flow-to-ng-fofx.png TensorFlow
:width: 725px ----------
:alt: Translation flow to nGraph function graph
The easiest way to get started is to use the latest PyPI `ngraph-tensorflow-bridge`_, The easiest way to get started is to use the latest PyPI `ngraph-tensorflow-bridge`_,
which has instructions for Linux* systems, and tips for users of Mac OS X. which has instructions for Linux\* systems, and tips for users of Mac OS X.
You can install TensorFlow\* and nGraph to a virtual environment; otherwise, the code You can install TensorFlow and nGraph in a virtual environment; otherwise, the code
will install to a system location. will install to a system location.
.. code-block:: console .. code-block:: console
...@@ -43,17 +44,17 @@ Output will look something like: ...@@ -43,17 +44,17 @@ Output will look something like:
TensorFlow version used for this build: v[version-hash] TensorFlow version used for this build: v[version-hash]
CXX11_ABI flag used for this build: boolean CXX11_ABI flag used for this build: boolean
More detail in the `ngraph_bridge examples`_ directory. More detail in the `ngraph_bridge examples`_ directory.
See also the `diagnostic tools`_.
ONNX ONNX
==== ----
Another easy way to get started working with the :abbr:`DL (Deep Learning)` Another easy way to get started working with the :abbr:`DL (Deep Learning)`
stack is to try the examples available via `nGraph ONNX`_. stack is to try the examples available via `nGraph ONNX`_.
Installation
------------
To prepare your environment to use nGraph and ONNX, install the Python packages To prepare your environment to use nGraph and ONNX, install the Python packages
for nGraph, ONNX and NumPy: for nGraph, ONNX and NumPy:
...@@ -67,18 +68,18 @@ Now you can start exploring some of the :doc:`onnx_integ` examples. ...@@ -67,18 +68,18 @@ Now you can start exploring some of the :doc:`onnx_integ` examples.
See also nGraph's :doc:`../python_api/index`. See also nGraph's :doc:`../python_api/index`.
PlaidML PlaidML
======= -------
See the :ref:`ngraph_plaidml_backend` section on how to build the See the :ref:`ngraph_plaidml_backend` section on how to build the
nGraph-PlaidML. nGraph-PlaidML.
Other integration paths Other integration paths
======================= -----------------------
If you are considering incorporating components from the nGraph Compiler stack If you are considering incorporating components from the nGraph Compiler stack
in your framework or neural network design, another useful doc is the section in your framework or neural network design, another useful doc is the section
on :doc:`generic-configs`. Contents here are also useful if you are working on on :doc:`other`. Contents here are also useful if you are working on
something built-from-scratch, or on an existing framework that is less something built-from-scratch, or on an existing framework that is less
widely-supported than the popular frameworks like TensorFlow and PyTorch. widely-supported than the popular frameworks like TensorFlow and PyTorch.
...@@ -86,3 +87,4 @@ widely-supported than the popular frameworks like TensorFlow and PyTorch. ...@@ -86,3 +87,4 @@ widely-supported than the popular frameworks like TensorFlow and PyTorch.
.. _ngraph-tensorflow-bridge: https://pypi.org/project/ngraph-tensorflow-bridge .. _ngraph-tensorflow-bridge: https://pypi.org/project/ngraph-tensorflow-bridge
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx .. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
.. _ngraph_bridge examples: https://github.com/tensorflow/ngraph-bridge/blob/master/examples/README.md .. _ngraph_bridge examples: https://github.com/tensorflow/ngraph-bridge/blob/master/examples/README.md
.. _diagnostic tools: https://github.com/tensorflow/ngraph-bridge/blob/master/diagnostics/README.md
\ No newline at end of file
.. frameworks/tensorflow_connect.rst: .. frameworks/tensorflow_connect.rst:
Connect TensorFlow\* nGraph Bridge for TensorFlow
==================== ============================
See the `README`_ on the `ngraph_bridge repo`_ for the many ways to connect See the `README`_ on the `ngraph_bridge repo`_ for the many ways to connect
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
.. _validated: .. _validated:
Validated workloads Validated Workloads
################### ###################
We have validated performance [#f1]_ for the following workloads: We have validated performance [#f1]_ for the following workloads:
......
...@@ -12,9 +12,9 @@ ...@@ -12,9 +12,9 @@
.. limitations under the License. .. limitations under the License.
.. --------------------------------------------------------------------------- .. ---------------------------------------------------------------------------
######################
nGraph Compiler stack nGraph Compiler stack Documentation
###################### ###################################
.. _ngraph_home: .. _ngraph_home:
...@@ -23,36 +23,25 @@ nGraph Compiler stack ...@@ -23,36 +23,25 @@ nGraph Compiler stack
nGraph Compiler stack documentation for version |version|. nGraph Compiler stack documentation for version |version|.
Documentation for the latest (master) development branch can be found
at https://ngraph.nervanasys.com/docs/latest
.. https://docs.ngraph.ai/
.. only:: (development or daily) .. only:: (development or daily)
nGraph Compiler stack documentation for the master tree under development nGraph Compiler stack documentation for the master tree under development
(version |version|). (version |version|).
The nGraph Library and Compiler stack are provided under the `Apache 2.0 license`_
(found in the LICENSE file in the project's `repo`_). It may also import or reference
packages, scripts, and other files that use licensing.
.. _Apache 2.0 license: https://github.com/NervanaSystems/ngraph/blob/master/LICENSE
.. _repo: https://github.com/NervanaSystems/ngraph
.. toctree:: .. toctree::
:name: mastertoctree :name: mastertoctree
:titlesonly: :titlesonly:
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
:caption: Getting Started
introduction.rst introduction.rst
features.rst features.rst
project/release-notes.rst
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 2
:caption: Framework Support :caption: Framework Support
frameworks/index.rst frameworks/index.rst
...@@ -89,15 +78,20 @@ packages, scripts, and other files that use licensing. ...@@ -89,15 +78,20 @@ packages, scripts, and other files that use licensing.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
:caption: Project Metadata :caption: Contributing
project/release-notes.rst
project/contribution-guide.rst project/contribution-guide.rst
project/index.rst
project/extras/index.rst
glossary.rst glossary.rst
.. toctree::
:maxdepth: 1
:hidden:
project/release-notes.rst
project/index.rst
project/extras/index.rst
.. only:: html .. only:: html
......
...@@ -160,15 +160,30 @@ Not currently a comprehensive list. ...@@ -160,15 +160,30 @@ Not currently a comprehensive list.
More about Core Ops More about Core Ops
------------------- -------------------
An ``Op``'s primary role is to function as a node in a directed acyclic graph An ``Op``'s primary role is to function as a node in a directed acyclic
dependency computation graph. computation graph.
*Core ops* are ops that are available and generally useful to all framework *Core ops* are ops that are available and generally useful to all framework
bridges and that can be compiled by all transformers. A framework bridge may bridges and that can be compiled by all transformers. A framework bridge may
define framework-specific ops to simplify graph construction, provided that the define framework-specific ops to simplify graph construction, provided that the
bridge can enable every transformer to replace all such ops with equivalent bridge can enable every transformer to replace all such ops with equivalent
clusters or subgraphs composed of core ops. In a similar manner, transformers may define clusters or subgraphs composed of core ops. In a similar manner, transformers
transformer-specific ops to represent kernels or other intermediate operations. may define transformer-specific ops to represent kernels or other intermediate
operations.
The input and output ports of ops are accessed through the functions that work
with ``Output<Node>``/``Input<Node>``. Functions that previously worked at the
level of ops are deprecated; for example::
Node::get_element_type()
because it does not identify which output is meant. This function has been
replaced with new functions like::
Node::get_output_element_type(index)
where there is no ambiguity.
If a framework supports extending the set of ops it offers, a bridge may even If a framework supports extending the set of ops it offers, a bridge may even
expose transformer-specific ops to the framework user. expose transformer-specific ops to the framework user.
......
...@@ -44,6 +44,6 @@ Outputs ...@@ -44,6 +44,6 @@ Outputs
C++ Interface C++ Interface
============= =============
.. doxygenclass:: ngraph::op::Max .. doxygenclass:: ngraph::op::v0::Max
:project: ngraph :project: ngraph
:members: m_axes :members: m_axes
...@@ -44,6 +44,6 @@ Outputs ...@@ -44,6 +44,6 @@ Outputs
C++ Interface C++ Interface
============= =============
.. doxygenclass:: ngraph::op::Min .. doxygenclass:: ngraph::op::v0::Min
:project: ngraph :project: ngraph
:members: m_axes :members: m_axes
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
.. _contribution_guide: .. _contribution_guide:
################## ##################
Contribution guide Contribution Guide
################## ##################
......
...@@ -2,6 +2,8 @@ ...@@ -2,6 +2,8 @@
:orphan: :orphan:
.. _release_notes:
Release Notes Release Notes
############# #############
...@@ -18,19 +20,33 @@ We are pleased to announce the release of version |version|. ...@@ -18,19 +20,33 @@ We are pleased to announce the release of version |version|.
Core updates for |version| Core updates for |version|
-------------------------- --------------------------
+ All ops support ``Output<Node>`` arguments + All ops support ``Output<Node>`` arguments
+ Additional ops + Additional ops
+ ONNX handling unknown domains + ONNX handling unknown domains
+ Provenance works with builders and fused ops
+ ``RPATH`` for finding openmpi
+ Negative indices/axes fixes
+ Migrate some ``get_argument`` removals
+ Better support for MKL-DNN 1.0 (DNNL)
Latest documentation updates
----------------------------
Latest documentation updates for |version|
------------------------------------------
+ Note the only support for nGPU is now through PlaidML; nGraph support for nGPU + Note the only support for nGPU is now through PlaidML; nGraph support for nGPU
(via cuDNN) has been deprecated. (via cuDNN) has been deprecated.
+ iGPU works only with nGraph version `0.24`. + iGPU works only with nGraph version `0.24`.
+ Add a new Sphinx-friendly theme (can be built locally as an alternative to the ngraph.ai docs).
+ Update the PaddlePaddle documentation to reflect the demo directories instead of the example directory.
+ Update the doc regarding validation of the ``Sum`` op.
.. important:: Pre-releases (``-rc-0.*``) have newer features, and are less stable. .. important:: Pre-releases (``-rc-0.*``) have newer features, and are less stable.
Changelog on Previous Releases Changelog on Previous Releases
============================== ==============================
...@@ -41,7 +57,7 @@ Changelog on Previous Releases ...@@ -41,7 +57,7 @@ Changelog on Previous Releases
+ Add rank id to trace file name + Add rank id to trace file name
+ Allow provenance merging to be disabled + Allow provenance merging to be disabled
+ Remove some white-listed compiler warnings + Remove some white-listed compiler warnings
+ Provenance on builders and fused op expansions + Provenance, builders, ops that make ops, and fused op expansions
0.25.0 0.25.0
...@@ -136,8 +152,8 @@ Changelog on Previous Releases ...@@ -136,8 +152,8 @@ Changelog on Previous Releases
+ Provenance improvements + Provenance improvements
0.19 pre-0.20
---- --------
+ More dynamic shape preparation + More dynamic shape preparation
+ Distributed interface factored out + Distributed interface factored out
...@@ -153,11 +169,7 @@ Changelog on Previous Releases ...@@ -153,11 +169,7 @@ Changelog on Previous Releases
+ Add graph visualization tools to doc + Add graph visualization tools to doc
+ Update doxygen to be friendlier to frontends + Update doxygen to be friendlier to frontends
.. 0.18
0.18
----
+ Python formatting issue + Python formatting issue
+ mkl-dnn work-around + mkl-dnn work-around
+ Event tracing improvements + Event tracing improvements
...@@ -166,10 +178,7 @@ Changelog on Previous Releases ...@@ -166,10 +178,7 @@ Changelog on Previous Releases
+ ONNX quantization + ONNX quantization
+ More fusions + More fusions
.. 0.17
0.17
----
+ Allow negative padding in more places + Allow negative padding in more places
+ Add code generation for some quantized ops + Add code generation for some quantized ops
+ Preliminary dynamic shape support + Preliminary dynamic shape support
...@@ -177,10 +186,7 @@ Changelog on Previous Releases ...@@ -177,10 +186,7 @@ Changelog on Previous Releases
+ Pad op takes CoordinateDiff instead of Shape pad values to allow for negative + Pad op takes CoordinateDiff instead of Shape pad values to allow for negative
padding. padding.
.. 0.16
0.16
----
+ NodeInput and NodeOutput classes prepare for simplifications of Node + NodeInput and NodeOutput classes prepare for simplifications of Node
+ Test improvements + Test improvements
+ Additional quantization ops + Additional quantization ops
......
:orphan:
.. toctree::
:includehidden:
frameworks/index
project/index
python_api/index
inspection/index
core/overview
backends/index
project/extras/index