Commit 14f16bc1 authored by Leona C, committed by Scott Cyphers

Leona/doc v0.20 (#2971)

* Cleanup section

* Add updated illustrations for pattern_matcher and tensor_descriptor

* Add subsection link to be consistent
parent 39cdee0e
.. backend-support/backend-api/index.rst:
Backend
=======
.. doxygenclass:: ngraph::runtime::Backend
:project: ngraph
:members:
@@ -3,11 +3,14 @@
Backend APIs
############
* :ref:`backend-api`
* :ref:`executable-api`
* :ref:`tensor-api`
* :ref:`hosttensor-api`
* :ref:`plaidml-ng-api`
.. toctree::
:maxdepth: 1
backend-api/index
executable-api/index
hosttensor-api/index
plaidml-ng-api/index
tensor-api/index
As of version ``0.15``, there is a new backend API to work with functions that
@@ -18,96 +21,3 @@ more direct methods to actions such as ``validate``, ``call``, ``get_performance_data``
*out of* storage and makes it easier to distinguish when a Function is compiled,
thus making the internals of the ``Backend`` and ``Executable`` easier to
implement.
How to use?
-----------
#. Create a ``Backend``; think of it as a compiler.
#. A ``Backend`` can then produce an ``Executable`` by calling ``compile``.
#. A single iteration of the executable is executed by calling the ``call``
method on the ``Executable`` object.
.. figure:: ../graphics/execution-interface.png
:width: 650px
The execution interface for nGraph
The nGraph execution API for ``Executable`` objects is a simple, five-method
interface; each backend implements the following five functions, all of which
are exercised in the sketch after this list:
* The ``create_tensor()`` method allows the bridge to create tensor objects
in host memory or an accelerator's memory.
* The ``write()`` and ``read()`` methods are used to transfer raw data into
and out of tensors that reside in off-host memory.
* The ``compile()`` method instructs the backend to prepare an nGraph function
for later execution.
* And, finally, the ``call()`` method is used to invoke an nGraph function
against a particular set of tensors.
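
The following is a minimal sketch of that flow, assuming the post-``0.15``
C++ API described above; the ``"CPU"`` backend name, the toy ``op::Add``
function, and the three-argument ``write``/``read`` signatures are
illustrative and may differ in your nGraph build.

.. code-block:: cpp

   #include <memory>
   #include <vector>

   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   int main()
   {
       // Build a trivial function: f(a, b) = a + b
       Shape shape{2, 2};
       auto a = std::make_shared<op::Parameter>(element::f32, shape);
       auto b = std::make_shared<op::Parameter>(element::f32, shape);
       auto f = std::make_shared<Function>(std::make_shared<op::Add>(a, b),
                                           ParameterVector{a, b});

       // 1. Create a Backend; think of it as a compiler.
       auto backend = runtime::Backend::create("CPU");

       // 2. compile() prepares the function and produces an Executable.
       auto exec = backend->compile(f);

       // create_tensor() allocates storage in host or accelerator memory.
       auto t_a = backend->create_tensor(element::f32, shape);
       auto t_b = backend->create_tensor(element::f32, shape);
       auto t_result = backend->create_tensor(element::f32, shape);

       // write() transfers raw data into the input tensors ...
       std::vector<float> v_a{1, 2, 3, 4};
       std::vector<float> v_b{5, 6, 7, 8};
       t_a->write(v_a.data(), 0, v_a.size() * sizeof(float));
       t_b->write(v_b.data(), 0, v_b.size() * sizeof(float));

       // 3. call() runs a single iteration of the executable ...
       exec->call({t_result}, {t_a, t_b});

       // ... and read() transfers the result back out.
       std::vector<float> v_result(4);
       t_result->read(v_result.data(), 0, v_result.size() * sizeof(float));
   }
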
.. _backend-api:
Backend
=======
.. figure:: ../graphics/backend-dgm.png
:width: 650px
Various backends are accessible via nGraph core APIs
.. doxygenclass:: ngraph::runtime::Backend
:project: ngraph
:members:
.. _executable-api:
Executable
==========
.. figure:: ../graphics/runtime-exec.png
:width: 650px
Calling ``compile`` on a ``Backend`` returns an ``Executable``, which provides
more direct methods for actions such as ``validate``, ``call``,
``get_performance_data``, and so on.
.. doxygenclass:: ngraph::runtime::Executable
:project: ngraph
:members:
.. _tensor-api:
Tensor
======
.. doxygenclass:: ngraph::runtime::Tensor
:project: ngraph
:members:
.. _hosttensor-api:
HostTensor
==========
.. doxygenclass:: ngraph::runtime::HostTensor
:project: ngraph
:members:
.. _plaidml-ng-api:
PlaidML
=======
.. doxygenclass:: ngraph::runtime::plaidml::PlaidML_Backend
:project: ngraph
:members:
\ No newline at end of file
.. backend-support/executable-api/index.rst:
Executable
==========
Calling ``compile`` on a ``Backend`` returns an ``Executable``, which provides
more direct methods for actions such as ``validate``, ``call``,
``get_performance_data``, and so on; a short usage sketch follows the class
reference below.
.. doxygenclass:: ngraph::runtime::Executable
:project: ngraph
:members:
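
The sketch below is a hedged illustration of those calls: it compiles with
performance collection enabled, validates the argument tensors, runs one
iteration, and prints per-op timings. The ``enable_performance_data`` flag on
``compile`` and the ``PerformanceCounter`` accessors are recalled from this
era of the API and may differ in your version.

.. code-block:: cpp

   #include <iostream>
   #include <memory>
   #include <vector>

   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   void run_with_profiling(runtime::Backend& backend,
                           const std::shared_ptr<Function>& f,
                           const std::vector<std::shared_ptr<runtime::Tensor>>& outputs,
                           const std::vector<std::shared_ptr<runtime::Tensor>>& inputs)
   {
       // Ask the backend to record per-op timings while executing.
       auto exec = backend.compile(f, /*enable_performance_data=*/true);

       // validate() checks that the supplied tensors match the compiled
       // function's signature; call() then runs one iteration.
       exec->validate(outputs, inputs);
       exec->call(outputs, inputs);

       // get_performance_data() reports how long each op took.
       for (const runtime::PerformanceCounter& pc : exec->get_performance_data())
       {
           std::cout << pc.name() << ": " << pc.total_microseconds() << " us ("
                     << pc.call_count() << " calls)\n";
       }
   }
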
.. backend-support/hosttensor-api/index.rst:
HostTensor
==========
.. doxygenclass:: ngraph::runtime::HostTensor
:project: ngraph
:members:
.. backend-support/index.rst
About backends
##############
* :ref:`what_is_backend`
* :ref:`hybrid_transformer`
* :ref:`cpu_backend`
* :ref:`plaidml_backend`
* :ref:`gpu_backend`
* :ref:`how_to_use`
.. _what_is_backend:
@@ -23,7 +19,7 @@ from a framework on a CPU or GPU; or it can be used with an *Interpreter* mode,
which is primarily intended for testing, to analyze a program, or to help a
framework developer customize targeted solutions. Experimental APIs to support
current and future nGraph Backends are also available; see, for example, the
section on the :ref:`plaidml_backend`.
section on :doc:`plaidml-ng-api/index`.
.. csv-table::
:header: "Backend", "Current nGraph support", "Future nGraph support"
@@ -35,39 +31,31 @@ section on the :ref:`plaidml_backend`.
AMD\* GPUs, Yes, Some
.. _hybrid_transformer:
Hybrid Transformer
==================
More detail coming soon
.. _cpu_backend:
CPU Backend
===========
More detail coming soon
.. _how_to_use:
How to use?
-----------
#. Create a ``Backend``; think of it as a compiler.
#. A ``Backend`` can then produce an ``Executable`` by calling ``compile``.
#. A single iteration of the executable is executed by calling the ``call``
method on the ``Executable`` object.
.. figure:: ../graphics/execution-interface.png
:width: 650px
The execution interface for nGraph
The nGraph execution API for ``Executable`` objects is a simple, five-method
interface; each backend implements the following five functions:
* The ``create_tensor()`` method allows the bridge to create tensor objects
in host memory or an accelerator's memory.
* The ``write()`` and ``read()`` methods are used to transfer raw data into
and out of tensors that reside in off-host memory.
* The ``compile()`` method instructs the backend to prepare an nGraph function
for later execution.
* And, finally, the ``call()`` method is used to invoke an nGraph function
against a particular set of tensors.
.. _gpu_backend:
GPU Backend
===========
More detail coming soon
.. _plaidml_backend:
PlaidML Backend
===============
The nGraph ecosystem has recently added initial (experimental) support for
`PlaidML`_, an advanced :abbr:`Machine Learning (ML)` library that can
accelerate training of models built on GPUs. When you select ``PlaidML`` as a
backend, it acts as an advanced tensor compiler that can further speed up
training with large data sets.
.. _PlaidML: https://github.com/plaidml
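
A minimal sketch of selecting this backend follows; it assumes that nGraph was
built with PlaidML support enabled and that the backend registers under the
name ``"PlaidML"``. Both are assumptions here, and everything after ``create``
follows the same compile-and-call flow as any other backend.

.. code-block:: cpp

   #include <memory>

   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   // Hypothetical helper: compile a function for the experimental PlaidML
   // backend. The "PlaidML" registration name is an assumption; if the backend
   // is not available in this build, Backend::create is expected to fail.
   std::shared_ptr<runtime::Executable> compile_with_plaidml(
       const std::shared_ptr<Function>& f)
   {
       auto backend = runtime::Backend::create("PlaidML");
       return backend->compile(f);
   }
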
.. plaidml-ng-api/index.rst:
PlaidML from nGraph
===================
.. doxygenclass:: ngraph::runtime::plaidml::PlaidML_Backend
:project: ngraph
:members:
\ No newline at end of file
.. backend-support/tensor-api/index.rst:
Tensor
======
.. doxygenclass:: ngraph::runtime::Tensor
:project: ngraph
:members:
@@ -14,7 +14,20 @@ capture a given :term:`function graph` and perform a series of optimization
passes over that graph. The result is a semantically-equivalent graph that, when
executed using any :doc:`backend <../../backend-support/index>`, has optimizations
inherent at the hardware level: superior runtime characteristics to increase
training performance or reduce inference latency.
.. figure:: ../../graphics/classngraph_patternmatcher.png
:width: 95%
``ngraph::pattern::Matcher`` compares two graphs
.. doxygenclass:: ngraph::pattern::Matcher
:project: ngraph
:members:
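
For orientation, here is a hedged sketch of how a ``Matcher`` is typically used
inside a fusion pass: build a small pattern graph whose leaves are wildcard
``pattern::op::Label`` nodes, then ask the ``Matcher`` whether a candidate
node's subgraph has the same structure. Constructor overloads and the exact
matching rules vary between nGraph versions, so treat this as illustrative.

.. code-block:: cpp

   #include <memory>

   #include <ngraph/ngraph.hpp>
   #include <ngraph/pattern/matcher.hpp>
   #include <ngraph/pattern/op/label.hpp>

   using namespace ngraph;

   // Returns true when `candidate` is an Add whose two inputs can be anything;
   // the Labels act as wildcards in the pattern graph.
   bool is_add_of_two_values(const std::shared_ptr<Node>& candidate)
   {
       auto a = std::make_shared<pattern::op::Label>(element::f32, Shape{2, 2});
       auto b = std::make_shared<pattern::op::Label>(element::f32, Shape{2, 2});
       auto pattern_graph = std::make_shared<op::Add>(a, b);

       pattern::Matcher matcher(pattern_graph);
       return matcher.match(candidate);
   }

In a real fusion pass, a successful match is usually followed by querying the
matcher's pattern map to retrieve the nodes bound to each ``Label``, after
which the matched subgraph can be replaced with a fused op.
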
Fusion
======
There are several ways to describe what happens when we capture and translate
the framework's output of ops into an nGraph graph. :term:`Fusion` is the term
......
@@ -129,8 +129,8 @@ Glossary
Tensors are maps from *coordinates* to scalar values, all of the
same type, called the *element type* of the tensor.
.. figure:: graphics/descriptor-of-tensor.png
:width: 559px
.. figure:: graphics/classngraph_tensor_descriptor.png
:width: 90%
Tensorview
......