Commit b0d86687 authored by Leona C, committed by Scott Cyphers

ngraph.ai theme (#2654)

* [ONNX] Add documentation

* Update documentation contributor's instructions

* Doc theme to match ngraph.ai

* Minor formatting fixes and PR feedback

* ToC fixes

* ToC fixes

* Add changes

* Be consistent with BUILDDIR

* Be consistent with substitution

* Update Makefile
parent 4f586563
#
# Robust Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS   =
SPHINXBUILD  = sphinx-build
-SPHINXPROJ   = IntelnGraphlibrary
+SPHINXPROJ   = nGraphCompilerStack
SOURCEDIR    = source
BUILDDIR     = build
+ALLSPHINXOPTS = ${SOURCEDIR}

# Put it first so that "make" without argument is like "make help".
help:
......
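For reference, the variable block above is the stock sphinx-quickstart preamble; the `help` and catch-all targets that consume these variables typically look like the sketch below (the standard generated layout, not necessarily this repository's exact file):

```make
# Standard sphinx-quickstart targets (sketch; the repo's file may differ).
# "make" with no arguments prints Sphinx's help, via the first target.
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all: route any other target (html, latexpdf, ...) to sphinx-build's
# "make mode" (-M), e.g. `make html` builds into $(BUILDDIR)/html.
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
```

With this layout, `make html` renders the docs under `$(BUILDDIR)/html`, which is why the commit message stresses being consistent with `BUILDDIR`.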
@@ -25,6 +25,15 @@ framework developer customize targeted solutions. Experimental APIs to support
current and future nGraph Backends are also available; see, for example, the
section on the :ref:`plaidml_backend`.

+.. csv-table::
+   :header: "Backend", "Current nGraph support", "Future nGraph support"
+   :widths: 35, 10, 10
+
+   Intel® Architecture Processors (CPUs), Yes, Yes
+   Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
+   NVIDIA\* CUDA (GPUs), Yes, Some
+   AMD\* GPUs, Yes, Some
+
.. _hybrid_transformer:
......
.. buildlb.rst:

-###########################
-nGraph Library for backends
-###########################
+###############
+Build and Test
+###############

This section details how to build the C++ version of the nGraph Library, which
is targeted toward developers working on kernel-specific operations,
......
@@ -73,11 +73,11 @@ author = 'Intel Corporation'
# built documents.
#
# The short X.Y version.
-version = '0.15'
+version = '0.16'

# The Documentation full version, including alpha/beta/rc tags. Some features
# available in the latest code will not necessarily be documented first
-release = '0.15.0'
+release = '0.16.1'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -171,7 +171,7 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
-    (master_doc, 'IntelnGraphlibrary.tex', 'Intel nGraph Library',
+    (master_doc, 'nGraphCompilerStack.tex', 'nGraph Compiler Stack Documentation',
     'Intel Corporation', 'manual'),
]
@@ -181,11 +181,10 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
-    (master_doc, 'intelngraphlibrary', 'Intel nGraph Library',
+    (master_doc, 'ngraphcompiler', 'nGraph Compiler stack',
     [author], 1)
]

# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
......
@@ -8,18 +8,6 @@ This section details some of the *configuration options* and some of the
your system already has a version of nGraph installed with one of our supported
backends.

-.. csv-table::
-   :header: "Backend", "Current nGraph support", "Future nGraph support"
-   :widths: 35, 10, 10
-
-   Intel® Architecture Processors (CPUs), Yes, Yes
-   Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
-   NVIDIA\* CUDA (GPUs), Yes, Some
-   :abbr:`Field Programmable Gate Arrays (FPGA)` (FPGAs), Coming soon, Yes
-   `Movidius`_, Not yet, Yes
-   Other, Not yet, Ask
-
Regardless of the framework, after the :doc:`../buildlb` step, a good place
to start usually involves making the libraries available to the framework. On
Linux\* systems built on Intel® Architecture, that command tends to look
......
@@ -19,17 +19,26 @@
ONNX Support
============

-nGraph is able to import and execute ONNX models. Models are converted to nGraph's internal representation and converted to ``Function`` objects, which can be compiled and executed on one of nGraph's backends.
-You can use nGraph's Python API to run an ONNX model and nGraph can be used as an ONNX backend using the add-on package `nGraph-ONNX <ngraph_onnx>`_.
+nGraph is able to import and execute ONNX models. Models are converted to
+nGraph's internal representation and converted to ``Function`` objects, which
+can be compiled and executed on one of nGraph's backends.
+
+You can use nGraph's Python API to run an ONNX model and nGraph can be used
+as an ONNX backend using the add-on package `nGraph-ONNX <ngraph_onnx>`_.

-.. note:: In order to support ONNX, nGraph must be built with the ``NGRAPH_ONNX_IMPORT_ENABLE`` flag. See `Building nGraph-ONNX <ngraph_onnx_building>`_ for more information. All nGraph packages published on PyPI are built with ONNX support.
+.. note:: In order to support ONNX, nGraph must be built with the
+   ``NGRAPH_ONNX_IMPORT_ENABLE`` flag. See `Building nGraph-ONNX
+   <ngraph_onnx_building>`_ for more information. All nGraph packages
+   published on PyPI are built with ONNX support.

Installation
------------

-In order to prepare your environment to use nGraph and ONNX, install the Python packages for nGraph, ONNX and NumPy:
+To prepare your environment to use nGraph and ONNX, install the Python packages
+for nGraph, ONNX and NumPy:

::
......
@@ -48,7 +57,8 @@ For example ResNet-50:

    $ tar -xzvf resnet50.tar.gz

-Use the following Python commands to convert the downloaded model to an nGraph ``Function``:
+Use the following Python commands to convert the downloaded model to an nGraph
+``Function``:

.. code-block:: python
......
@@ -65,14 +75,16 @@ Use the following Python commands to convert the downloaded model to an nGraph `

    <Function: 'resnet50' ([1, 1000])>

-This creates an nGraph ``Function`` object, which can be used to execute a computation on a chosen backend.
+This creates an nGraph ``Function`` object, which can be used to execute a
+computation on a chosen backend.

Running a computation
---------------------

-You can now create an nGraph ``Runtime`` backend and use it to compile your ``Function`` to a backend-specific ``Computation`` object.
-Finally, you can execute your model by calling the created ``Computation`` object with input data.
+You can now create an nGraph ``Runtime`` backend and use it to compile your
+``Function`` to a backend-specific ``Computation`` object. Finally, you can
+execute your model by calling the created ``Computation`` object with input
+data:

.. code-block:: python
......
@@ -94,7 +106,8 @@ Finally, you can execute your model by calling the created ``Computation`` objec

    ...

-You can find more information about nGraph and ONNX in the `nGraph-ONNX <ngraph_onnx>`_ GitHub repository.
+Find more information about nGraph and ONNX in the
+`nGraph-ONNX <ngraph_onnx>`_ GitHub repository.

.. _ngraph_onnx: https://github.com/NervanaSystems/ngraph-onnx/
......
@@ -22,15 +22,24 @@ nGraph Compiler stack
######################

-.. toctree::
-   :maxdepth: 1
-
-   project/introduction.rst
+`nGraph`_ is an open-source graph compiler for :abbr:`Artificial Neural Networks (ANNs)`.
+The nGraph Compiler stack provides an inherently efficient graph-based compilation
+infrastructure designed to be compatible with many of the upcoming
+:abbr:`Application-Specific Integrated Circuits (ASICs)`, like the Intel® Nervana™
+Neural Network Processor (Intel® Nervana™ NNP), while also unlocking a massive
+performance boost on existing hardware targets in your neural network: both GPUs
+and CPUs. Using its flexible infrastructure, you will find it becomes much easier
+to create Deep Learning (DL) models that adhere to the "write once, run anywhere"
+mantra, letting your AI solutions go easily from concept to production to scale.
+
+Frameworks using nGraph to execute workloads have shown an `up to 45X`_ performance
+boost compared to native implementations. For a high-level overview, see the
+:doc:`project/introduction`.

.. toctree::
   :maxdepth: 1
-   :caption: Framework Support
+   :caption: Connecting Frameworks

   frameworks/index.rst
   frameworks/validated/list.rst
......
@@ -41,16 +50,16 @@ nGraph Compiler stack
   :maxdepth: 1
   :caption: nGraph Core

-   buildlb.rst
   core/overview.rst
   core/fusion/index.rst
   nGraph Core Ops <ops/index.rst>
   core/constructing-graphs/index.rst
   core/passes/passes.rst
+   buildlb.rst

.. toctree::
   :maxdepth: 1
-   :caption: Python API
+   :caption: nGraph Python API

   python_api/index.rst
......
@@ -65,7 +74,7 @@ nGraph Compiler stack
.. toctree::
   :maxdepth: 1
-   :caption: Distributed training
+   :caption: Distributed Training

   distr/index.rst
......
@@ -83,13 +92,14 @@ nGraph Compiler stack
   :maxdepth: 1
   :caption: Tutorials

-   tutorials/index.rst
+   nGraph.ai Tutorials <https://www.ngraph.ai/tutorials>

.. toctree::
   :maxdepth: 1
   :caption: Project Metadata

+   project/introduction.rst
   project/release-notes.rst
   project/contribution-guide.rst
   project/governance.rst
......
@@ -102,3 +112,9 @@ Indices and tables
* :ref:`search`
* :ref:`genindex`

+.. _nGraph: https://www.ngraph.ai
+.. _up to 45X: https://ai.intel.com/ngraph-compiler-stack-beta-release/
\ No newline at end of file
@@ -118,14 +118,6 @@ expand your network's hardware. Each integration is unique to the framework
and its set of deep learning operators, its view on memory layout, its
feature set, etc.

-.. _figure-B:
-
-.. figure:: ../graphics/intro_kernel_to_fw_accent.png
-   :width: 555px
-   :alt:
-
-   Each of these connections represents significant work for what will
-   ultimately be a brittle setup that is enormously expensive to maintain.

nGraph solves this problem with nGraph bridges. A bridge takes a computational
graph and reconstructs it in the nGraph IR with a few primitive nGraph
......
@@ -146,12 +138,16 @@ of each parameter for each operation. In the past, the number of required
kernels was limited, but as the AI research and industry rapidly develops, the
final product of required kernels is increasing exponentially.

-.. _figure-C:
+.. _figure-B:

.. figure:: ../graphics/intro_kernel_explosion.png
   :width: 555px
   :alt:

+   Each of these connections represents significant work for what will
+   ultimately be a brittle setup that is enormously expensive to maintain.
+
PlaidML addresses the kernel explosion problem in a manner that lifts a heavy
burden off kernel developers. It automatically lowers networks from nGraph
......
.. tutorials/index:

+.. This will hold the organization of the tutorials we put on ngraph.ai
+.. it will need to be organized in a way that is navigable for the many kinds of frameworks and backends we support in the "Compiler stack". It will need to be workable with a sitemap structure. The initial example is for the latest nGraph-TensorFlow bridge.
+
+:orphan:

##########
Tutorials
##########
......