Commit b0d86687 authored by Leona C, committed by Scott Cyphers

ngraph.ai theme (#2654)

* [ONNX] Add documentation

* Update documentation contributor's instructions

* Doc theme to match ngraph.ai

* Minor formatting fixes and PR feedback

* ToC fixes

* ToC fixes

* Add changes

* Be consistent with BUILDDIR

* Be consistent with substitution

* Update Makefile
parent 4f586563
#
# Robust Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
-SPHINXPROJ = IntelnGraphlibrary
+SPHINXPROJ = nGraphCompilerStack
SOURCEDIR = source
BUILDDIR = build
ALLSPHINXOPTS = ${SOURCEDIR}
# Put it first so that "make" without argument is like "make help".
help:
......
@@ -25,6 +25,15 @@ framework developer customize targeted solutions. Experimental APIs to support
current and future nGraph Backends are also available; see, for example, the
section on the :ref:`plaidml_backend`.
+.. csv-table::
+:header: "Backend", "Current nGraph support", "Future nGraph support"
+:widths: 35, 10, 10
+Intel® Architecture Processors (CPUs), Yes, Yes
+Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
+NVIDIA\* CUDA (GPUs), Yes, Some
+AMD\* GPUs, Yes, Some
.. _hybrid_transformer:
......
.. buildlb.rst:
-###########################
-nGraph Library for backends
-###########################
+###############
+Build and Test
+###############
This section details how to build the C++ version of the nGraph Library, which
is targeted toward developers working on kernel-specific operations,
......
@@ -73,11 +73,11 @@ author = 'Intel Corporation'
# built documents.
#
# The short X.Y version.
-version = '0.15'
+version = '0.16'
# The Documentation full version, including alpha/beta/rc tags. Some features
# available in the latest code will not necessarily be documented first
-release = '0.15.0'
+release = '0.16.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -171,7 +171,7 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
-    (master_doc, 'IntelnGraphlibrary.tex', 'Intel nGraph Library',
+    (master_doc, 'nGraphCompilerStack.tex', 'nGraph Compiler Stack Documentation',
'Intel Corporation', 'manual'),
]
@@ -181,11 +181,10 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
-    (master_doc, 'intelngraphlibrary', 'Intel nGraph Library',
+    (master_doc, 'ngraphcompiler', 'nGraph Compiler stack',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
......
@@ -8,18 +8,6 @@ This section details some of the *configuration options* and some of the
your system already has a version of nGraph installed with one of our supported
backends.
-.. csv-table::
-:header: "Backend", "Current nGraph support", "Future nGraph support"
-:widths: 35, 10, 10
-Intel® Architecture Processors (CPUs), Yes, Yes
-Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
-NVIDIA\* CUDA (GPUs), Yes, Some
-:abbr:`Field Programmable Gate Arrays (FPGA)` (FPGAs), Coming soon, Yes
-`Movidius`_, Not yet, Yes
-Other, Not yet, Ask
Regardless of the framework, after the :doc:`../buildlb` step, a good place
to start usually involves making the libraries available to the framework. On
Linux\* systems built on Intel® Architecture, that command tends to look
......
@@ -19,17 +19,26 @@
ONNX Support
============
-nGraph is able to import and execute ONNX models. Models are converted to nGraph's internal representation and converted to ``Function`` objects, which can be compiled and executed on one of nGraph's backends.
-You can use nGraph's Python API to run an ONNX model and nGraph can be used as an ONNX backend using the add-on package `nGraph-ONNX <ngraph_onnx>`_.
+nGraph is able to import and execute ONNX models. Models are converted to
+nGraph's internal representation and converted to ``Function`` objects, which
+can be compiled and executed on one of nGraph's backends.
-.. note:: In order to support ONNX, nGraph must be built with the ``NGRAPH_ONNX_IMPORT_ENABLE`` flag. See `Building nGraph-ONNX <ngraph_onnx_building>`_ for more information. All nGraph packages published on PyPI are built with ONNX support.
+You can use nGraph's Python API to run an ONNX model and nGraph can be used
+as an ONNX backend using the add-on package `nGraph-ONNX <ngraph_onnx>`_.
+.. note:: In order to support ONNX, nGraph must be built with the
+``NGRAPH_ONNX_IMPORT_ENABLE`` flag. See `Building nGraph-ONNX
+<ngraph_onnx_building>`_ for more information. All nGraph packages
+published on PyPI are built with ONNX support.
Installation
------------
-In order to prepare your environment to use nGraph and ONNX, install the Python packages for nGraph, ONNX and NumPy:
+To prepare your environment to use nGraph and ONNX, install the Python packages
+for nGraph, ONNX and NumPy:
::
@@ -48,7 +57,8 @@ For example ResNet-50:
$ tar -xzvf resnet50.tar.gz
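The shell extraction step above can also be done from Python with the standard library's ``tarfile`` module. A minimal, self-contained sketch (the scratch archive built here stands in for the real downloaded model, so nothing in the current directory is touched):

```python
import io
import os
import tarfile
import tempfile

# Work in a scratch directory so nothing in the current directory is touched
workdir = tempfile.mkdtemp()

# Stand-in for the downloaded archive: a small tar.gz with one member file
archive_path = os.path.join(workdir, 'resnet50.tar.gz')
with tarfile.open(archive_path, 'w:gz') as archive:
    payload = b'dummy model bytes'
    member = tarfile.TarInfo('resnet50/model.onnx')
    member.size = len(payload)
    archive.addfile(member, io.BytesIO(payload))

# The Python equivalent of `tar -xzvf resnet50.tar.gz`
with tarfile.open(archive_path, 'r:gz') as archive:
    archive.extractall(workdir)

model_path = os.path.join(workdir, 'resnet50', 'model.onnx')
print(os.path.exists(model_path))  # True
```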
-Use the following Python commands to convert the downloaded model to an nGraph ``Function``:
+Use the following Python commands to convert the downloaded model to an nGraph
+``Function``:
.. code-block:: python
@@ -65,14 +75,16 @@ Use the following Python commands to convert the downloaded model to an nGraph `
<Function: 'resnet50' ([1, 1000])>
-This creates an nGraph ``Function`` object, which can be used to execute a computation on a chosen backend.
+This creates an nGraph ``Function`` object, which can be used to execute a
+computation on a chosen backend.
Running a computation
---------------------
-You can now create an nGraph ``Runtime`` backend and use it to compile your ``Function`` to a backend-specific ``Computation`` object.
-Finally, you can execute your model by calling the created ``Computation`` object with input data.
+You can now create an nGraph ``Runtime`` backend and use it to compile your
+``Function`` to a backend-specific ``Computation`` object. Finally, you can
+execute your model by calling the created ``Computation`` object with input
+data:
.. code-block:: python
@@ -94,7 +106,8 @@ Finally, you can execute your model by calling the created ``Computation`` objec
...
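The input passed to the ``Computation`` in the elided example above is a plain NumPy array. For the standard ResNet-50 ONNX model the expected input shape is ``(1, 3, 224, 224)``, one 224x224 RGB image in NCHW layout (the shape is an assumption based on that model, not stated in this excerpt). A minimal sketch of preparing such an input with NumPy alone; actually executing it still requires an nGraph backend:

```python
import numpy as np

# A dummy input batch: one 224x224 RGB image in NCHW layout,
# matching the standard ResNet-50 ONNX model's input
picture = np.ones([1, 3, 224, 224], dtype=np.float32)
print(picture.shape)   # (1, 3, 224, 224)
print(picture.nbytes)  # 602112 bytes = 1 * 3 * 224 * 224 * 4
```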
-You can find more information about nGraph and ONNX in the `nGraph-ONNX <ngraph_onnx>`_ GitHub repository.
+Find more information about nGraph and ONNX in the
+`nGraph-ONNX <ngraph_onnx>`_ GitHub repository.
.. _ngraph_onnx: https://github.com/NervanaSystems/ngraph-onnx/
......
@@ -22,15 +22,24 @@ nGraph Compiler stack
######################
.. toctree::
:maxdepth: 1
-`nGraph`_ is an open-source graph compiler for :abbr:`Artificial Neural Networks (ANNs)`.
-The nGraph Compiler stack provides an inherently efficient graph-based compilation
-infrastructure designed to be compatible with the many of the upcoming
-:abbr:`Application-Specific Integrated Circuits (ASICs)`, like the Intel® Nervana™
-Neural Network Processor (Intel® Nervana™ NNP), while also unlocking a massive
-performance boost on any existing hardware targets in your neural network: both GPUs
-and CPUs. Using its flexible infrastructure, you will find it becomes much easier
-to create Deep Learning (DL) models that can adhere to the "write once, run anywhere"
-mantra that enables your AI solutions to easily go from concept to production to scale.
+project/introduction.rst
+
+Frameworks using nGraph to execute workloads have shown `up to 45X`_ performance
+boost compared to native implementations. For a high-level overview, see the
+:doc:`project/introduction`.
.. toctree::
:maxdepth: 1
-:caption: Framework Support
+:caption: Connecting Frameworks
frameworks/index.rst
frameworks/validated/list.rst
@@ -41,16 +50,16 @@ nGraph Compiler stack
:maxdepth: 1
:caption: nGraph Core
+buildlb.rst
core/overview.rst
core/fusion/index.rst
nGraph Core Ops <ops/index.rst>
core/constructing-graphs/index.rst
core/passes/passes.rst
-buildlb.rst
.. toctree::
:maxdepth: 1
-:caption: Python API
+:caption: nGraph Python API
python_api/index.rst
@@ -65,7 +74,7 @@ nGraph Compiler stack
.. toctree::
:maxdepth: 1
-:caption: Distributed training
+:caption: Distributed Training
distr/index.rst
@@ -83,13 +92,14 @@ nGraph Compiler stack
:maxdepth: 1
:caption: Tutorials
tutorials/index.rst
+nGraph.ai Tutorials <https://www.ngraph.ai/tutorials>
.. toctree::
:maxdepth: 1
:caption: Project Metadata
project/introduction.rst
project/release-notes.rst
project/contribution-guide.rst
project/governance.rst
@@ -102,3 +112,9 @@ Indices and tables
* :ref:`search`
* :ref:`genindex`
.. _nGraph: https://www.ngraph.ai
.. _up to 45X: https://ai.intel.com/ngraph-compiler-stack-beta-release/
\ No newline at end of file
@@ -118,14 +118,6 @@ expand your network's hardware. Each integration is unique to the framework
and its set of deep learning operators, its view on memory layout, its
feature set, etc.
-.. _figure-B:
-.. figure:: ../graphics/intro_kernel_to_fw_accent.png
-:width: 555px
-:alt:
-Each of these connections represents significant work for what will
-ultimately be a brittle setup that is enormously expensive to maintain.
nGraph solves this problem with nGraph bridges. A bridge takes a computational
graph and reconstructs it in the nGraph IR with a few primitive nGraph
@@ -146,12 +138,16 @@ of each parameter for each operation. In the past, the number of required
kernels was limited, but as the AI research and industry rapidly develops, the
final product of required kernels is increasing exponentially.
-.. _figure-C:
+.. _figure-B:
.. figure:: ../graphics/intro_kernel_explosion.png
:width: 555px
:alt:
+Each of these connections represents significant work for what will
+ultimately be a brittle setup that is enormously expensive to maintain.
PlaidML addresses the kernel explosion problem in a manner that lifts a heavy
burden off kernel developers. It automatically lowers networks from nGraph
......
.. tutorials/index:
.. This will hold the organization of the tutorials we put on ngraph.ai
.. It will need to be organized in a way that is navigable for the many kinds of frameworks and backends we support in the "Compiler stack". It will need to work with a sitemap structure. The initial example is for the latest nGraph-TensorFlow bridge.
:orphan:
##########
Tutorials
##########
......