Unverified commit 78a5b764 authored by Jennifer Myers, committed by GitHub

Merge pull request #675 from NervanaSystems/leona/doc_validation

Doc validation, cleanup, remove stuff we are not using, etc
parents 45e9aa4e da783552
......@@ -33,79 +33,6 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Licenses for incorporated software
==================================
The included smartypants module, included as sphinx.util.smartypants,
is available under the following license:
----------------------------------------------------------------------
SmartyPants_ license::
Copyright (c) 2003 John Gruber
(https://daringfireball.net/projects/smartypants/)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
* Neither the name "SmartyPants" nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
This software is provided by the copyright holders and
contributors "as is" and any express or implied warranties,
including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose are
disclaimed. In no event shall the copyright owner or contributors
be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited
to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on
any theory of liability, whether in contract, strict liability, or
tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of
such damage.
smartypants.py license::
smartypants.py is a derivative work of SmartyPants.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
This software is provided by the copyright holders and
contributors "as is" and any express or implied warranties,
including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose are
disclaimed. In no event shall the copyright owner or contributors
be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited
to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on
any theory of liability, whether in contract, strict liability, or
tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of
such damage.
----------------------------------------------------------------------
The included JQuery JavaScript library is available under the MIT
license:
......@@ -132,34 +59,6 @@ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
The included Underscore JavaScript library is available under the MIT
license:
----------------------------------------------------------------------
Copyright (c) 2009 Jeremy Ashkenas, DocumentCloud
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
-------------------------------------------------------------------------------
The included implementation of NumpyDocstring._parse_numpydoc_see_also_section
was derived from code under the following license:
......
# nGraph library
Welcome to Intel® nGraph™, an open source C++ library and compiler.
This project enables modern compute platforms to run and train Deep
Neural Network (DNN) models. It is framework-neutral and supports a
variety of backends used by Deep Learning (DL) frameworks.
![nGraph ecosystem][ngraph-ecosystem]
......@@ -12,7 +12,7 @@ supports a variety of backends used by Deep Learning (DL) frameworks.
See our [install] docs for how to get started.
For this early release, we provide [framework integration guides] to
compile MXNet and TensorFlow-based projects. If you already have a
trained model, we've put together a getting started guide for
[how to import] a deep learning model and start working with the nGraph
APIs.
......@@ -28,16 +28,16 @@ We welcome community contributions to nGraph. If you have an idea how
to improve the library:
* Share your proposal via [GitHub issues].
* Ensure you can build the product and run all the examples with your patch.
* In the case of a larger feature, create a test.
* Submit a [pull request].
* We will review your contribution and, if any additional fixes or
modifications are necessary, may provide feedback to guide you. When
accepted, your pull request will be merged to the repository.
[install]: http://ngraph.nervanasys.com/docs/latest/install.html
[framework integration guides]: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
[Github issues]: https://github.com/NervanaSystems/ngraph/issues
[pull request]: https://github.com/NervanaSystems/ngraph/pulls
[how to import]: http://ngraph.nervanasys.com/docs/latest/howto/import.html
[ngraph-ecosystem]: doc/sphinx/source/graphics/ngraph-ecosystem3.png "nGraph Ecosystem"
......@@ -18,7 +18,7 @@
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; <a href="{{ path }}">Copyright</a> {{ copyright }}. {% endtrans %}
{%- else %}
<span class="crt-size">{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.</span> <br/><div class="brandnote"> Intel nGraph library contains trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. * Other names and brands may be claimed as the property of others; see <a href="http://ngraph.nervanasys.com/docs/latest/branding-notice.html">branding notice</a> for more information.</class>{% endtrans %}
{%- endif %}
{%- endif %}
......
......@@ -1836,8 +1836,16 @@ div[class^='highlight'] td.code {
width: 100%;
}
.wy-menu-vertical p.caption {
font-weight: bold;
text-transform: uppercase;
font-size: 110%;
color: #fff;
white-space: nowrap;
}
code, p.caption {
font-family: Consolas, sans, monospace;
color: #A79992;
font-size: 0.99em;
line-height: 1.39em;
......
......@@ -9,3 +9,4 @@ sticky_navigation = True
logo_only =
collapse_navigation = False
display_version = True
use_bower = FALSE
......@@ -81,6 +81,7 @@ language = 'en'
# These patterns also affect html_static_path and html_extra_path.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
......@@ -187,7 +188,7 @@ texinfo_documents = [
'Miscellaneous'),
]
html_add_permalinks = "true"
breathe_projects = {
"ngraph": "../../doxygen/xml",
......
......@@ -10,26 +10,28 @@ Framework Integration Guides
.. _neon_intg:
neon |trade|
============
Use ``neon`` as a frontend for nGraph backends
-----------------------------------------------
``neon`` is an open source Deep Learning framework. For info about how to
interact and use a model with this framework, see the `ngraph-neon docs`_.
This section covers installation only.
.. important:: The numbered instructions below pick up from where
the :doc:`install` instructions left off, and they presume that your system
already has the library installed at ``$HOME/ngraph_dist`` as the default
location. If the |nGl| code has not yet been installed to your system, you
can follow the instructions on the `ngraph-neon python README`_ to install
everything at once. If the |nGl| code base already is installed on a system,
use this process.
#. Set the ``NGRAPH_CPP_BUILD_PATH`` and the ``LD_LIBRARY_PATH`` path to the
location where you built the nGraph libraries. (This example shows the default
location):
.. code-block:: bash
......@@ -44,8 +46,8 @@ powered by |InG| backends.
$ sudo apt-get install python3-pip
#. (Optionally) activate a virtualenv if you like working with virtualenvs and
go to the `python` subdirectory of the ``ngraph`` repo:
.. code-block:: console
......@@ -54,15 +56,24 @@ powered by |InG| backends.
(venv)$ cd ngraph/python
(venv)$ pip install -U .
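Taken together, the environment and install steps above can be sketched as one shell session. This is a sketch only: it assumes the default ``$HOME/ngraph_dist`` install location from the :doc:`install` guide, so adjust the paths if you built elsewhere.

```shell
# Point the Python bindings at the built nGraph libraries.
# Assumes the default install location ($HOME/ngraph_dist).
export NGRAPH_CPP_BUILD_PATH="$HOME/ngraph_dist"
export LD_LIBRARY_PATH="$HOME/ngraph_dist/lib:${LD_LIBRARY_PATH:-}"

# Then create and activate a virtualenv, and install the bindings
# from the `python` subdirectory of the ngraph repo:
#   python3 -m venv venv
#   . venv/bin/activate
#   cd ngraph/python
#   pip install -U .
echo "NGRAPH_CPP_BUILD_PATH=${NGRAPH_CPP_BUILD_PATH}"
```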
#. See `this file`_ if you want details about how to run unit tests, and the
   documentation at `ngraph-neon docs`_ for working with models. To test the
   neon install, you can run the CIFAR-10 sample from the ngraph-neon clone:
.. code-block:: console
(venv)$ python examples/cifar10/cifar10_conv.py
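Before running the sample, a quick environment sanity check can save a confusing traceback. This is a convenience sketch, not part of the documented steps; it only inspects the environment described above.

```shell
# Check that the pieces the sample needs are visible from the active
# environment: the nGraph library directory and (for unit tests) pytest.
if [ -d "${NGRAPH_CPP_BUILD_PATH:-$HOME/ngraph_dist}" ]; then
    NGRAPH_DIST_OK=yes
else
    NGRAPH_DIST_OK=no
fi
if python3 -c "import pytest" 2>/dev/null; then
    PYTEST_OK=yes
else
    PYTEST_OK=no
fi
echo "ngraph_dist present: ${NGRAPH_DIST_OK}; pytest importable: ${PYTEST_OK}"
```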
.. _mxnet_intg:
MXNet\*
========
Compile MXNet with nGraph
--------------------------
.. important:: These instructions pick up from where the :doc:`install`
installation instructions left off, so they presume that your system already
......@@ -146,8 +157,11 @@ Compile MXNet\* with ``libngraph``
.. _tensorflow_intg:
TensorFlow\*
=============
Build with an XLA plugin to ``libngraph``
------------------------------------------
.. important:: These instructions pick up where the :doc:`install`
installation instructions left off, so they presume that your system already
......@@ -274,4 +288,5 @@ your cloned version of `ngraph-tensorflow`_:
.. _ngraph-tensorflow: https://github.com/NervanaSystems/ngraph-tensorflow
.. _/examples/mnist: https://github.com/NervanaSystems/ngraph-tensorflow/tree/develop/tensorflow/compiler/plugin/ngraph/examples/mnist
.. _ngraph-neon python README: https://github.com/NervanaSystems/ngraph/blob/master/python/README.md
.. _ngraph-neon repo's README: https://github.com/NervanaSystems/ngraph-neon/blob/master/README.md
.. _ngraph-neon docs: https://github.com/NervanaSystems/ngraph-neon/tree/master/doc
......@@ -13,22 +13,22 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------
###############
nGraph library
###############
Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train :abbr:`Deep Neural Network (DNN)`
models. It is framework-neutral and supports a variety of backends used by
:abbr:`Deep Learning (DL)` frameworks.
.. image:: graphics/ngraph-ecosys.png
:width: 585px
For this early release, we've provided :doc:`framework-integration-guides` to
compile and run MXNet\* and TensorFlow\*-based projects. If you already have
a trained model, see our section on How to :doc:`howto/import` and start working
with the nGraph APIs.
.. note:: The library code is under active development as we're continually
adding support for more ops, more frameworks, and more backends.
......@@ -46,19 +46,19 @@ Further project details can be found on our :doc:`project/about` page.
Contents
========
.. toctree::
:maxdepth: 1
:name: tocmaster
:caption: Documentation
install.rst
graph-basics.rst
howto/index.rst
ops/index.rst
framework-integration-guides.rst
project/index.rst
......
......@@ -40,7 +40,7 @@ The CMake procedure installs ``ngraph_dist`` to the installing user's ``$HOME``
directory as the default location. See the :file:`CMakeLists.txt` file for
details about how to change or customize the install location.
The process documented here will work on Ubuntu\* 16.04 (LTS)
#. (Optional) Create something like ``/opt/libraries`` and (with sudo),
give ownership of that directory to your user. Creating such a placeholder
......@@ -83,7 +83,8 @@ The process documented here will work on Ubuntu 16.04 (LTS)
.. code-block:: console
$ make # note: make -j <N> may work, but sometimes results in out-of-memory
# errors if too many compilation processes are used
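One way to balance build speed against the out-of-memory risk noted above is to cap the job count at the number of available cores. The fallback value here is a heuristic of ours, not project guidance.

```shell
# Cap parallel make jobs at the core count, falling back to 2
# when nproc is unavailable (heuristic only).
JOBS=$(nproc 2>/dev/null || echo 2)
echo "would run: make -j ${JOBS}"
# make -j "${JOBS}"
```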
#. (Optional, requires `doxygen`_, `Sphinx`_, and `breathe`_). Run ``make html``
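If you want the HTML docs, the toolchain behind ``make html`` can be sketched as follows. The pip package names are the usual spellings and are an assumption on our part, since the docs only link the projects.

```shell
# Documentation toolchain: doxygen from the system package manager,
# Sphinx and breathe from pip, then build the HTML docs:
#   $ sudo apt-get install doxygen
#   (venv)$ pip install -U Sphinx breathe
#   $ make html
# Quick check whether sphinx-build is already on PATH:
if command -v sphinx-build >/dev/null 2>&1; then
    SPHINX_OK=yes
else
    SPHINX_OK=no
fi
echo "sphinx-build on PATH: ${SPHINX_OK}"
```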
......@@ -137,12 +138,12 @@ To perform unit tests on the install:
Compile a framework with ``libngraph``
======================================
After building and installing nGraph on your system, there are two likely
paths for what you'll want to do next: either compile a framework to run a DL
training model, or load an import of an "already-trained" model for inference
on an Intel nGraph-enabled backend.
For the former case, this early |version|, :doc:`framework-integration-guides`,
can help you get started with training a model on a supported framework.
* :doc:`neon<framework-integration-guides>` framework,
......@@ -154,7 +155,7 @@ exported, serialized model, you can skip the section on frameworks and go direct
to our :doc:`../howto/import` documentation.
Please keep in mind that both of these are under continuous development, and will
be updated frequently in the coming months. Stay tuned!
.. _doxygen: https://www.stack.nl/~dimitri/doxygen/
......@@ -162,7 +163,6 @@ be updated frequently in the coming months. Stay tuned!
.. _breathe: https://breathe.readthedocs.io/en/latest/
.. _llvm.org: https://www.llvm.org
.. _NervanaSystems: https://github.com/NervanaSystems/ngraph/blob/master/README.md
.. _googletest framework: https://github.com/google/googletest.git
.. _ONNX: http://onnx.ai
.. _frontend framework: http://neon.nervanasys.com/index.html/
.. _website docs: http://ngraph.nervanasys.com/docs/latest/
......@@ -8,24 +8,24 @@ project enables modern compute platforms to run and train
:abbr:`Deep Neural Network (DNN)` models. It is framework-neutral and supports
a variety of backends used by :abbr:`Deep Learning (DL)` frameworks.
.. figure:: ../graphics/ngraph-ecosys.png
:width: 585px
The nGraph library translates a framework’s representation of computations into
an :abbr:`Intermediate Representation (IR)` designed to promote computational
efficiency on target hardware. Initially-supported backends include Intel
Architecture CPUs, the Intel® Nervana Neural Network Processor™ (NNP),
and NVIDIA\* GPUs.
Why was this needed?
---------------------
When Deep Learning (DL) frameworks first emerged as the vehicle for training
and inference models, they were designed around kernels optimized for a
particular platform. As a result, many backend details were being exposed in
the model definitions, making the adaptability and portability of DL models
to other, or more advanced backends inherently complex and expensive.
The traditional approach means that an algorithm developer cannot easily adapt
his or her model to different backends. Making a model run on a different
......@@ -35,36 +35,61 @@ to similar ops in the new framework, and finally make the necessary changes
for the preferred backend configuration on the new framework.
We designed the Intel nGraph project to substantially reduce these kinds of
engineering complexities. Our compiler-inspired approach means that developers
have fewer constraints imposed by frameworks when working with their models;
they can pick and choose only the components they need to build custom algorithms
for advanced deep learning tasks. Furthermore, if working with a model that is
already trained (or close to being trained), or if they wish to pivot and add a
new layer to an existing model, the data scientist can :doc:`../howto/import`
and start working with :doc:`../ops/index` more quickly.
How does it work?
------------------
The *nGraph core* uses a **strongly-typed and platform-neutral stateless graph
representation** for computations. Each node, or *op*, in the graph corresponds
to one :term:`step` in a computation, where each step produces zero or more
tensor outputs from zero or more tensor inputs. For a more detailed dive into
how this works, read our documentation on how to :doc:`../howto/execute`.
How do I connect it to a framework?
------------------------------------
Currently, we offer *framework bridges* for some `widely-supported frameworks`_.
The bridge acts as an intermediary between the *ngraph core* and the framework,
providing a means to use various execution platforms. The result is a function
that can be executed from the framework bridge.
Given that we have no way to predict how many more frameworks might be invented
for either model or framework-specific purposes, it would be nearly impossible
for us to create bridges for every framework that currently exists (or that will
exist in the future). Thus, the library provides a way for developers to write
or contribute "bridge code" for various frameworks. We welcome such
contributions from the DL community.
How do I connect a DL training or inference model to nGraph?
-------------------------------------------------------------
Framework bridge code is *not* the only way to connect a model (function graph)
to nGraph's :doc:`../ops/index`. We've also built an importer for models that
have been exported from a framework and saved as a serialized file, such as ONNX.
To learn how to convert such serialized files to an nGraph model, please see
the :doc:`../howto/import` documentation.
What's next?
-------------
We developed nGraph to simplify the realization of optimized deep learning
performance across frameworks and hardware platforms. You can read more about
design decisions and what is tentatively in the pipeline for development in
our `arXiv paper`_ from the 2018 SysML conference.
.. _widely-supported frameworks: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
.. _arXiv paper: https://arxiv.org/pdf/1801.08058.pdf
.. _Intel® MKL-DNN: https://github.com/intel/mkl-dnn