Commit 30b1652c authored by Leona C's avatar Leona C

Doc validation, cleanup, remove stuff we are not using, etc

parent 45e9aa4e
@@ -33,79 +33,6 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Licenses for incorporated software
==================================
The included smartypants module, included as sphinx.util.smartypants,
is available under the following license:
----------------------------------------------------------------------
SmartyPants_ license::
Copyright (c) 2003 John Gruber
(https://daringfireball.net/projects/smartypants/)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
* Neither the name "SmartyPants" nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
This software is provided by the copyright holders and
contributors "as is" and any express or implied warranties,
including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose are
disclaimed. In no event shall the copyright owner or contributors
be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited
to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on
any theory of liability, whether in contract, strict liability, or
tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of
such damage.
smartypants.py license::
smartypants.py is a derivative work of SmartyPants.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
This software is provided by the copyright holders and
contributors "as is" and any express or implied warranties,
including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose are
disclaimed. In no event shall the copyright owner or contributors
be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited
to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on
any theory of liability, whether in contract, strict liability, or
tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of
such damage.
----------------------------------------------------------------------
The included JQuery JavaScript library is available under the MIT
license:
@@ -132,34 +59,6 @@ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
The included Underscore JavaScript library is available under the MIT
license:
----------------------------------------------------------------------
Copyright (c) 2009 Jeremy Ashkenas, DocumentCloud
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
-------------------------------------------------------------------------------
The included implementation of NumpyDocstring._parse_numpydoc_see_also_section
was derived from code under the following license:
......
# nGraph library

Welcome to Intel® nGraph™, an open source C++ library and compiler.
This project enables modern compute platforms to run and train Deep
Neural Network (DNN) models. It is framework-neutral and supports a
variety of backends used by Deep Learning (DL) frameworks.

![nGraph ecosystem][ngraph-ecosystem]
@@ -12,7 +12,7 @@ supports a variety of backends used by Deep Learning (DL) frameworks.

See our [install] docs for how to get started.

For this early release, we provide [framework integration guides] to
compile MXNet and TensorFlow-based projects. If you already have a
trained model, we've put together a getting started guide for
[how to import] a deep learning model and start working with the nGraph
APIs.
@@ -28,16 +28,16 @@ We welcome community contributions to nGraph. If you have an idea how
to improve the library:

* Share your proposal via [GitHub issues].
* Ensure you can build the product and run all the examples with your patch.
* In the case of a larger feature, create a test.
* Submit a [pull request].
* We will review your contribution and, if any additional fixes or
  modifications are necessary, may provide feedback to guide you. When
  accepted, your pull request will be merged to the repository.
[install]: http://ngraph.nervanasys.com/docs/latest/install.html
[framework integration guides]: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
[GitHub issues]: https://github.com/NervanaSystems/ngraph/issues
[pull request]: https://github.com/NervanaSystems/ngraph/pulls
[how to import]: http://ngraph.nervanasys.com/docs/latest/howto/import.html
[ngraph-ecosystem]: doc/sphinx/source/graphics/ngraph-ecosystem3.png "nGraph Ecosystem"
@@ -18,7 +18,7 @@
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; <a href="{{ path }}">Copyright</a> {{ copyright }}. {% endtrans %}
{%- else %}
<span class="crt-size">{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.</span> <br/><div class="brandnote"> Intel nGraph library contains trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. * Other names and brands may be claimed as the property of others; see <a href="http://ngraph.nervanasys.com/docs/latest/branding-notice.html">branding notice</a> for more information.</div>{% endtrans %}
{%- endif %}
{%- endif %}
......
@@ -1836,8 +1836,16 @@ div[class^='highlight'] td.code {
    width: 100%;
}

.wy-menu-vertical p.caption {
    font-weight: bold;
    text-transform: uppercase;
    font-size: 110%;
    color: #fff;
    white-space: nowrap;
}

code, p.caption {
    font-family: Consolas, sans-serif, monospace;
    color: #A79992;
    font-size: 0.99em;
    line-height: 1.39em;
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
@@ -9,3 +9,4 @@ sticky_navigation = True
logo_only =
collapse_navigation = False
display_version = True
use_bower = FALSE
@@ -81,6 +81,7 @@ language = 'en'
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
@@ -187,7 +188,7 @@ texinfo_documents = [
    'Miscellaneous'),
]

html_add_permalinks = "true"

breathe_projects = {
    "ngraph": "../../doxygen/xml",
......
@@ -10,26 +10,28 @@ Framework Integration Guides

.. _neon_intg:

neon |trade|
============

Use ``neon`` as a frontend for nGraph backends
-----------------------------------------------

``neon`` is an open source Deep Learning framework. For info about how to
interact and use a model with this framework, see the `ngraph-neon docs`_.
This section covers installation only.

.. important:: The numbered instructions below pick up from where
   the :doc:`install` instructions left off, and they presume that your system
   already has the library installed at ``$HOME/ngraph_dist`` as the default
   location. If the |nGl| code has not yet been installed to your system, you
   can follow the instructions on the `ngraph-neon python README`_ to install
   everything at once. If the |nGl| code base already is installed on a system,
   use this process.

#. Set the ``NGRAPH_CPP_BUILD_PATH`` and the ``LD_LIBRARY_PATH`` path to the
   location where you built the nGraph libraries. (This example shows the
   default location):

   .. code-block:: bash
@@ -44,8 +46,8 @@ powered by |InG| backends.

      $ sudo apt-get install python3-pip

#. (Optionally) activate a virtualenv if you like working with virtualenvs, and
   go to the `python` subdirectory of the ``ngraph`` repo:

   .. code-block:: console

@@ -54,15 +56,17 @@ powered by |InG| backends.

      (venv)$ cd ngraph/python
      (venv)$ pip install -U .

#. See `this file`_ if you want detail about how to run unit tests. See the
   `ngraph-neon docs`_ to start working with models.
.. _mxnet_intg:

MXNet\*
========

Compile MXNet with nGraph
--------------------------

.. important:: These instructions pick up from where the :doc:`install`
   installation instructions left off, so they presume that your system already

@@ -146,8 +150,11 @@ Compile MXNet\* with ``libngraph``

.. _tensorflow_intg:

TensorFlow\*
=============

Build with an XLA plugin to ``libngraph``
------------------------------------------

.. important:: These instructions pick up where the :doc:`install`
   installation instructions left off, so they presume that your system already

@@ -274,4 +281,5 @@ your cloned version of `ngraph-tensorflow`_:

.. _ngraph-tensorflow: https://github.com/NervanaSystems/ngraph-tensorflow
.. _/examples/mnist: https://github.com/NervanaSystems/ngraph-tensorflow/tree/develop/tensorflow/compiler/plugin/ngraph/examples/mnist
.. _ngraph-neon python README: https://github.com/NervanaSystems/ngraph/blob/master/python/README.md
.. _ngraph-neon repo's README: https://github.com/NervanaSystems/ngraph-neon/blob/master/README.md
.. _ngraph-neon docs: https://github.com/NervanaSystems/ngraph-neon/tree/master/doc
@@ -13,22 +13,22 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------

###############
nGraph library
###############

Welcome to Intel® nGraph™, an open source C++ library and compiler. This
project enables modern compute platforms to run and train :abbr:`Deep Neural Network (DNN)`
models. It is framework-neutral and supports a variety of backends used by
:abbr:`Deep Learning (DL)` frameworks.

.. image:: graphics/ngraph-ecosys.png
   :width: 585px

For this early release, we've provided :doc:`framework-integration-guides` to
compile and run MXNet\* and TensorFlow\*-based projects. If you already have
a trained model, see our section on How to :doc:`howto/import` and start working
with the nGraph APIs.

.. note:: The library code is under active development as we're continually
   adding support for more ops, more frameworks, and more backends.
@@ -46,19 +46,19 @@ Further project details can be found on our :doc:`project/about` page.

Contents
========

.. toctree::
   :maxdepth: 1
   :name: tocmaster
   :caption: Documentation

   install.rst
   framework-integration-guides.rst
   graph-basics.rst
   howto/index.rst
   ops/index.rst
   project/index.rst
......
@@ -40,7 +40,7 @@ The CMake procedure installs ``ngraph_dist`` to the installing user's ``$HOME``
directory as the default location. See the :file:`CMakeLists.txt` file for
details about how to change or customize the install location.

The process documented here will work on Ubuntu\* 16.04 (LTS).

#. (Optional) Create something like ``/opt/libraries`` and (with sudo),
   give ownership of that directory to your user. Creating such a placeholder

@@ -83,7 +83,8 @@ The process documented here will work on Ubuntu 16.04 (LTS)

   .. code-block:: console

      $ make  # note: make -j <N> may work, but sometimes results in out-of-memory
              # errors if too many compilation processes are used
#. (Optional, requires `doxygen`_, `Sphinx`_, and `breathe`_). Run ``make html``

@@ -137,12 +138,12 @@ To perform unit tests on the install:

Compile a framework with ``libngraph``
======================================

After building and installing nGraph on your system, there are two likely
paths for what you'll want to do next: either compile a framework to run a DL
training model, or import an already-trained model for inference on an
Intel nGraph-enabled backend.

For the former case, this early |version|, :doc:`framework-integration-guides`,
can help you get started with training a model on a supported framework.

* :doc:`neon<framework-integration-guides>` framework,

@@ -154,7 +155,7 @@ exported, serialized model, you can skip the section on frameworks and go direct

to our :doc:`../howto/import` documentation.

Please keep in mind that both of these are under continuous development, and will
be updated frequently in the coming months. Stay tuned!
.. _doxygen: https://www.stack.nl/~dimitri/doxygen/

@@ -162,7 +163,6 @@ be updated frequently in the coming months. Stay tuned!

.. _breathe: https://breathe.readthedocs.io/en/latest/
.. _llvm.org: https://www.llvm.org
.. _NervanaSystems: https://github.com/NervanaSystems/ngraph/blob/master/README.md
.. _googletest framework: https://github.com/google/googletest.git
.. _ONNX: http://onnx.ai
.. _website docs: http://ngraph.nervanasys.com/docs/latest/
@@ -8,24 +8,24 @@ project enables modern compute platforms to run and train
:abbr:`Deep Neural Network (DNN)` models. It is framework-neutral and supports
a variety of backends used by :abbr:`Deep Learning (DL)` frameworks.

.. figure:: ../graphics/ngraph-ecosys.png
   :width: 585px

The nGraph library translates a framework’s representation of computations into
an :abbr:`Intermediate Representation (IR)` designed to promote computational
efficiency on target hardware. Initially-supported backends include Intel
Architecture CPUs, the Intel® Nervana Neural Network Processor™ (NNP),
and NVIDIA\* GPUs.

Why was this needed?
---------------------

When Deep Learning (DL) frameworks first emerged as the vehicle for training
models, they were designed around kernels optimized for a particular platform.
As a result, many backend details were being exposed in the model definitions,
making the adaptability and portability of DL models to other or more advanced
backends inherently complex and expensive.

The traditional approach means that an algorithm developer cannot easily adapt
his or her model to different backends. Making a model run on a different

@@ -35,36 +35,61 @@ to similar ops in the new framework, and finally make the necessary changes
for the preferred backend configuration on the new framework.
We designed the Intel nGraph project to substantially reduce these kinds of
engineering complexities. Our compiler-inspired approach means that developers
have fewer constraints imposed by frameworks when working with their models;
they can pick and choose only the components they need to build custom algorithms
for advanced deep learning tasks. Furthermore, if working with a model that is
already trained (or close to being trained), or if they wish to pivot and add a
new layer to an existing model, the data scientist can :doc:`../howto/import`
and start working with :doc:`../ops/index` more quickly.
How does it work?
------------------

The *nGraph core* uses a **strongly-typed and platform-neutral stateless graph
representation** for computations. Each node, or *op*, in the graph corresponds
to one :term:`step` in a computation, where each step produces zero or more
tensor outputs from zero or more tensor inputs. For a more detailed dive into
how this works, read our documentation :doc:`../howto/execute`.
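The stateless-graph idea described above can be sketched in a few lines of Python. This is a toy illustration, not nGraph's actual API: the ``Op`` class, the op names, and the ``evaluate`` function are all invented here to show how each node produces outputs purely from its inputs, with no hidden state.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Op:
    """One node in a stateless graph: a named step with zero or more inputs."""
    name: str
    inputs: Tuple["Op", ...] = ()

def evaluate(op: Op, feeds: dict) -> float:
    """Recursively execute the DAG; zero-input (parameter-style) ops read from `feeds`."""
    if not op.inputs:
        return feeds[op.name]
    args = [evaluate(child, feeds) for child in op.inputs]
    if op.name == "Add":
        return args[0] + args[1]
    if op.name == "Mul":
        return args[0] * args[1]
    raise ValueError(f"unknown op: {op.name}")

# Build (a * b) + c once, platform-neutrally; run it against different feeds.
a, b, c = Op("a"), Op("b"), Op("c")
graph = Op("Add", (Op("Mul", (a, b)), c))
print(evaluate(graph, {"a": 2.0, "b": 3.0, "c": 4.0}))  # 10.0
```

Because the graph is immutable and stateless, the same ``graph`` object can be re-evaluated with any feeds, which is what lets a transformer compile it once and execute it many times.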
How do I connect it to a framework?
------------------------------------

Currently, we offer *framework bridges* for some `widely-supported frameworks`_.
The bridge acts as an intermediary between the *ngraph core* and the framework,
providing a means to use various execution platforms. The result is a function
that can be executed from the framework bridge.

Given that we have no way to predict how many more frameworks might be invented
for either model or framework-specific purposes, it would be nearly impossible
for us to create bridges for every framework that currently exists (or that will
exist in the future). Thus, the library provides a way for developers to write
or contribute "bridge code" for various frameworks. We welcome such
contributions from the DL community.
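The bridge's translation role can be illustrated with a toy op-name mapping. All names below (the ``tf.``-style op strings, ``BRIDGE_TABLE``, ``to_core_graph``) are invented for illustration; a real bridge does much more, including tensor allocation and handing the compiled function back to the framework.

```python
# Hypothetical framework graph: (op_name, operand_names) pairs in one
# framework's vocabulary, already in topological order.
framework_graph = [
    ("tf.add", ("a", "b")),
    ("tf.multiply", ("sum0", "c")),
]

# The bridge's core job: rewrite framework-specific ops into the
# platform-neutral core vocabulary.
BRIDGE_TABLE = {"tf.add": "Add", "tf.multiply": "Multiply"}

def to_core_graph(graph):
    """Translate each framework node into a core node, keeping its operands."""
    return [(BRIDGE_TABLE[op], operands) for op, operands in graph]

print(to_core_graph(framework_graph))
# [('Add', ('a', 'b')), ('Multiply', ('sum0', 'c'))]
```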
How do I connect a DL training or inference model to nGraph?
-------------------------------------------------------------

Framework bridge code is *not* the only way to connect a model (function graph)
to nGraph's :doc:`../ops/index`. We've also built an importer for models that
have been exported from a framework and saved as a serialized file, such as ONNX.
To learn how to convert such serialized files to an nGraph model, please see
the :doc:`../howto/import` documentation.
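The round-trip that an importer performs can be sketched with plain JSON. This is purely illustrative and is not nGraph's (or ONNX's) serialization format: the node schema and the ``run`` helper are assumptions made up for this example.

```python
import json

# A tiny function graph flattened to plain data: each node lists its op type
# and the names of the nodes feeding it, in topological order.
model = {
    "nodes": [
        {"name": "x",   "op": "Parameter", "inputs": []},
        {"name": "y",   "op": "Parameter", "inputs": []},
        {"name": "sum", "op": "Add",       "inputs": ["x", "y"]},
    ],
    "result": "sum",
}

serialized = json.dumps(model)     # what an exporter would write to disk
imported = json.loads(serialized)  # what an importer reads back

def run(graph: dict, feeds: dict) -> float:
    """Rebuild and execute the imported graph; Parameter values come from `feeds`."""
    values = dict(feeds)
    for node in graph["nodes"]:
        if node["op"] == "Add":
            lhs, rhs = (values[i] for i in node["inputs"])
            values[node["name"]] = lhs + rhs
    return values[graph["result"]]

print(run(imported, {"x": 1.5, "y": 2.5}))  # 4.0
```

The point is that the serialized file, not bridge code, carries everything the importer needs to reconstruct an equivalent graph.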
What's next?
-------------

We developed nGraph to simplify the realization of optimized deep learning
performance across frameworks and hardware platforms. You can read more about
design decisions and what is tentatively in the pipeline for development in
our `arXiv paper`_ from the 2018 SysML conference.

.. _widely-supported frameworks: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
.. _arXiv paper: https://arxiv.org/pdf/1801.08058.pdf
.. _Intel® MKL-DNN: https://github.com/intel/mkl-dnn