Unverified commit f3e4673c authored by Jennifer Myers, committed by GitHub

Merge pull request #696 from NervanaSystems/leona/20_Mar

Updates to docs
parents fc9018dc 5481af83
......@@ -39,6 +39,7 @@ extensions = ['sphinx.ext.mathjax',
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
static_path = ['static']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
......
......@@ -38,13 +38,13 @@ devices, or it might mean programmatically adjusting a model or the compute
resources it requires, at an unknown or arbitrary time after it has been deemed
to be trained well enough.
To get started, we've provided a basic example for how to :doc:`execute` with
an nGraph backend; this is analogous to a framework bridge.
For data scientists or algorithm developers who are trying to extract specifics
about the state of a model at a certain node, or who want to optimize a model
at a more granular level, we provide an example for how to :doc:`import` and
run inference after it has been exported from a DL framework.
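To make the execute workflow concrete, here is a minimal, self-contained toy in Python: a stand-in for the kind of computation graph a framework bridge would build and hand to an nGraph backend for evaluation. Every name here (`Node`, `execute`, the op strings) is invented for this sketch; the real API lives in the :doc:`execute` guide.

```python
class Node:
    """A graph node: either an input placeholder or an elementwise operation."""
    def __init__(self, op=None, inputs=()):
        self.op = op          # None marks a parameter (graph input)
        self.inputs = list(inputs)

def execute(output, bindings):
    """Evaluate `output` bottom-up, reading parameter values from `bindings`."""
    if output.op is None:
        return bindings[output]
    args = [execute(i, bindings) for i in output.inputs]
    if output.op == "add":
        return [x + y for x, y in zip(*args)]
    if output.op == "mul":
        return [x * y for x, y in zip(*args)]
    raise ValueError(f"unsupported op: {output.op}")

# (a + b) * c, analogous to what a bridge would build from framework ops
a, b, c = Node(), Node(), Node()
result = Node("mul", [Node("add", [a, b]), c])
print(execute(result, {a: [1.0, 2.0], b: [3.0, 4.0], c: [2.0, 2.0]}))
```

A bridge does essentially this at a larger scale: it constructs the graph from framework ops and then delegates evaluation to a compiled backend rather than interpreting it node by node.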
This section is under development; we'll continually populate it with more
articles geared toward data scientists, algorithm designers, framework developers,
......
......@@ -17,31 +17,52 @@
nGraph library
###############
Welcome to nGraph™, an open-source C++ compiler library for running and
training :abbr:`Deep Neural Network (DNN)` models. This project is
framework-neutral and can target a variety of modern devices or platforms.

For this early release, we've provided :doc:`framework-integration-guides` to
compile and run MXNet\* and TensorFlow\*-based projects. If you already have
a trained model, see our :doc:`howto/import` guide and start working
with the nGraph APIs.

.. figure:: graphics/ngraph-ecosystem.png
   :width: 585px
nGraph currently supports :doc:`three popular <framework-integration-guides>`
frameworks for :abbr:`Deep Learning (DL)` models through what we call
a :term:`bridge` that can be integrated during the framework's build time.
For developers working with other frameworks (even those not listed above),
we've created a :doc:`How to Guide <howto/index>` explaining how to create
custom bridge code that can be used to :doc:`compile and run <howto/execute>`
a model for training.
We've recently added initial support for the ONNX format. Developers who
already have a "trained" model can use nGraph to bypass a lot of the
framework-based complexity and :doc:`howto/import` to test or run it
on targeted and efficient backends with our user-friendly ``ngraph_api``.
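The import-then-infer workflow described above can be sketched as a toy. The serialized-model format and every function name below are invented for the illustration (the real entry points are documented in the import guide); the point is only the shape of the workflow: deserialize a trained model once, then run inference without the originating framework.

```python
import json

def import_model(serialized):
    """'Import' a trained model: here, weights for y = W.x + b stored as JSON."""
    params = json.loads(serialized)
    return params["W"], params["b"]

def run_inference(model, x):
    W, b = model
    # One dense layer: dot product per output row, plus bias.
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + bias
            for row, bias in zip(W, b)]

exported = json.dumps({"W": [[1.0, 0.0], [0.0, 2.0]], "b": [0.5, -0.5]})
model = import_model(exported)           # bypasses the framework entirely
print(run_inference(model, [3.0, 4.0]))  # -> [3.5, 7.5]
```

In the real system the serialized format is ONNX and the inference step runs on a compiled backend, but the division of labor is the same.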
With nGraph, data scientists can focus on data science rather than worrying
about how to adapt models to train and run efficiently on different devices.
Supported platforms
--------------------
Initially-supported backends include:
* Intel® Architecture Processors (CPUs),
* Intel® Nervana™ Neural Network Processor™ (NNPs), and
* NVIDIA\* CUDA (GPUs).
Tentatively in the pipeline, we plan to add support for more backends,
including:
* :abbr:`Field-Programmable Gate Arrays (FPGA)`
* Movidius
.. note:: The library code is under active development as we're continually
adding support for more kinds of DL models and ops, framework compiler
optimizations, and backends.
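The benefit of a shared :abbr:`Intermediate Representation (IR)` is that each framework bridge only has to map its own op names onto one IR op set, and each backend only has to implement the IR ops, instead of every framework pairing with every backend. The op tables below are invented examples, not nGraph's actual op set:

```python
IR_OPS = {"add", "matmul", "relu"}

# Each bridge ships one translation table from its framework's op names.
BRIDGE_TABLES = {
    "frameworkA": {"Add": "add", "MatMul": "matmul", "Relu": "relu"},
    "frameworkB": {"plus": "add", "dot": "matmul", "rectlin": "relu"},
}

def lower(framework, ops):
    """Translate a framework's op sequence into IR ops."""
    table = BRIDGE_TABLES[framework]
    lowered = [table[op] for op in ops]
    assert set(lowered) <= IR_OPS  # backends only ever see IR ops
    return lowered

# Both frameworks reach the same IR, so one backend serves both.
print(lower("frameworkA", ["MatMul", "Add", "Relu"]))
print(lower("frameworkB", ["dot", "plus", "rectlin"]))
# both print ['matmul', 'add', 'relu']
```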
Further project details can be found on our :doc:`project/about` page, or see
our :doc:`install` guide for how to get started.
......
......@@ -3,19 +3,47 @@
About
=====
Welcome to nGraph™, an open-source C++ compiler library for running and
training :abbr:`Deep Neural Network (DNN)` models. This project is
framework-neutral and can target a variety of modern devices or platforms.

.. figure:: ../graphics/ngraph-ecosystem.png
   :width: 585px
nGraph currently supports :doc:`three popular <../framework-integration-guides>`
frameworks for :abbr:`Deep Learning (DL)` models through what we call
a :term:`bridge` that can be integrated during the framework's build time.
For developers working with other frameworks (even those not listed above),
we've created a :doc:`How to Guide <../howto/index>` so you can learn how to create
custom bridge code that can be used to :doc:`compile and run <../howto/execute>`
a training model.
We've recently added initial support for the `ONNX`_ format. Developers who
already have a "trained" model can use nGraph to bypass a lot of the
framework-based complexity and :doc:`../howto/import` to test or run it
on targeted and efficient backends with our user-friendly ``ngraph_api``.
With nGraph, data scientists can focus on data science rather than worrying
about how to adapt models to train and run efficiently on different devices.
Supported platforms
--------------------
Initially-supported backends include:
* Intel® Architecture Processors (CPUs),
* Intel® Nervana™ Neural Network Processor™ (NNPs), and
* NVIDIA\* CUDA (GPUs).
Tentatively in the pipeline, we plan to add support for more backends,
including:
* :abbr:`Field-Programmable Gate Arrays (FPGA)`
* `Movidius`_ compute stick
.. note:: The library code is under active development as we're continually
adding support for more kinds of DL models and ops, framework compiler
optimizations, and backends.
Why was this needed?
......@@ -35,7 +63,7 @@ to similar ops in the new framework, and finally make the necessary changes
for the preferred backend configuration on the new framework.
We designed the Intel nGraph project to substantially reduce these kinds of
engineering complexities. Our compiler-inspired approach means that developers
have fewer constraints imposed by frameworks when working with their models;
they can pick and choose only the components they need to build custom algorithms
for advanced deep learning tasks. Furthermore, if working with a model that is
......@@ -89,7 +117,7 @@ design decisions and what is tentatively in the pipeline for development in
our `arXiv paper`_ from the 2018 SysML conference.
.. _widely-supported frameworks: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
.. _arXiv paper: https://arxiv.org/pdf/1801.08058.pdf
.. _ONNX: http://onnx.ai
.. _Intel® MKL-DNN: https://github.com/intel/mkl-dnn
.. _Movidius: https://developer.movidius.com/