Commit 104ec6fe authored by Leona C

Better details on the about page

parent fc9018dc
@@ -39,6 +39,7 @@ extensions = ['sphinx.ext.mathjax',
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
+static_path = ['static']
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
...
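For context on the conf.py hunk above, here is a minimal sketch of the relevant Sphinx settings as plain Python assignments. Note a hedge: the commit adds a bare ``static_path``, but the option Sphinx's HTML builder documents for static assets is ``html_static_path``; the sketch below uses the documented name, and ``source_suffix`` is included only to illustrate the "suffix(es) of source filenames" comment.

```python
# Sketch of typical Sphinx conf.py path settings (assumption: the intent of
# the commit's `static_path = ['static']` is to register a static-assets
# directory, for which Sphinx's documented option is `html_static_path`).

# Paths that contain templates, relative to the conf.py directory.
templates_path = ['_templates']

# Directories of custom static files (images, CSS) copied into the HTML build.
html_static_path = ['static']

# The suffix(es) of source filenames; a list of strings is also accepted,
# e.g. source_suffix = ['.rst', '.md'].
source_suffix = '.rst'
```

These are module-level assignments that Sphinx reads when it imports conf.py, so no other wiring is needed.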
@@ -38,13 +38,13 @@ devices, or it might mean programmatically adjusting a model or the compute
 resources it requires, at an unknown or arbitrary time after it has been deemed
 to be trained well enough.
-To get started, we've provided a basic example for how to :doc:`execute` a
-computation with an nGraph backend; this is analogous to a framework bridge.
+To get started, we've provided a basic example for how to :doc:`execute` with
+an nGraph backend; this is analogous to a framework bridge.
 For data scientists or algorithm developers who are trying to extract specifics
 about the state of a model at a certain node, or who want to optimize a model
-at a more granular level, we provide an example for how to :doc:`import` a
-model and run inference after it has been exported from a DL framework.
+at a more granular level, we provide an example for how to :doc:`import` and
+run inference after it has been exported from a DL framework.
 This section is under development; we'll continually populate it with more
 articles geared toward data scientists, algorithm designers, framework developers,
...
@@ -22,7 +22,7 @@ project enables modern compute platforms to run and train :abbr:`Deep Neural Net
 models. It is framework neutral and supports a variety of backends used by
 :abbr:`Deep Learning (DL)` frameworks.
-.. image:: graphics/ngraph-ecosys.png
+.. image:: ../static/ngraph-ecosystem.png
    :width: 585px
 For this early release, we've provided :doc:`framework-integration-guides` to
...
@@ -3,19 +3,39 @@
 About
 =====
-Welcome to Intel® nGraph™, an open source C++ library and compiler. This
-project enables modern compute platforms to run and train
-:abbr:`Deep Neural Network (DNN)` models. It is framework-neutral and supports
-a variety of backends used by :abbr:`Deep Learning (DL)` frameworks.
+Welcome to nGraph™, an open-source C++ compiler library for running and
+training :abbr:`Deep Neural Network (DNN)` models. This project is
+framework-neutral and can target a variety of modern devices or platforms.
-.. figure:: ../graphics/ngraph-ecosys.png
+.. figure:: ../graphics/ngraph-ecosystem.png
    :width: 585px
-The nGraph library translates a framework’s representation of computations into
-an :abbr:`Intermediate Representation (IR)` designed to promote computational
-efficiency on target hardware. Initially-supported backends include Intel
-Architecture CPUs, the Intel® Nervana Neural Network Processor™ (NNP),
-and NVIDIA\* GPUs.
+nGraph currently supports :doc:`three of the most popular <framework-integration-guides>`
+frameworks for :abbr:`Deep Learning (DL)` models through what we call
+a :term:`bridge` that can be integrated during the framework's build time.
+For developers working with other frameworks (even those not listed above),
+we've created a :doc:`howto/index` guide so you can teach yourself how to
+create bridge code that can be used to :doc:`howto/execute`.
+
+With nGraph, data scientists can focus on data science rather than worrying
+about how to adapt models to train and run efficiently on different devices.
+
+We've recently added initial support for the `ONNX`_ format. Developers who
+already have a "trained" model can use nGraph to bypass a lot
+of the framework-based complexity and :doc:`howto/import` to test or run it
+on targeted and efficient backends with our user-friendly ``ngraph_api``.
+
+Supported platforms
+--------------------
+
+Initially-supported backends include:
+
+* Intel Architecture CPUs,
+* the Intel® Nervana Neural Network Processor™ (NNP), and
+* NVIDIA\* CUDA GPUs.
+
+Tentatively in the pipeline, we'll be adding backend support for:
+
+* :abbr:`Field Programmable Gate Arrays (FPGA)`
+* Movidius compute stick
 Why was this needed?
@@ -90,6 +110,7 @@ our `arXiv paper`_ from the 2018 SysML conference.
 .. _widely-supported frameworks: http://ngraph.nervanasys.com/docs/latest/framework-integration-guides.html
+.. _ONNX: http://onnx.ai
 .. _arXiv paper: https://arxiv.org/pdf/1801.08058.pdf
 .. _Intel® MKL-DNN: https://github.com/intel/mkl-dnn
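The About page rewritten above leans on the idea of a :term:`bridge`: code that translates a framework's representation of a computation into a backend-neutral intermediate form. As a toy illustration of that idea only (plain Python; none of these names are nGraph API, and the "IR" and "backend" here are deliberately trivial):

```python
# Toy sketch of the "framework bridge" concept: walk a pretend framework's
# expression graph, emit a flat backend-neutral IR, then let a trivial
# "backend" interpret it. Every name here is hypothetical, not nGraph API.

class Node:
    """A node in a pretend framework's expression graph."""
    def __init__(self, op, *inputs, value=None):
        self.op, self.inputs, self.value = op, inputs, value

def bridge(node, ir):
    """Translate a framework graph into IR tuples appended to `ir`;
    returns the slot index that will hold this node's result."""
    if node.op == "const":
        ir.append(("const", node.value))
    else:
        slots = [bridge(i, ir) for i in node.inputs]
        ir.append((node.op, *slots))
    return len(ir) - 1

def execute(ir):
    """A trivial 'backend': interpret the IR slot by slot."""
    slots = []
    for instr in ir:
        if instr[0] == "const":
            slots.append(instr[1])
        elif instr[0] == "add":
            slots.append(slots[instr[1]] + slots[instr[2]])
        elif instr[0] == "mul":
            slots.append(slots[instr[1]] * slots[instr[2]])
    return slots[-1]

# (2 + 3) * 4 expressed as a framework graph, bridged to IR, then executed.
graph = Node("mul",
             Node("add", Node("const", value=2), Node("const", value=3)),
             Node("const", value=4))
ir = []
bridge(graph, ir)
print(execute(ir))  # -> 20
```

A real bridge does far more (type and shape propagation, operator coverage, memory planning), but the shape is the same: one traversal from framework graph to IR, and separate backends that consume the IR.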