.. ---------------------------------------------------------------------------
.. Copyright 2018 Intel Corporation
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
..      http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
.. ---------------------------------------------------------------------------


.. This documentation is available online at 
.. http://ngraph.nervanasys.com/docs/latest/


########
nGraph 
########

Welcome to the nGraph documentation site. nGraph is an open-source C++ library 
and runtime / compiler suite for :abbr:`Deep Learning (DL)` ecosystems. Our goal 
is to empower algorithm designers, data scientists, framework architects, 
software engineers, and others with the means to make their work :ref:`portable`, 
:ref:`adaptable`, and :ref:`deployable` across the most modern 
:abbr:`Machine Learning (ML)` hardware available today: optimized Deep Learning
computation devices.

.. figure:: graphics/599px-Intel-ngraph-ecosystem.png
   :width: 599px   

.. _portable:

Portable
========

One of nGraph's key features is **framework neutrality**. While we currently 
support :doc:`three popular <framework-integration-guides>` frameworks with 
pre-optimized deployment runtimes for training :abbr:`Deep Neural Network (DNN)` 
models, you are not limited to these when choosing a frontend. Architects of 
any framework, including those not listed here, can use our documentation to 
learn how to :doc:`compile and run <howto/execute>` a training model and to 
design or tweak a framework to bridge directly to the nGraph compiler. With a 
*portable* model at the core of your :abbr:`DL (Deep Learning)` ecosystem, it 
is no longer necessary to bring large datasets to the model for training; you 
can take your model -- in whole, or in part -- to where the data lives and 
potentially save significant machine resources.

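As a concrete taste of that workflow, here is a minimal sketch that uses the 
nGraph Python API to build a small computation and execute it on the CPU 
backend. Exact function names can vary between releases, so treat this as 
illustrative only and see :doc:`howto/execute` for the authoritative 
walkthrough.

.. code-block:: python

   import numpy as np
   import ngraph as ng

   # Build a tiny graph: C = A + B on 2x2 float32 tensors
   A = ng.parameter(shape=[2, 2], name='A', dtype=np.float32)
   B = ng.parameter(shape=[2, 2], name='B', dtype=np.float32)
   model = A + B

   # Compile the graph for the CPU backend and run it
   runtime = ng.runtime(backend_name='CPU')
   computation = runtime.computation(model, A, B)
   result = computation(np.array([[1, 2], [3, 4]], dtype=np.float32),
                        np.array([[5, 6], [7, 8]], dtype=np.float32))
   print(result)  # [[ 6.  8.] [10. 12.]]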


.. _adaptable: 

Adaptable
=========

We've recently begun support for the `ONNX`_ format. Developers who already have 
a "trained" :abbr:`DNN (Deep Neural Network)` model can use nGraph to bypass 
significant framework-based complexity and :doc:`import it <howto/import>` 
to test or run on targeted and efficient backends with our user-friendly 
Python-based API. See the `ngraph onnx companion tool`_ to get started. 

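The import flow looks roughly like the sketch below, which assumes the 
importer interface described in :doc:`howto/import`; function names and 
return values can differ between ``ngraph-onnx`` releases, and the model file 
and input shape used here are placeholders.

.. code-block:: python

   import numpy as np
   import onnx
   import ngraph as ng
   from ngraph_onnx.onnx_importer.importer import import_onnx_model

   # Load a serialized ONNX model (hypothetical file name)
   onnx_protobuf = onnx.load('model.onnx')

   # Convert the ONNX graph into nGraph model(s)
   ng_models = import_onnx_model(onnx_protobuf)
   model = ng_models[0]

   # Run inference on the CPU backend
   runtime = ng.runtime(backend_name='CPU')
   infer = runtime.computation(model['output'], *model['inputs'])

   # Placeholder input; use the shape your model actually expects
   input_batch = np.zeros((1, 3, 224, 224), dtype=np.float32)
   prediction = infer(input_batch)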

.. csv-table::
   :header: "Framework", "Bridge Available?", "ONNX Support?"
   :widths: 27, 10, 10

   TensorFlow, Yes, Yes
   MXNet, Yes, Yes
   PaddlePaddle, Coming Soon, Yes
   PyTorch, No, Yes
   CNTK, No, Yes
   Other, Custom, Custom


.. _deployable:

Deployable
==========

It's no secret that the :abbr:`DL (Deep Learning)` ecosystem is evolving 
rapidly. Benchmark comparisons can be skewed dramatically by subtle tweaks to 
batch sizes or latency numbers here and there. Where traditional GPU-based 
training excels, inference can lag, and vice versa. Sometimes what we care 
about is not "speed at training a large dataset" but rather the latency of 
compiling a complex multi-layer algorithm locally and sending its output back 
to an edge network, where it can be analyzed by an already-trained model.

Indeed, when choosing among topologies, it is important not to lose sight of 
the ultimate deployability and machine-runtime demands of your component in 
the larger ecosystem. It doesn't make sense to use a heavy-duty backhoe to 
plant a flower bulb. Furthermore, if you are trying to develop an entirely 
new genre of modeling for a :abbr:`DNN (Deep Neural Network)` component, it 
may be especially beneficial to consider ahead of time how portable and 
mobile you want that model to be within the rapidly changing ecosystem. 
With nGraph, any modern CPU can be used to design, write, test, and deploy 
a training or inference model. You can then adapt and update that same core 
model to run on a variety of backends, as the short sketch after the 
following table illustrates:


.. csv-table::
   :header: "Backend", "Current support", "Future nGraph support"
   :widths: 35, 10, 10

   Intel® Architecture Processors (CPUs), Yes, Yes
   Intel® Nervana™ Neural Network Processors (NNPs), Yes, Yes
   AMD\* GPUs, via PlaidML, Yes
   NVIDIA\* GPUs, via PlaidML, Some 
   Intel® Architecture GPUs, Yes, Yes 
   :abbr:`Field Programmable Gate Arrays (FPGAs)`, Coming soon, Yes
   Intel Movidius™ Myriad™ 2 (VPU), Coming soon, Yes
   Other, Not yet, Ask

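To make that retargeting concrete, here is a minimal sketch (again using the 
Python API) in which only the backend name changes; the set of valid backend 
names depends on how your copy of nGraph was built, so the ``'GPU'`` line is 
an assumption rather than a guarantee.

.. code-block:: python

   import numpy as np
   import ngraph as ng

   def run_on(backend_name):
       """Build and run the same small model on the requested backend."""
       A = ng.parameter(shape=[4], name='A', dtype=np.float32)
       B = ng.parameter(shape=[4], name='B', dtype=np.float32)
       runtime = ng.runtime(backend_name=backend_name)
       computation = runtime.computation(A * B, A, B)
       return computation(np.ones(4, dtype=np.float32),
                          np.arange(4, dtype=np.float32))

   print(run_on('CPU'))    # CPU support is built in
   # print(run_on('GPU'))  # other names depend on your build
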
The value we're offering to the developer community is empowerment: we are
confident that Intel® Architecture already provides the best computational 
resources available for the breadth of ML/DL tasks. We welcome ideas and 
`contributions`_ from the community. 

Further project details can be found on our :doc:`project/about` page, or see 
our :doc:`buildlb` guide for how to get started.


.. note:: The library code is under active development as we're continually 
   adding support for more kinds of DL models and ops, framework compiler 
   optimizations, and backends.

L.S. Cook's avatar


Contents
========

.. toctree::
   :maxdepth: 1
   :name: tocmaster
   :caption: Documentation

   buildlb.rst
   graph-basics.rst
   howto/index.rst
   ops/index.rst
   framework-integration-guides.rst
   frameworks/index.rst
   fusion/index.rst
   programmable/index.rst
   distr/index.rst
   python_api/index.rst
   project/index.rst


Indices and tables
==================

   * :ref:`search`   
   * :ref:`genindex`

.. _ONNX: http://onnx.ai
.. _ngraph onnx companion tool: https://github.com/NervanaSystems/ngraph-onnx
.. _Movidius: https://www.movidius.com/
.. _contributions: https://github.com/NervanaSystems/ngraph#how-to-contribute