Commit 536342f1 authored by L.S. Cook, committed by Robert Kimball

Edit index page for new howto section (#578)

parent 0c43f175
@@ -92,3 +92,8 @@ Glossary
       Tensors are maps from *coordinates* to scalar values, all of the
       same type, called the *element type* of the tensor.
+
+   model description
+      A description of a program's fundamental operations that are
+      used by a framework to generate inputs for computation.
-.. execute.rst
+.. execute-cmp.rst
 ######################
 Execute a Computation
@@ -7,10 +7,10 @@ Execute a Computation
 This section explains how to manually perform the steps that would normally be
 performed by a framework :term:`bridge` to execute a computation. Intel® nGraph
 library is targeted toward automatic construction; it is far easier for a
-processing unit (GPU, CPU, or NNP) to run a computation than it is for a user
-to map out how that computation happens. Unfortunately, things that make by-hand
-graph construction simpler tend to make automatic construction more difficult,
-and vice versa.
+processing unit (GPU, CPU, or an `Intel Nervana NNP`_) to run a computation than
+it is for a user to map out how that computation happens. Unfortunately, things
+that make by-hand graph construction simpler tend to make automatic construction
+more difficult, and vice versa.
 
 Here we will do all the bridge steps manually. The :term:`model description`
 we're explaining is based on the :file:`abc.cpp` file in the ``/doc/examples/``
@@ -25,7 +25,7 @@ user) must be able to carry out in order to successfully execute a computation:
 * :ref:`invoke_cmp`
 * :ref:`access_outputs`
 
-The final code is a the end of this page, on :ref:`all_together`.
+The final code is at the :ref:`end of this page <all_together>`.
 
 .. _define_cmp:
@@ -64,8 +64,8 @@ deallocated when they are no longer needed. A brief summary of shared
 pointers is given in the glossary.
 
 Every node has zero or more *inputs*, zero or more *outputs*, and zero or more
-*attributes*. The specifics for each :cpp::type:: permitted on a core
-``Op``-specific basis can be discovered in :doc:`ops` docs. For our
+*attributes*. The specifics for each ``type`` permitted on a core ``Op``-specific
+basis can be discovered in our :doc:`../ops/index` docs. For our
 purpose to :ref:`define a computation <define_cmp>`, nodes should be thought of
 as essentially immutable; that is, when constructing a node, we need to supply
 all of its inputs. We get this process started with ops that have no inputs,
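To make the construction rules in this hunk concrete, here is a minimal sketch of how the ``abc`` computation might be defined: zero-input ``op::Parameter`` nodes get the graph started, and every downstream node receives all of its inputs at construction. The class names follow the classic nGraph C++ API (``op::Parameter``, ``op::Add``, ``op::Multiply``, ``Function``), but header layout and vector typedefs varied between releases, so treat the details as assumptions rather than the canonical ``abc.cpp`` listing.

.. code-block:: cpp

   // Sketch only: defining (a + b) * c. Assumes the classic nGraph C++ API;
   // names such as op::ParameterVector differed across releases.
   #include <memory>
   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   std::shared_ptr<Function> make_abc_function()
   {
       Shape shape{2, 3};

       // Ops with no inputs (Parameters) start the graph; they become the
       // inputs of the eventual function.
       auto a = std::make_shared<op::Parameter>(element::f32, shape);
       auto b = std::make_shared<op::Parameter>(element::f32, shape);
       auto c = std::make_shared<op::Parameter>(element::f32, shape);

       // Nodes are treated as immutable: every input is supplied when the
       // node is constructed, and shared_ptr ownership keeps producers alive
       // as long as consumers reference them.
       auto t0 = std::make_shared<op::Add>(a, b);
       auto t1 = std::make_shared<op::Multiply>(t0, c);

       // The Function ties the result node to the ordered parameter list.
       return std::make_shared<Function>(NodeVector{t1},
                                         op::ParameterVector{a, b, c});
   }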
@@ -233,4 +233,5 @@ Put it all together
 
 .. _Intel MKL-DNN: https://01.org/mkl-dnn
+.. _Intel Nervana NNP: https://ai.intel.com/intel-nervana-neural-network-processors-nnp-redefine-ai-silicon/
\ No newline at end of file
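Continuing the same sketch, the remaining bridge steps (compile for a backend, invoke the call, and read the outputs back) might look roughly like the following. The backend and tensor calls shown here (``runtime::Backend::create``, ``create_tensor``, ``write``, ``read``, ``call``) are assumptions based on one generation of the nGraph C++ API and changed signatures over time, so this is an illustrative outline rather than the final ``abc.cpp`` code referenced in the hunk above.

.. code-block:: cpp

   // Sketch only: compiling and invoking the function built earlier, then
   // reading back the result. Backend/tensor method names are assumptions.
   #include <vector>
   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   void run_abc(const std::shared_ptr<Function>& f)
   {
       Shape shape{2, 3};
       auto backend = runtime::Backend::create("CPU");  // pick a backend by name

       // Allocate backend-side tensors for three inputs and one output.
       auto t_a = backend->create_tensor(element::f32, shape);
       auto t_b = backend->create_tensor(element::f32, shape);
       auto t_c = backend->create_tensor(element::f32, shape);
       auto t_r = backend->create_tensor(element::f32, shape);

       // Copy host data into the input tensors.
       std::vector<float> v_a{1, 2, 3, 4, 5, 6};
       std::vector<float> v_b{7, 8, 9, 10, 11, 12};
       std::vector<float> v_c{1, 0, -1, -1, 1, 2};
       t_a->write(v_a.data(), 0, v_a.size() * sizeof(float));
       t_b->write(v_b.data(), 0, v_b.size() * sizeof(float));
       t_c->write(v_c.data(), 0, v_c.size() * sizeof(float));

       // Invoke the computation and read the output back to host memory.
       backend->call(f, {t_r}, {t_a, t_b, t_c});
       std::vector<float> result(shape_size(shape));
       t_r->read(result.data(), 0, result.size() * sizeof(float));
   }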
@@ -3,26 +3,43 @@
 How to
 ======
 
-.. note:: This section is aimed at intermediate users of Intel nGraph library.
-   It assumes a developer has understanding of the concepts in the previous
-   sections. It does not assume knowledge of any particular frontend framework.
+.. note:: This section is aimed at intermediate-level developers working with
+   the nGraph library. It assumes a developer has understanding of the concepts
+   in the previous sections. It does not assume knowledge of any particular
+   frontend framework.
 
-The "How to" articles in this section explain how to do specific tasks with
-Intel nGraph. The recipes are all framework agnostic; in other words, any
-frontend framework that wishes to access the optimizations inherent in nGraph
-will either need to do these things programatically through the framework, or to
-provide documentation for the user. Our primary audience is users who have
-already decided that they want the performance optimizations available through
-the nGraph library's management of custom backends.
+The "How to" articles in this section explain how to do specific tasks with the
+Intel nGraph library. The recipes are all framework agnostic; in other words,
+if an entity (framework or user) wishes to make use of target-based computational
+resources, it can either:
+
+* Do the tasks programmatically through the framework, or
+* Provide a clear model definition with documentation for the computational
+  resources needed.
+
+Since our primary audience is developers who are pushing the boundaries of deep
+learning systems, we go beyond the use of deep learning primitives, and include
+APIs and documentation for developers who want the ability to write programs
+that use custom backends. For example, we know that GPU resources can be useful
+backends for *some* kinds of algorithmic operations, while imposing inherent
+limitations and slowdowns on others. We are barely scraping the surface of what
+is possible for a hybridized approach to many kinds of training and
+inference-based computational tasks.
+
+One of our goals with the nGraph project is to provide developers with tools to
+build programs that quickly access and process data with or from a breadth of
+edge and network devices. Furthermore, we want them to be able to make use of
+the best kind of computational resources for the kind of data they are
+processing, after it has been gathered.
 
 To get started, we've provided a basic example for how to execute a computation
 that can run on an nGraph backend; this is analogous to a framework bridge.
 
-This section is under development; it will eventually contain articles targeted
-toward data scientists, algorithm designers, framework developers, and backend
-engineers -- anyone who wants to pivot on our examples and experiment with the
-variety of hybridization and performance extractions available through the
-nGraph library.
+This section is under development; it will eventually be populated with more
+articles geared toward data scientists, algorithm designers, framework developers,
+backend engineers, and others. We welcome contributions from the community and
+invite you to experiment with the variety of hybridization and performance
+extractions available through the nGraph library.
 
 .. toctree::
    :maxdepth: 1