ngraph / Commits / 536342f1

Commit 536342f1, authored Mar 03, 2018 by L.S. Cook, committed by Robert Kimball on Mar 03, 2018
Edit index page for new howto section (#578)
parent 0c43f175
Showing 3 changed files, with 49 additions and 26 deletions:
* doc/sphinx/source/glossary.rst: +5 -0
* doc/sphinx/source/howto/execute.rst: +11 -10
* doc/sphinx/source/howto/index.rst: +33 -16
doc/sphinx/source/glossary.rst
...
...
@@ -92,3 +92,8 @@ Glossary
      Tensors are maps from *coordinates* to scalar values, all of the
      same type, called the *element type* of the tensor.
+
+   model description
+      A description of a program's fundamental operations that are used by a
+      framework to generate inputs for computation.
doc/sphinx/source/howto/execute.rst
-.. execute.rst
+.. execute-cmp.rst
######################
Execute a Computation
...
...
@@ -7,10 +7,10 @@ Execute a Computation

This section explains how to manually perform the steps that would normally be
performed by a framework :term:`bridge` to execute a computation. Intel® nGraph
library is targeted toward automatic construction; it is far easier for a
-processing unit (GPU, CPU, or NNP) to run a computation than it is for a user
-to map out how that computation happens. Unfortunately, things that make by-hand
-graph construction simpler tend to make automatic construction more difficult,
-and vice versa.
+processing unit (GPU, CPU, or an `Intel Nervana NNP`_) to run a computation than
+it is for a user to map out how that computation happens. Unfortunately, things
+that make by-hand graph construction simpler tend to make automatic construction
+more difficult, and vice versa.
Here we will do all the bridge steps manually. The :term:`model description`
we're explaining is based on the :file:`abc.cpp` file in the ``/doc/examples/``
...
...
@@ -25,7 +25,7 @@ user) must be able to carry out in order to successfully execute a computation:

* :ref:`invoke_cmp`
* :ref:`access_outputs`

-The final code is a the end of this page, on :ref:`all_together`.
+The final code is at the :ref:`end of this page <all_together>`.
.. _define_cmp:
...
...
@@ -64,8 +64,8 @@ deallocated when they are no longer needed. A brief summary of shared
pointers is given in the glossary.

Every node has zero or more *inputs*, zero or more *outputs*, and zero or more
-*attributes*. The specifics for each :cpp::type:: permitted on a core
-``Op``-specific basis can be discovered in :doc:`ops` docs. For our
+*attributes*. The specifics for each ``type`` permitted on a core ``Op``-specific
+basis can be discovered in our :doc:`../ops/index` docs. For our
purpose to :ref:`define a computation <define_cmp>`, nodes should be thought of
as essentially immutable; that is, when constructing a node, we need to supply
all of its inputs. We get this process started with ops that have no inputs,
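The construction model this hunk describes (immutable nodes whose inputs are all supplied at construction, bootstrapped from input-free ops such as parameters) can be sketched outside nGraph itself. The following stand-alone C++ illustration uses hypothetical `Node`, `parameter`, and `add` stand-ins; it is not the actual nGraph API:

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for a graph node (NOT the nGraph API): every input
// is supplied at construction, so a node is effectively immutable afterward.
struct Node {
    std::string op;
    std::vector<std::shared_ptr<Node>> inputs;

    explicit Node(std::string o, std::vector<std::shared_ptr<Node>> in = {})
        : op(std::move(o)), inputs(std::move(in)) {}
};

// Ops with no inputs (parameters) get the construction process started.
inline std::shared_ptr<Node> parameter() {
    return std::make_shared<Node>("Parameter");
}

// An op node can only be built from nodes that already exist, which is
// why graph construction proceeds from inputs toward outputs.
inline std::shared_ptr<Node> add(std::shared_ptr<Node> a,
                                 std::shared_ptr<Node> b) {
    return std::make_shared<Node>(
        "Add", std::vector<std::shared_ptr<Node>>{a, b});
}
```

Because a node's inputs cannot be changed after construction, the resulting structure is a directed acyclic graph by construction, which is the property the automatic bridge machinery relies on.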
...
...
@@ -233,4 +233,5 @@ Put it all together

-.. _Intel MKL-DNN: https://01.org/mkl-dnn
\ No newline at end of file
+.. _Intel MKL-DNN: https://01.org/mkl-dnn
+.. _Intel Nervana NNP: https://ai.intel.com/intel-nervana-neural-network-processors-nnp-redefine-ai-silicon/
\ No newline at end of file
doc/sphinx/source/howto/index.rst
...
...
@@ -3,26 +3,43 @@
How to
======

-.. note:: This section is aimed at intermediate users of Intel nGraph library.
-   It assumes a developer has an understanding of the concepts in the previous
-   sections. It does not assume knowledge of any particular frontend framework.
-
-The "How to" articles in this section explain how to do specific tasks with
-Intel nGraph. The recipes are all framework agnostic; in other words, any
-frontend framework that wishes to access the optimizations inherent in nGraph
-will either need to do these things programmatically through the framework, or
-to provide documentation for the user. Our primary audience is users who have
-already decided that they want the performance optimizations available through
-the nGraph library's management of custom backends.
+.. note:: This section is aimed at intermediate-level developers working with
+   the nGraph library. It assumes a developer has an understanding of the
+   concepts in the previous sections. It does not assume knowledge of any
+   particular frontend framework.
+
+The "How to" articles in this section explain how to do specific tasks with the
+Intel nGraph library. The recipes are all framework agnostic; in other words,
+if an entity (framework or user) wishes to make use of target-based
+computational resources, it can either:
+
+* Do the tasks programmatically through the framework, or
+* Provide a clear model definition with documentation for the computational
+  resources needed.
+
+Since our primary audience is developers who are pushing the boundaries of deep
+learning systems, we go beyond the use of deep learning primitives, and include
+APIs and documentation for developers who want the ability to write programs
+that use custom backends. For example, we know that GPU resources can be useful
+backends for *some* kinds of algorithmic operations, while they impose inherent
+limitations on, and slow down, others. We are barely scraping the surface of
+what is possible with a hybridized approach to many kinds of training and
+inference-based computational tasks.
+
+One of our goals with the nGraph project is to equip developers with tools to
+build programs that quickly access and process data with or from a breadth of
+edge and network devices. Furthermore, we want them to be able to make use of
+the best kind of computational resources for the kind of data they are
+processing, after it has been gathered.
+
+To get started, we've provided a basic example of how to execute a computation
+that can run on an nGraph backend; this is analogous to a framework bridge.

-This section is under development; it will eventually contain articles targeted
-toward data scientists, algorithm designers, framework developers, and backend
-engineers -- anyone who wants to pivot on our examples and experiment with the
-variety of hybridization and performance extractions available through the
-nGraph library.
+This section is under development; it will eventually be populated with more
+articles geared toward data scientists, algorithm designers, framework
+developers, backend engineers, and others. We welcome contributions from the
+community and invite you to experiment with the variety of hybridization and
+performance extractions available through the nGraph library.
.. toctree::
:maxdepth: 1
...
...