ngraph / Commits / e2322b66

Commit e2322b66, authored Aug 20, 2019 by Leona C
Previously-tested MXNet detail may not be current
Parent: 177372c3
Showing 4 changed files with 24 additions and 11 deletions (+24 -11):

* doc/sphinx/source/backends/index.rst (+3 -4)
* doc/sphinx/source/frameworks/validated/list.rst (+1 -1)
* doc/sphinx/source/project/contribution-guide.rst (+8 -1)
* doc/sphinx/source/project/extras/testing_latency.rst (+12 -5)
doc/sphinx/source/backends/index.rst

@@ -21,9 +21,9 @@ from a framework on a CPU, GPU, or ASIC; it can also be used with an

 *Interpreter* mode, which is primarily intended for testing, to analyze a
 program, or to help a framework developer customize targeted solutions.

-.. nGraph also provides a way to use the advanced tensor compiler PlaidML
-.. as a backend; you can learn more about this backend and how to build it
-.. from source in our documentation: :ref:`ngraph_plaidml_backend`.
+nGraph also provides a way to use the advanced tensor compiler PlaidML
+as a backend; you can learn more about this backend and how to build it
+from source in our documentation: :ref:`ngraph_plaidml_backend`.

 .. csv-table::
    :header: "Backend", "Current nGraph support", "Future nGraph support"

@@ -31,7 +31,6 @@ program, or to help a framework developer customize targeted solutions.

    Intel® Architecture Processors (CPUs), Yes, Yes
    Intel® Nervana™ Neural Network Processor™ (NNPs), Yes, Yes
    NVIDIA\* CUDA (GPUs), Yes, Some
-   AMD\* GPUs, Yes, Some
doc/sphinx/source/frameworks/validated/list.rst

@@ -10,7 +10,7 @@ workloads:

 * :ref:`tensorflow_valid`
 * :ref:`mxnet_valid`
 * :ref:`onnx_valid`
-* :doc:`../../project/extras/testing_latency.rst`
+* :ref:`testing_latency`

 .. _tensorflow_valid:
doc/sphinx/source/project/contribution-guide.rst

-.. contribution-guide:
+.. project/contribution-guide.rst:
+.. _contribution_guide:

 ##################
 Contribution guide

@@ -261,5 +264,8 @@ it is automatically enforced and reduces merge conflicts.
To contribute documentation for your code, please see the :doc:`doc-contributor-README`.
.. include:: doc-contributor-README.rst
.. _Apache 2: https://www.apache.org/licenses/LICENSE-2.0
.. _repo wiki: https://github.com/NervanaSystems/ngraph/wiki
\ No newline at end of file
doc/sphinx/source/project/extras/testing_latency.rst
.. project/extras/testing_latency.rst:

.. _testing_latency:

Testing latency
===============

.. important:: This tutorial was tested using previous versions. While it is
   not currently or officially supported in the latest nGraph Compiler stack
   |version|, some configuration options may still work.

Many open-source DL frameworks provide a layer where experts in data science
can make use of optimizations contributed by machine learning engineers.
Having a common API benefits both: it simplifies deployment and makes it
easier for ML engineers working on advanced deep learning hardware to bring
highly-optimized performance to a wide range of models, especially in
inference.

One DL framework with advancing efforts on graph optimizations is Apache
MXNet\*, where `Intel has contributed efforts showing`_ how to work with our
...
@@ -17,7 +24,7 @@ nGraph Compiler stack as an `experimental backend`_. Our approach provides

optimizations **than would be available to the MXNet framework alone**, for
reasons outlined in our `introduction`_ documentation. Note that the MXNet
bridge requires trained models only; it does not support distributed training.
...
@@ -62,7 +69,7 @@ install MXNet to the virtual environment:

Now we're ready to use nGraph to run any model on a CPU backend. Building MXNet
with nGraph automatically enabled nGraph on your model scripts, and you
shouldn't need to do anything special. If you run into trouble, you can disable
nGraph by setting

.. code-block:: console
...
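The console command in the block above is collapsed in this diff view. As a
rough sketch only (not the original snippet), toggling the bridge usually comes
down to an environment variable; the name ``MXNET_SUBGRAPH_BACKEND`` used below
is an assumption about the bridge's configuration knob, so check the bridge's
README for the exact variable and value it expects.

.. code-block:: python

   import os

   # Assumption: the nGraph-MXNet bridge consults MXNET_SUBGRAPH_BACKEND when
   # MXNet is imported. Clearing it before the import falls back to the stock
   # MXNet executor, which is useful for isolating nGraph-related problems.
   os.environ.pop("MXNET_SUBGRAPH_BACKEND", None)

   import mxnet as mx
   print(mx.__version__)

Restoring the variable (or simply starting a fresh shell) returns the
nGraph-enabled behaviour.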
@@ -81,14 +88,14 @@ Note that the nGraph-MXNet bridge supports static graphs only (dynamic graphs

are in the works); so for this example, we begin by converting the gluon model
into a static graph. Also note that any model with a saved checkpoint can be
considered a "static graph" in nGraph. For this example, we'll presume that the
model is pre-trained.

.. literalinclude:: ../../../../examples/subgraph_snippets/mxnet-gluon-example.py
   :language: python
   :lines: 17-32
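The snippet pulled in above lives in
``examples/subgraph_snippets/mxnet-gluon-example.py`` and is not part of this
diff. A minimal sketch of the same step, under stated assumptions: the model
(``resnet18_v1`` from the Gluon model zoo) and the checkpoint prefix
(``"resnet18"``) are illustrative choices, not taken from that file.

.. code-block:: python

   import mxnet as mx
   from mxnet.gluon.model_zoo import vision

   # Start from a pre-trained imperative (Gluon) model.
   net = vision.resnet18_v1(pretrained=True)

   # hybridize() switches the block to graph execution; one forward pass
   # traces the graph, and export() writes a -symbol.json / -0000.params
   # checkpoint -- the "static graph" form the bridge works with.
   net.hybridize()
   net(mx.nd.zeros((1, 3, 224, 224)))
   net.export("resnet18", epoch=0)

   # Reload the checkpoint as a Symbol plus parameter dictionaries.
   sym, arg_params, aux_params = mx.model.load_checkpoint("resnet18", 0)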
To load the model into nGraph, we simply bind the symbol into an Executor.

.. literalinclude:: ../../../../examples/subgraph_snippets/mxnet-gluon-example.py
   :language: python
...
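As above, the referenced snippet is not shown in this diff; the following is
only a sketch of the bind-and-time step, reusing the hypothetical ``sym``,
``arg_params``, and ``aux_params`` from the previous sketch and assuming a
1x3x224x224 input.

.. code-block:: python

   import time
   import mxnet as mx

   # Bind the symbol into an Executor on the CPU context; when the bridge is
   # active, nGraph takes over the supported subgraphs at this point.
   exe = sym.simple_bind(ctx=mx.cpu(), data=(1, 3, 224, 224), grad_req="null")
   exe.copy_params_from(arg_params, aux_params)

   # Warm up once, then measure mean single-image latency.
   batch = mx.nd.random.uniform(shape=(1, 3, 224, 224))
   exe.forward(is_train=False, data=batch)
   exe.outputs[0].wait_to_read()

   runs = 100
   start = time.perf_counter()
   for _ in range(runs):
       exe.forward(is_train=False, data=batch)
       exe.outputs[0].wait_to_read()  # block until the result is ready
   print("mean latency: %.2f ms" % ((time.perf_counter() - start) / runs * 1e3))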