Commit 6433a8f0 authored by Leona C, committed by Scott Cyphers

Documentation for Dynamic Shapes and additional graph construction options (#3930)

* Initial dynamic shapes doc

* Basics on dynamic shapes, with example code

* Add glossary defs and dynamic shapes example

* Slightly better organization

* Address make style check failure, maybe

* Test dynamic shapes doc w 0.27.0-rc.0+9aa81d9

* Resolve doc build error w new opset versioning

* Review comments addressed

* Add theme-relevant revised illustrations from collab_ngai

* style

* Style fixes

* Run make style-apply with clang-format-3.9
parent 27d26fa2
//*****************************************************************************
// Copyright 2017-2019 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************
#include <iostream>
#include <vector>

#include <ngraph/ngraph.hpp>

using namespace std;
using namespace ngraph;

int main()
{
    // Create and compile a graph where the provided shape info for x is
    // (2,?)
    auto x_shape_info = PartialShape{2, Dimension::dynamic()};
    auto x = make_shared<op::Parameter>(element::i32, x_shape_info);
    auto a = x + x;
    auto f = make_shared<Function>(NodeVector{a}, ParameterVector{x});
    // Request a backend that supports dynamic tensors ("CPU" is assumed here)
    auto be = runtime::Backend::create("CPU", true);
    auto ex = be->compile(f);

    // Create a dynamic tensor of shape (2,?)
    auto t_out = be->create_dynamic_tensor(element::i32, x_shape_info);

    // Call the graph to write a value with shape (2,3) to t_out
    // (the input values are placeholders)
    auto t_in = be->create_tensor(element::i32, Shape{2, 3});
    vector<int32_t> v_in(2 * 3, 1);
    t_in->write(v_in.data(), 0, v_in.size() * sizeof(int32_t));
    ex->call({t_out}, {t_in});

    // Call the graph again, to write a value with a different shape to
    // t_out.
    t_in = be->create_tensor(element::i32, Shape{2, 20});
    vector<int32_t> v_in_new(2 * 20, 2);
    t_in->write(v_in_new.data(), 0, v_in_new.size() * sizeof(int32_t));
    ex->call({t_out}, {t_in});

    // Get the result. At this point t_out->get_shape() would return
    // Shape{2,20}, but t_out->get_partial_shape() would return "(2,?)"
    Shape s = t_out->get_shape();
    vector<int32_t> r(shape_size(s));
    t_out->read(r.data(), 0, r.size() * sizeof(int32_t));

    std::cout << "[" << std::endl;
    for (size_t i = 0; i < s[0]; ++i)
    {
        std::cout << " [";
        for (size_t j = 0; j < s[1]; ++j)
        {
            std::cout << r[i * s[1] + j] << ' ';
        }
        std::cout << ']' << std::endl;
    }
    std::cout << ']' << std::endl;

    return 0;
}
......@@ -51,7 +51,7 @@ How to use?
#. A single iteration of the executable is executed by calling the ``call``
method on the ``Executable`` object.
.. figure:: ../graphics/ExecutionInterfaceRunGraphs.png
.. figure:: ../graphics/execution-interface-run-graph.svg
:width: 650px
The execution interface for nGraph
......
......@@ -106,7 +106,7 @@ The process documented here will work on Ubuntu\* 16.04 (LTS) or on Ubuntu
.. code-block:: console
$ cmake .. [-DNGRAPH_USE_PREBUILT_LLVM=OFF] [-DNGRAPH_TARGET_ARCH=skylake-avx512]
$ cmake .. [-DNGRAPH_TARGET_ARCH=skylake-avx512]
#. Run ``$ make`` and ``make install`` to install ``libngraph.so`` and the
header files to ``~/ngraph_dist``:
......
......@@ -5,17 +5,40 @@ Execute a computation
######################
This section explains how to manually perform the steps that would normally be
performed by a framework :term:`bridge` to execute a computation. The nGraph
library is targeted toward automatic construction; it is far easier for a
processing unit (GPU, CPU, or an `Intel Nervana NNP`_) to run a computation than
it is for a human to map out how that computation happens. Unfortunately, things
performed by a framework :term:`bridge` to execute a computation. nGraph graphs
are targeted toward automatic construction; it is far easier for a processor
(a CPU, GPU, or `purpose-built silicon`_) to execute a computation than it is
for a human to map out how that computation happens. Unfortunately, things
that make by-hand graph construction simpler tend to make automatic construction
more difficult, and vice versa.
Here we will do all the bridge steps manually. The :term:`model description`
walk-through below is based on the :file:`abc.cpp` code in the ``/doc/examples/``
directory. We'll be deconstructing the steps that must happen (either programmatically
or manually) in order to successfully execute a computation:
Nevertheless, it can be helpful to break down what is happening during graph
construction. The documentation that follows explains two approaches frameworks
can use to compile with nGraph operations:
* :ref:`Using complete shapes <scenario_one>`
* :ref:`Using partial shapes <scenario_two>`
The nGraph :abbr:`Intermediate Representation (IR)` uses a strong, dynamic
type system, including static shapes. This means that at compilation, every
tensor (or, equivalently, every node output) in the graph is assigned
**complete shape information**; that is, one and only one shape. The static
process by which this assignment takes place is called :term:`shape propagation`.
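As a minimal sketch (assuming the ``ngraph`` and ``std`` namespaces are in
scope, as they are in the example programs), building a graph from inputs with
fully static shapes leaves every node output with exactly one shape after
shape propagation:

.. code-block:: cpp

   auto p = make_shared<op::Parameter>(element::f32, Shape{2, 3});
   auto q = make_shared<op::Parameter>(element::f32, Shape{2, 3});
   auto r = p + q;
   // Every output now has complete shape information; the Add node's
   // output shape is the single static shape {2, 3}.
   std::cout << r->get_shape() << std::endl;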
In the :ref:`first scenario <scenario_one>`, the :term:`model description`
walk-through is based on the :file:`abc.cpp` code in the ``/doc/examples/abc``
directory, and it deconstructs the steps that must happen (either programmatically
or manually) in order to successfully execute a computation given complete
shape information.
.. _scenario_one:
Scenario One: Using Complete Shapes
===================================
A step-by-step example of how a framework might execute with complete shape
information is provided here. For a step-by-step example using dynamic
shapes, see :ref:`scenario_two`.
* :ref:`define_cmp`
* :ref:`specify_backend`
......@@ -25,13 +48,11 @@ or manually) in order to successfully execute a computation:
* :ref:`invoke_cmp`
* :ref:`access_outputs`
The full code is at the :ref:`end of this page <all_together>`.
.. _define_cmp:
Define the computation
======================
----------------------
To a :term:`framework`, a computation is simply a transformation of inputs to
outputs. While a :term:`bridge` can programmatically construct the graph
......@@ -111,10 +132,10 @@ function, in the order they are to be passed to the compiled function. A
.. _specify_backend:
Specify the backend upon which to run the computation
=====================================================
-----------------------------------------------------
For a framework bridge, a *backend* is the environment that can perform the
computations; it can be done with a CPU, GPU, or an Intel Nervana NNP. A
computations; it can be done with a CPU, GPU, or `purpose-built silicon`_. A
*transformer* can compile computations for a backend, allocate and deallocate
tensors, and invoke computations.
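As a rough sketch (assuming ``f`` is the ``Function`` defined in the previous
step), selecting a backend and compiling the computation on it looks like this:

.. code-block:: cpp

   // Create the optimized CPU backend; compile() returns an object that
   // can be invoked repeatedly with different tensors.
   auto backend = runtime::Backend::create("CPU");
   auto exec = backend->compile(f);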
......@@ -123,7 +144,7 @@ and allocate backends. A backend is somewhat analogous to a multi-threaded
process.
There are two backends for the CPU: the optimized ``"CPU"`` backend, which uses
the `Intel MKL-DNN`_, and the ``"INTERPRETER"`` backend, which runs reference
the `DNNL`_, and the ``"INTERPRETER"`` backend, which runs reference
versions of kernels that favor implementation clarity over speed. The
``"INTERPRETER"`` backend can be slow, and is primarily intended for testing.
See the documentation on :doc:`runtime options for various backends <../../backends/index>`
......@@ -139,7 +160,7 @@ To continue with our original example and select the ``"CPU_Backend"``:
.. _compile_cmp:
Compile the computation
=======================
-----------------------
Compilation triggers something that can be used as a factory for producing a
``CallFrame`` which is a *function* and its associated *state* that can run
......@@ -152,7 +173,7 @@ thread needs to execute the function at the same time, create multiple
.. _allocate_backend_storage:
Allocate backend storage for the inputs and outputs
===================================================
---------------------------------------------------
At the graph level, functions are stateless. They do have internal state related
to execution, but there is no user-visible state. Variables must be passed as
......@@ -182,7 +203,7 @@ with ``Tensor``.
.. _initialize_inputs:
Initialize the inputs
=====================
---------------------
Next we need to copy some data into the tensors.
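A short sketch of this step is shown below; the tensor and value names are
illustrative, and the offset-taking overload of ``write`` is assumed:

.. code-block:: cpp

   // Copy host data into one of the input tensors allocated on the backend.
   vector<float> v_a{1, 2, 3, 4, 5, 6};
   t_a->write(v_a.data(), 0, v_a.size() * sizeof(float));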
......@@ -196,7 +217,7 @@ copying data to/from the tensor.
.. _invoke_cmp:
Invoke the computation
======================
----------------------
To invoke the function, we simply pass argument and resultant tensors to the
call frame:
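In outline (with tensor and executable names assumed from the earlier steps),
the result tensors are passed first and the argument tensors second:

.. code-block:: cpp

   // One iteration: write into t_result using the values of t_a, t_b, t_c.
   exec->call({t_result}, {t_a, t_b, t_c});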
......@@ -209,7 +230,7 @@ call frame:
.. _access_outputs:
Access the outputs
==================
------------------
We can use the ``read`` method to access the result:
......@@ -217,10 +238,10 @@ We can use the ``read`` method to access the result:
:language: cpp
:lines: 60-77
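A sketch of this step, assuming ``t_result`` is the output tensor from the
call above and the offset-taking overload of ``read``:

.. code-block:: cpp

   // Size the host buffer from the tensor's static shape, then copy out.
   vector<float> r(shape_size(t_result->get_shape()));
   t_result->read(r.data(), 0, r.size() * sizeof(float));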
.. _all_together:
.. _sshp:
Put it all together
===================
Compiling with Complete Shape Information
-----------------------------------------
.. literalinclude:: ../../../../examples/abc/abc.cpp
:language: cpp
......@@ -228,7 +249,96 @@ Put it all together
:caption: "The (a + b) * c example for executing a computation on nGraph"
.. _scenario_two:
Scenario Two: Known Partial Shape
=================================
The :ref:`second scenario <scenario_two>` involves the use of dynamic tensors.
A :term:`dynamic tensor` is a tensor whose shape can change from one "iteration"
to the next. When a dynamic tensor is created, a framework :term:`bridge` might
supply only *partial* shape information: it might be **all** the tensor
dimensions, **some** of the tensor dimensions, or **none** of the tensor
dimensions; furthermore, the rank of the tensor may be left unspecified.
The "actual" shape of the tensor is not specified until some function writes
some value to it. The actual shape can change when the value of the tensor
is overwritten. It is the backend’s responsibility to set the actual shape.
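The sketch below illustrates that range of partial shape information (``be``
is assumed to be a backend created with dynamic tensor support, as in the
example code):

.. code-block:: cpp

   // All dimensions known
   auto t_full = be->create_dynamic_tensor(element::f32, PartialShape{2, 3});
   // Some dimensions known
   auto t_some = be->create_dynamic_tensor(element::f32,
                                           PartialShape{2, Dimension::dynamic()});
   // Only the rank (here, 2) known
   auto t_rank = be->create_dynamic_tensor(element::f32, PartialShape::dynamic(2));
   // Not even the rank known
   auto t_none = be->create_dynamic_tensor(element::f32, PartialShape::dynamic());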
The :term:`model description` for the second scenario is based on the
:file:`partial_shape.cpp` code in the ``/doc/examples/dynamic_tensor``
directory; it deconstructs the steps that must happen (either
programmatically or manually) in order to successfully retrieve shape data.
* :ref:`create_dyn_tensor`
* :ref:`call_graph_vw_`
* :ref:`call_graph_vwnew`
* :ref:`kpsh`
Create and compile a graph for ``f(x) = x + x``, where the provided shape
information for ``x`` is ``(2,?)``:
.. literalinclude:: ../../../../examples/dynamic_tensor/partial_shape.cpp
:language: cpp
:lines: 27-32
.. _create_dyn_tensor:
Create a dynamic tensor
-----------------------
Create a dynamic tensor of shape ``(2,?)``
.. literalinclude:: ../../../../examples/dynamic_tensor/partial_shape.cpp
:language: cpp
:lines: 35
At this point, ``t_out->get_shape()`` would throw an exception, while
``t_out->get_partial_shape()`` would return ``"(2,?)"``.
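One way (hypothetical, not part of the example code) to guard against that
exception is to check the partial shape before asking for a complete ``Shape``:

.. code-block:: cpp

   // get_shape() is only safe once the tensor's shape is fully determined.
   if (t_out->get_partial_shape().is_static())
   {
       Shape s = t_out->get_shape();
   }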
.. _call_graph_vw_:
Write shape
-----------
Call the graph to write a value with shape ``(2,3)`` to ``t_out``.
.. literalinclude:: ../../../../examples/dynamic_tensor/partial_shape.cpp
:language: cpp
:lines: 38-40
At this point, ``t_out->get_shape()`` would return ``Shape{2,3}``,
while ``t_out->get_partial_shape()`` would return ``"(2,?)"``.
.. _call_graph_vwnew:
Write new shape
---------------
Call the graph again, to write a value with a different shape to ``t_out``.
.. literalinclude:: ../../../../examples/dynamic_tensor/partial_shape.cpp
:language: cpp
:lines: 44-45
At this point, ``t_out->get_shape()`` would return ``Shape{2,20}``,
while ``t_out->get_partial_shape()`` would return ``"(2,?)"``.
.. _kpsh:
Compiling with Known Partial Shape
----------------------------------
.. literalinclude:: ../../../../examples/dynamic_tensor/partial_shape.cpp
:language: cpp
:linenos:
:caption: "Full code for compiling with dynamic tensors and partial shape"
.. _purpose-built silicon: https://www.intel.ai/nervana-nnp
.. _DNNL: https://intel.github.io/mkl-dnn/
.. _Intel MKL-DNN: https://01.org/mkl-dnn
.. _Intel Nervana NNP: https://ai.intel.com/intel-nervana-neural-network-processors-nnp-redefine-ai-silicon/
......@@ -30,34 +30,5 @@ resources, it can either:
.. note:: This section is aimed at intermediate-level developers. It assumes an
understanding of the concepts in the previous sections. It does not assume
knowledge of any particular frontend framework.
Since our primary audience is developers who are pushing the boundaries of deep
learning systems, we go beyond the use of deep learning primitives, and include
APIs and documentation for developers who want the ability to write programs
that use custom backends. For example, we know that GPU resources can be useful
backends for *some* kinds of algorithmic operations while they impose inherent
limitations or slow down others.
One of our goals with the nGraph library is to enable developers with tools to
quickly build programs that access and process data from a breadth of edge and
networked devices. This might mean bringing compute resources closer to edge
devices, or it might mean programatically adjusting a model or the compute
resources it requires, at an unknown or arbitrary time after it has been deemed
to be trained well enough.
To get started, we've provided a basic example for how to :doc:`execute` a
computation that can run on an nGraph backend; this is analogous to a
framework bridge. We also provide a larger example for training and
evaluating a simple MNIST MLP model.
For data scientists or algorithm developers who are trying to extract specifics
about the state of a model at a certain node, or who want to optimize a model
at a more granular level, we provide an example for how to :doc:`import` and
run inference after it has been exported from a DL framework.
This section is under development; we'll continually populate it with more
articles geared toward data scientists, algorithm designers, framework developers,
backend engineers, and others. We welcome ideas and contributions from the
community.
knowledge of any particular frontend framework.
......@@ -62,7 +62,7 @@ hardware-specific primitives; here they get matched via Intel® MKL-DNN.
.. _figure-simple-compiler:
.. figure:: ../../graphics/simple-compiler-passes.png
.. figure:: ../../graphics/simple-compiler-passes.svg
:width: 750px
:alt: Simple kernel fusion
......
......@@ -4,8 +4,38 @@
Dynamic Shapes
==============
.. toctree::
:name:
:maxdepth: 1
For an example on how to use dynamic shapes, see the :ref:`scenario_two`
documentation.
Runtime Error Checking
----------------------
Static type-checking in the presence of dynamic shapes will make optimistic
assumptions about things like shape mismatches. For example, if an elementwise
op is provided inputs of shapes ``(2,?)`` and ``(?,5)``, the type checker will
proceed under the assumption that the user is not going to pass tensors with
inconsistent shape at runtime, and therefore infer an output shape of ``(2,5)``.
That means that shape mismatches can now occur at runtime.
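The sketch below illustrates the optimistic inference described above: inputs
with partial shapes ``(2,?)`` and ``(?,5)`` pass static type-checking, and the
output shape is inferred as ``(2,5)``; an inconsistent pair of input shapes
would only be detected when the graph is executed.

.. code-block:: cpp

   auto a = make_shared<op::Parameter>(element::f32,
                                       PartialShape{2, Dimension::dynamic()});
   auto b = make_shared<op::Parameter>(element::f32,
                                       PartialShape{Dimension::dynamic(), 5});
   auto sum = a + b;
   // Prints the optimistically inferred output shape, {2,5}.
   std::cout << sum->get_output_partial_shape(0) << std::endl;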
.. _partial_shapes:
PartialShape, Dimension, and Rank Classes
-----------------------------------------
Partial shape information is expressed via the ``PartialShape``, ``Dimension``,
and ``Rank`` classes.
.. note:: ``Rank`` is an alias for ``Dimension``, used when the value represents
the number of axes in a shape, rather than the size of one dimension in a shape.
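A few illustrative constructions (not tied to any particular example file)
show how these classes express different amounts of shape information:

.. code-block:: cpp

   Dimension d = Dimension::dynamic();          // one axis of unknown size
   PartialShape ps{2, d};                       // rank 2, second axis unknown
   PartialShape any = PartialShape::dynamic();  // even the rank is unknown

   // Queries used during shape propagation:
   bool complete = ps.is_static();              // false: one dimension is dynamic
   Rank r = ps.rank();                          // 2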
.. doxygenclass:: ngraph::PartialShape
:project: ngraph
:members:
.. doxygenclass:: ngraph::Dimension
:project: ngraph
:members:
......@@ -13,4 +13,4 @@ Working with Frameworks
onnx_integ.rst
paddle_integ.rst
tensorflow_connect.rst
other.rst
other/index.rst
......@@ -47,8 +47,8 @@ nGraph code in one place allows for easy maintenance.
.. _figure-paddle-design:
.. figure:: ../graphics/paddlepaddle_design.png
:width: 555px
.. figure:: ../graphics/paddlepaddle_design.svg
:width: 100%
:alt:
*Figure A* above depicts nGraph access from PaddlePaddle. The PaddlePaddle
......@@ -65,7 +65,7 @@ is organized in the following file structure:
.. _figure-paddle-dir:
.. figure:: ../graphics/PaddlePaddleDir.svg
:width: 555px
:width: 100%
:alt:
Compilation of nGraph is handled by the ``ngraph.cmake`` file in the
......
......@@ -101,11 +101,25 @@ Glossary
In the context of a function graph, the term "result" refers to
what stands in for the returned value.
dynamic tensor
A tensor whose shape can change from one "iteration" to the next. When
created, a framework :term:`bridge` might supply only *partial* shape
information: it might be **all** the tensor dimensions, **some** of the
tensor dimensions, or **none** of the tensor dimensions; furthermore,
the rank of the tensor may be left unspecified.
shape
The shape of a tensor is a tuple of non-negative integers that
represents an exclusive upper bound for coordinate values.
shape propagation
The static process by which every tensor (or, equivalently, every
node output) in the graph is assigned **complete shape information**.
shared pointer
The C++ standard template library has the template
......
......@@ -4,7 +4,7 @@
List of Core ``ops``
####################
Not currently a comprehensive list.
Some operations are experimental.
:ref:`more_about`
......@@ -160,10 +160,9 @@ Not currently a comprehensive list.
More about Core Ops
-------------------
An ``Op``'s primary role is to function as a node in a ddirected acyclic
An ``Op``'s primary role is to function as a node in a directed acyclic
computation graph.
*Core ops* are ops that are available and generally useful to all framework
bridges and that can be compiled by all transformers. A framework bridge may
define framework-specific ops to simplify graph construction, provided that the
......@@ -188,14 +187,6 @@ where there is no ambiguity.
If a framework supports extending the set of ops it offers, a bridge may even
expose transformer-specific ops to the framework user.
.. figure:: ../graphics/tablengraphops.png
:width: 535px
:alt: Operations Available in the nGraph IR
Operations Available in the nGraph IR
.. important:: Our design philosophy is that the graph is not a script for
running kernels; rather, our compilation will match ``ops`` to appropriate
kernels for the backend(s) in use. Thus, we expect that adding of new Core
......
......@@ -57,6 +57,6 @@ Mathematical Definition
C++ Interface
=============
.. doxygenclass:: ngraph::op::OneHot
.. doxygenclass:: ngraph::op::v0::OneHot
:project: ngraph
:members: