Commit 0bf21af9 authored by L.S. Cook, committed by Scott Cyphers

Amazon codeshare (#429)

* WIP on finding a good format for op docs in RST

* A few more scribbles

* fix up branch for Amazon code share

* add conf.py configuration details from aproctor's branch for doxy-breathe integration

* update section on how to build the documentation with breathe install details

* Remove empty file on training, update framework integration notes

* Add CentOS stub, fix spelling, core op definition, add to glossary.

* more documentation cleanup on README and installation and testing

* more cleanup of docs for TensorFlow

* Simplify Dot Autodiff (#412)

* Simplify Dot Autodiff

* remove commented code

* Remove TupleType, ValueType (#411)

* Remove TupleType, ValueType

* Fix compile error.

* Change convolution reference to work with f32 (#409)

* Drwebb/gpu backend dot op (#413)

* Drwebb/gpu backend dot op (#387)

* GPU Dot prod emitter switch statement

* cuBLAS dot kernel call

* Flush out arg substitution into gpu dot kernel call

* Drwebb/gpu backend dot op (#392)

* Take in CodeWriter into gpu op emitters

* Introduce GPU function gen based on pass functions

* Additional gpu emitter stubs

* link cublas in to unit test and ngraph

* Use static code gen methods for GPU, add new GPU op stubs

* use pass manager to declare functions / cublas Updates

* Prune down gpu_external_function wip

* Switch back to GPU tensor views in GPU backend

* Pass in cublas handle to GPU external function

* cuMalloc memory in gpu tensor view

* Use cuda runtime malloc and free for tensor view management

* change GPU tensor view init, and use GPU tensor view for GPU call frame

* include headers as system dirs

* GPU tensor printing utility function

* cublasSetPointer to device mode / Fix copyright notification lowercasing

* Passing GPU dot product test using cuBLAS

Clean up

* Changes from review

* Add an overview.

* Intro for building graphs.

* Refactor docs so that Doxygen and Sphinx are integrated (Sphinx depends on Doxygen with the docstrings stuff)

Still need to resolve a lingering assumption that the build dir is contained in private-ngraph-cpp. It's proving to be surprisingly tricky.

* Added the TensorFlow XLA build information and example of how to run MNIST MLP with TF/nGraph

* Updated TF integration guide for clarity. Added files from cyphers-amazon branch. Add minor changes to sphinx-doxy to test apis

* Small revision of overview and add graphic from arXiv paper

* WIP more editing, picking up from where I left off last week

* Fix garbled sentence edit

* WIP Edit for readability and such

* Better font rendering on all architectures included with our custom theme

* Cleanup current version of documentation.  Add NeoSans font binaries to make local font rendering of h1 h2 etc

* Missed merge conflict

* Add something on functions, don't forward-reference parameters

* What we have so far into a PR for review

* Need file for cmake

* Missing header

* Remove duplicate file

* added breathe package to contrib/docker/Dockerfile.ngraph_cpp
parent b408a08e
# Copyright 2018 Nervana Systems Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@@ -19,5 +19,6 @@ RUN pip install --upgrade pip
# installed sphinx with pip to get the updated version 1.6.5
# allows for make html build under the doc/source directory as an interim build process
RUN pip install sphinx
RUN pip install breathe
WORKDIR /home
@@ -11,24 +11,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
add_custom_target( docs
if ("${NGRAPH_BUILD_DOCS}" MATCHES "^ON$")
add_custom_target( docs
COMMENT "Build all of the documentation types selected during CMake configuration."
)
set(DOCS_TARGET_IS_EMPTY TRUE)
add_subdirectory( doxygen )
add_subdirectory( sphinx )
if (DOCS_TARGET_IS_EMPTY)
add_custom_target( docs-is-noop-error
else()
add_custom_target( docs
COMMAND echo
COMMAND echo "The 'docs' target is disabled. To enable the building of documentation, re-run cmake with the option -DNGRAPH_BUILD_DOCS=ON."
COMMAND echo
COMMAND false
VERBATIM
)
add_dependencies( docs docs-is-noop-error )
endif()
@@ -11,38 +11,41 @@
# See the License for the specific language governing permissions and
# limitations under the License.
set(NGRAPH_BUILD_DOXYGEN_DOCS FALSE
CACHE BOOL
"The NGraph build system shall contain a target for Doxygen-based docs."
)
find_package(Doxygen REQUIRED)
if ("${NGRAPH_DOXYGEN_WARN_IF_UNDOCUMENTED}" MATCHES "^ON$")
set(DOXYGEN_WARN_IF_UNDOCUMENTED YES)
else()
set(DOXYGEN_WARN_IF_UNDOCUMENTED NO)
endif()
if (NGRAPH_BUILD_DOXYGEN_DOCS)
find_package(Doxygen REQUIRED)
if ("${NGRAPH_DOXYGEN_QUIET}" MATCHES "^ON$")
set(DOXYGEN_QUIET YES)
else()
set(DOXYGEN_QUIET NO)
endif()
set(DOXYGEN_IN "${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in")
set(DOXYGEN_OUT "${CMAKE_CURRENT_BINARY_DIR}/Doxyfile")
configure_file("${DOXYGEN_IN}" "${DOXYGEN_OUT}" @ONLY)
add_custom_target(doxygen-docs
ALL
COMMAND "${DOXYGEN_EXECUTABLE}" "${DOXYGEN_OUT}"
WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
COMMENT "Generating documentation with Doxygen"
VERBATIM )
add_dependencies( docs doxygen-docs )
set(DOCS_TARGET_IS_EMPTY FALSE PARENT_SCOPE)
install(
DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/html/"
DESTINATION "${NGRAPH_INSTALL_DOC}/api-reference/html"
OPTIONAL
)
install(
DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/latex/"
DESTINATION "${NGRAPH_INSTALL_DOC}/api-reference/latex"
OPTIONAL
)
endif()
PROJECT_NAME = "Intel® nGraph™ library"
PROJECT_BRIEF = "Intel® nGraph™ library"
OUTPUT_DIRECTORY = @CMAKE_CURRENT_BINARY_DIR@
INPUT = @CMAKE_SOURCE_DIR@/src
RECURSIVE = YES
EXCLUDE_PATTERNS = json.hpp
USE_MATHJAX = YES
GENERATE_XML = YES
WARN_IF_UNDOCUMENTED = @DOXYGEN_WARN_IF_UNDOCUMENTED@
QUIET = @DOXYGEN_QUIET@
# Robust Makefile for Sphinx documentation
#
# You can set these variables from the command line.
@@ -18,3 +18,112 @@ help:
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
doxy-code:
$(Q)(cat ngraph.doxyfile ; echo "STRIP_FROM_PATH=${NGRAPH_BASE}" ) | doxygen - 2>&1 | tee doc.log
doxy: doxy-code
clean:
@rm -rf $(BUILDDIR)/*
@rm -rf html
@rm -rf xml
@rm -rf doxygen
@rm -rf latex
htmldocs: doxy html
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json: prep
$(SPHINXBUILD) -t $(DOC_TAG) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@rm -rf samples
@rm -rf boards
@echo
@echo "Build finished; now you can process the JSON files."
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/ngraph"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/ngraph"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
# Maintainer: Gavin Lloyd
# https://github.intel.com/gavinllo/ttf-neo-sans-intel
pkgname=ttf-neo-sans-intel
pkgver=1.00
pkgrel=3
pkgdesc='Versatile, futuristic typeface for Intel-branded material'
arch=('any')
depends=('fontconfig' 'xorg-font-utils')
source=('NeoSansIntel-Italic.ttf'
'NeoSansIntel-LightItalic.ttf'
'NeoSansIntel-Light.ttf'
'NeoSansIntel-MediumItalic.ttf'
'NeoSansIntel-Medium.ttf'
'NeoSansIntel.ttf')
sha256sums=('be2f036d58320bd0fab7cca7327b806840ddfedfdc4e44a520a85bd53a1ed7b3'
'ce45deb38ad2749ba25cbb76084955e34a86f627043f1f0f8f8073720115545c'
'd522c9c3905532680f8bb8068fa340200d2e5e45376ea89d97bcc8edbce8eff8'
'61b3ce0ed96b6f343c8ac0a94471ed504708782bee7d9df88fadc564640ffbba'
'6cd878034142c390eeb98d2a17ee1b949c2f8ded0a8684d3b17e0fe4203a8fd8'
'303bc44874e23a563775e5d463a6ec3dd7bdfc7948fa95d65a45fa965bf5ee28')
package() {
install -d $pkgdir/usr/share/fonts/TTF/
install -m644 *.ttf $pkgdir/usr/share/fonts/TTF/
}
.. _about:
About
=====
Welcome to the Intel nGraph project, an open source C++ library for developers
of :abbr:`Deep Learning (DL)` systems and frameworks. Here you will find
a suite of components, documentation, and APIs that can be used with
:abbr:`Deep Neural Network (DNN)` models defined in a variety of frameworks.

The nGraph library translates a framework's representation of computations into
a neutral :abbr:`Intermediate Representation (IR)` designed to promote
computational efficiency on target hardware; it works on Intel and non-Intel
platforms.
.. figure:: graphics/fig.jpeg
The *nGraph core* uses a strongly-typed and platform-neutral stateless graph
representation for computations. Each node, or *op*, in the graph corresponds
to one step in a computation, where each step produces zero or more tensor
outputs from zero or more tensor inputs.
There is a *framework bridge* for each supported framework which acts as
an intermediary between the *nGraph core* and the framework. A *transformer*
plays a similar role between the nGraph core and the various execution
platforms.
Transformers compile the graph using a combination of generic and
platform-specific graph transformations. The result is a function that
can be executed from the framework bridge. Transformers also allocate
and deallocate, as well as read and write, tensors under direction of the
bridge.
For this early |release| release, we provide framework integration guides for:

* :ref:`mxnet_intg`,
* :ref:`tensorflow_intg`, and
* the neon™ `frontend`_ framework, for training GPU-performant models.

Integration guides for the following frameworks are tentatively
forthcoming and/or open to the community for contributions and sample
documentation:
* `Chainer`_,
* `PyTorch`_,
* `Caffe2`_, and
* Frameworks not yet written (for algorithms that do not yet exist).
.. _Caffe2: https://github.com/caffe2/
.. _PyTorch: http://pytorch.org/
.. _Chainer: https://chainer.org/
.. _frontend: http://neon.nervanasys.com/index.html/
@@ -4,3 +4,7 @@ API
###
.. TODO don't add Python APIs that will break the build.
Sections
********
@@ -22,10 +22,11 @@ This script does *not* modify the source code.
Core Ops
--------
We have a small set of core ops. Other ops may be added to the core when they
have sufficient documentation and examples of practical or potentially
practical use cases.

Our design philosophy is that the graph is not a script for running kernels;
rather, the graph should describe the computation in terms of ops that are
building blocks, and compilation should match these ops to appropriate kernels
for the backend(s) in use. Thus, we expect that adding core ops should be
infrequent. Instead, functionality should be added by adding functions that
build sub-graphs from existing core ops.
Coding style
@@ -34,7 +34,8 @@ needs_sphinx = '1.6.5'
extensions = ['sphinx.ext.mathjax',
'sphinx.ext.ifconfig',
'sphinx.ext.viewcode',
'sphinx.ext.autodoc'
'sphinx.ext.autodoc',
'breathe'
]
# Add any paths that contain templates here, relative to this directory.
@@ -62,9 +63,9 @@ author = 'Intel Corporation'
# built documents.
#
# The short X.Y version.
version = 'alpha'
# The full version, including alpha/beta/rc tags.
release = 'alpha'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -189,6 +190,16 @@ texinfo_documents = [
html_add_permalinks = ""
breathe_projects = {
    "nGraph": "xml"
}
breathe_default_project = "nGraph"
rst_epilog = u"""
.. |codename| replace:: Intel nGraph
@@ -27,7 +27,7 @@ with respect to additions or feature requests.
If you prefer to use a containerized application, like Jupyter\* notebooks,
Google Docs\*, or MS Word\* to write and share documentation contributions,
you can convert the ``doc/sphinx/source/*.rst`` files to another format with a tool
like ``pypandoc`` and share a link to your docs on our `wiki`_.
Another option is to fork the `ngraph repo`_, essentially snapshotting it at
@@ -38,8 +38,7 @@ our wiki.
.. note:: Please do not submit Jupyter* notebook code to the Intel nGraph library
repos; best practice is to maintain any project-specific examples, tests, or
walk-throughs separately. Alternatively, you may wish to upstream documentation
contributions directly to whatever frontend framework supports your example.
@@ -126,21 +125,26 @@ Build the Documentation
Right now the minimal version of Sphinx needed to build the documentation is
Sphinx v. 1.6.5. This can be installed with `pip3`, either to a virtual
environment, or to your base system if you plan to contribute much to docs.
`Breathe`_ can also be installed to build C++ API documentation (currently WIP).
To build documentation locally, run:
.. code-block:: console
$ pip3 install [-I] Sphinx==1.6.5 [--user]
$ pip3 install [-I] breathe [--user]
$ cd doc/sphinx/
$ make html
For tips similar to this, see the `sphinx`_ stable reST documentation.
.. _ngraph repo: https://github.com/NervanaSystems/ngraph-cpp/
.. _documentation repo: https://github.com/NervanaSystems/ngraph/tree/master/doc
.. _sphinx: http://www.sphinx-doc.org/en/stable/rest.html
.. _wiki: https://github.com/NervanaSystems/ngraph/wiki/
.. _Breathe: https://breathe.readthedocs.io/en/latest/
@@ -5,14 +5,35 @@ Glossary
.. glossary::
function graph
The Intel nGraph library uses a function graph to represent an ``op``'s
parameters and results.
op
An op represents an operation. Ops are stateless and have zero or more
inputs and zero or more outputs. Some ops have additional constant
attributes. Every output of an op corresponds to a tensor and has an
element type and a shape. The element types and shapes of the outputs of
an op are determined by the inputs and attributes of the op.
tensors
Tensors are maps from *coordinates* to scalar values, all of the same type,
called the *element type* of the tensor.
parameter
In the context of a function graph, a "parameter" refers to what "stands
in" for an argument in an ``op`` definition.
result
In the context of a function graph, the term "result" refers to what
stands in for the returned value.
shape
The shape of a tensor is a tuple of non-negative integers that represents an
exclusive upper bound for coordinate values.
step
An abstract "action" that produces zero or more tensor outputs from zero or more tensor
inputs. Steps correspond to *ops* that connect *nodes*.
.. _build-a-functiongraph:

Defining a function graph with the nGraph library
#################################################
.. _graph-basics:
Graph Basics
============
To build a function graph with the nGraph library, first understand the ways
that the library will handle graph values before and during compilation. Since
it can be fairly easy to confuse C++ terms with their counterparts in the
``ngraph`` function (and with the lower-level C++ representations of those
counterparts), we provide this reference.
Descriptions of ngraph values
-----------------------------
- *Element values* are integers, floats, etc.
- Each ``type`` of element value is described by an ``ElementType``.
- A C++ :cpp:type:`type` is required for referencing literals during
compilation.
- The :cpp:type:`type`'s ``value`` may be represented differently in the
compiled computation. For example, a 32-bit float can hold a 16-bit float.
- A *value* in a graph is either a tensor view or a tuple.
- A **tensor view** is an indexed collection of element values, all of
the same element type. An element value is not a graph value; a 0-rank
tensor holds one element value and serves the same purpose.
- A **tuple** is 0 or more values, which can consist of tuples and
tensor views.
- Analogous to the value are "value types", also defined recursively.
- **Tensor view types** These types describe indexed collections of
primitive types. They are specified by a shape and a primitive
type for the elements.
.. TODO add Doxy links corresponding to these tensor view types'
APIs or use the literalinclude better
- **Tuple types** These are cartesian product types for tuples of
tuples and tensors, described by a sequence of tuple types and
tensor view types.
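The recursive structure of values and value types described above can be
sketched as follows; these Python class names are illustrative stand-ins, not
the actual nGraph C++ types:

```python
# Illustrative sketch only: hypothetical stand-ins, not the nGraph C++ API.
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class TensorViewType:
    """An indexed collection type: a primitive element type plus a shape."""
    element_type: str
    shape: Tuple[int, ...]

@dataclass(frozen=True)
class TupleType:
    """A cartesian product of value types (tuples and tensor views)."""
    element_types: Tuple["ValueType", ...]

ValueType = Union[TensorViewType, TupleType]

# Value types nest recursively: a tuple of an f32 matrix and a tuple
# holding a 0-rank i64 tensor.
t = TupleType((TensorViewType("f32", (2, 3)),
               TupleType((TensorViewType("i64", ()),))))
```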
Tensors
-------
.. TODO add basic semantics
*Tensors* are maps from coordinates to scalar values, all of the same type,
called the *element type* of the tensor. Coordinates are tuples of non-negative
integers; all the coordinates for a tensor have the same length, called the
*rank* of the tensor. We often use :math:`n`-tensor for tensors with rank
:math:`n`. An :math:`n`-dimensional array is a common implementation of a
tensor, and the two terms are often used interchangeably. However, a tensor
could just as easily be a function that returns 0 for every coordinate.
The :term:`shape` of a tensor is a tuple of non-negative integers that
represents an exclusive upper bound for coordinate values. A tensor has an
element for every coordinate less than the shape, so the *size* of the tensor
is the product of the values in the shape.
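As a concrete illustration of shape and size (a Python sketch, not the nGraph
API):

```python
from itertools import product
from math import prod

shape = (2, 3, 4)

# The size of a tensor is the product of the values in its shape.
size = prod(shape)

# A tensor need not be an array: here it is simply a function that
# returns 0 for every coordinate.
def zero_tensor(coord):
    return 0

# Every coordinate strictly less than the shape indexes one element.
coords = list(product(*(range(d) for d in shape)))
assert len(coords) == size
assert all(zero_tensor(c) == 0 for c in coords)
```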
Ops
---
The graph is a composition of tensor computations, called ``ops``, which are
nodes in the graph. In the graph, every :term:`op` *input* must be associated
with an op *output*, and every op output must have a constant element type and
shape to correspond with the tensors used in the computation. Every op has:
* zero or more inputs, and
* zero or more outputs;
these represent tensors that will be provided during execution. Ops may also
have additional attributes that do not change during execution.
Graph function
---------------
Function definition begins with creating one or more ``Parameter`` ops,
which represent the tensors that will be supplied as arguments to the function.
Parameters have no inputs; they have attributes for the element type and shape
of the tensor that will be provided as an argument. The unique output of the
``Parameter`` will have the provided element type and shape.
Constructed ops have element types and shapes for each of their outputs, which
are determined during op construction from the element types and shapes
associated with the inputs, as well as additional attributes of the ops. For
example, tensor addition is defined for two tensors of the same shape and size
and results in a tensor with the same element type and shape:
.. math::
(A+B)_I = A_I + B_I
Here, :math:`X_I` means the value of coordinate :math:`I` for the tensor
:math:`X`. So the sum of two tensors is a tensor whose value at a coordinate is
the sum of the elements at that coordinate for the two inputs. Unlike many
frameworks, this definition says nothing about storage or arrays.
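The elementwise definition can be checked with a small sketch that treats
tensors purely as maps from coordinates to values (illustrative Python, not
nGraph code):

```python
from itertools import product

def tensor_add(a, b, shape):
    """Coordinate-wise sum: (A + B)_I = A_I + B_I for every coordinate I."""
    coords = product(*(range(d) for d in shape))
    return {i: a[i] + b[i] for i in coords}

A = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
B = {(0, 0): 10, (0, 1): 20, (1, 0): 30, (1, 1): 40}
C = tensor_add(A, B, (2, 2))
assert C[(1, 0)] == 33   # A_(1,0) + B_(1,0)
```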
An ``Add`` op is used to represent a tensor sum. To construct an Add op, each of
the two inputs of the ``Add`` must be associated with some output of some
already-created op. All outputs of constructed ops have element types and shapes,
so when the Add is constructed, it verifies that the two outputs associated with
its two inputs have the same element type and shape and sets its output to have
the same element type and shape.
Since all nodes supplying outputs for inputs to a new node must exist before the
new node can be created, it is impossible to construct a cyclic graph.
Furthermore, type-checking can be performed as the ops are constructed.
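A minimal sketch of this construction discipline (hypothetical Python classes,
not the real nGraph ops) shows where type checking happens and why a cyclic
graph cannot arise:

```python
class Op:
    def __init__(self, inputs, element_type, shape):
        # Inputs must already exist before this node is created, so every
        # edge points to a previously constructed node: no cycles possible.
        self.inputs = list(inputs)
        self.element_type = element_type
        self.shape = shape

class Parameter(Op):
    def __init__(self, element_type, shape):
        super().__init__([], element_type, shape)   # no inputs

class Add(Op):
    def __init__(self, a, b):
        # Type checking happens at construction time.
        if (a.element_type, a.shape) != (b.element_type, b.shape):
            raise TypeError("Add requires matching element type and shape")
        super().__init__([a, b], a.element_type, a.shape)

p = Parameter("f32", (2, 3))
q = Parameter("f32", (2, 3))
s = Add(p, q)
assert s.shape == (2, 3) and s.element_type == "f32"
```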
Functions
---------
Ops are grouped together in an ``ExternalFunction``, which describes a
computation that can be invoked on tensor arguments to compute tensor
results. The caller provides tensors in the form of row-major arrays
for each argument and each computed result. The same array can be used
for more than one argument, but each result must use a distinct array,
and argument arrays cannot be used as result arrays.
The ``ExternalFunction`` has a vector of ``Parameter`` ops,
where no ``Parameter`` op may appear more than once in the vector.
Each ``Parameter`` op has attributes for its shape and element type;
arrays passed to the function must have the same shape and element type.
The ``ExternalFunction`` also has ``Nodes``, a vector of ops that
are the results being computed (Note: We may require the results to
be ``Result`` ops in the future. A ``Result`` op would have a single
input and no outputs, and complement the zero input single output
``Parameter`` op.)
During execution, the output of the nth ``Parameter`` op will be the tensor
corresponding to the array provided as the nth argument, and the outputs
of all result ops will be written into the result arrays in row-major
order.
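Row-major order means the last coordinate varies fastest; the index arithmetic
can be sketched as follows (illustrative, not the nGraph call-frame code):

```python
def row_major_index(coord, shape):
    """Flatten a coordinate into a row-major array offset."""
    idx = 0
    for c, d in zip(coord, shape):
        idx = idx * d + c
    return idx

shape = (2, 3)
# Offsets 0..5 enumerate (0,0), (0,1), (0,2), (1,0), (1,1), (1,2).
assert row_major_index((1, 2), shape) == 5
assert row_major_index((0, 2), shape) == 2
```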
.. important:: During graph building, most of the storage associated
with values is *implicit*. During compilation, *explicit* storage
@@ -84,20 +140,16 @@ sources: *literals*, *calls* to ops (built-in ops or user-defined ops AKA
zero or more run-time parameters of *arbitrary* value types and a result
whose type is the tuple type of the types of the parameters.
#. **Functions** are user-defined ops.
- A user-defined function is "external" if it can be called externally.
- The result is a graph node that depends only on parameters.
- The result type of a call to a function is determined from the types of
  the arguments.
- Any external function interacting with the graph at the level of a
  user-defined ``op`` must specify a type for each of its parameters.
#. *Parameters* of user-defined *functions* may also be a source of a graph's
values. Externally-callable functions must specify a type for each parameter.
Building a Graph
================
.. _model-phases:
.. NOTE this is mostly just placeholder text designed to start a discussion around
the ways we can highlight something other than "run MNIST models" for training
as a feature of the nGraph library.
Phases
======
With the optimizations built into the Intel nGraph library core, you can
train a model and quickly iterate on learnings from your original dataset.
Once a model has been trained with the nGraph library, it is essentially
"freed" from the original framework you wrangled it into, and you can apply
different kinds of operations and tests to further refine it toward the goals
of your data science work.
.. For example, let's say that you notice the `MNIST` MLP dataset running
with MXNet on nGraph trains itself to 0.997345 or 1.00000 accuracy after
only 10 Epochs. The original model was written to train the dataset for
20 Epochs. This means that there are potentially 10 wasted cycles of
compute power that can be used elsewhere.