Commit 2c048174 authored by L.S. Cook's avatar L.S. Cook Committed by DawnStone

Scaffolding for userdocs (#402)

* adding work and organization for docs

* Summary of changes

- removed rst file from css folder
- added substitutions to conf file so folks don't have to type out the whole name
- added branding notice so link in footer works
- added glossary so we can have a place for terms relative to libngraph
- added placeholder for MXNet and TF frontend under new section
- cleaned up api page

* Fix code fencing error and indentation issue causing build to throw error

* adding MXNet integration guide details and other miscellaneous structure

* fix unicode error in rst epilog

* testing with unicode marker for rst_epilog string

* replace api scaffolding with TODO to avoid breaking Jenkins
parent b2232e6f
......@@ -13,7 +13,7 @@
set(NGRAPH_BUILD_SPHINX_DOCS FALSE
CACHE BOOL
"The NGraph build system shall contain a target for '.rst'-based overview docs."
"The nGraph build system shall contain a target for '.rst'-based overview docs."
)
if (NGRAPH_BUILD_SPHINX_DOCS)
......
......@@ -18,7 +18,7 @@
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; <a href="{{ path }}">Copyright</a> {{ copyright }}. {% endtrans %}
{%- else %}
<span class="crt-size">{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.</span> <br/><div class="brandnote"> Intel nGraph library contains trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. * Other names and brands may be claimed as the property of others; see <a class="reference internal" href="../content_assets/branding-notice.html">branding notice</a> for more information.</class>{% endtrans %}
<span class="crt-size">{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.</span> <br/><div class="brandnote"> Intel nGraph library contains trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. * Other names and brands may be claimed as the property of others; see <a class="reference internal" href="branding-notice.html">branding notice</a> for more information.</class>{% endtrans %}
{%- endif %}
{%- endif %}
......
.. _api.rst:
API
###
.. TODO don't add Python APIs that will break the build.
\ No newline at end of file
.. _autodiff.rst:
Autodiff
########
The ``autodiff`` ...
.. TODO update for cpp
.. _branding-notice:
Branding Notice
===============
The Intel® nGraph™ library is an open source project providing code and component
reference for many kinds of machine learning, deep learning, and DNN applications.
Documentation may include references to frontend frameworks, modules, extensions,
or other libraries that may be wholly or partially open source, or that may be
claimed as the property of others.
Intel nGraph library core documentation
---------------------------------------
.. note:: The branding notice below applies to code and documentation
contributions intended to be added directly to the Intel nGraph library core.
Use the trademark symbols, as shown below, for the first or most prominent
usage. Subsequent references in the same document, or in a file with an
already-present prominent form (such as Sphinx\* documentation sidebars),
may use an abbreviated form (sub-bullet items) and/or omit the
repeated trademark / branding symbols.
* Intel® Nervana™ Neural Network Processor
* Intel® Nervana™ NNP
* Intel® Xeon Phi™ (CPU processor)
* Intel® Xeon® (CPU processor)
* Intel® nGraph™
* Intel® nGraph™ library
* nGraph library
* ``ngraph`` API
* ``ngraph`` library
* ``ngraph`` backend
* nGraph abstraction layer
* neon™ frontend framework
* Intel® Math Kernel Library
* Intel® MKL
* Intel® Math Kernel Library for Deep Neural Networks
* Intel® MKL-DNN
* Intel® Nervana™ Graph (deprecated)
.. _build-a-functiongraph:
Defining a function graph on the nGraph library
###############################################
Graph Basics
============
To build a function graph with the nGraph library, first understand the ways
that the library will handle graph values before and during compilation. Since
it can be fairly easy to confuse C++ terms with their counterparts in the
``ngraph`` function (and with the lower-level C++ representations of those
counterparts), we provide this reference.
Descriptions of ngraph values
-----------------------------
- *Element values* are integers, floats, etc.
- Each ``type`` of element value is described by an ``ElementType``.
- A C++ :cpp:type:`type` is required for referencing literals during
compilation.
- The :cpp:type:`type`'s ``value`` may be represented differently in the
compiled computation. For example, a 32-bit float can hold a 16-bit float.
- A *value* in a graph is either a tensor view or a tuple.
- A **tensor view** is an indexed collection of element values, all of
the same element type. An element value is not a graph value; a 0-rank
tensor holds one element value and serves the same purpose.
- A **tuple** is 0 or more values, which can consist of tuples and
tensor views.
- Analogous to values are "value types", also defined recursively.
- **Tensor view types** These types describe indexed collections of
primitive types. They are specified by a shape and a primitive
type for the elements.
.. TODO add Doxy links corresponding to these tensor view types'
APIs or use the literalinclude better
- **Tuple types** These are cartesian product types for tuples of
tuples and tensors, described by a sequence of tuple types and
tensor view types.
.. TODO add basic semantics
.. important:: During graph building, most of the storage associated
with values is *implicit*. During compilation, *explicit* storage
will be assigned in the form of *value descriptors*; this storage will
be referred to as the inputs and outputs of those calls.
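The recursive structure of values described above can be sketched in plain C++. This is an illustrative model only; the class and member names here (``ValueType``, ``TensorViewType``, ``TupleType``, ``element_count``) are invented for the sketch and are not the actual nGraph declarations:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <numeric>
#include <string>
#include <vector>

// Illustrative model only: a value type is either a tensor view type
// (element type plus shape) or a tuple type (a sequence of value types),
// so the definition is recursive.
struct ValueType
{
    virtual ~ValueType() = default;
};

struct TensorViewType : ValueType
{
    std::string element_type;       // e.g. "float32"
    std::vector<std::size_t> shape; // empty shape => 0-rank tensor

    TensorViewType(std::string et, std::vector<std::size_t> s)
        : element_type(std::move(et)), shape(std::move(s)) {}

    // Number of element values held; a 0-rank tensor holds exactly one.
    std::size_t element_count() const
    {
        return std::accumulate(shape.begin(), shape.end(), std::size_t{1},
                               std::multiplies<std::size_t>());
    }
};

struct TupleType : ValueType
{
    std::vector<std::shared_ptr<ValueType>> elements; // tuples may nest

    explicit TupleType(std::vector<std::shared_ptr<ValueType>> e)
        : elements(std::move(e)) {}
};
```

Note how a 0-rank tensor view (an empty shape) still holds one element value, which is how a single element value appears as a graph value.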
Sources of values
-----------------
.. note:: The nGraph library includes a number of *built-in ops*. A
:ref:`built-in op` is like a templated function in C++, in that it
can be used with a variety of argument types. Similarly, when the
types of each argument are known in a call, the op must be able to
verify that the arguments are compatible, and it must be able to
determine the ``type`` of the returned value.
The function graph is strongly typed. Every source of a value in the graph
must be associated with a type. In a graph, values come from many possible
sources: *literals*, *calls* to ops (built-in ops or user-defined ops AKA
*functions*), and *parameters* of user-defined functions.
#. *Literals* A value type is associated with each literal, and must be
consistent with the literal's value.
#. *Calls* to **ops**. When called with appropriate arguments, an *op*
produces a return value. All arguments not fixed at compile time
must be values. In the nGraph API, the term :term:`parameter` refers
to what "stands in" for an argument in an ``op`` definition, and :term:`result`
refers to what "stands in" for the returned *value*.
For example, the ``add`` **op** is a built-in op with two run-time
parameters that **must have the same value type**. It produces a
result with the same value type as its parameters.
Another example of a built-in **op** is the ``tuple`` **op**, which has
zero or more run-time parameters of *arbitrary* value types and a result
whose type is the tuple type of the types of the parameters.
- *functions* are user-defined ops.
- A user-defined function is "external" if it can be called externally.
- The result is a graph node that depends only on parameters.
- The result ``type`` of a call to a function is determined from the
types of the arguments.
- Any external function interacting with the graph at the level of a
user-defined ``op`` must specify a type for each of its parameters.
#. *Parameters* of user-defined *functions* may also be a source of a graph's
values. Externally-callable functions must specify a type for each parameter.
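The type-checking rule described for the ``add`` op can be sketched in a few lines of C++. This is a hypothetical helper (``add_result_type`` and this ``ValueType`` struct are invented for illustration; the real nGraph checks are richer):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical sketch: when the argument types of a call are known, an
// op such as "add" verifies that they are compatible and determines the
// type of the returned value.
struct ValueType
{
    std::string element_type;
    std::vector<std::size_t> shape;

    bool operator==(const ValueType& other) const
    {
        return element_type == other.element_type && shape == other.shape;
    }
};

// Elementwise two-argument rule: both parameters must have the same
// value type, and the result has that same value type.
ValueType add_result_type(const ValueType& a, const ValueType& b)
{
    if (!(a == b))
    {
        throw std::invalid_argument("add: argument types must match");
    }
    return a;
}
```

A call whose argument types disagree is rejected at graph-construction time rather than at run time, which is what "the function graph is strongly typed" means in practice.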
Building a Graph
================
The function graph is composed of instances of the class ``Node``. Nodes are
created by helpers described below.
.. note:: The method ``dependents()`` returns a vector of nodes that must be
computed before the result of a ``Node`` can be used.
User-defined functions
----------------------
When building a function graph with values derived from "custom" or user-defined
functions, use the following syntax to:
* create a user-defined function: ``make_shared<Function>()``
* get the specified parameter of the function: method ``parameter(index)``
* return the type: method ``type()``
* set the type to ``t``: method ``type(ValueType t)``
* set the type to a ``TensorViewType``: method ``type(ElementType element_type, Shape shape)``
* get the function's result: method ``result()``
* return the node providing the value: method ``value()``
* set the node that will provide the value: method ``value(Node node)``
Type methods are available on the result just as with parameters. A
user-defined function is callable, and calling it adds a call to that
function in the graph.
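The helper methods listed above can be illustrated with a toy stand-in. Everything here is simplified and hypothetical (types kept as strings, no type checking); it mirrors the method names only, not the real nGraph classes:

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Toy stand-in mirroring the helpers listed above; not the real classes.
struct Node
{
    std::string type_; // the value type, kept as a string for illustration

    void type(std::string t) { type_ = std::move(t); }
    const std::string& type() const { return type_; }
};

struct Function
{
    explicit Function(std::size_t n_parameters)
        : result_(std::make_shared<Node>())
    {
        for (std::size_t i = 0; i < n_parameters; ++i)
        {
            parameters_.push_back(std::make_shared<Node>());
        }
    }

    // Get the specified parameter of the function.
    std::shared_ptr<Node> parameter(std::size_t index) { return parameters_.at(index); }

    // Get the function's result.
    std::shared_ptr<Node> result() { return result_; }

    // Get/set the node that provides the result's value.
    std::shared_ptr<Node> value() const { return value_; }
    void value(std::shared_ptr<Node> node) { value_ = std::move(node); }

private:
    std::vector<std::shared_ptr<Node>> parameters_;
    std::shared_ptr<Node> result_;
    std::shared_ptr<Node> value_;
};
```

The overloaded getter/setter pairs (``type()`` / ``type(t)``, ``value()`` / ``value(node)``) follow the convention described in the bullet list above.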
Built-in Ops
------------
Calls to built-in ops are created with helper functions generally in the
``op`` namespace. Ops are generally callable singletons that build
calls. When building a function graph with built-in ops,
- ``op::tuple()`` produces an empty tuple.
- To build a tuple from a list of values, use the overload ``Tuple(list<Value>)``.
- To add a value to a tuple operation, use the method ``push_back(value)``.
- To return the specified component, call the method ``get(index)``, where
``index`` is a compile-time value.
Example
-------
::

    // Function with 4 parameters
    auto cluster_0 = make_shared<Function>(4);
    cluster_0->result()->type(element_type_float, Shape {32, 3});
    cluster_0->parameter(0)->type(element_type_float, Shape {7, 3});
    cluster_0->parameter(1)->type(element_type_float, Shape {3});
    cluster_0->parameter(2)->type(element_type_float, Shape {32, 7});
    cluster_0->parameter(3)->type(element_type_float, Shape {32, 7});
    auto arg3 = cluster_0->parameter(3);
    // Call the broadcast op on arg3, broadcasting on axis 1.
    auto broadcast_1 = op::broadcast(arg3, 1);
    auto arg2 = cluster_0->parameter(2);
    auto arg0 = cluster_0->parameter(0);
    // Call the dot op.
    auto dot = op::dot(arg2, arg0);
    // Set dot as the node providing the function's result value.
    cluster_0->result()->value(dot);
Defining built-in ops
=====================
This section is WIP.
Built-in ops are used for several purposes:
- Constructing call nodes in the graph.

  * Checking type-consistency of arguments
  * Specifying the result type for a call

- Indicating preliminary tensor needs

  * Index operations are aliased views
  * Tuples are unboxed into tensor views
  * Remaining ops are given vectors of inputs and outputs

- Constructing patterns that will match sub-graphs
- Pre-transformer code generation
- Debug streaming of call descriptions
The general ``Node`` class provides for dependents and node type. The
class ``Call`` subclasses ``Node``. Built-in op implementations can
subclass ``Call`` to provide storage for compile-time parameters, such
as broadcast indices.
The plan is that the abstract class ``Op`` will have methods to be
implemented by built-in ops. Each built-in op corresponds to a callable
singleton (in the ``ngraph::op`` namespace) that constructs the
appropriate ``Call``. As a singleton, the op can conveniently be used as
a constant in patterns. Call objects will be able to find their related
op.
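The planned design above might be sketched as follows. Everything in this block is a simplified, hypothetical rendering (``BroadcastCall`` and these constructors are invented for the sketch), not the actual implementation:

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Base graph node.
struct Node
{
    virtual ~Node() = default;
};

struct Op; // forward declaration

// A Call subclasses Node and remembers which op created it.
struct Call : Node
{
    const Op* op;                           // the related op (a singleton)
    std::vector<std::shared_ptr<Node>> args;

    Call(const Op* o, std::vector<std::shared_ptr<Node>> a)
        : op(o), args(std::move(a)) {}
};

struct Op
{
    std::string name;
    explicit Op(std::string n) : name(std::move(n)) {}
};

// A built-in op implementation subclasses Call to provide storage for
// compile-time parameters, here a broadcast axis.
struct BroadcastCall : Call
{
    std::size_t axis;
    BroadcastCall(const Op* o, std::shared_ptr<Node> arg, std::size_t ax)
        : Call(o, {std::move(arg)}), axis(ax) {}
};

namespace op
{
    // Callable singleton: building a call is just calling the op.
    struct Broadcast : Op
    {
        Broadcast() : Op("broadcast") {}
        std::shared_ptr<BroadcastCall> operator()(std::shared_ptr<Node> arg,
                                                  std::size_t axis) const
        {
            return std::make_shared<BroadcastCall>(this, std::move(arg), axis);
        }
    };
    inline const Broadcast broadcast{};
}
```

Because the op is a singleton, a pattern can refer to it as a constant, and each ``Call`` can find its related op through the stored pointer.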
.. _code-contributor-README:
Core Contributor Guidelines
###########################
Code formatting
================
All C/C++ source code in the ``libngraph`` repository, including the test code
when practical, should adhere to the project's source-code formatting guidelines.
The script ``maint/apply-code-format.sh`` enforces that formatting at the C/C++
syntactic level.
The script at ``maint/check-code-format.sh`` verifies that the formatting rules
are met by all C/C++ code (again, at the syntax level). The script has an exit
code of ``0`` when all code meets the standard, and non-zero otherwise.
This script does *not* modify the source code.
Core Ops
--------
The library defines a set of core ops. Other ops may be added to the core
when they have sufficient documentation and examples of those ops in practice,
or in potentially-practical use cases.
Coding style
-------------
.. TODO: add the core coding style Google Doc collab here when final
GitHub
------
- How to submit a PR
- Best practices
- Etc.
......@@ -34,6 +34,7 @@ needs_sphinx = '1.6.5'
extensions = ['sphinx.ext.mathjax',
'sphinx.ext.ifconfig',
'sphinx.ext.viewcode',
'sphinx.ext.autodoc'
]
# Add any paths that contain templates here, relative to this directory.
......@@ -98,17 +99,17 @@ html_theme = 'ngraph_theme'
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
html_logo = 'ngraph_theme/static/favicon.ico'
html_logo = '../ngraph_theme/static/favicon.ico'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = 'ngraph_theme/static/favicon.ico'
html_favicon = '../ngraph_theme/static/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['ngraph_theme/static']
html_static_path = ['../ngraph_theme/static']
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ["../"]
......@@ -187,3 +188,26 @@ texinfo_documents = [
]
html_add_permalinks = ""
rst_epilog = u"""
.. |codename| replace:: Intel nGraph
.. |project| replace:: Intel nGraph library
.. |InG| replace:: Intel® nGraph
.. |nGl| replace:: nGraph library
.. |copy| unicode:: U+000A9 .. COPYRIGHT SIGN
:ltrim:
.. |deg| unicode:: U+000B0 .. DEGREE SIGN
:ltrim:
.. |plusminus| unicode:: U+000B1 .. PLUS-MINUS SIGN
:rtrim:
.. |micro| unicode:: U+000B5 .. MICRO SIGN
:rtrim:
.. |trade| unicode:: U+02122 .. TRADEMARK SIGN
:ltrim:
.. |reg| unicode:: U+000AE .. REGISTERED TRADEMARK SIGN
:ltrim:
"""
.. _doc-contributor-README:
.. ---------------------------------------------------------------------------
.. Copyright 2017 Intel Corporation
.. Licensed under the Apache License, Version 2.0 (the "License");
......@@ -48,25 +50,26 @@ When **verbosely** documenting functionality of specific sections of code -- whe
they're entire code blocks within a file, or code strings that are **outside** the
Intel nGraph `documentation repo`_, here is an example of best practice:
Say the file named ``dqn_atari.py`` has some interesting functionality that could
Say the file named `` `` has some interesting functionality that could
benefit from more explanation about one or more of the pieces in context. To keep
the "in context" format, write something like the following in your documentation
source file (``.rst``):
::
.. literalinclude:: ../../test/models/mxnet/10_bucket_LSTM.json
:language: json
:lines: 12-30
.. literalinclude:: ../../../src/ngraph/descriptor/primary_tensor_view.cpp
:language: cpp
:lines: 20-31
And the raw code will render as follows:
.. literalinclude:: ../../test/models/mxnet/10_bucket_LSTM.json
:language: python
:lines: 12-30
.. literalinclude:: ../../../src/ngraph/descriptor/primary_tensor_view.cpp
:language: cpp
:lines: 20-31
You can now verbosely explain the code block without worrying about breaking
the code!
the code.
The trick here is to reference the file relative to the folder containing the
``Makefile`` that generates the documentation you're writing. See the
......@@ -84,20 +87,19 @@ line numbers, and add a caption "One way to define neon axes within the dqn_atar
::
.. literalinclude:: ../../examples/dqn/dqn_atari.py
:language: python
:lines: 12-49
:linenos:
:caption: Defining action_axes within the dqn_atari.py file for neon frontend
.. literalinclude:: ../../../src/ngraph/descriptor/primary_tensor_view.cpp
:language: cpp
:lines: 20-31
:caption:
and the generated output will show readers of your helpful documentation
.. literalinclude:: ../../examples/dqn/dqn_atari.py
:language: python
:lines: 12-49
:linenos:
:caption: Defining action_axes within the dqn_atari.py file for neon frontend
.. literalinclude:: ../../../src/ngraph/descriptor/primary_tensor_view.cpp
:language: cpp
:lines: 20-31
:caption:
Take note that the ``linenos`` line will add a new context for line numbers
within your file; it will not bring the original line numbering with it. This
......@@ -109,17 +111,29 @@ use to prevent code bloat. A ``literalinclude`` with the ``caption`` option
also generates a permalink (see above) that makes finding "verbose" documentation
easier.
.. _build-docs:
Build the Documentation
========================
.. note:: Stuck on how to generate the html? Run these commands; they assume
you start at a command line running within a clone (or a cloned fork) of the
``ngraph`` repo. You do **not** need to run a virtual environment to create
documentation if you don't want to; running ``$ make clean`` in the ``doc/``
folder removes any generated files.
Right now the minimum version of Sphinx needed to build the documentation is
Sphinx 1.6.5. It can be installed with ``pip3``, either into a virtual
environment, or into your base system if you plan to contribute to the docs
often.
.. code-block:: console
$ pip install -r doc_requirements.txt
$ cd /doc/source/
/ngraph/doc/source$ make html
$ cd doc/sphinx/
$ make html
For tips similar to this, see the `sphinx`_ stable reST documentation.
......
.. _framework-integration-guides:
Framework Integration Guides
############################
.. contents::
Compile MXNet with ``libngraph``
================================
#. Add the `MXNet`_ prerequisites to your system, if the system doesn't have them
already:
.. code-block:: console
$ sudo apt-get install -y libopencv-dev curl libatlas-base-dev python
python-pip python-dev python-opencv graphviz python-scipy python-sklearn
libopenblas-dev
#. Set the ``LD_LIBRARY_PATH`` path to the location where we built the libraries:
.. code-block:: bash
export LD_LIBRARY_PATH=$HOME/ngraph_dist/lib/
#. Clone the ``ngraph-mxnet`` repository recursively and check out the
``ngraph-integration-dev`` branch:
.. code-block:: console
$ git clone --recursive git@github.com:NervanaSystems/ngraph-mxnet.git
$ cd ngraph-mxnet && git checkout ngraph-integration-dev
#. Edit the ``make/config.mk`` file from the repo we just checked out: set
``USE_NGRAPH`` to ``1`` and set :envvar:`NGRAPH_DIR` to point to the
installation:
.. code-block:: bash
USE_NGRAPH = 1
NGRAPH_DIR = $(HOME)/ngraph_dist
#. Ensure that the config file disables ``nnpack`` and ``mklml``.
#. Finally, compile MXNet:
.. code-block:: console
$ make -j $(nproc)
#. After successfully running ``make``, install the Python integration packages
that your MXNet build needs to run a training example:
.. code-block:: console
$ cd python && pip install -e . && cd ../
#. Confirm a successful integration by running the MNIST training example:
.. code-block:: console
$ python example/image-classification/train_mnist.py
Using ``libngraph`` from TensorFlow as an XLA plugin
=====================================================
.. TODO: add Avijit's presentation info and process here
.. warning:: Section below is a Work in Progress.
#. Get the ``ngraph`` fork of TensorFlow from this repo: ``git@github.com:NervanaSystems/ngraph-tensorflow.git``
#. Etc.
#. Go to the end of the file, near the following snippet:
::
native.new_local_repository(
name = "ngraph_external",
path = "/your/home/directory/where/ngraph_is_installed",
build_file = str(Label("//tensorflow/compiler/plugin/ngraph:ngraph.BUILD")),
)
and modify the following line in the :file:`tensorflow/workspace.bzl` file to
provide an absolute path to ``~/ngraph_dist``
::
path = "/directory/where/ngraph_is_installed"
#. Now run :command:`configure` and follow the rest of the TF build process.
Maintaining ``libngraph``
=========================
TBD
.. _MXNet: http://mxnet.incubator.apache.org/
.. _glossary:
Glossary
========
.. glossary::
parameter
In the context of a function graph, a "parameter" refers
to what "stands in" for an argument in an ``op`` definition.
result
In the context of a function graph, the term "result" refers to what
stands in for the returned *value*.
function graph
The Intel nGraph library uses a function graph to represent an ``op``'s
parameters and results.
.. NGraph-C++ documentation master file, created by
sphinx-quickstart on Thu Oct 26 13:46:53 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. ---------------------------------------------------------------------------
.. Copyright 2017 Intel Corporation
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
.. ---------------------------------------------------------------------------
Welcome to NGraph-C++'s documentation!
======================================
.. Intel nGraph library core documentation master file, created on Mon Dec 25 13:04:12 2017.
Intel nGraph library
====================
.. toctree::
:maxdepth: 1
:caption: Table Of Contents
:name: tocmaster
installation.rst
testing-libngraph.rst
framework-integration-guides.rst
build-a-functiongraph.rst
.. toctree::
:maxdepth: 1
:caption: Models
:name: Models
training.rst
model-phases.rst
.. toctree::
:maxdepth: 2
:caption: Contents:
:caption: Backend Components
.. toctree::
:maxdepth: 1
:caption: Reference API
api.rst
autodiff.rst
glossary.rst
.. toctree::
:maxdepth: 1
:caption: Project Docs
release-notes.rst
code-contributor-README.rst
.. toctree::
:maxdepth: 0
:hidden:
branding-notice.rst
doc-contributor-README.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`search`
.. _installation:
Building the Intel® nGraph™ library
####################################
Build Environments
==================
The |release| version of |project| supports Linux\* or UNIX-based
systems where the system has been recently updated with the following
packages and prerequisites:
.. csv-table::
:header: "Operating System", "Compiler", "Build System", "Status", "Additional Packages"
:widths: 25, 15, 25, 20, 25
:escape: ~
Ubuntu 16.04 (LTS) 64-bit, Clang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 git libtinfo-dev``
Ubuntu 16.04 (LTS) 64-bit, Clang 4.0, CMake 3.5.1 + GNU Make, officially unsupported, ``build-essential cmake clang-4.0 git libtinfo-dev``
Clear Linux\* OS for Intel Architecture, Clang 5.0.1, CMake 3.10.2, experimental, bundles ``machine-learning-basic dev-utils python3-basic python-basic-dev``
Installation Steps
==================
.. note:: If you are developing |nGl| projects on macOS\*, please be
aware that this platform is officially unsupported; see the section
`macOS Development Prerequisites`_ below.
To build |nGl| on one of the supported systems, the default CMake procedure
will install ``ngraph_dist`` to your user's ``$HOME`` directory as
the default install location. See the :file:`CMakeLists.txt` file for more
information.
This guide provides one possible configuration that does not rely on a
virtual environment. You are, of course, free to use a virtual environment,
or to set up user directories and permissions however you like.
#. Since most of a developer's interaction with a frontend framework
will take place locally through Python, set a placeholder directory
where Python bindings can interact more efficiently with the nGraph
library backend components. Create something like ``/opt/local`` and
(presuming you have sudo permissions), give ownership of that local
directory to your user. This will make configuring various ``PATH`` and
environment variables much simpler later.
.. code-block:: console
$ cd /opt
$ sudo mkdir -p local/libraries
$ sudo chown -R username:username /opt/local
#. Clone the `NervanaSystems`_ ``ngraph-cpp`` repo to your
``/opt/local/libraries`` directory.
.. code-block:: console
$ cd /opt/local/libraries
$ git clone git@github.com:NervanaSystems/private-ngraph-cpp.git
$ cd private-ngraph-cpp
#. Create a build directory outside of the ``private-ngraph-cpp/src`` directory
tree; something like ``private-ngraph-cpp/build`` should work.
.. code-block:: console
$ mkdir build
#. ``$ cd`` to the build directory and generate the GNUMakefiles in the
customary manner from within your ``build`` directory:
.. code-block:: console
$ cd build && cmake ../
#. Run ``$ make -j8`` and ``make install`` to install ``libngraph.so`` and the
header files to the default location of ``$HOME/ngraph_dist``.
.. code-block:: console
$ make -j8 && make install
#. (Optional, requires `Sphinx`_.) Run ``make html`` inside the
``doc/sphinx`` directory to build HTML docs for the nGraph library.
#. (COMING SOON -- optional, requires `doxygen`_.) TBD
.. macOS Development Prerequisites:
macOS Development Prerequisites
-------------------------------
The repository includes two scripts (``maint/check-code-format.sh`` and
``maint/apply-code-format.sh``) that are used respectively to check adherence
to ``libngraph`` code formatting conventions, and to automatically reformat code
according to those conventions. These scripts require the command
``clang-format-3.9`` to be in your ``PATH``. Run the following commands
(you will need to adjust them if you are not using bash):
.. code-block:: bash
$ brew install llvm@3.9
$ mkdir -p $HOME/bin
$ ln -s /usr/local/opt/llvm@3.9/bin/clang-format $HOME/bin/clang-format-3.9
$ echo 'export PATH=$HOME/bin:$PATH' >> $HOME/.bash_profile
External library requirements
==============================
TBD
.. _doxygen: https://www.stack.nl/~dimitri/doxygen/
.. _Sphinx: http://www.sphinx-doc.org/en/stable/
.. _NervanaSystems: https://github.com/NervanaSystems/private-ngraph-cpp/blob/master/README.md
.. _model-phases:
.. NOTE this is mostly just placeholder text designed to start a discussion around
the ways we can highlight something other than "run MNIST models" for training
as a feature of the nGraph library.
Phases
======
With the optimizations built into Intel nGraph library core, you can
train a model and quickly iterate upon (or with) learnings from your
original dataset. Once the model's data has been trained with the nGraph
library, it is essentially "freed" from the original framework that you
wrangled it into, and you can apply different kinds of operations and
tests to refine it toward the goals of your data science.
.. For example, let's say that you notice the `MNIST` MLP dataset running
with MXNet on nGraph trains itself to 0.997345 or 1.00000 accuracy after
only 10 Epochs. The original model was written to train the dataset for
20 Epochs. This means that there are potentially 10 wasted cycles of
compute power that can be used elsewhere.
.. _release-notes:
Release Notes
#############
.. _testing-libngraph:
Testing the nGraph library
##########################
The |InG| library code base uses the `GTest framework`_ for unit tests. CMake
automatically downloads a copy of the required GTest files when configuring the
build directory.
To perform the unit tests:
#. Create and configure the build directory as described in our
:doc:`installation` guide.
#. Enter the build directory and run ``make check``:
.. code-block:: console
$ cd build/
$ make check
#. To build the full Google test suite (required to compile with MXNet):
.. code-block:: console
$ git clone git@github.com:google/googletest.git
$ cd googletest/ && cmake . && make -j$(nproc) && sudo make install
Compiling a framework with ``libngraph``
========================================
After building and installing the nGraph library to your system, the next
logical step is to compile a framework that you can use to run a
training/inference model with one of the function-driven backends that are now
enabled. See our :doc:`model-phases` documentation for more about function-driven
backend design and architecture for algorithms.
Intel nGraph library supports all of the popular frameworks including `MXNet`_,
`TensorFlow`_, `Caffe2`_, `PyTorch`_, `Chainer`_ and the native `neon`_ frontend
framework. Currently we provide integration guides for MXNet and TensorFlow, as
well as legacy documentation for the `neon`_ framework. Integration guides for
each of the other frameworks are forthcoming.
.. _GTest framework: https://github.com/google/googletest.git
.. _MXNet: http://mxnet.incubator.apache.org/
.. _TensorFlow: https://www.tensorflow.org/
.. _Caffe2: https://github.com/caffe2/
.. _PyTorch: http://pytorch.org/
.. _Chainer: https://chainer.org/
.. _neon: http://neon.nervanasys.com/index.html/
.. _training:
Training
########