Commit 19802062 authored by L.S. Cook, committed by Scott Cyphers

Leona/ops docs (#480)

* WIP with test ops

* update WIP

* Add LLVM compilation detail from INSTALL to installation guide and cleanup todo comments on convolution.rst and remove deprecated file

* Clarify LLVM compilation flag on Ubuntu and delete WIP draft from earlier
parent 6f746c09
.. ---------------------------------------------------------------------------
.. Copyright 2017 Intel Corporation
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
.. ---------------------------------------------------------------------------
Intel® Nervana™ graph
*********************
Intel® Nervana™ graph is Intel's library for developing frameworks that can efficiently run deep learning computations on a variety of compute platforms. It consists of three primary API components:
- An API for creating computational *Intel Nervana graphs*.
- Two higher level frontend APIs (TensorFlow* and neon™) utilizing the Intel Nervana graph API for common deep learning workflows.
- A transformer API for compiling and executing these graphs.
For more information, refer to the `blog post <https://www.intelnervana.com/intel-nervana-graph-preview-release/?_ga=2.139466358.473888884.1509049473-747831713.1505851199/>`_ announcing our
preview release!
Installation
============
Install the base packages for the CPU backend by following the instructions in the installation documentation
`here <https://ngraph.nervanasys.com/docs/latest/installation.html>`_.
After you complete the prerequisites and install the base Intel Nervana graph package as explained in the installation documentation, you will need to install some additional packages to run
Intel Nervana graph with optimal performance on various compute platforms.
CPU/Intel® architecture transformer
---------------------------------------
To run Intel Nervana graph with optimal performance on a CPU backend, you need to install Intel Nervana graph with Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) support:
1. Download Intel® MKL-DNN from `here <https://github.com/01org/mkl-dnn>`_.
2. Follow the installation instructions in the `README.md <https://github.com/01org/mkl-dnn/blob/master/README.md>`_ to install MKL-DNN.
3. Set the environment variable `MKLDNN_ROOT` to point to the location where you installed Intel MKL-DNN.
::
export MKLDNN_ROOT=/path/to/mkldnn/root
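As a rough end-to-end sketch (the authoritative build steps are in the Intel MKL-DNN README, and the install prefix below is only an example), the sequence looks like this::

   git clone https://github.com/01org/mkl-dnn.git
   cd mkl-dnn && mkdir -p build && cd build
   # /opt/mkldnn is an example install prefix; use any writable location
   cmake .. -DCMAKE_INSTALL_PREFIX=/opt/mkldnn
   make -j$(nproc) && make install
   export MKLDNN_ROOT=/opt/mkldnn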
GPU transformer
---------------
To run Intel Nervana graph on a GPU backend, you need to install CUDA* and then enable the GPU transformer:
1. Download and install CUDA according to the `instructions <http://docs.nvidia.com/cuda/cuda-quick-start-guide/index.html>`_.
2. On your system, enable the GPU transformer::
make gpu_prepare
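For example, assuming the CUDA toolkit installed cleanly and ``nvcc`` is on your ``PATH``, you can sanity-check the toolkit and driver before enabling the transformer::

   nvcc --version     # confirm the CUDA toolkit is visible
   nvidia-smi         # confirm the driver can see the GPU
   make gpu_prepare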
Virtual environment activation
==================================
The virtual environment for Intel Nervana graph is created when you install the prerequisites described in the installation documentation
`here <https://ngraph.nervanasys.com/docs/latest/installation.html>`_ and in the `README <https://github.com/NervanaSystems/ngraph/blob/master/README.md>`_.
To activate a Python virtualenv, run the following command::
. .venv/bin/activate
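To confirm the environment is active, check which Python interpreter is in use; ``deactivate`` (standard virtualenv behavior, not specific to this project) returns you to the system Python::

   which python     # should resolve to a path inside .venv/
   deactivate       # leave the virtual environment when you are done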
Build Intel Nervana graph
=========================
Run the following command within your virtual environment to build Intel Nervana graph::
make install
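A quick smoke test after the build, assuming the package imports as ``ngraph`` (which is how the examples reference it), is to import it from the active virtualenv::

   python -c "import ngraph as ng; print(ng.__file__)"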
Additional options
==================
Use these commands to run the tests::
make [test_cpu|test_mkldnn|test_gpu|test_integration]
Before checking in code, ensure there are no ``make style`` errors by running the following check::
make style
If the check returns any errors, use this command to fix style errors::
make fixstyle
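A typical pre-commit sequence is therefore to run the check, apply the automatic fixes if it fails, and then re-run the check::

   make style || (make fixstyle && make style)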
To generate the documentation as HTML files::
sudo apt-get install pandoc
make doc
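The generated HTML can then be opened in a browser; the path below is an assumption based on a typical Sphinx layout, so adjust it to wherever ``make doc`` reports writing its output::

   xdg-open doc/build/html/index.html   # or open the file manually in a browser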
Examples
========
- *examples/walk_through/* contains several code walk-throughs.
- *examples/mnist/mnist_mlp.py* uses the neon front-end to define and train an MLP model on MNIST data.
- *examples/cifar10/cifar10_conv.py* uses the neon front-end to define and train a CNN model on CIFAR10 data.
- *examples/cifar10/cifar10_mlp.py* uses the neon front-end to define and train an MLP model on CIFAR10 data.
- *examples/ptb/char_rnn.py* uses the neon front-end to define and train a character-level RNN model on Penn Treebank data.
Training deep residual networks
===============================
This example demonstrates training a deep residual network as first described in `He et al. <http://arxiv.org/abs/1512.03385>`_. It can handle the CIFAR10 and ImageNet datasets.
Files
-----
- *data.py*: Implements the dataloader for the CIFAR10 and ImageNet datasets.
- *resnet.py*: Defines the residual network model.
- *train_resnet.py*: Processes command line arguments, such as the choice of dataset and number of layers, and trains the ResNet model.
Dataset
-------
The ``CIFAR10`` dataset is downloaded automatically to *~/*. To download and use the dataset from a specific location, set ``--data_dir`` to that location.
For ImageNet, update ``manifest_root`` in *data.py* to the location of your ImageNet dataset, and update ``path`` in *data.py* to the directory where the manifest ``.csv`` files are stored.
Usage
-----
Use the following command to run training on Intel Nervana graph::
python examples/resnet/train_resnet.py -b <cpu,gpu> --size <20,56> -t 64000 -z <64,128>
Intel Nervana graph uses the ``CIFAR10`` dataset by default. If you would like to train using a different dataset, such as the ``i1k`` dataset, provide the location of the dataset as ``BASE_DATA_DIR=</path/to/load/file>``, and then add the ``--dataset <name of dataset>`` argument to the command above.
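For example, a default CIFAR10 run on the CPU backend and an ``i1k`` run (with a placeholder data path) might look like this, following the argument choices described above::

   # CIFAR10 (default dataset), CPU backend, ResNet-20, batch size 64
   python examples/resnet/train_resnet.py -b cpu --size 20 -t 64000 -z 64

   # ImageNet-style run; the BASE_DATA_DIR path is a placeholder
   BASE_DATA_DIR=/path/to/i1k python examples/resnet/train_resnet.py -b gpu --size 56 -t 64000 -z 128 --dataset i1k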
Citation
--------
`Deep Residual Learning for Image Recognition <http://arxiv.org/abs/1512.03385>`_
......@@ -2485,11 +2485,12 @@ div[class^='highlight'] pre {
}
.rst-content tt big, .rst-content tt em, .rst-content tt big, .rst-content code big, .rst-content tt em, .rst-content code em {
font-size: 100% !important;
line-height: normal;
line-height: 1.0em;
}
.rst-content tt.literal, .rst-content tt.literal, .rst-content code.literal {
font-size: 90% !important;
color: #000;
font-size: 101% !important;
color: #72a1ab;
line-height: 0.91em;
}
.rst-content tt.xref, a .rst-content tt, .rst-content tt.xref, .rst-content code.xref, a .rst-content tt, a .rst-content code {
font-weight: bold;
......
......@@ -3,8 +3,15 @@
API
###
.. TODO don't add Python APIs that will break the build.
.. Don't add Python APIs that will break the build.
Sections
********
========
......@@ -112,7 +112,7 @@ File Names
.. code-block:: cpp
TEST(file_name, test_name)
TEST(file_name, test_name)
* Transformer-independent tests:
......@@ -124,7 +124,7 @@ File Names
TEST(${BACKEND_NAME}, test_name)
for each test. Files will be
generated for each transformer and the `${BACKEND_NAME}` will be replaced
generated for each transformer and the ``${BACKEND_NAME}`` will be replaced
with the transformer name.
......@@ -159,7 +159,7 @@ it is automatically enforced and reduces merge conflicts.
#include <file>
* Use this syntax for files that **are changing during development**; they will
be checked for changes during builds. Normally this will be ngraph headers:
be checked for changes during builds. Normally this will be ngraph headers:
.. code-block:: cpp
......@@ -184,7 +184,7 @@ it is automatically enforced and reduces merge conflicts.
Foo x{4, 5};
is preferred over
is preferred over
.. code-block:: cpp
......@@ -213,11 +213,11 @@ is preferred over
auto s = Shape{2,3};
Instead, use
Instead, use
.. code-block:: cpp
.. code-block:: cpp
Shape s{2, 3};
Shape s{2, 3};
* Indicate the type in the variable name.
......
......@@ -191,7 +191,7 @@ texinfo_documents = [
html_add_permalinks = ""
breathe_projects = {
"nGraph": "../../../doxygen/xml",
"": "../../../doxygen/xml",
}
rst_epilog = u"""
......
......@@ -21,6 +21,18 @@ prerequisites:
Ubuntu 16.04 (LTS) 64-bit, Clang 4.0, CMake 3.5.1 + GNU Make, officially unsupported, ``build-essential cmake clang-4.0 git libtinfo-dev``
Clear Linux\* OS for Intel Architecture, Clang 5.0.1, CMake 3.10.2, experimental, bundles ``machine-learning-basic dev-utils python3-basic python-basic-dev``
On Ubuntu 16.04 with ``gcc-5.4.0`` or ``clang-3.9``, the recommended option
is to add ``-DNGRAPH_USE_PREBUILT_LLVM=TRUE`` to the :command:`cmake` command.
This gets a pre-built tarball of LLVM+Clang from `llvm.org`_, and substantially
reduces build times.
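For instance (the source and build directory names here are placeholders), the flag is simply appended to the usual :command:`cmake` invocation::

   cd ngraph/build
   cmake .. -DNGRAPH_USE_PREBUILT_LLVM=TRUE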
If using ``gcc-4.8``, it may be necessary to add symlinks from ``gcc`` to
``gcc-4.8``, and from ``g++`` to ``g++-4.8``, in your :envvar:`PATH`, even
if you explicitly specify the ``CMAKE_C_COMPILER`` and ``CMAKE_CXX_COMPILER``
flags when building. (You should NOT supply the ``-DNGRAPH_USE_PREBUILT_LLVM``
flag in this case, because the prebuilt tarball supplied on llvm.org is not
compatible with a gcc-4.8-based build.)
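A minimal way to set up those symlinks, assuming ``~/bin`` comes before the system compiler directories on your :envvar:`PATH`, is::

   mkdir -p ~/bin
   ln -s "$(which gcc-4.8)" ~/bin/gcc
   ln -s "$(which g++-4.8)" ~/bin/g++
   export PATH=~/bin:$PATH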
Support for macOS is limited; see the macOS development prerequisites
section at the end of this page for details.
......@@ -107,4 +119,5 @@ according to those conventions. These scripts require the command
.. _doxygen: https://www.stack.nl/~dimitri/doxygen/
.. _Sphinx: http://www.sphinx-doc.org/en/stable/
.. _NervanaSystems: https://github.com/NervanaSystems/private-ngraph-cpp/blob/master/README.md
.. _llvm.org: https://www.llvm.org
......@@ -9,8 +9,6 @@ A batched convolution operation.
Basic Operation
===============
In the simplest case, (TODO: explain what convolution is in human words.)
+-----------------+-------------------------+--------------------------------+
| Input Name | Element Type | Shape |
+=================+=========================+================================+
......@@ -27,13 +25,11 @@ In the simplest case, (TODO: explain what convolution is in human words.)
It must be the case that after dilation and padding are applied, the filter fits within the image.
(TODO: pictorial example of basic convolution.)
.. TODO image add
Window Parameters
=================
Two optional parameters affect the... stuff.
+-----------------------------+-----------------------------+------------------------------------+
| Parameter Name | Type | Meaning |
+=============================+=============================+====================================+
......@@ -44,14 +40,12 @@ Two optional parameters affect the... stuff.
| | | filters. |
+-----------------------------+-----------------------------+------------------------------------+
(TODO: pictorial example of the effect of window movement stride.)
(TODO: pictorial example of window before and after dilation.)
.. TODO: pictorial example of the effect of window movement stride.
.. TODO: pictorial example of window before and after dilation.
Image Batch Parameters
======================
Three optional parameters affect the... stuff.
+----------------------------+-----------------------------+---------------------------------------+
| Parameter Name | Type | Meaning |
+============================+=============================+=======================================+
......@@ -65,7 +59,6 @@ Three optional parameters affect the... stuff.
| | | image batch. |
+----------------------------+-----------------------------+---------------------------------------+
(TODO: pictorial examples of the above)
Mathematical Definition
=======================
......@@ -114,7 +107,7 @@ such that
Convolution
-----------
TODO.
.. TODO
Padded, Dilated, Strided Convolution
------------------------------------
......@@ -126,15 +119,12 @@ Padded, Dilated, Strided Convolution
Batched, Padded, Dilated, Strided Convolution
---------------------------------------------
TODO.
.. TODO
C++ Interface
=============
.. doxygenclass:: ngraph::op::Convolution
:members:
Python Interface
================
.. WIP
.. doxygenclass:: ngraph::op::Convolution
:members:
is not merged yet, but could go here!
......@@ -34,7 +34,7 @@ For this early |release| release, we're providing integration guides for:
* `TensorFlow`_, and
* neon™ `frontend framework`_.
Integration guides for other frameworks is tentatively forthcoming.
Integration guides for other frameworks are tentatively forthcoming.
.. _GTest framework: https://github.com/google/googletest.git
.. _MXNet: http://mxnet.incubator.apache.org/
......