Commit 1575e2d1 authored by Leona C, committed by Scott Cyphers

Update namespace table and fix broken links (#2503)

* Cleaner API doc reference for compile call

* Add a useful table for nGraph namespaces

* Remove layout namespace

* Show exploding kernel problem on illustration like IEEE preso

* WIP branch for new documentation restructuring that is a huge pain

* Fix the doc reorg mess

* Fix underline

* List of passes disclaimer note

* Update disclaimers on README

* More cleanup of doc reorg

* Update core docs

* Update overview on core

* Add PR feedback

* Get rid of all the gazillion of doc build errors from rearranging stuff

* Add section on tutorials

* Update branch

* Cleanup intro

* Add better detail to overview

* Revise buildlb instructions and add better title for contributing to doc

* Note about unit tests

* Editing

* Update core overview namespace table and fix more broken links due to ToC changes

* Update normalized boolean build defaults

* Update for PR 2507

* Incorporate new PR feedback review
parent 86394f10
@@ -8,11 +8,11 @@ Build the C++ Library
* :ref:`centos`
Prerequisites
=============
Release |release| of |project| supports Linux\*-based systems with the following
packages and prerequisites:
.. csv-table::
:header: "Operating System", "Compiler", "Build System", "Status", "Additional Packages"
@@ -37,22 +37,54 @@ flag in this case, because the prebuilt tarball supplied on llvm.org is not
compatible with a gcc 4.8-based build.)
Installation Steps
==================
The ``default`` build
---------------------
Running ``cmake`` with no build flags defaults to the following settings; adjust
as needed:
.. code-block:: console
-- NGRAPH_UNIT_TEST_ENABLE: ON
-- NGRAPH_TOOLS_ENABLE: ON
-- NGRAPH_CPU_ENABLE: ON
-- NGRAPH_INTELGPU_ENABLE: OFF
-- NGRAPH_GPU_ENABLE: OFF
-- NGRAPH_INTERPRETER_ENABLE: ON
-- NGRAPH_NOP_ENABLE: ON
-- NGRAPH_GPUH_ENABLE: OFF
-- NGRAPH_GENERIC_CPU_ENABLE: OFF
-- NGRAPH_DISTRIBUTED_MLSL_ENABLE: OFF
-- NGRAPH_DISTRIBUTED_OMPI_ENABLE: OFF
-- NGRAPH_DEBUG_ENABLE: OFF
-- NGRAPH_ONNX_IMPORT_ENABLE: OFF
-- NGRAPH_DEX_ONLY: OFF
-- NGRAPH_CODE_COVERAGE_ENABLE: OFF
-- NGRAPH_LIB_VERSIONING_ENABLE: OFF
-- NGRAPH_PYTHON_BUILD_ENABLE: OFF
-- NGRAPH_USE_PREBUILT_LLVM: OFF
-- NGRAPH_PLAIDML_ENABLE: OFF
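Any of these defaults can be overridden on the ``cmake`` command line with
``-D`` options. As a hedged example (the out-of-tree ``build`` directory and the
chosen flags are illustrative only), a configure step that enables the ONNX
importer and disables the unit tests might look like:

.. code-block:: console

   $ mkdir build && cd build
   $ cmake .. -DNGRAPH_ONNX_IMPORT_ENABLE=ON -DNGRAPH_UNIT_TEST_ENABLE=OFF
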
.. important:: The default :program:`cmake` procedure (no build flags) will
install ``ngraph_dist`` to an OS-level location like ``/usr/bin/ngraph_dist``
or ``/usr/lib/ngraph_dist``. Here we specify how to install locally to
``~/ngraph_dist`` with the cmake option ``-DCMAKE_INSTALL_PREFIX=~/ngraph_dist``.
All of the nGraph Library documentation presumes that ``ngraph_dist`` gets
installed locally. The system location can be used just as easily by customizing
paths on that system. See the :file:`ngraph/CMakeLists.txt` file to change or
customize the default CMake procedure.
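As a minimal sketch of that locally-installed workflow (directory names are
placeholders; the full distribution-specific steps follow below), the configure
and install commands might look like:

.. code-block:: console

   $ cd ngraph && mkdir build && cd build
   $ cmake .. -DCMAKE_INSTALL_PREFIX=~/ngraph_dist
   $ make -j$(nproc) && make install   # installs under ~/ngraph_dist
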
Install steps
-------------
.. _ubuntu:
Ubuntu 16.04
~~~~~~~~~~~~
The process documented here will work on Ubuntu\* 16.04 (LTS) or on Ubuntu
18.04 (LTS).
@@ -123,7 +155,7 @@ The process documented here will work on Ubuntu\* 16.04 (LTS) or on Ubuntu
.. _centos:
CentOS 7.4
~~~~~~~~~~
The process documented here will work on CentOS 7.4.
@@ -190,21 +222,13 @@ according to those conventions. These scripts require the command
$ ln -s /usr/local/opt/llvm@3.9/bin/clang-format $HOME/bin/clang-format-3.9
$ echo 'export PATH=$HOME/bin:$PATH' >> $HOME/.bash_profile
Testing the build
=================
We use the `googletest framework`_ from Google for unit tests. The
``NGRAPH_UNIT_TEST_ENABLE`` build flag is enabled by default when building
with cmake, so to perform unit tests, simply enter the build directory and
run ``make check``:
.. code-block:: console
@@ -212,26 +236,26 @@ To perform unit tests on the install:
$ make check
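To iterate on a single test, googletest's ``--gtest_filter`` option can also be
passed to the test binary directly. The binary path shown here is an assumption
and may differ in your build tree:

.. code-block:: console

   $ cd build
   $ ./test/unit-test --gtest_filter='CPU.add*'   # path and filter are illustrative only
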
Adding framework support
========================
After building and installing nGraph on your system, there are two likely
paths for what you'll want to do next: either compile a framework to run a
DL model, or load an import of an "already-trained" model for inference
on an Intel nGraph-enabled backend.
For the former case, the :doc:`frameworks/index` documentation for this early
|version| can help you get started training a model with a supported framework
or companion tool:
* :doc:`MXNet<frameworks/mxnet_integ>` framework,
* :doc:`TensorFlow<frameworks/tensorflow_integ>` framework,
* :doc:`PaddlePaddle<frameworks/paddle_integ>` framework, or
* :doc:`ONNX<frameworks/onnx_integ>` and the ONNXIFI tool.
For the latter case, if you've followed a tutorial from `ONNX`_, and you have an
exported, serialized model, you can skip the section on frameworks and go
directly to our :doc:`core/constructing-graphs/import` documentation.
Please keep in mind that both of these are under continuous development, and will
be updated frequently in the coming months. Stay tuned!
@@ -242,6 +266,6 @@ be updated frequently in the coming months. Stay tuned!
.. _breathe: https://breathe.readthedocs.io/en/latest/
.. _llvm.org: https://www.llvm.org
.. _NervanaSystems: https://github.com/NervanaSystems/ngraph/blob/master/README.md
.. _ONNX: http://onnx.ai
.. _website docs: http://ngraph.nervanasys.com/docs/latest/
.. _googletest framework: https://github.com/google/googletest.git
\ No newline at end of file
@@ -61,15 +61,15 @@ descriptions:
:widths: 23, 53, 13, 23
:escape: ~
``ngraph``, The Intel nGraph C++ API, `ngraph`_, Implicit namespace omitted from most API documentation
``builder``, "Convenience functions that create additional graph nodes to implement commonly-used recipes; for example, auto-broadcast", `builder`_, " "
``descriptor``, Descriptors are compile-time representations of objects that will appear at run-time, `descriptor`_, " "
``op``, Ops used in graph construction, `op`_, :doc:`../ops/index`
``runtime``, The objects and methods used for executing the graph, `runtime`_, :doc:`../backend-support/cpp-api`
.. _ngraph: https://github.com/NervanaSystems/ngraph/tree/master/src/ngraph
.. _builder: https://github.com/NervanaSystems/ngraph/tree/master/src/ngraph/builder
.. _descriptor: https://github.com/NervanaSystems/ngraph/tree/master/src/ngraph/descriptor
.. _op: https://github.com/NervanaSystems/ngraph/tree/master/src/ngraph/op
.. _runtime: https://github.com/NervanaSystems/ngraph/tree/master/src/ngraph/runtime
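For orientation, here is a minimal sketch showing how several of these
namespaces typically appear together in user code; it is illustrative only, and
exact class names and signatures may differ between releases:

.. code-block:: cpp

   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   int main()
   {
       // ngraph::op -- ops used in graph construction
       auto a = std::make_shared<op::Parameter>(element::f32, Shape{2, 2});
       auto b = std::make_shared<op::Parameter>(element::f32, Shape{2, 2});
       auto sum = std::make_shared<op::Add>(a, b);

       // ngraph::Function ties ops and parameters into a callable graph
       auto f = std::make_shared<Function>(sum, ParameterVector{a, b});

       // ngraph::runtime -- objects and methods used for executing the graph
       auto backend = runtime::Backend::create("CPU");

       // ... create tensors, then compile and call f on the backend
       //     (see the backend-support C++ API documentation linked above)
       return 0;
   }
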
@@ -31,7 +31,7 @@ across all workers, and then update the weights.
How? (Generic frameworks)
=========================
* :doc:`../core/constructing-graphs/distribute-train`
To synchronize gradients across all workers, the essential operation for data
parallel training, due to its simplicity and scalability over parameter servers,
@@ -46,9 +46,8 @@ find it worthwhile to experiment with different modes or variations of
distributed training. Deployments using nGraph Library with supported backends
can be configured to train with data parallelism and will soon work with model
parallelism. Distributing workloads is increasingly important, as more data and
bigger models mean the ability to :doc:`../core/constructing-graphs/distribute-train`
work with larger and larger datasets, or to work with models having many layers
that aren't designed to fit on a single device.
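As a hedged illustration of the data-parallel pattern at the graph level
(``synchronize`` is a hypothetical helper, and the snippet assumes a
distributed-enabled build with an existing gradient node), gradient
synchronization amounts to wrapping each gradient in an allreduce op:

.. code-block:: cpp

   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   // Illustrative only: wrap a computed gradient in an AllReduce op so that
   // every worker sees the aggregated gradients before its weight update.
   std::shared_ptr<Node> synchronize(const std::shared_ptr<Node>& grad)
   {
       return std::make_shared<op::AllReduce>(grad);
   }
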
Distributed training with data parallelism splits the data and each worker
node has the same model; during each iteration, the gradients are aggregated
@@ -12,7 +12,7 @@ MXNet\* bridge
* **Training**: For experimental or alternative approaches to distributed
training methodologies, including data parallel training, see the
MXNet-relevant sections of the docs on :doc:`../distr/index` and
:doc:`How to <../core/constructing-graphs/index>` topics like :doc:`../core/constructing-graphs/distribute-train`.
.. _README: https://github.com/NervanaSystems/ngraph-mxnet/blob/master/README.md
\ No newline at end of file
@@ -165,7 +165,7 @@ new Core ops should be infrequent and that most functionality instead gets
added with new functions that build sub-graphs from existing core ops.
For a more detailed dive into how custom bridge code can be implemented, see our
documentation on how to :doc:`../core/constructing-graphs/execute`. To learn how TensorFlow and
MXNet currently make use of custom bridge code, see the section on
:doc:`../frameworks/index`.
@@ -183,7 +183,7 @@ Framework bridge code is *not* the only way to connect a model (function graph)
to nGraph's :doc:`../ops/index`. We've also built an importer for models that
have been exported from a framework and saved as a serialized file, such as ONNX.
To learn how to convert such serialized files to an nGraph model, please see
the :doc:`../core/constructing-graphs/import` documentation.
.. _whats_next:
@@ -15,8 +15,8 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------
Contributing Documentation
==========================
Read this for changes affecting anything in ``ngraph/doc``
----------------------------------------------------------
@@ -105,8 +105,8 @@ style guide.
.. build-docs:
How to build the documentation
-------------------------------
.. note:: Stuck on how to generate the html? Run these commands; they assume