Commit db595a3a authored by L.S. Cook, committed by Scott Cyphers

Leona/editing (#498)

* Doc the A-ops.

* Better structure for ops and the docs around them, based on cyphers branch for doc-the-a-ops

* More edits for merge into preview branch

* Update link to framework integration guide page on testing libngraph

* New branch for editing public-facing docs

* Make sure updated graphic gets added, correct compiler version on install page

* Update README to match content on legacy Python repo

* Let's see if this fixes the bad merge

* Working down in doc directory, forgot to update top-level readme with feedback from review

* Correct typo

* Trying to fix the ops

* Try adding convolution manually from master

* Update pictorial image of nGraph IR
parent 2fe7f0f3
-# Intel® nGraph™ library project
+# Intel® nGraph™ library

-Welcome to the Intel nGraph project, an open source C++ library for developers
-of Deep Learning (DL) systems. Here you will find a suite of components, APIs,
-and documentation that can be used to compile and run Deep Neural Network (DNN)
-models defined in a variety of frameworks.
+Welcome to Intel nGraph, an open source C++ library for developers of Deep
+Learning (DL) systems. Here you will find a suite of components, APIs, and
+documentation that can be used to compile and run Deep Neural Network (DNN)
+models defined in a variety of frameworks.

 The nGraph library translates a framework’s representation of computations into
@@ -14,7 +14,8 @@ and data layout abstraction.
 See our [install] docs for how to get started.

-For this early release, we provide framework integration guides to compile
+For this early release, we provide [framework integration guides] to compile
 MXNet and TensorFlow-based projects.

 [install]: http://ngraph.nervanasys.com/docs/cpp/installation.html
+[framework integration guides]: http://ngraph.nervanasys.com/docs/cpp/framework-integration-guides.html
@@ -41,8 +41,6 @@ pickle:
 json: prep
 	$(SPHINXBUILD) -t $(DOC_TAG) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
-	@rm -rf samples
-	@rm -rf boards
 	@echo
 	@echo "Build finished; now you can process the JSON files."
......
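The `json` target's closing message says the generated JSON files can now be processed. As a minimal, hypothetical illustration (the `build/json` output path and the page contents below are assumptions, not part of this Makefile), Sphinx's `json` builder writes one `.fjson` file per source page, readable with any JSON-aware tool:

```shell
# Hypothetical example: a Sphinx `json` build writes one .fjson file per
# page; here we fabricate a tiny one and read its "title" field back.
mkdir -p build/json
printf '{"title": "Intel nGraph library", "body": ""}' > build/json/index.fjson
python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["title"])' \
    build/json/index.fjson
```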
@@ -17,10 +17,11 @@
 Intel nGraph library project
 #############################

-Welcome to the Intel nGraph project, an open source C++ library for developers
-of :abbr:`Deep Learning (DL)` (DL) systems. Here you will find a suite of
-components, APIs, and documentation that can be used to compile and run
-:abbr:`Deep Neural Network (DNN)` (DNN) models defined in a variety of frameworks.
+Welcome to Intel nGraph, an open source C++ library for developers of
+:abbr:`Deep Learning (DL)` (DL) systems. Here you will find a suite
+of components, APIs, and documentation that can be used to compile
+and run :abbr:`Deep Neural Network (DNN)` (DNN) models defined in a
+variety of frameworks.

 .. figure:: graphics/ngraph-hub.png
......
@@ -7,24 +7,23 @@ Install the Intel® nGraph™ library
 Build Environments
 ==================

-The |release| version of |project| supports Linux\* or UNIX-based
-systems which have recent updates of the following packages and
-prerequisites:
+The |release| version of |project| supports Linux\*-based systems which
+have recent updates of the following packages and prerequisites:

 .. csv-table::
    :header: "Operating System", "Compiler", "Build System", "Status", "Additional Packages"
    :widths: 25, 15, 25, 20, 25
    :escape: ~

-   CentOS 7.4 64-bit, CLang 3.4, GCC 4.8 + CMake 2.8, supported, ``patch diffutils zlib1g-dev libtinfo-dev``
+   CentOS 7.4 64-bit, GCC 4.8, CMake 3.2, supported, ``patch diffutils zlib1g-dev libtinfo-dev``
    Ubuntu 16.04 (LTS) 64-bit, CLang 3.9, CMake 3.5.1 + GNU Make, supported, ``build-essential cmake clang-3.9 git libtinfo-dev``
-   Ubuntu 16.04 (LTS) 64-bit, CLang 4.0, CMake 3.5.1 + GNU Make, officially unsupported, ``build-essential cmake clang-4.0 git libtinfo-dev``
    Clear Linux\* OS for Intel Architecture, CLang 5.0.1, CMake 3.10.2, experimental, bundles ``machine-learning-basic dev-utils python3-basic python-basic-dev``

-On Ubuntu 16.04 with ``gcc-5.4.0`` or ``clang-3.9``, the recommended option
-is to add ``-DNGRAPH_USE_PREBUILT_LLVM=TRUE`` to the :command:`cmake` command.
-This gets a pre-built tarball of LLVM+Clang from `llvm.org`_, and substantially
-reduces build times.
+Other configurations may work, but aren't tested; on Ubuntu 16.04 with
+``gcc-5.4.0`` or ``clang-3.9``, for example, we recommend adding
+``-DNGRAPH_USE_PREBUILT_LLVM=TRUE`` to the :command:`cmake` command in step 4
+below. This gets a pre-built tarball of LLVM+Clang from `llvm.org`_, and will
+substantially reduce build time.
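As a sketch of where that flag fits (directory names here are illustrative, and the exact sequence is given by the numbered build steps on this page, which are authoritative):

```shell
# Illustrative out-of-tree build; -DNGRAPH_USE_PREBUILT_LLVM=TRUE fetches a
# prebuilt LLVM+Clang tarball from llvm.org instead of building it from source.
mkdir -p build && cd build
cmake .. -DNGRAPH_USE_PREBUILT_LLVM=TRUE
make -j"$(nproc)"
```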
 If using ``gcc-4.8``, it may be necessary to add symlinks from ``gcc`` to
 ``gcc-4.8``, and from ``g++`` to ``g++-4.8``, in your :envvar:`PATH`, even
@@ -33,7 +32,7 @@ flags when building. (You should NOT supply the `-DNGRAPH_USE_PREBUILT_LLVM`
 flag in this case, because the prebuilt tarball supplied on llvm.org is not
 compatible with a gcc-4.8 based build.)
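One way to set up such symlinks, sketched here under the assumptions that the 4.8 binaries live in `/usr/bin` and that a personal `$HOME/bin` directory may precede the system directories in `PATH`:

```shell
# Point `gcc` and `g++` at the 4.8 binaries via a directory early in PATH.
# Paths are assumptions; adjust to where gcc-4.8/g++-4.8 are installed.
mkdir -p "$HOME/bin"
ln -sf /usr/bin/gcc-4.8 "$HOME/bin/gcc"
ln -sf /usr/bin/g++-4.8 "$HOME/bin/g++"
export PATH="$HOME/bin:$PATH"
```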
-Support for macOS is limited; see the macOS development prerequisites
-section at the end of this page for details.
+Support for macOS is limited; see the `macOS development prerequisites`_
+section at the end of this page for details.
@@ -95,6 +94,7 @@ information about how to change or customize this location.
 the ``doc/sphinx`` directory to build HTML API docs inside the
 ``/docs/doxygen/`` directory.

+.. _macos development prerequisites:

 macOS Development Prerequisites
 -------------------------------
......
@@ -125,3 +125,4 @@ C++ Interface
 .. doxygenclass:: ngraph::op::Convolution
    :members:
@@ -17,6 +17,30 @@ Architecture CPUs, the Intel® Nervana Neural Network Processor™ (NNP),
 and NVIDIA\* GPUs. Currently-supported compiler optimizations include efficient
 memory management and data layout abstraction.

+Why is this needed?
+--------------------
+
+When Deep Learning (DL) frameworks first emerged as the vehicle for training
+and inference models, they were designed around kernels optimized for a
+particular platform. As a result, many backend details were being exposed in
+the model definitions, making the adaptability and portability of DL models
+to other or more advanced backends inherently complex and expensive.
+
+The traditional approach means that an algorithm developer cannot easily adapt
+his or her model to different backends. Making a model run on a different
+framework is also problematic because the user must separate the essence of
+the model from the performance adjustments made for the backend, translate
+to similar ops in the new framework, and finally make the necessary changes
+for the preferred backend configuration on the new framework.
+
+We designed the Intel nGraph project to substantially reduce these kinds of
+engineering complexities. While optimized kernels for deep-learning primitives
+are provided through the project and via libraries like Intel® Math Kernel
+Library (Intel® MKL) for Deep Neural Networks (Intel® MKL-DNN), there are
+several compiler-inspired ways in which performance can be further optimized.

 The *nGraph core* uses a strongly-typed and platform-neutral stateless graph
 representation for computations. Each node, or *op*, in the graph corresponds
 to one step in a computation, where each step produces zero or more tensor
@@ -39,4 +63,7 @@ read more about design decisions and what is tentatively in the pipeline
 for development in our `SysML conference paper`_.

 .. _frontend: http://neon.nervanasys.com/index.html/
 .. _SysML conference paper: https://arxiv.org/pdf/1801.08058.pdf
+.. _MXNet: http://mxnet.incubator.apache.org/
+.. _TensorFlow: https://www.tensorflow.org/
@@ -31,14 +31,12 @@ training/inference model with one of the backends that are now enabled.
 For this early |release| release, we're providing :doc:`framework-integration-guides`,
 for:

-* :doc:`framework-integration-guides` framework,
-* :doc:`framework-integration-guides` framework, and
+* :doc:`MXNet <framework-integration-guides>` framework,
+* :doc:`TensorFlow <framework-integration-guides>` framework, and
 * neon™ `frontend framework`_.

 Integration guides for other frameworks are tentatively forthcoming.

 .. _GTest framework: https://github.com/google/googletest.git
-.. _MXNet: http://mxnet.incubator.apache.org/
-.. _TensorFlow: https://www.tensorflow.org/
 .. _frontend framework: http://neon.nervanasys.com/index.html/