Unverified Commit 0cf8b674 authored by Leona C, committed by GitHub

Add refs to DLDT doc and remove PlaidML-related docs (#4334)

* Update sitemap to not use a page title

* Update doc for DLDT layers and remove references to PlaidML
parent 8f40bb5d
@@ -23,10 +23,6 @@ from a framework on a CPU, GPU, or ASIC; it can also be used with an
*Interpreter* mode, which is primarily intended for testing, to analyze a
program, or to help a framework developer customize targeted solutions.
nGraph also provides a way to use the advanced tensor compiler PlaidML
as a backend; you can learn more about this backend and how to build it
from source in our documentation: :ref:`ngraph_plaidml_backend`.
.. csv-table::
:header: "Backend", "Current nGraph support", "Future nGraph support"
:widths: 35, 10, 10
@@ -104,26 +100,4 @@ depending on the parameters specified.
* ``NGRAPH_INTELGPU_DUMP_FUNCTION`` -- dumps nGraph’s functions
in dot format.
.. _opencl:
OpenCL
------
OpenCL is needed only for the :doc:`plaidml-ng-api/index`; it is not
required if you have only a CPU backend.
#. Install the latest Linux driver for your system. You can find a list
   of drivers at https://software.intel.com/en-us/articles/opencl-drivers.
   You may need to install the `OpenCL SDK`_ if ``libOpenCL.so`` is absent.
#. Add any user who needs the device to the ``video`` group:

   .. code-block:: console

      sudo usermod -a -G video <user_id>

   A user in this group may, for example, be able to find details at the
   ``/sys/module/[system]/parameters/`` location.
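As a quick sanity check (the commands below are standard Linux tooling, not part of the original instructions), you can list your current groups to confirm the change took effect; note that group membership only applies to sessions started after the change:

```shell
# List the current user's groups one per line and look for "video".
# Prints "video" if the user is in the group, otherwise a short notice.
id -nG | tr ' ' '\n' | grep -x video || echo "not in video group"
```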
.. _axpy.py example: https://github.com/tensorflow/ngraph-bridge/blob/master/examples/axpy.py
.. _OpenCL SDK: https://software.intel.com/en-us/opencl-sdk
.. backends/plaidml-ng-api/index.rst:
PlaidML from nGraph
===================
.. doxygenclass:: ngraph::runtime::plaidml::PlaidML_Backend
:project: ngraph
:members:
\ No newline at end of file
@@ -5,7 +5,6 @@ Build and Test
###############
* :ref:`default_ngflags`
* :ref:`ngraph_plaidml_backend`
There are a few common paths to take when manually building the |project|
from source code. Today nGraph supports various developers working on all
@@ -17,18 +16,6 @@ A "from scratch" source-code build of the nGraph Library enables the CPU,
``Interpreter``, and unit tests by default. See :ref:`default_ngflags`
for more detail.
A "from scratch" source-code build that defaults to the PlaidML backend
contains rich algorithm libraries akin to those that were previously available
only to developers willing to spend extensive time writing, testing, and
customizing kernels. An ``NGRAPH_PLAIDML`` dist can function like a framework
that lets developers compose, train, and even deploy :abbr:`DL (Deep Learning)`
models in their preferred language on neural networks of any size. This is
a good option if, for example, you are working on a laptop with a high-end
GPU that you want to use for compute. See :ref:`ngraph_plaidml_backend`
for instructions on how to build.
In either case, there are some prerequisites your system will need in
order to build from source.
.. _prerequisites:
@@ -63,6 +50,11 @@ file to change or customize the default CMake procedure.
* :ref:`ubuntu`
* :ref:`centos`
Other Compilation Flags
-----------------------
See :doc:`inspection/index`.
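For example, backend flags can be combined with an install prefix in a single configure step. Only ``NGRAPH_CPU_ENABLE`` and ``NGRAPH_PLAIDML_ENABLE`` appear elsewhere in this guide; verify any other flag names against the flags doc for your checkout:

```shell
# Sketch of a configure step combining an install prefix with backend
# flags; run from a build directory inside the ngraph source tree.
cmake .. -DCMAKE_INSTALL_PREFIX=~/ngraph_dist \
         -DNGRAPH_CPU_ENABLE=ON \
         -DNGRAPH_PLAIDML_ENABLE=OFF
```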
.. _ubuntu:
@@ -161,59 +153,6 @@ The process documented here will work on CentOS 7.4.
$ make && sudo make install
.. _ngraph_plaidml_backend:
Building nGraph-PlaidML from source
===================================
The following instructions will create the ``~/ngraph_plaidml_dist``
directory locally:
#. Ensure you have installed the :ref:`prerequisites` for your OS.
#. Install the prerequisites for the backend. Our hybrid ``NGRAPH_PLAIDML``
   backend works best with Python 3. We recommend using a virtual
   environment, due to the difficulties users have seen when installing
   outside of a venv.
.. code-block:: console
$ sudo apt install python3-pip
$ pip install plaidml
$ plaidml-setup
#. Clone the source code, create and enter your build directory:
.. code-block:: console
$ git clone https://github.com/NervanaSystems/ngraph.git
$ cd ngraph && mkdir build && cd build
#. Prepare the CMake files as follows:
.. code-block:: console
$ cmake .. -DCMAKE_INSTALL_PREFIX=~/ngraph_plaidml_dist -DNGRAPH_CPU_ENABLE=OFF -DNGRAPH_PLAIDML_ENABLE=ON
#. Run :command:`make` and ``make install``. Note that if you are building
outside a local or user path, you may need to run ``make install`` as the
root user.
.. code-block:: console
$ make
$ make install
This should create the shared library ``libplaidml_backend.so`` and the
``nbench`` tool. Note that if you built in a virtual environment and run
``make check`` from it, Google Test may report failures. Full tests can
be run when PlaidML devices are available at the machine level.
For more about working with the PlaidML backend from nGraph, see our
API documentation :doc:`backends/plaidml-ng-api/index`.
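As a quick smoke test of the new backend, ``nbench`` can run a serialized nGraph model against it. The flags below are assumptions based on common ``nbench`` usage, and ``model.json`` is a placeholder; run the tool with no arguments to see the options your build supports:

```shell
# Hypothetical invocation: benchmark a serialized nGraph function on the
# PlaidML backend for 10 iterations. Flag names and the model file are
# placeholders to be checked against your build.
~/ngraph_plaidml_dist/bin/nbench -b "PLAIDML" -f model.json -i 10
```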
macOS\* development
--------------------
@@ -68,22 +68,6 @@ Now you can start exploring some of the :doc:`onnx_integ` examples.
See also nGraph's :doc:`../python_api/index`.
PlaidML
-------
See the :ref:`ngraph_plaidml_backend` section for how to build
nGraph-PlaidML.
Other integration paths
-----------------------
If you are considering incorporating components from the nGraph Compiler stack
in your framework or neural network design, another useful doc is the section
on :doc:`other/index`. Its contents are also useful if you are working on
something built from scratch, or on an existing framework that is less
widely supported than popular frameworks like TensorFlow and PyTorch.
.. _ngraph-tensorflow-bridge: https://pypi.org/project/ngraph-tensorflow-bridge
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
@@ -142,50 +142,19 @@ work for what will ultimately be a fragile setup that is costly to maintain.
**Figure C**: Inevitable scaling problem
Integrating PlaidML with nGraph provides flexibility to support the latest deep
learning models in the absence of hand-optimized kernels for new operations.
PlaidML works together with nGraph to address the exponential growth of
kernels.
PlaidML takes two inputs: the operation defined by the user and the machine
description of the hardware target. It then automatically generates kernels
that are iteratively optimized through an IR known as `Stripe`_. Integration of
PlaidML with nGraph allows users to choose the hardware and framework that
suits their needs, resulting in freedom from kernel libraries.
Solution: nGraph and PlaidML
============================
We developed nGraph and integrated it with PlaidML to allow developers to
accelerate deep learning performance and address the problem of scalable
kernel libraries. To address the problem of scaling backends, nGraph applies
graph-level optimizations to deep learning computations and unifies
computational graphs from deep learning frameworks with nGraph IR.
In conjunction with nGraph's graph-level optimizations, PlaidML automatically
applies low-level optimizations to improve deep learning performance.
Additionally, PlaidML offers extensive support for various hardware targets
due to its ability to generate code in LLVM, OpenCL, OpenGL, and Metal.
Given a backend with existing kernel libraries, nGraph can readily support the
target hardware because the backend only needs to support a few primitive
operations. If the hardware supports one of the languages PlaidML can generate,
developers need only specify the machine description to support that
hardware. Together, nGraph and PlaidML provide the best of both worlds.
This documentation provides technical details of nGraph's core functionality
as well as framework and backend integrations. Creating a compiler stack like
nGraph and PlaidML requires expert knowledge, and we're confident that nGraph
and PlaidML will make life easier for many kinds of developers:
#. Framework owners looking to support new hardware and custom chips.
#. Data scientists and ML developers wishing to accelerate deep learning
performance.
#. New DL accelerator developers creating an end-to-end software stack from a
deep learning framework to their silicon.
.. _Stripe: https://arxiv.org/abs/1903.06498
.. _publication: https://arxiv.org/abs/1801.08058
.. _up to 45X: https://ai.intel.com/ngraph-compiler-stack-beta-release/
.. _more transistors on denser and denser circuits: https://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html
Solution: A customizable graph compiler for complex operations
==============================================================
`OpenVINO toolkit`_ is powered by nGraph capabilities for graph compilation.
To represent a :abbr:`DL (Deep Learning)` model in real-time and perform
complex operations on that model, users can `build an nGraph function`_.
Once created, the function can be wrapped into a ``CNNNetwork``, giving
data scientists and application developers a way to use operations that
do not depend on existing DL frameworks.
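A minimal sketch of that pattern, assuming the nGraph and Inference Engine development headers are available; the parameter shape and the ``simple_relu`` name are illustrative choices, not from the original text:

```cpp
#include <memory>

#include <ngraph/ngraph.hpp>
#include <inference_engine.hpp>

int main() {
    using namespace ngraph;

    // Build a small nGraph function at runtime: y = relu(x).
    auto x = std::make_shared<op::Parameter>(element::f32,
                                             Shape{1, 3, 224, 224});
    auto y = std::make_shared<op::Relu>(x);
    auto fn = std::make_shared<Function>(OutputVector{y},
                                         ParameterVector{x},
                                         "simple_relu");

    // Wrap the function into a CNNNetwork so the Inference Engine
    // can consume it without any DL-framework dependency.
    InferenceEngine::CNNNetwork network(fn);
    return 0;
}
```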
.. _OpenVINO toolkit: http://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html
.. _build an nGraph function: http://docs.openvinotoolkit.org/latest/_docs_IE_DG_nGraphTutorial.html
.. _add custom operations: http://docs.openvinotoolkit.org/latest/_docs_IE_DG_AddingNGraphOps.html
\ No newline at end of file
@@ -42,7 +42,6 @@
:maxdepth: 1
Basic Concepts <backends/index.rst>
backends/plaidml-ng-api/index.rst
Integrating Other Backends <backends/cpp-api.rst>