ngraph: commit a8ee3c95 (unverified)

Authored Jul 03, 2019 by Scott Cyphers; committed by GitHub, Jul 03, 2019.
Merge pull request #3124 from NervanaSystems/leona/doc_igpu
Add initial igpu docs and other important fixes for 0.22
Parents: 005ba206, f946f097
Showing 17 changed files with 244 additions and 157 deletions (+244, -157).
Changed files:

doc/sphinx/conf.py (+15, -72)
doc/sphinx/ngraph_theme/ngversions.html (+5, -7)
doc/sphinx/source/backends/backend-api/index.rst (+0, -1)
doc/sphinx/source/backends/cpp-api.rst (+3, -2)
doc/sphinx/source/backends/dynamicbackend-api/index.rst (+3, -3)
doc/sphinx/source/backends/executable-api/index.rst (+0, -1)
doc/sphinx/source/backends/index.rst (+55, -0)
doc/sphinx/source/conf.py (+2, -2)
doc/sphinx/source/frameworks/generic-configs.rst (+67, -18)
doc/sphinx/source/frameworks/mxnet_integ.rst (+4, -1)
doc/sphinx/source/frameworks/onnx_integ.rst (+6, -7)
doc/sphinx/source/index.rst (+0, -1)
doc/sphinx/source/inspection/index.rst (+3, -3)
doc/sphinx/source/project/doc-contributor-README.rst (+35, -8)
doc/sphinx/source/project/release-notes.rst (+38, -23)
python/BUILDING.md (+5, -5)
python/README.md (+3, -3)
doc/sphinx/conf.py
@@ -76,9 +76,7 @@ author = 'Intel Corporation'
version = '0.22'
# The Documentation full version, including alpha/beta/rc tags. Some features
# available in the latest code will not necessarily be documented first.
# rc syntax may be tagged; this documentation supports various rc-naming conventions
# available in the latest code will not necessarily be documented first
release = '0.22.0'
# The language for content autogenerated by Sphinx. Refer to documentation
@@ -107,38 +105,18 @@ todo_include_todos = True

# -- Options for HTML output ----------------------------------------------

html_title = 'nGraph Compiler stack Documentation'
html_title = "Documentation for the nGraph Library and Compiler stack"

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'ngraph_theme'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
html_theme_path = ["../"]

if tags.has('release'):
    is_release = True
    docs_title = 'Docs / %s' % (version)
    docs_title = 'Docs / %s' % (release)
else:
    is_release = False
    docs_title = 'Docs / Latest'

# borrow this from the zephyr docs theme
html_context = {
    # 'show_license': html_show_license, we have custom footers to attribute
    # RTD, WTD, and Sphinx contributors; so we do not enable this
    'docs_title': docs_title,
    'is_release': is_release,
    'theme_logo_only': False,
    'current_version': version,
    'versions': (
        ("latest", "../"),
        ("0.20.0", "/0.20.0/"),
        # not yet sure how we'll do this
        ("0.19.0", "/0.19.0/"),
        ("0.18.0", "/0.18.0/"),
        ("0.17.0", "/0.17.0/"),
        ("0.16.0", "/0.16.0/"),
    )
}

html_logo = '../ngraph_theme/static/favicon.ico'
html_logo = '../ngraph_theme/static/logo.png'

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
@@ -151,6 +129,7 @@ html_favicon = '../ngraph_theme/static/favicon.ico'
html_static_path = ['../ngraph_theme/static']

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ["../"]

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
@@ -165,26 +144,6 @@ html_sidebars = {
}

# Custom added feature to allow redirecting old URLs
#
# list of tuples (old_url, new_url) for pages to redirect
# (URLs should be relative to document root, only)
html_redirect_pages = [
    ('backend-support', 'backends/index'),
    ('core/core', 'core/overview.rst'),
    ('core/fusion', 'core/fusion/index'),
    ('frameworks/mxnet', 'frameworks/mxnet_intg.rst'),
    ('frameworks/onnx', 'frameworks/onnx_intg.rst'),
    ('frameworks/tensorflow', 'frameworks/tensorflow_connect.rst'),
    ('frameworks/paddle', 'frameworks/paddle_integ.rst'),
    ('inspection/inspection', 'inspection/index'),
    ('releases/release-notes', 'releases/index'),
    # ('getting_started/getting_starting', 'getting_started/index'),
    # mv to framework-specific helper directory
    ('project/project', 'project/index'),
    ('python_api/', 'python_api/index'),
]

# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
@@ -195,11 +154,11 @@ htmlhelp_basename = 'IntelnGraphlibrarydoc'
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
@@ -214,11 +173,10 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'nGraphCompilerStack.tex', 'nGraph Compiler Stack Documentation',
     'Intel Corporation', 'manual'),
    (master_doc, 'nGraphCompilerStack.tex', u'nGraph Compiler Stack Documentation',
     u'Intel Corporation', 'manual'),
]

# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
@@ -244,22 +202,7 @@ breathe_projects = {
}

rst_epilog = u"""
.. |codename| replace:: Intel nGraph
.. |project| replace:: Intel nGraph Library
.. |InG| replace:: Intel® nGraph
.. |copy| unicode:: U+000A9 .. COPYRIGHT SIGN
   :ltrim:
.. |deg| unicode:: U+000B0 .. DEGREE SIGN
   :ltrim:
.. |plusminus| unicode:: U+000B1 .. PLUS-MINUS SIGN
   :rtrim:
.. |micro| unicode:: U+000B5 .. MICRO SIGN
   :rtrim:
.. |trade| unicode:: U+02122 .. TRADEMARK SIGN
   :ltrim:
.. |reg| unicode:: U+000AE .. REGISTERED TRADEMARK SIGN
   :ltrim:
.. include:: /replacements.txt
"""

# -- autodoc Extension configuration --------------------------------------
doc/sphinx/ngraph_theme/ngversions.html
@@ -6,16 +6,14 @@
  </span>
  <div class="rst-other-versions">
    <dl>
      <dt>{{ _('Previous Versions') }}</dt>
      <dt>{{ _('Recent Versions') }}</dt>
      <dd>
        <!-- Until our https://docs.ngraph.ai/ publishing is set up, we link to GitHub -->
        <ul>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.20.0">0.20.0-rc.0</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.19.0-rc.2">0.19.0-rc.2</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.0">0.22</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.21.0">0.21.0</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.20.0">0.20.0</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.19.0">0.19.0</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.18.1">0.18.1</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.17.0-rc.1">0.17.0-rc.1</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.16.0-rc.3">0.16.0-rc.3</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.16.0-rc.2">0.16.0-rc.2</a></li>
          <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.16.0-rc.1">0.16.0-rc.1</a></li>
        </ul></dd>
    </dl>
    <dl>
doc/sphinx/source/backends/backend-api/index.rst
@@ -7,4 +7,3 @@ Backend

.. doxygenclass:: ngraph::runtime::Backend
   :project: ngraph
   :members:
doc/sphinx/source/backends/cpp-api.rst
@@ -7,9 +7,10 @@ Backend APIs
   :maxdepth: 1

   backend-api/index
   executable-api/index
   hosttensor-api/index
   dynamicbackend-api/index
   plaidml-ng-api/index
   executable-api/index

As of version ``0.15``, there is a new backend API to work with functions that
can be compiled as a runtime ``Executable``. Where previously ``Backend`` used a
doc/sphinx/source/backends/hosttensor-api/index.rst → doc/sphinx/source/backends/dynamicbackend-api/index.rst
.. backends/hosttensor-api/index.rst:

HostTensor
==========
DynamicBackend
==============

.. doxygenclass:: ngraph::runtime::HostTensor
.. doxygenclass:: ngraph::runtime::dynamic::DynamicBackend
   :project: ngraph
   :members:
doc/sphinx/source/backends/executable-api/index.rst
@@ -7,7 +7,6 @@ Executable

The ``compile`` function on an ``Executable`` has more direct methods to
actions such as ``validate``, ``call``, ``get_performance_data``, and so on.

.. doxygenclass:: ngraph::runtime::Executable
   :project: ngraph
   :members:
doc/sphinx/source/backends/index.rst
@@ -6,6 +6,7 @@ Developer Resources for Backends

* :ref:`what_is_backend`
* :ref:`how_to_use`
* :ref:`miscellaneous_resources`

.. _what_is_backend:
@@ -60,3 +61,57 @@ interface; each backend implements the following five functions:
  for later execution.

* And, finally, the ``call()`` method is used to invoke an nGraph function
  against a particular set of tensors.
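The create/compile/call flow described above also surfaces, thinly wrapped, in
the Python API. A minimal illustrative sketch, assuming the ``ngraph-core``
Python package with a CPU backend (the names follow the public Python API and
are not part of this commit):

   import numpy as np
   import ngraph as ng

   # Build a tiny graph: C = A + B
   a = ng.parameter(shape=[2, 2], dtype=np.float32, name='A')
   b = ng.parameter(shape=[2, 2], dtype=np.float32, name='B')
   model = a + b

   # runtime() selects a backend (Backend::create); computation() compiles
   # the function into something executable
   runtime = ng.runtime(backend_name='CPU')
   computation = runtime.computation(model, a, b)

   # Calling the computation invokes the compiled function on concrete tensors
   result = computation(np.ones((2, 2), dtype=np.float32),
                        np.full((2, 2), 2.0, dtype=np.float32))
   print('Result = ', result)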
.. _miscellaneous_resources:

Miscellaneous resources
=======================

Additional resources for device or framework-specific configurations:

OpenCL
------

OpenCL is needed for the :doc:`plaidml-ng-api/index`; this is not needed if
you have only a CPU backend.

#. Install the latest Linux driver for your system. You can find a list
   of drivers at https://software.intel.com/en-us/articles/opencl-drivers;
   You may need to install `OpenCL SDK`_ in case of an ``libOpenCL.so`` absence.

#. Any user added to "video" group:

   .. code-block:: console

      sudo usermod -a -G video <user_id>

   may, for example, be able to find details at the ``/sys/module/[system]/parameters/`` location.

nGraph Bridge from TensorFlow\*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When specified as the generic backend -- either manually or automatically
from a framework -- ``NGRAPH`` defaults to CPU, and it also allows for
additional device configuration or selection.

Because nGraph can select backends, specifying the ``INTELGPU``
backend as a runtime environment variable also works if one is
present in your system:

:envvar:`NGRAPH_TF_BACKEND="INTELGPU"`

An `axpy.py example`_ is optionally available to test; outputs will vary
depending on the parameters specified.

.. code-block:: console

   NGRAPH_TF_BACKEND="INTELGPU" python3 axpy.py

* ``NGRAPH_INTELGPU_DUMP_FUNCTION`` -- dumps nGraph’s functions
  in dot format.

.. _axpy.py example: https://github.com/tensorflow/ngraph-bridge/blob/master/examples/axpy.py
.. _OpenCL SDK: https://software.intel.com/en-us/opencl-sdk
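The `axpy.py example`_ referenced above is an ordinary TensorFlow 1.x script
that simply loads the bridge. A rough, illustrative equivalent is sketched
below, assuming the ``ngraph-tensorflow-bridge`` package (imported here as
``ngraph_bridge``) is installed; run it with ``NGRAPH_TF_BACKEND`` set as in
the hunk above:

   import numpy as np
   import tensorflow as tf
   import ngraph_bridge  # registers nGraph with TensorFlow (assumed module name)

   # axpy: result = a * x + y
   a = tf.constant(5.0, name='alpha')
   x = tf.placeholder(tf.float32, shape=(2, 3), name='x')
   y = tf.placeholder(tf.float32, shape=(2, 3), name='y')
   axpy = a * x + y

   with tf.Session() as sess:
       print(sess.run(axpy, feed_dict={x: np.ones((2, 3)),
                                       y: np.ones((2, 3))}))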
doc/sphinx/source/conf.py
@@ -73,11 +73,11 @@ author = 'Intel Corporation'
# built documents.
#
# The short X.Y version.
version = '0.21'
version = '0.22'
# The Documentation full version, including alpha/beta/rc tags. Some features
# available in the latest code will not necessarily be documented first
release = '0.21.0'
release = '0.22.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
doc/sphinx/source/frameworks/generic-configs.rst
@@ -5,8 +5,8 @@ Integrating new frameworks

This section details some of the *configuration options* and some of the
*environment variables* that can be used to tune for optimal performance when
your system already has a version of nGraph installed with one of our supported
backends.
your system already has a version of nGraph installed with one or more of our
supported :doc:`../backends/index`.

Regardless of the framework, after the :doc:`../buildlb` step, a good place
to start usually involves making the libraries available to the framework. On
@@ -19,15 +19,59 @@ something like:

   export LD_LIBRARY_PATH=path/to/ngraph_dist/lib/

Find or display nGraph Version
-------------------------------
Find or display version
=======================

If you're working with the :doc:`../python_api/index`, the following command
may be useful:

.. code-block:: console

   python3 -c "import ngraph as ng; print('nGraph version: ',ng.__version__)";

To manually build a newer version than is available from the latest `PyPI`_
(:abbr:`Python Package Index (PyPI)`), see our nGraph Python API `BUILDING.md`_
documentation.

Activate logtrace-related environment variables
===============================================

Another configuration option is to activate ``NGRAPH_CPU_DEBUG_TRACER``,
a runtime environment variable that supports extra logging and debug detail.

This is a useful tool for data scientists interested in outputs from logtrace
files that can, for example, help in tracking down model convergences. It can
also help engineers who might want to add their new ``Backend`` to an existing
framework to compare intermediate tensors/values to references from a CPU
backend.

To activate this tool, set the ``env`` var ``NGRAPH_CPU_DEBUG_TRACER=1``.
It will dump ``trace_meta.log`` and ``trace_bin_data.log``. The names of the
logfiles can be customized.

To specify the names of logs with those flags:

::

   NGRAPH_TRACER_LOG = "meta.log"
   NGRAPH_BIN_TRACER_LOG = "bin.log"

The meta_log contains::

   kernel_name, serial_number_of_op, tensor_id, symbol_of_in_out, num_elements, shape, binary_data_offset, mean_of_tensor, variance_of_tensor

A line example from a unit-test might look like::

   K=Add S=0 TID=0_0 >> size=4 Shape{2, 2} bin_data_offset=8 mean=1.5 var=1.25

The binary_log line contains::

   tensor_id, binary data (tensor data)

A reference for the implementation of parsing these logfiles can also be found
in the unit test for this feature.
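Because the meta-log lines shown above are plain text, they are easy to
post-process. A minimal, illustrative parser for that ``K=... S=... TID=...``
format (a hypothetical helper, not the reference parser provided by the unit
tests) could look like:

   import re

   META_LINE = re.compile(
       r'K=(?P<kernel>\S+)\s+S=(?P<serial>\d+)\s+TID=(?P<tensor_id>\S+)\s+>>\s+'
       r'size=(?P<num_elements>\d+)\s+Shape\{(?P<shape>[^}]*)\}\s+'
       r'bin_data_offset=(?P<offset>\d+)\s+mean=(?P<mean>\S+)\s+var=(?P<var>\S+)')

   def parse_meta_log(path):
       """Yield one dict per tensor record in a trace_meta.log-style file."""
       with open(path) as log:
           for line in log:
               match = META_LINE.search(line)
               if match:
                   record = match.groupdict()
                   record['shape'] = [int(d) for d in record['shape'].split(',') if d.strip()]
                   yield record

   # e.g. summarize all Add kernels recorded in the trace
   for rec in parse_meta_log('trace_meta.log'):
       if rec['kernel'] == 'Add':
           print(rec['tensor_id'], rec['shape'], rec['mean'], rec['var'])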
FMV
---
@@ -37,7 +81,7 @@ number of generic ways to patch or bring architecture-based optimizations to
the :abbr:`Operating System (OS)` that is handling your ML environment. See
the `GCC wiki for details`_.

If your nGraph build is a Neural Network configured on Clear Linux* OS
If your nGraph build is a Neural Network configured on Clear Linux\* OS
for Intel® Architecture, and it includes at least one older CPU, the
`following article may be helpful`_.
@@ -63,11 +107,14 @@ For CPU (and most cuDNN) backends, the preferred layout is currently ``NCHW``.

Intel® Math Kernel Library for Deep Neural Networks
---------------------------------------------------

The following `KMP options`_ were originally optimized for models using the
.. important:: Intel® MKL-DNN is automatically enabled as part of an
   nGraph default :doc:`build <../buildlb>`; you do *not* need to add it
   separately or as an additional component to be able to use these
   configuration settings.
The following `KMP`_ options were originally optimized for models using the
Intel® `MKL-DNN`_ to train models with the ``NCHW`` data layout; however, other
configurations can be explored. MKL-DNN is automatically enabled as part of an
nGraph compilation; you do *not* need to add MKL-DNN separately or as an
additional component to be able to use these configuration settings.
configurations can be explored.

* ``KMP_BLOCKTIME`` Sets the time, in milliseconds, that a thread should wait
  after completing the execution of a parallel region, before sleeping.
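These KMP/OMP settings are ordinary environment variables; one illustrative way
to pin them is a small launcher that sets them before the framework (and its
OpenMP runtime) is imported. The values below are placeholders rather than
recommendations:

   import os
   import multiprocessing

   # OpenMP/KMP variables must be in the environment before the OpenMP runtime
   # initializes, i.e. before importing the framework that links MKL-DNN.
   os.environ.setdefault('KMP_BLOCKTIME', '1')        # ms to spin before sleeping
   os.environ.setdefault('KMP_AFFINITY', 'granularity=fine,compact,1,0')
   os.environ.setdefault('OMP_NUM_THREADS', str(multiprocessing.cpu_count()))

   import ngraph as ng  # or the bridged framework, e.g. TensorFlow + ngraph_bridge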
@@ -128,10 +175,11 @@ Convolution shapes

``OMP_NUM_THREADS``
^^^^^^^^^^^^^^^^^^^

The best resource for this configuration option is the `gnu.org site`_
``OMP_NUM_THREADS`` defaults to the number of logical cores. To check the
number of cores on your system, you can run the following on the command-line to
see the details of your CPU:
The best resource for this configuration option is the Intel® OpenMP\* docs
at the following link: `Intel OpenMP documentation`_. ``OMP_NUM_THREADS``
defaults to the number of logical cores. To check the number of cores on your
system, you can run the following on the command-line to see the details
of your CPU:

.. code-block:: console
@@ -171,11 +219,12 @@ achieve the best performance for NN workloads on CPU platforms. The nGraph
Compiler stack runs on transformers handled by Intel® Architecture (IA), and
thus can make more efficient use of the underlying hardware.

.. _KMP options: https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-controlling-thread-allocation
.. KMP options: https://software.intel.com/en-us/node/522691
.. _PyPI: https://pypi.org/project/ngraph-core
.. _KMP: https://software.intel.com/en-us/node/522691
.. _MKL-DNN: https://github.com/intel/mkl-dnn
.. _gnu.org site: https://gcc.gnu.org/onlinedocs/libgomp/Environment-Variables.html
.. _Intel OpenMP documentation: https://www.openmprtl.org/documentation
.. _Movidius: https://www.movidius.com/
.. _BUILDING.md: https://github.com/NervanaSystems/ngraph/blob/master/python/BUILDING.md
.. _GCC wiki for details: https://gcc.gnu.org/wiki/FunctionMultiVersioning
.. _following article may be helpful: https://clearlinux.org/documentation/clear-linux/tutorials/fmv
doc/sphinx/source/frameworks/mxnet_integ.rst
.. frameworks/mxnet_integ.rst:

MXNet\* bridge
===============
==============

.. deprecated:: 0.21.0

* See the nGraph-MXNet `Integration Guide`_ on the nGraph-MXNet repo.
doc/sphinx/source/frameworks/onnx_integ.rst
@@ -10,7 +10,7 @@ nGraph's internal representation and converted to ``Function`` objects, which
can be compiled and executed on one of nGraph's backends.

You can use nGraph's Python API to run an ONNX model and nGraph can be used
as an ONNX backend using the add-on package `nGraph-ONNX <ngraph_onnx>`_.
as an ONNX backend using the add-on package `nGraph ONNX`_.

.. note:: In order to support ONNX, nGraph must be built with the
@@ -33,8 +33,7 @@ for nGraph, ONNX and NumPy:

Importing an ONNX model
-----------------------

You can download models from the `ONNX Model Zoo <onnx_model_zoo_>`_.
For example ResNet-50:
You can download models from the `ONNX Model Zoo`_. For example, ResNet-50:

::
@@ -92,9 +91,9 @@ data:

Find more information about nGraph and ONNX in the
`nGraph-ONNX <ngraph_onnx>`_ GitHub repository.
`nGraph ONNX`_ GitHub repository.

.. _ngraph_onnx: https://github.com/NervanaSystems/ngraph-onnx/
.. _ngraph_onnx_building: https://github.com/NervanaSystems/ngraph-onnx/blob/master/BUILDING.md
.. _onnx_model_zoo: https://github.com/onnx/models
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
.. _ngraph ONNX building: https://github.com/NervanaSystems/ngraph-onnx/blob/master/BUILDING.md
.. _ONNX model zoo: https://github.com/onnx/models
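Running an imported model through the add-on package generally follows the
pattern sketched below. This is illustrative only; the ``import_onnx_file``
helper and its module path should be checked against the ngraph-onnx version
actually installed (see the BUILDING.md linked above):

   import numpy as np
   import ngraph as ng
   from ngraph_onnx.onnx_importer.importer import import_onnx_file

   # Convert the ONNX file into an nGraph Function
   ng_function = import_onnx_file('model.onnx')

   # Compile and run it on a backend
   runtime = ng.runtime(backend_name='CPU')
   model = runtime.computation(ng_function)
   picture = np.ones([1, 3, 224, 224], dtype=np.float32)  # dummy ResNet-50 input
   print(model(picture))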
doc/sphinx/source/index.rst
.. ---------------------------------------------------------------------------
.. Copyright 2018-2019 Intel Corporation
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
doc/sphinx/source/inspection/index.rst
@@ -6,9 +6,9 @@ Visualization Tools

nGraph provides serialization and deserialization facilities, along with the
ability to create image formats or a PDF.

When visualization is enabled, a ``dot`` file gets generated, along with a
``png``. The default can be adjusted by setting the
``NGRAPH_VISUALIZE_TREE_OUTPUT_FORMAT`` flag to another format, like PDF.
When visualization is enabled, ``svg`` files for your graph get generated. The
default can be adjusted by setting the ``NGRAPH_VISUALIZE_TRACING_FORMAT``
flag to another format, like PNG or PDF.

.. note:: Large graphs are usually not legible with formats like PDF.
doc/sphinx/source/project/doc-contributor-README.rst
@@ -18,9 +18,36 @@

Contributing to documentation
=============================

.. important:: Read this for changes affecting **anything** in ``ngraph/doc``
.. note:: Tips for contributors who are new to the highly-dynamic
   environment of documentation in AI software:

For updates to the nGraph Library ``/doc`` repo, please submit a PR with
   * A good place to start is "document something you figured out how to
     get working". Content changes and additions should be targeted at
     something more specific than "developers". If you don't understand
     how varied and wide the audience is, you'll inadvertently break or block
     things.

   * There are experts who work on all parts of the stack; try asking how
     documentation changes ought to be made in their respective sections.

   * Start with something small. It is okay to add a "patch" to fix a typo
     or suggest a word change; larger changes to files or structure require
     research and testing first, as well as some logic for why you think
     something needs changed.

   * Most documentation should wrap at about ``80``. We do our best to help
     authors source-link and maintain their own code and contributions;
     overwriting something already documented doesn't always improve it.

   * Be careful editing files with links already present in them; deleting
     links to papers, citations, or sources is discouraged.

   * Please do not submit Jupyter* notebook code to the nGraph Library
     or core repos; best practice is to maintain any project-specific
     examples, tests, or walk-throughs in a separate repository and to link
     back to the stable ``op`` or Ops that you use in your project.

For updates within the nGraph Library ``/doc`` repo, please submit a PR with
any changes or ideas you'd like integrated. This helps us maintain trackability
with respect to changes made, additions, deletions, and feature requests.
@@ -36,7 +63,6 @@ it for a specific use case. Add a note on our wiki to show us what you
did; new and novel applications may have their projects highlighted on an
upcoming `ngraph.ai`_ release.

.. note:: Please do not submit Jupyter* notebook code to the nGraph Library
   or core repos; best practice is to maintain any project-specific examples,
   tests, or walk-throughs in a separate repository.
@@ -45,9 +71,11 @@ upcoming `ngraph.ai`_ release.

Documenting source code examples
--------------------------------

When **verbosely** documenting functionality of specific sections of code -- whether
they are entire code blocks within a file, or code strings that are **outside**
the nGraph Library's `documentation repo`_, here is an example of best practice:
When **verbosely** documenting functionality of specific sections of code --
whether they are entire code blocks within a file, or code strings that are
**outside** the nGraph Library's `documentation repo`_, here is an example
of best practice:

Say a file has some interesting functionality that could benefit from more
explanation about one or more of the pieces in context. To keep the "in context"
@@ -73,8 +101,7 @@ the folder where the ``Makefile`` is that generates the documentation you're
writing.

See the **note** at the bottom of this page for more detail about how
this works in the current |version| version of Intel nGraph library
documentation.
this works in the current |version| version of nGraph Library documentation.

Adding captions to code blocks
doc/sphinx/source/project/release-notes.rst
@@ -11,29 +11,36 @@ This page includes additional documentation updates.

We are pleased to announce the release of version |version|-doc.

0.21-doc
==============================

Core updates for |version|
~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ More ONNX ops
+ Optimizations
+ Don't reseed RNG on each use

0.22-doc
--------

Documentation updates
~~~~~~~~~~~~~~~~~~~~~

+ Initial doc and API for IntelGPU backend.
+ DynamicBackend API.
+ Note deprecation of support of MXNet's ``ngraph-mxnet`` PyPI.
+ Noted changes on graph inspection options resultant from PR 3016.
+ Added better tips and details to doc-contributor-README.

Summary of documentation-related changes:

+ Update :doc:`doc-contributor-README` for new community-based contributions.
+ Added instructions on how to test or display the installed nGraph version.
+ Added instructions on building nGraph bridge (ngraph-bridge).
+ Updated Backend Developer Guides and ToC structure.
+ Tested documentation build on Clear Linux OS; it works.
+ Fixed a few links and redirs affected by filename changes.
+ Some coding adjustments for options to render math symbols, so they can be
  documented more clearly and without excessive JS (see replacements.txt).
+ Consistent filenaming on all BE indexes.
+ Remove deprecated TensorAPI.
+

.. important:: Pre-releases (``-rc-0.*``) have newer features, and are less stable.

Core updates for |version|
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Changelog on Previous Releases
==============================

For downloads formatted as ``.zip`` and ``tar.gz``, see
https://github.com/NervanaSystems/ngraph/releases.

0.21
----

+ The offset argument in tensor reads and writes has been removed
+ Save/load API
@@ -43,16 +50,24 @@ Core updates for |version|

+ Provenance improvements
+ offset arg for tensor creation is deprecated
+ static linking support
+ Initial test of 0.21-doc

0.21-doc
--------

.. important:: Pre-releases (``-rc-0.*``) have newer features, and are less stable.

Summary of documentation-related changes:
Changelog on Previous Releases
==============================

+ Updated :doc:`doc-contributor-README` for new community-based contributions.
+ Added instructions on how to test or display the installed nGraph version.
+ Added instructions on building nGraph bridge (ngraph-bridge).
+ Updated Backend Developer Guides and ToC structure.
+ Tested documentation build on Clear Linux OS; it works.
+ Fixed a few links and redirs affected by filename changes.
+ Some coding adjustments for options to render math symbols, so they can be
  documented more clearly and without excessive JS (see replacements.txt).
+ Consistent filenaming on all BE indexes.
+ Removed deprecated TensorAPI.

For downloads formatted as ``.zip`` and ``tar.gz``, see
https://github.com/NervanaSystems/ngraph/releases.

0.20
----
python/BUILDING.md
@@ -2,10 +2,10 @@

## Building nGraph Python Wheels

If you want to try a newer version of nGraph's Python API than is available
from PyPI, you can build your own latest version from the source code. This
process is very similar to what is outlined in our [ngraph_build] instructions
with two important differences:
If you want to try a newer version of nGraph's Python API than is available
from PyPI, you can build the latest version from source code. This process is
very similar to what is outlined in our [ngraph_build] instructions with two
important differences:

1. You must specify: `-DNGRAPH_PYTHON_BUILD_ENABLE=ON` and
   `-DNGRAPH_ONNX_IMPORT_ENABLE=ON` when running `cmake`.
@@ -18,7 +18,7 @@ with two important differences:

After this procedure completes, the `ngraph/build/python/dist` directory should
contain the Python packages of the version you cloned. For example, if you
checked out and built `0.21`, you may see something like:
checked out and built `0.21` for Python 3.7, you might see something like:

    $ ls python/dist/
    ngraph-core-0.21.0rc0.tar.gz
python/README.md
@@ -18,8 +18,8 @@ boost compared to native implementations.

nGraph can be used directly with the [Python API][api_python] described here, or
with the [C++ API][api_cpp] described in the [core documentation]. Alternatively,
its performance benefits can be realized through a frontend such as
[TensorFlow][frontend_tf], [MXNet][frontend_mxnet], and [ONNX][frontend_onnx].
its performance benefits can be realized through frontends such as
[TensorFlow][frontend_tf], [PaddlePaddle][paddle_paddle] and [ONNX][frontend_onnx].
You can also create your own custom framework to integrate directly with the
[nGraph Ops] for highly-targeted graph execution.
@@ -77,7 +77,7 @@ print('Result = ', result)

[up to 45X]: https://ai.intel.com/ngraph-compiler-stack-beta-release/
[frontend_onnx]: https://pypi.org/project/ngraph-onnx/
[frontend_mxnet]: https://pypi.org/project/ngraph-mxnet/
[paddle_paddle]: https://ngraph.nervanasys.com/docs/latest/frameworks/paddle_integ.html
[frontend_tf]: https://pypi.org/project/ngraph-tensorflow-bridge/
[ngraph_github]: https://github.com/NervanaSystems/ngraph "nGraph on GitHub"
[ngraph_building]: https://github.com/NervanaSystems/ngraph/blob/master/python/BUILDING.md "Building nGraph"