Commit 9b277b33 authored by Leona C's avatar Leona C

Add doc on iGPU to v0.22 doc

parent 79587d93
......@@ -73,11 +73,11 @@ author = 'Intel Corporation'
# built documents.
#
# The short X.Y version.
version = '0.21'
version = '0.22'
# The Documentation full version, including alpha/beta/rc tags. Some features
# available in the latest code will not necessarily be documented first
release = '0.21.0'
release = '0.22-doc'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
......
......@@ -20,15 +20,45 @@ something like:
Find or display nGraph Version
-------------------------------
==============================
If you're working with the :doc:`../python_api/index`, the following command
may be useful:

.. code-block:: console

   python3 -c "import ngraph as ng; print('nGraph version: ', ng.__version__)"
Localize environment variables
==============================
Another generic configuration option is ``NGRAPH_CPU_DEBUG_TRACER``, a runtime
environment variable that serves as a useful debugging tool for data scientists.
To activate this tool, set ``NGRAPH_CPU_DEBUG_TRACER=1`` in your environment;
nGraph will then dump two logs, ``trace_meta.log`` and ``trace_bin_data.log``.
To specify custom names for these logs, set the following variables:

::

   NGRAPH_TRACER_LOG="meta.log"
   NGRAPH_BIN_TRACER_LOG="bin.log"
Each line of the meta log contains::

   kernel_name, serial_number_of_op, tensor_id, symbol_of_in_out, num_elements, shape, binary_data_offset, mean_of_tensor, variance_of_tensor

An example line from a unit test might look like::

   K=Add S=0 TID=0_0 >> size=4 Shape{2, 2} bin_data_offset=8 mean=1.5 var=1.25

Each line of the binary log contains::

   tensor_id, binary data (tensor data)
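Putting the pieces above together, a typical debugging session might set these
variables before running a model (a sketch; the log file names are arbitrary
examples):

```shell
# Enable the CPU debug tracer and override the default log names.
# The variable names come from the section above; the values are examples.
export NGRAPH_CPU_DEBUG_TRACER=1
export NGRAPH_TRACER_LOG="meta.log"
export NGRAPH_BIN_TRACER_LOG="bin.log"

# Any nGraph CPU-backend run launched from this shell now writes its
# per-kernel trace to meta.log and the raw tensor data to bin.log.
echo "tracer enabled: ${NGRAPH_CPU_DEBUG_TRACER}"
```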
FMV
---
......
......@@ -10,7 +10,7 @@ nGraph's internal representation and converted to ``Function`` objects, which
can be compiled and executed on one of nGraph's backends.
You can use nGraph's Python API to run an ONNX model and nGraph can be used
as an ONNX backend using the add-on package `nGraph-ONNX <ngraph_onnx>`_.
as an ONNX backend using the add-on package `nGraph ONNX`_.
.. note:: In order to support ONNX, nGraph must be built with the
......@@ -33,8 +33,7 @@ for nGraph, ONNX and NumPy:
Importing an ONNX model
-----------------------
You can download models from the `ONNX Model Zoo <onnx_model_zoo_>`_.
For example ResNet-50:
You can download models from the `ONNX Model Zoo`_. For example, ResNet-50:
::
......@@ -92,9 +91,9 @@ data:
Find more information about nGraph and ONNX in the
`nGraph-ONNX <ngraph_onnx>`_ GitHub repository.
`nGraph ONNX`_ GitHub repository.
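The import step can be sketched in a few lines (an illustrative sketch,
assuming the ``ngraph-onnx`` and ``onnx`` packages are installed; the importer
function name follows the ``ngraph_onnx`` package layout, and the model path
is hypothetical):

```python
def load_onnx_model(path="resnet50/model.onnx"):
    """Deserialize an ONNX file and hand it to the nGraph importer.

    The imports are done lazily so this sketch can be read (and its
    structure checked) even on a machine without ngraph installed.
    """
    import onnx
    from ngraph_onnx.onnx_importer.importer import import_onnx_model

    model = onnx.load(path)          # ONNX protobuf -> ModelProto
    return import_onnx_model(model)  # ModelProto -> nGraph representation
```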
.. _ngraph_onnx: https://github.com/NervanaSystems/ngraph-onnx/
.. _ngraph_onnx_building: https://github.com/NervanaSystems/ngraph-onnx/blob/master/BUILDING.md
.. _onnx_model_zoo: https://github.com/onnx/models
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
.. _ngraph ONNX building: https://github.com/NervanaSystems/ngraph-onnx/blob/master/BUILDING.md
.. _ONNX model zoo: https://github.com/onnx/models
......@@ -6,9 +6,9 @@ Visualization Tools
nGraph provides serialization and deserialization facilities, along with the
ability to create image formats or a PDF.
When visualization is enabled, a ``dot`` file gets generated, along with a
``png``. The default can be adjusted by setting the
``NGRAPH_VISUALIZE_TREE_OUTPUT_FORMAT`` flag to another format, like PDF.
When visualization is enabled, ``svg`` files for your graph get generated. The
default can be adjusted by setting the ``NGRAPH_VISUALIZE_TRACING_FORMAT``
flag to another format, like PNG or PDF.
.. note:: Large graphs are usually not legible with formats like PDF.
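For example, to switch the generated files from the default ``svg`` to PDF
(a sketch using the flag named above; the accepted value strings are an
assumption):

```shell
# Ask nGraph's visualization pass for PDF output instead of the
# default SVG. The flag name is taken from the section above; the
# accepted value strings are an assumption.
export NGRAPH_VISUALIZE_TRACING_FORMAT="pdf"
echo "visualization format: ${NGRAPH_VISUALIZE_TRACING_FORMAT}"
```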
......
......@@ -18,9 +18,36 @@
Contributing to documentation
=============================
.. important:: Read this for changes affecting **anything** in ``ngraph/doc``
.. note:: Tips for contributors who are new to the highly-dynamic
environment of documentation in AI software:
For updates to the nGraph Library ``/doc`` repo, please submit a PR with
* A good place to start is "document something you figured out how to
  get working". Content changes and additions should be targeted at
  something more specific than "developers"; the audience is wide and
  varied, and changes aimed too broadly can inadvertently break or
  block things for some readers.
* There are experts who work on all parts of the stack; try asking how
documentation changes ought to be made in their respective sections.
* Start with something small. It is okay to add a "patch" to fix a typo
  or suggest a word change; larger changes to files or structure require
  research and testing first, as well as some rationale for why you think
  something needs to be changed.
* Most documentation should wrap at about ``80`` characters per line. We
  do our best to help authors source-link and maintain their own code and
  contributions; overwriting something already documented doesn't always
  improve it.
* Be careful when editing files that already contain links; deleting
  links to papers, citations, or sources is discouraged.
* Please do not submit Jupyter* notebook code to the nGraph Library
or core repos; best practice is to maintain any project-specific
examples, tests, or walk-throughs in a separate repository and to link
back to the stable ``op`` or Ops that you use in your project.
For updates within the nGraph Library ``/doc`` repo, please submit a PR with
any changes or ideas you'd like integrated. This helps us maintain
traceability with respect to changes, additions, deletions, and feature requests.
......@@ -31,12 +58,11 @@ files to another format with a tool like ``pypandoc`` and share a link
to your efforts on our `wiki`_.
Another option is to fork the `ngraph repo`_, essentially snapshotting it at
that point in time, and to build a Jupyter\* notebook or other set of docs around
that point in time, and to build a Jupyter\* notebook or another set of docs around
it for a specific use case. Add a note on our wiki to show us what you
did; new and novel applications may have their projects highlighted on an
upcoming `ngraph.ai`_ release.
.. note:: Please do not submit Jupyter* notebook code to the nGraph Library
or core repos; best practice is to maintain any project-specific examples,
tests, or walk-throughs in a separate repository.
......@@ -45,9 +71,11 @@ upcoming `ngraph.ai`_ release.
Documenting source code examples
--------------------------------
When **verbosely** documenting functionality of specific sections of code -- whether
they are entire code blocks within a file, or code strings that are **outside**
the nGraph Library's `documentation repo`_, here is an example of best practice:
When **verbosely** documenting functionality of specific sections of code --
whether they are entire code blocks within a file, or code strings that are
**outside** the nGraph Library's `documentation repo`_ -- the following is
an example of best practice:
Say a file has some interesting functionality that could benefit from more
explanation about one or more of the pieces in context. To keep the "in context"
......@@ -73,8 +101,7 @@ the folder where the ``Makefile`` is that generates the documentation you're
writing.
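For instance, the relevant lines of a source file can be pulled directly into
the documentation rather than copied by hand (a minimal sketch; the path, line
range, and language here are hypothetical):

```rst
.. literalinclude:: ../../examples/abc/main.cpp
   :language: cpp
   :lines: 48-56
```

Because the lines come from the file itself, the documented snippet stays in
sync with the code as long as the line range is kept current.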
See the **note** at the bottom of this page for more detail about how
this works in the current |version| version of Intel nGraph library
documentation.
this works in the current |version| version of nGraph Library documentation.
Adding captions to code blocks
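A caption can be attached with the ``:caption:`` option of the ``code-block``
directive (a minimal sketch; the caption text and snippet are hypothetical):

```rst
.. code-block:: console
   :caption: Hypothetical example -- checking the installed nGraph version

   python3 -c "import ngraph as ng; print(ng.__version__)"
```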
......
......@@ -2,8 +2,8 @@
## Building nGraph Python Wheels
If you want to try a newer version of nGraph's Python API than is available from
PyPI, you can build your own latest version from the source code. This
If you want to try a newer version of nGraph's Python API than is available
from PyPI, you can build your own latest version from the source code. This
process is very similar to what is outlined in our [ngraph_build] instructions
with two important differences:
......
......@@ -18,8 +18,8 @@ boost compared to native implementations.
nGraph can be used directly with the [Python API][api_python] described here, or
with the [C++ API][api_cpp] described in the [core documentation]. Alternatively,
its performance benefits can be realized through a frontend such as
[TensorFlow][frontend_tf], [MXNet][frontend_mxnet], and [ONNX][frontend_onnx].
its performance benefits can be realized through frontends such as
[TensorFlow][frontend_tf], [PaddlePaddle][paddle_paddle], and [ONNX][frontend_onnx].
You can also create your own custom framework to integrate directly with the
[nGraph Ops] for highly-targeted graph execution.
......@@ -77,7 +77,7 @@ print('Result = ', result)
[up to 45X]: https://ai.intel.com/ngraph-compiler-stack-beta-release/
[frontend_onnx]: https://pypi.org/project/ngraph-onnx/
[frontend_mxnet]: https://pypi.org/project/ngraph-mxnet/
[paddle_paddle]: https://ngraph.nervanasys.com/docs/latest/frameworks/paddle_integ.html
[frontend_tf]: https://pypi.org/project/ngraph-tensorflow-bridge/
[ngraph_github]: https://github.com/NervanaSystems/ngraph "nGraph on GitHub"
[ngraph_building]: https://github.com/NervanaSystems/ngraph/blob/master/python/BUILDING.md "Building nGraph"
......