Commit 18d20b7b authored by Leona C, committed by Scott Cyphers

Fix version module and add latest release notes; update some doc indexes and menus (#3714)

* Release notes 0.26.0-rc.3 and revise bridges overview

* Fix version module and add latest release notes; update some doc menus

* Let release notes be discoverable

* Documentation master index

* Documentation index pg

* Update ops for version

* Update illustration

* Remove old illustration

* Add note for 0-rc.4

* Add section on NGRAPH_PROFILE_PASS to backend API

* rc-0.5

* Final preview 0.26 release notes

* Note changes on Sum validation

* Fix link syntax that breaks on Sphinx 3.0

* Revert file that triggers unnecessary review

* Typo fix

* PR feedback

* Consistency on section title capitalizations

* Update link

* Section title consistency
parent b7a87656
......@@ -84,9 +84,9 @@
<body>
<div id="menu-float" class="menu-float">
<a href="https://www.ngraph.ai">Home</a>
<a href="https://www.youtube.com/embed/C9S0nmNS8bQ">Video</a>
<a href="https://www.ngraph.ai/ecosystem">Ecosystem</a>
<a href="https://www.ngraph.ai" target="_blank">Home</a>
<a href="https://www.youtube.com/embed/C9S0nmNS8bQ" target="_blank">Video</a>
<a href="https://www.ngraph.ai/ecosystem" target="_blank">Ecosystem</a>
<a href="https://ngraph.nervanasys.com/docs/latest">Docs</a>
<a href="https://www.ngraph.ai/tutorials">Tutorials</a>
<a href="https://ngraph.slack.com/"><img src="https://cdn.brandfolder.io/5H442O3W/as/pl546j-7le8zk-5h439l/Slack_Mark_Monochrome_White.png?width=35&height=35"></a>
......
......@@ -10,13 +10,12 @@
<dd><!-- Until our https://docs.ngraph.ai/ publishing is set up, we link to GitHub -->
<ul>
<!-- <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26">0.26</a></li> -->
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0">0.26.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0">0.25.1</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0-rc.3">Prerelease 0.26</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.1-rc.4">0.25.1</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.0">0.25.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.24.0">0.24.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.23.0">0.23.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.0">0.22.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.21.0">0.21.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.2-rc.0">0.22.2</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.1">0.22.1</a></li>
</ul></dd>
</dl>
<dl>
......
......@@ -1645,6 +1645,7 @@ h1 {
h2 {
font-size: 133%;
text-decoration: underline 4px dotted #D3D3D3;
margin-top: -2px;
}
h2, .rst-content .toctree-wrapper p.caption {
......
......@@ -51,7 +51,7 @@ How to use?
#. A single iteration of the executable is executed by calling the ``call``
method on the ``Executable`` object.
.. figure:: ../graphics/execution-interface.png
.. figure:: ../graphics/ExecutionInterfaceRunGraphs.png
:width: 650px
The execution interface for nGraph
......@@ -69,6 +69,13 @@ interface; each backend implements the following five functions:
* And, finally, the ``call()`` method is used to invoke an nGraph function
against a particular set of tensors.
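
The following is a minimal sketch of how these pieces fit together for a
trivial function; exact headers and method signatures can differ slightly
between nGraph releases, so treat it as an outline rather than a verbatim
recipe.

.. code-block:: cpp

   #include <ngraph/ngraph.hpp>
   #include <vector>

   using namespace ngraph;

   // Build a trivial function: f(a, b) = a + b
   auto a = std::make_shared<op::Parameter>(element::f32, Shape{2, 2});
   auto b = std::make_shared<op::Parameter>(element::f32, Shape{2, 2});
   auto f = std::make_shared<Function>(std::make_shared<op::Add>(a, b),
                                       ParameterVector{a, b});

   // Select a backend and compile the function into an Executable
   auto backend = runtime::Backend::create("CPU");
   auto exec = backend->compile(f);

   // Create tensors on the backend and fill the inputs
   auto ta = backend->create_tensor(element::f32, Shape{2, 2});
   auto tb = backend->create_tensor(element::f32, Shape{2, 2});
   auto tr = backend->create_tensor(element::f32, Shape{2, 2});
   std::vector<float> va{1, 2, 3, 4};
   std::vector<float> vb{5, 6, 7, 8};
   ta->write(va.data(), va.size() * sizeof(float));
   tb->write(vb.data(), vb.size() * sizeof(float));

   // A single iteration: call the Executable on the tensors
   exec->call({tr}, {ta, tb});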
How to display nGraph-related passes executed during runtime?
-------------------------------------------------------------
One easy way to get information about passes is to set the environment variable
:envvar:`NGRAPH_PROFILE_PASS_ENABLE=1`. With this set, the pass manager
will dump the name and execution time of each pass.
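
For example, in a Linux shell this might look like the following, where
``my_ngraph_app`` is only a placeholder for whatever program compiles and runs
your nGraph function:

.. code-block:: console

   $ export NGRAPH_PROFILE_PASS_ENABLE=1
   $ ./my_ngraph_app   # placeholder; any program that compiles an nGraph function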
.. _ngraph_bridge:
......
.. howto/index:
.. core/constructing-graphs/index.rst:
Constructing graphs
.. _constructing_graphs:
Constructing Graphs
===================
.. toctree::
......
.. fusion/index.rst:
Pattern matcher
Pattern Matcher
###############
.. toctree::
......
.. fusion/overview.rst
.. core/fusion/overview.rst
.. _fusion_overview:
Overview: Optimize graphs with nGraph Compiler fusions
-------------------------------------------------------
......
.. core/overview.rst:
Overview
========
Basic concepts
==============
.. figure:: ../graphics/whole-stack.png
.. figure:: ../graphics/nGraphCompilerstack.png
:alt: The whole stack
The whole nGraph Compiler stack
......@@ -13,7 +13,7 @@ The nGraph Compiler stack consists of bridges, core, and backends. We'll examine
each of these briefly to get started.
A framework bridge interfaces with the "frontend" Core API. A framework bridge
is a component that sits between a framework like TensorFlow or MXNet, and the
is a component that sits between a framework like TensorFlow or PaddlePaddle, and the
nGraph Core frontend API. A framework bridge does two things: first, it
translates a framework's operations into graphs in nGraph’s in-memory
:abbr:`Intermediary Representation (IR)`. Second, it executes the nGraph IR
......@@ -24,8 +24,9 @@ are some common patterns: a fairly typical example for a graph-based framework
is illustrated here, and consists of basically two phases: a **clustering**
phase and a **translation** phase.
.. figure:: ../graphics/translation-flow-to-ng-fofx.png
:alt: The whole stack
.. figure:: ../graphics/overview-translation-flow.svg
:width: 725px
:alt: Translation flow to an nGraph function graph
Translation flow to an nGraph function
......
.. fusion/passes-that-use-matcher.rst:
.. core/passes/passes-that-use-matcher.rst:
Passes that use Matcher
......
.. core/passes:
.. core/passes/passes.rst:
Compiler passes
.. _core_compiler_passes:
Compiler Passes
===============
.. toctree::
......@@ -11,9 +13,8 @@ Compiler passes
passes-that-use-matcher.rst
Overview
--------
Basic concepts
--------------
*Generic graph optimization passes*
......
:orphan:
.. frameworks/fw_overview:
.. _fw_overview:
Overview
========
A framework is "supported" with a framework :term:`bridge` that can be written or
cloned and used to connect to nGraph device backends while maintaining the
framework's programmatic or user interface. There is a bridge for the
`TensorFlow framework`_. We also have a :doc:`paddle_integ` bridge. Intel
previously contributed work to an MXNet bridge; however, support for this
bridge is no longer active.
`ONNX`_ on its own is not a framework; however, it can be used with nGraph's
:doc:`../python_api/index` to import and execute ONNX models.
.. figure:: ../graphics/overview-framework-bridges.svg
:width: 960px
:alt: Framework bridge to a graph construction
Framework bridge to nGraph
Once connected via the bridge, the framework can then run and train a deep
learning model with various workloads on various backends using nGraph Compiler
as an optimizing compiler available through the framework.
While a :abbr:`Deep Learning (DL)` :term:`framework` is ultimately meant for
end use by data scientists, or for deployment in cloud container environments,
nGraph Core ops and the nGraph C++ Library are designed for framework builders
themselves. We invite anyone working on new and novel frameworks or neural
network designs to explore our highly-modularized stack of components that
can be implemented or integrated in countless ways.
Please read this section if you are considering incorporating components from
the nGraph Compiler stack in your framework or neural network design. Contents
here are also useful if you are working on something built-from-scratch, or on
an existing framework that is less widely-supported than the popular frameworks
like TensorFlow and PyTorch.
.. figure:: ../graphics/overview-translation-flow.svg
:width: 725px
:alt: Translation flow to nGraph function graph
.. _TensorFlow framework: https://github.com/tensorflow/ngraph-bridge/README.md
.. _ONNX: http://onnx.ai/
.. _tune the workload to extract best performance: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations
.. _a few small: https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi
.. frameworks/index.rst
Working with Frameworks
=======================
.. include:: overview.rst
#######################
.. toctree::
:maxdepth: 1
getting_started.rst
overview.rst
quickstart.rst
onnx_integ.rst
paddle_integ.rst
tensorflow_connect.rst
generic_configs.rst
other.rst
.. frameworks/onnx_integ.rst:
ONNX Support
============
ONNX overview
=============
nGraph is able to import and execute ONNX models. Models are converted to
nGraph's :abbr:`Intermediate Representation (IR)` and converted to ``Function``
......@@ -78,7 +78,7 @@ data:
Find more information about nGraph and ONNX in the
`nGraph ONNX`_ GitHub\* repository.
`nGraph ONNX`_ GitHub repository.
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
......
.. frameworks/generic_configs.rst:
.. frameworks/other.rst:
.. _generic_configs:
.. _fw_other:
Integrating new frameworks
==========================
Integrating other frameworks
============================
This section details some of the *configuration options* and some of the
*environment variables* that can be used to tune for optimal performance when
......
.. frameworks/overview.rst
.. _fw_overview:
Basic concepts
==============
Overview
========
.. figure:: ../graphics/overview-framework-bridges.svg
:width: 960px
:alt: Bridge to nGraph graph construction API
A framework is "supported" with a framework :term:`bridge` that can be written or
cloned and used to connect to nGraph device backends while maintaining the
framework's programmatic or user interface. A `bridge currently exists`_ for the
TensorFlow framework. We also have a bridge to do :doc:`paddle_integ`. Intel
previously contributed work to an MXNet bridge; however, support for this
bridge is no longer active.
A framework bridge connects to the nGraph graph construction API
`ONNX`_ on its own is not a framework; however, it can be used with nGraph's
:doc:`../python_api/index` to import and execute ONNX models.
To understand how a data science :term:`framework` (:doc:`TensorFlow <tensorflow_connect>`,
PyTorch, :doc:`paddle_integ`, and others) can unlock acceleration available in
the nGraph Compiler, it helps to familiarize yourself with some basic concepts.
.. figure:: ../graphics/overview-framework-bridges.svg
:width: 960px
:alt: JiT compiling of a computation
We use the term :term:`bridge` to describe code that connects to any nGraph
device backend(s) while maintaining the framework's programmatic or user
interface. We have a `bridge for the TensorFlow framework`_. We also have a
:doc:`paddle_integ` bridge. Intel previously :doc:`contributed work to an MXNet bridge <../project/extras/testing_latency>`;
however, support for the MXNet bridge is no longer active.
`ONNX`_ on its own is not a framework; it can be used with nGraph's
:doc:`../python_api/index` to import and execute ONNX models.
:abbr:`Just-in-Time (JiT)` Compiling for computation. nGraph `Core`
components are colored in blue.
Because it is framework agnostic (providing opportunities to optimize at the
graph level), nGraph can do the heavy lifting required by many popular
:doc:`workloads <validated/list>` without any additional effort of the framework user.
Optimizations that were previously available only after careful integration of
a kernel or hardware-specific library are exposed via the
:doc:`Core graph construction API <../core/constructing-graphs/index>`
Once connected via the bridge, the framework can then run and train a deep
learning model with various workloads on various backends using nGraph Compiler
as an optimizing compiler available through the framework.
The illustration above shows how this works.
While a :abbr:`Deep Learning (DL)` :term:`framework` is ultimately meant for
end use by data scientists, or for deployment in cloud container environments,
nGraph Core ops and the nGraph C++ Library are designed for framework builders
themselves. We invite anyone working on new and novel frameworks or neural
network designs to explore our highly-modularized stack of components that
can be implemented or integrated in countless ways.
While a :abbr:`Deep Learning (DL)` framework is ultimately meant for end-use by
data scientists, or for deployment in cloud container environments, nGraph's
:doc:`Core ops <../core/overview>` are designed for framework builders themselves.
We invite anyone working on new and novel frameworks or neural network designs
to explore our highly-modularized stack of components.
Please read this section if you are considering incorporating components from
the nGraph Compiler stack in your framework or neural network design. Contents
here are also useful if you are working on something built-from-scratch, or on
an existing framework that is less widely-supported than the popular frameworks
like TensorFlow and PyTorch.
Please read the :doc:`other` section for other framework-agnostic
configurations available to users of the nGraph Compiler stack.
.. figure:: ../graphics/overview-translation-flow.svg
:width: 725px
:alt: Translation flow to nGraph function graph
:alt: Translation flow to an nGraph function graph
.. _bridge currently exists: https://github.com/tensorflow/ngraph-bridge/README.md
.. _bridge for the TensorFlow framework: https://github.com/tensorflow/ngraph-bridge/README.md
.. _ONNX: http://onnx.ai/
.. _tune the workload to extract best performance: https://ai.intel.com/accelerating-deep-learning-training-inference-system-level-optimizations
.. _a few small: https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi
\ No newline at end of file
.. frameworks/paddle_integ.rst:
PaddlePaddle
============
nGraph PaddlePaddle integration overview
----------------------------------------
PaddlePaddle integration
========================
PaddlePaddle is an open source deep learning framework developed by Baidu. It
aims to enable performant large-scale distributed computation for deep learning.
......@@ -67,7 +64,7 @@ is organized in the following file structure:
.. _figure-paddle-dir:
.. figure:: ../graphics/paddlepaddle_directory.png
.. figure:: ../graphics/PaddlePaddleDir.svg
:width: 555px
:alt:
......@@ -135,8 +132,8 @@ and nGraph bridges are provided below:
are managed through a map.
- SetOutputNode: sets the constructed node to the map.
- Related code:
+ ``Paddle/fluid/operators/ngraph/ngraph_bridge.h`` `link to ngraph_bridge header code`_
+ ``Paddle/fluid/operators/ngraph/ngraph_bridge.cc`` `link to ngraph_bridge cpp code`_
- ``Paddle/fluid/operators/ngraph/ngraph_bridge.h`` `link to ngraph_bridge header code`_
- ``Paddle/fluid/operators/ngraph/ngraph_bridge.cc`` `link to ngraph_bridge cpp code`_
nGraph compilation control and trigger method
----------------------------------------------
......@@ -149,8 +146,9 @@ nGraph compilation control and trigger method
#. **Trigger Control** -- ``FLAGS_use_ngraph`` triggers nGraph. If this option
is set to ``true``, nGraph will be triggered by the PaddlePaddle executor
to convert and execute the supported subgraph. `Examples are provided`_ under
``paddle/benchmark/fluid/ngraph``.
to convert and execute the supported subgraph. Demos are provided under
``paddle/benchmark/fluid/train/demo`` (link `train_demo`_) and ``paddle/benchmark/fluid/train/imdb_demo``
(link `imdb_demo`_).
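
As a rough sketch, the flag can typically be set in the environment before
launching one of the demos; the binary name below is only a placeholder, so
follow the linked demo instructions for the actual build and run steps:

.. code-block:: console

   $ export FLAGS_use_ngraph=true
   $ ./demo_trainer   # placeholder for the executable built from the linked demos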
.. _link to ngraph_engine_op header code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_engine_op.h
......@@ -160,5 +158,5 @@ nGraph compilation control and trigger method
.. _located in the ngraph ops: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/operators/ngraph/ops
.. _link to ngraph_bridge header code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_bridge.h
.. _link to ngraph_bridge cpp code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/ngraph/ngraph_bridge.cc
.. _Examples are provided: https://github.com/PaddlePaddle/Paddle/tree/develop/benchmark/fluid
.. _train_demo: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/train/demo
.. _imdb_demo: https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/fluid/train/imdb_demo
.. frameworks/getting_started.rst
.. frameworks/quickstart.rst
Getting Started
###############
.. _fw_quickstart:
Quick start
===========
No matter what your level of experience with :abbr:`Deep Learning (DL)` systems
may be, nGraph provides a path to start working with the DL stack. Let's begin
with the easiest and most straightforward options.
.. figure:: ../graphics/translation-flow-to-ng-fofx.png
:width: 725px
:alt: Translation flow to nGraph function graph
TensorFlow
----------
The easiest way to get started is to use the latest PyPI `ngraph-tensorflow-bridge`_,
which has instructions for Linux* systems, and tips for users of Mac OS X.
which has instructions for Linux\* systems, and tips for users of Mac OS X.
You can install TensorFlow\* and nGraph to a virtual environment; otherwise, the code
You can install TensorFlow and nGraph in a virtual environment; otherwise, the code
will install to a system location.
.. code-block:: console
......@@ -43,17 +44,17 @@ Output will look something like:
TensorFlow version used for this build: v[version-hash]
CXX11_ABI flag used for this build: boolean
More detail in the `ngraph_bridge examples`_ directory.
More detail in the `ngraph_bridge examples`_ directory.
See also the `diagnostic tools`_.
ONNX
====
----
Another easy way to get started working with the :abbr:`DL (Deep Learning)`
stack is to try the examples available via `nGraph ONNX`_.
Installation
------------
To prepare your environment to use nGraph and ONNX, install the Python packages
for nGraph, ONNX and NumPy:
......@@ -67,18 +68,18 @@ Now you can start exploring some of the :doc:`onnx_integ` examples.
See also nGraph's :doc:`../python_api/index`.
PlaidML
=======
-------
See the :ref:`ngraph_plaidml_backend` section on how to build the
nGraph-PlaidML.
Other integration paths
=======================
-----------------------
If you are considering incorporating components from the nGraph Compiler stack
in your framework or neural network design, another useful doc is the section
on :doc:`generic-configs`. Contents here are also useful if you are working on
on :doc:`other`. Contents here are also useful if you are working on
something built-from-scratch, or on an existing framework that is less
widely-supported than the popular frameworks like TensorFlow and PyTorch.
......@@ -86,3 +87,4 @@ widely-supported than the popular frameworks like TensorFlow and PyTorch.
.. _ngraph-tensorflow-bridge: https://pypi.org/project/ngraph-tensorflow-bridge
.. _ngraph ONNX: https://github.com/NervanaSystems/ngraph-onnx
.. _ngraph_bridge examples: https://github.com/tensorflow/ngraph-bridge/blob/master/examples/README.md
.. _diagnostic tools: https://github.com/tensorflow/ngraph-bridge/blob/master/diagnostics/README.md
\ No newline at end of file
.. frameworks/tensorflow_connect.rst:
Connect TensorFlow\*
====================
nGraph Bridge for TensorFlow
============================
See the `README`_ on the `ngraph_bridge repo`_ for the many ways to connect
......
......@@ -3,7 +3,7 @@
.. _validated:
Validated workloads
Validated Workloads
###################
We have validated performance [#f1]_ for the following workloads:
......
......@@ -12,9 +12,9 @@
.. limitations under the License.
.. ---------------------------------------------------------------------------
######################
nGraph Compiler stack
######################
nGraph Compiler stack Documentation
###################################
.. _ngraph_home:
......@@ -23,36 +23,25 @@ nGraph Compiler stack
nGraph Compiler stack documentation for version |version|.
Documentation for the latest (master) development branch can be found
at https://ngraph.nervanasys.com/docs/latest
.. https://docs.ngraph.ai/
.. only:: (development or daily)
nGraph Compiler stack documentation for the master tree under development
(version |version|).
The nGraph Library and Compiler stack are provided under the `Apache 2.0 license`_
(found in the LICENSE file in the project's `repo`_). It may also import or reference
packages, scripts, and other files that use licensing.
.. _Apache 2.0 license: https://github.com/NervanaSystems/ngraph/blob/master/LICENSE
.. _repo: https://github.com/NervanaSystems/ngraph
.. toctree::
:name: mastertoctree
:titlesonly:
.. toctree::
:maxdepth: 1
:caption: Getting Started
introduction.rst
features.rst
project/release-notes.rst
.. toctree::
:maxdepth: 1
:maxdepth: 2
:caption: Framework Support
frameworks/index.rst
......@@ -89,15 +78,20 @@ packages, scripts, and other files that use licensing.
.. toctree::
:maxdepth: 1
:caption: Project Metadata
:caption: Contributing
project/release-notes.rst
project/contribution-guide.rst
project/index.rst
project/extras/index.rst
glossary.rst
.. toctree::
:maxdepth: 1
:hidden:
project/release-notes.rst
project/index.rst
project/extras/index.rst
.. only:: html
......
......@@ -160,15 +160,30 @@ Not currently a comprehensive list.
More about Core Ops
-------------------
An ``Op``'s primary role is to function as a node in a directed acyclic graph
dependency computation graph.
An ``Op``'s primary role is to function as a node in a directed acyclic
computation graph.
*Core ops* are ops that are available and generally useful to all framework
bridges and that can be compiled by all transformers. A framework bridge may
define framework-specific ops to simplify graph construction, provided that the
bridge can enable every transformer to replace all such ops with equivalent
clusters or subgraphs composed of core ops. In a similar manner, transformers may define
transformer-specific ops to represent kernels or other intermediate operations.
clusters or subgraphs composed of core ops. In a similar manner, transformers
may define transformer-specific ops to represent kernels or other intermediate
operations.
The input and output ports of ops are accessed through functions that work
with ``Output<Node>``/``Input<Node>``. Functions that previously worked at the
level of ops are deprecated, for example::
Node::get_element_type()
as it does not take an output index. This function has been replaced with new
functions like::
Node::get_output_element_type(index)
where there is no ambiguity.
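
For instance, a bridge or pass that needs the element type and shape of a
node's first output would now query that output explicitly; this is a small
illustrative snippet rather than code taken from the nGraph sources:

.. code-block:: cpp

   #include <ngraph/ngraph.hpp>

   using namespace ngraph;

   auto param = std::make_shared<op::Parameter>(element::f32, Shape{3, 4});
   auto abs = std::make_shared<op::Abs>(param);

   // Ask for the type/shape of a specific output instead of "the" type of the op
   element::Type et = abs->get_output_element_type(0);
   Shape out_shape = abs->get_output_shape(0);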
If a framework supports extending the set of ops it offers, a bridge may even
expose transformer-specific ops to the framework user.
......
......@@ -44,6 +44,6 @@ Outputs
C++ Interface
=============
.. doxygenclass:: ngraph::op::Max
.. doxygenclass:: ngraph::op::v0::Max
:project: ngraph
:members: m_axes
......@@ -44,6 +44,6 @@ Outputs
C++ Interface
=============
.. doxygenclass:: ngraph::op::Min
.. doxygenclass:: ngraph::op::v0::Min
:project: ngraph
:members: m_axes
......@@ -4,7 +4,7 @@
.. _contribution_guide:
##################
Contribution guide
Contribution Guide
##################
......
......@@ -2,6 +2,8 @@
:orphan:
.. _release_notes:
Release Notes
#############
......@@ -18,19 +20,33 @@ We are pleased to announce the release of version |version|.
Core updates for |version|
--------------------------
+ All ops support ``Output<Node>`` arguments
+ Additional ops
+ ONNX handling unknown domains
+ Provenance works with builders and fused ops
+ ``RPATH`` for finding openmpi
+ Negative indices/axes fixes
+ Migrate some ``get_argument`` removals
+ Negative indices/axes fixes
+ Better support for MKL-DNN 1.0 (DNNL)
Latest documentation updates
----------------------------
Latest documentation updates for |version|
------------------------------------------
+ Note the only support for nGPU is now through PlaidML; nGraph support for nGPU
(via cuDNN) has been deprecated.
+ iGPU works only with nGraph version `0.24`.
+ Add new Sphinx-friendly theme (can be built natively for an alternative to ngraph.ai docs).
+ Update PaddlePaddle documentation to reflect demo directories instead of example directory.
+ Update doc regarding the validation of ``Sum`` op.
.. important:: Pre-releases (``-rc-0.*``) have newer features, and are less stable.
Changelog on Previous Releases
==============================
......@@ -41,7 +57,7 @@ Changelog on Previous Releases
+ Add rank id to trace file name
+ Allow provenance merging to be disabled
+ Remove some white-listed compiler warnings
+ Provenance on builders and fused op expansions
+ Provenance, builders, ops that make ops, and fused op expansions
0.25.0
......@@ -136,8 +152,8 @@ Changelog on Previous Releases
+ Provenance improvements
0.19
----
pre-0.20
--------
+ More dynamic shape preparation
+ Distributed interface factored out
......@@ -153,11 +169,7 @@ Changelog on Previous Releases
+ Add graph visualization tools to doc
+ Update doxygen to be friendlier to frontends
0.18
----
.. 0.18
+ Python formatting issue
+ mkl-dnn work-around
+ Event tracing improvements
......@@ -166,10 +178,7 @@ Changelog on Previous Releases
+ ONNX quantization
+ More fusions
0.17
----
.. 0.17
+ Allow negative padding in more places
+ Add code generation for some quantized ops
+ Preliminary dynamic shape support
......@@ -177,10 +186,7 @@ Changelog on Previous Releases
+ Pad op takes CoordinateDiff instead of Shape pad values to allow for negative
padding.
0.16
----
.. 0.16
+ NodeInput and NodeOutput classes prepare for simplifications of Node
+ Test improvements
+ Additional quantization ops
......
:orphan:
.. toctree::
:includehidden:
frameworks/index
project/index
python_api/index
inspection/index
core/overview
backends/index
project/extras/index