Unverified Commit e8a1e549 authored by Scott Cyphers, committed by GitHub

Some more op documentation (#918)

* Some more op documentation

* Review comments
parent 89963725
.. allreduce.rst:
#########
AllReduce
#########
.. code-block:: cpp
......
.. get_output_element.rst:
################
GetOutputElement
################
.. code-block:: cpp

   GetOutputElement // Operation to select a unique output from an op
Description
===========
Accesses an output of a node.
Inputs
------
+-----------------+-------------------------+--------------------------------+
| Name | Element Type | Shape |
+=================+=========================+================================+
| ``arg`` | Any | Any |
+-----------------+-------------------------+--------------------------------+
Attributes
----------
+-----------------+----------------------------------------------------------------+
| Name | Description |
+=================+================================================================+
| ``n`` | The output number from the node ``arg`` |
+-----------------+----------------------------------------------------------------+
Outputs
-------
+-----------------+-------------------------+--------------------------------+
| Name | Element Type | Shape |
+=================+=========================+================================+
| ``output``      | Depends on ``arg``      | Depends on ``arg``             |
+-----------------+-------------------------+--------------------------------+
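The selection semantics can be sketched in a few lines of Python (the names below are illustrative stand-ins, not the nGraph API, which operates on graph nodes):

```python
class MultiOutputNode:
    """Hypothetical stand-in for a graph node with several outputs,
    e.g. an op returning both pooled values and their indices."""
    def __init__(self, *outputs):
        self.outputs = list(outputs)

def get_output_element(node, n):
    # Select output `n` (0-based) of `node`; the result's element type
    # and shape are whatever that particular output has.
    return node.outputs[n]

node = MultiOutputNode("values", "indices")
selected = get_output_element(node, 1)  # selects the second output
```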
C++ Interface
=============
.. doxygenclass:: ngraph::op::GetOutputElement
:project: ngraph
:members:
......@@ -73,18 +73,24 @@ Not currently a comprehensive list.
exp.rst
floor.rst
function_call.rst
get_output_element.rst
greater_eq.rst
greater.rst
less_eq.rst
less.rst
log.rst
max.rst
maximum.rst
max_pool.rst
min.rst
minimum.rst
multiply.rst
negative.rst
not_equal.rst
not.rst
one_hot.rst
or.rst
pad.rst
softmax.rst
.. max.rst:
###
Max
###
.. code-block:: cpp

   Max // Max reduction
Description
===========
Reduces the tensor, eliminating the specified reduction axes by taking the maximum element.
Inputs
------
+-----------------+-------------------------+-------------------------------------+
| Name | Element Type | Shape |
+=================+=========================+=====================================+
| ``arg`` | Any | :math:`(d_1,\dots,d_n)~(n \geq 0)` |
+-----------------+-------------------------+-------------------------------------+
Attributes
----------
+--------------------+----------------------------------------------------------------+
| Name | Description |
+====================+================================================================+
| ``reduction_axes`` | The axis positions (0-based) on which to calculate the max |
+--------------------+----------------------------------------------------------------+
Outputs
-------
+-----------------+-------------------------+------------------------------------------------+
| Name | Element Type | Shape |
+=================+=========================+================================================+
| ``output`` | Same as ``arg`` | :math:`(d_i:i\not\in \texttt{reduction_axes})` |
+-----------------+-------------------------+------------------------------------------------+
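The reduction above can be sketched in plain Python on a flat row-major tensor (illustrative only, not the nGraph API); ``Min`` is identical except that the minimum is taken:

```python
from itertools import product

def max_reduce(tensor, shape, reduction_axes):
    """Max-reduce a flat row-major `tensor` of shape `shape`,
    eliminating the axes in `reduction_axes`."""
    kept = [i for i in range(len(shape)) if i not in reduction_axes]
    out_shape = [shape[i] for i in kept]
    # Row-major strides of the input tensor.
    strides = [1] * len(shape)
    for i in reversed(range(len(shape) - 1)):
        strides[i] = strides[i + 1] * shape[i + 1]
    out = []
    for out_idx in product(*(range(d) for d in out_shape)):
        best = None
        # Scan every coordinate along the reduced axes.
        for red_idx in product(*(range(shape[i]) for i in sorted(reduction_axes))):
            idx = [0] * len(shape)
            for axis, v in zip(kept, out_idx):
                idx[axis] = v
            for axis, v in zip(sorted(reduction_axes), red_idx):
                idx[axis] = v
            flat = sum(i * s for i, s in zip(idx, strides))
            best = tensor[flat] if best is None else max(best, tensor[flat])
        out.append(best)
    return out_shape, out

# 2x3 tensor, reducing axis 1: row-wise maxima, output shape (2,).
result = max_reduce([1, 5, 3, 7, 2, 6], [2, 3], {1})
```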
C++ Interface
=============
.. doxygenclass:: ngraph::op::Max
:project: ngraph
:members: m_axes
.. max_pool.rst:
#######
MaxPool
#######
.. code-block:: cpp

   MaxPool // MaxPool operations
Description
===========
Batched max pooling operation, with optional padding and window
stride.
Inputs
------
+-----------------+-------------------------+----------------------------------+
| Name | Element Type | Shape |
+=================+=========================+==================================+
| ``arg``         | Any                     | :math:`(N, C, d_1, \ldots, d_n)` |
+-----------------+-------------------------+----------------------------------+
Attributes
----------
+-------------------------------+-----------------------------------------------+
| Name | Description |
+===============================+===============================================+
| ``window_shape`` | The window shape. |
+-------------------------------+-----------------------------------------------+
| ``window_movement_strides`` | The window movement strides. (defaults to 1s) |
+-------------------------------+-----------------------------------------------+
| ``padding_below`` | The below-padding shape. (defaults to 0s) |
+-------------------------------+-----------------------------------------------+
| ``padding_above`` | The above-padding shape. (defaults to 0s) |
+-------------------------------+-----------------------------------------------+
Outputs
-------
+-----------------+-------------------------+--------------------------------+
| Name | Element Type | Shape |
+=================+=========================+================================+
| ``output``      | Same as ``arg``         | :math:`(N,C,d'_1,\ldots,d'_n)` |
+-----------------+-------------------------+--------------------------------+
The input for max pooling is a data batch tensor of shape
:math:`(N,C,d_1,\dots,d_n)` where :math:`n > 0`, every :math:`d_i >
0`, and where :math:`N` is the batch size, and :math:`C > 0` is the
number of channels (sometimes called features). The dimensions
:math:`(d_1,\dots,d_n)` correspond to the shape of an
:math:`n`-dimensional data item in a batch. For example, where
:math:`n=2`, the data may represent a two-dimensional image. It also
has two attributes:

1. *the window shape*: a size vector :math:`(w_1,\ldots,w_n)` where every :math:`w_i \le d_i`; and
2. *the window movement strides* (optional): a vector of positive integers :math:`(s_1,\dots,s_n)`.
The output has the shape :math:`(N,C,d'_1,\ldots,d'_n)`, where :math:`d'_i = \lceil \frac{d_i - w_i + 1}{s_i} \rceil`.
Mathematical Definition
=======================
Given an input data batch tensor :math:`T_{in}`, the output tensor is defined by the equation
.. math::

   T_{out}[a,c,i_1,\dots,i_n] =
   \max_{j_1 = s_1 i_1, \dots, j_n = s_n i_n}^{j_1 = s_1 i_1 + w_1 - 1, \dots, j_n = s_n i_n + w_n - 1} (T_{in}[a,c,j_1,\dots,j_n])
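Under the stated assumptions (no padding, :math:`n=2`), the definition can be checked with a short Python sketch (not the nGraph API):

```python
from math import ceil

def max_pool_2d(x, window, strides):
    """Max-pool a data batch x[a][c][i][j] of shape (N, C, d1, d2),
    with no padding."""
    (w1, w2), (s1, s2) = window, strides
    N, C = len(x), len(x[0])
    d1, d2 = len(x[0][0]), len(x[0][0][0])
    # Output shape: d'_i = ceil((d_i - w_i + 1) / s_i)
    o1 = ceil((d1 - w1 + 1) / s1)
    o2 = ceil((d2 - w2 + 1) / s2)
    # T_out[a,c,i,j] = max over the window anchored at (s1*i, s2*j).
    return [[[[max(x[a][c][s1 * i + p][s2 * j + q]
                   for p in range(w1) for q in range(w2))
               for j in range(o2)]
              for i in range(o1)]
             for c in range(C)]
            for a in range(N)]

img = [[[[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]]]          # N=1, C=1, one 4x4 image
pooled = max_pool_2d(img, window=(2, 2), strides=(2, 2))  # -> 1x1x2x2
```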
C++ Interface
=============
.. doxygenclass:: ngraph::op::MaxPool
:project: ngraph
:members:
.. min.rst:
###
Min
###
.. code-block:: cpp

   Min // Min reduction
Description
===========
Reduces the tensor, eliminating the specified reduction axes by taking the minimum element.
Inputs
------
+-----------------+-------------------------+-------------------------------------+
| Name | Element Type | Shape |
+=================+=========================+=====================================+
| ``arg`` | Any | :math:`(d_1,\dots,d_n)~(n \geq 0)` |
+-----------------+-------------------------+-------------------------------------+
Attributes
----------
+--------------------+----------------------------------------------------------------+
| Name | Description |
+====================+================================================================+
| ``reduction_axes`` | The axis positions (0-based) on which to calculate the min     |
+--------------------+----------------------------------------------------------------+
Outputs
-------
+-----------------+-------------------------+------------------------------------------------+
| Name | Element Type | Shape |
+=================+=========================+================================================+
| ``output`` | Same as ``arg`` | :math:`(d_i:i\not\in \texttt{reduction_axes})` |
+-----------------+-------------------------+------------------------------------------------+
C++ Interface
=============
.. doxygenclass:: ngraph::op::Min
:project: ngraph
:members: m_axes
.. one_hot.rst:
######
OneHot
######
.. code-block:: cpp

   OneHot // One-hot expansion
Description
===========
Expands the input along a new one-hot axis: each element of ``output``
is 1 where its index along the one-hot axis equals the corresponding
value in ``arg``, and 0 elsewhere.
Inputs
------
+-----------------+-------------------------+---------------------------------------------------------+
| Name | Element Type | Shape |
+=================+=========================+=========================================================+
| ``arg``         | Any                     | :math:`(d_1,\dots,d_{m-1},d_{m+1},\dots,d_n)~(n \geq 0)`|
+-----------------+-------------------------+---------------------------------------------------------+
Attributes
----------
+------------------+----------------------------------------------------------------+
| Name | Description |
+==================+================================================================+
| ``shape`` | The desired output shape, including the new one-hot axis. |
+------------------+----------------------------------------------------------------+
| ``one_hot_axis`` | The index within the output shape of the new one-hot axis. |
+------------------+----------------------------------------------------------------+
Outputs
-------
+-----------------+-------------------------+--------------------------------+
| Name | Element Type | Shape |
+=================+=========================+================================+
| ``output`` | Same as ``arg`` | ``shape`` |
+-----------------+-------------------------+--------------------------------+
Mathematical Definition
=======================
.. math::

   \mathtt{output}_{i_0, \ldots, i_{n-1}} =
   \begin{cases}
   1&\text{if }i_{\mathtt{one\_hot\_axis}} = \mathtt{arg}_{(i : i\ne \mathtt{one\_hot\_axis})}\\
   0&\text{otherwise}
   \end{cases}
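For the common case of a 1-D input of class indices, a minimal Python sketch (illustrative, not the nGraph API) looks like this:

```python
def one_hot(indices, depth, one_hot_axis):
    # indices: 1-D list of class ids; the new one-hot axis has size
    # `depth` and is inserted at position `one_hot_axis` (0 or 1 here).
    rows = [[1 if k == v else 0 for k in range(depth)] for v in indices]
    if one_hot_axis == 1:
        return rows                              # shape (len(indices), depth)
    return [list(col) for col in zip(*rows)]     # shape (depth, len(indices))

encoded = one_hot([2, 0, 1], depth=3, one_hot_axis=1)
```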
C++ Interface
=============
.. doxygenclass:: ngraph::op::OneHot
:project: ngraph
:members:
.. pad.rst:
###
Pad
###
.. code-block:: cpp

   Pad // General padding operation
Description
===========
Adds edge and interior padding.
Inputs
------
+-------------------+-------------------------+--------------------------------+
| Name | Element Type | Shape |
+===================+=========================+================================+
| ``arg`` | Any | :math:`(d_1, \ldots, d_n)` |
+-------------------+-------------------------+--------------------------------+
| ``arg_pad_value`` | Same as ``arg`` | Scalar |
+-------------------+-------------------------+--------------------------------+
Attributes
----------
+-----------------------+-------------------------------------------------------+
| Name | Description |
+=======================+=======================================================+
| ``padding_below`` | Shape of padding added before ``arg`` |
+-----------------------+-------------------------------------------------------+
| ``padding_above`` | Shape of padding added after ``arg`` |
+-----------------------+-------------------------------------------------------+
| ``padding_interior`` | Shape of padding inserted between elements of ``arg`` |
+-----------------------+-------------------------------------------------------+
Outputs
-------
+-------------------+-------------------------+--------------------------------+
| Name | Element Type | Shape |
+===================+=========================+================================+
| ``output`` | Same as ``arg`` | :math:`(d'_1, \ldots, d'_n)` |
+-------------------+-------------------------+--------------------------------+
.. math::

   d'_i =
   \mathtt{padding\_below}_i+(d_i-1)\cdot(\mathtt{padding\_interior}_i+1)+1+\mathtt{padding\_above}_i
Takes an input tensor of shape :math:`(d_1,\dots,d_n)` and pads by
inserting a scalar :math:`x` supplied as input, in three possible
ways:
1. *exterior padding* inserts copies of :math:`x` *below or above* the
bounds of existing rows, columns, etc.,
2. *interior padding* inserts copies of :math:`x` *between* rows, columns, etc., or
3. both of the above.
The number and position of elements to be inserted along a given axis
is determined by three attributes:
1. *the padding-below* ``Shape`` :math:`(p_1,\ldots,p_n)`,
2. *the padding-above* ``Shape`` :math:`(q_1,\ldots,q_n)`, and
3. *the interior padding* ``Shape`` :math:`(r_1,\ldots,r_n)`.
The output tensor will have the shape :math:`(d'_1,\dots,d'_n)` where
:math:`d'_i = p_i + (d_i - 1)(r_i + 1) + 1 + q_i` if :math:`d_i > 0`,
and :math:`d'_i = p_i + q_i` if :math:`d_i = 0`.
Example: given a :math:`3\times 3` tensor, with interior-padding sizes
of :math:`(1,2)`, padding-below of :math:`(1,2)`, padding-above of
:math:`(1,0)`, and a pad-value of :math:`42`, we obtain: ::

                  42 42 42 42 42 42 42 42 42
                  42 42  1 42 42  2 42 42  3
    1 2 3         42 42 42 42 42 42 42 42 42
    4 5 6   -->   42 42  4 42 42  5 42 42  6
    7 8 9         42 42 42 42 42 42 42 42 42
                  42 42  7 42 42  8 42 42  9
                  42 42 42 42 42 42 42 42 42
In other words we have inserted one new row between each pair of
adjacent rows, two new columns between each pair of adjacent columns,
one new row at the top and two new columns on the left, and one new
row at the bottom and zero new columns on the right; then filled the
new rows and columns with 42.
.. note::
The terms `below` and `above` here refer respectively to lower- or
higher-numbered coordinate indices, and numbering starts at the
upper-left corner; thus inserting a row "below" actually inserts it
at the "top" of the matrix.
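The example above can be reproduced with a short Python sketch of the 2-D case (illustrative only, not the nGraph API):

```python
def pad_2d(x, pad_value, below, above, interior):
    """Pad a 2-D matrix per d'_i = p_i + (d_i - 1)(r_i + 1) + 1 + q_i."""
    d1, d2 = len(x), len(x[0])
    (p1, p2), (q1, q2), (r1, r2) = below, above, interior
    o1 = p1 + (d1 - 1) * (r1 + 1) + 1 + q1
    o2 = p2 + (d2 - 1) * (r2 + 1) + 1 + q2
    # Start with a tensor full of the pad value, then scatter the
    # original elements at stride (r_i + 1), offset by the below-padding.
    out = [[pad_value] * o2 for _ in range(o1)]
    for i in range(d1):
        for j in range(d2):
            out[p1 + i * (r1 + 1)][p2 + j * (r2 + 1)] = x[i][j]
    return out

padded = pad_2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 42,
                below=(1, 2), above=(1, 0), interior=(1, 2))
# 7x9 result, matching the diagram above
```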
C++ Interface
=============
.. doxygenclass:: ngraph::op::Pad
:project: ngraph
:members:
......@@ -26,7 +26,7 @@ Inputs
| ``arg`` | Any | Any |
+-----------------+-------------------------+--------------------------------+
Parameters
Attributes
----------
+-----------------+----------------------------------------------------------------+
| Name | Description |
......@@ -57,4 +57,4 @@ C++ Interface
.. doxygenclass:: ngraph::op::Softmax
:project: ngraph
:members: m_axes
......@@ -22,25 +22,7 @@ namespace ngraph
{
namespace op
{
/// \brief Operation to get an element from a tuple.
///
/// ## Parameters
///
/// | | Description |
/// | --- | ------------------------------------------------------------------ |
/// | `n` | The position of the element (0-based) to get from the input tuple. |
///
/// ## Inputs
///
/// | | Type | Description |
/// | ------ | ----------------------------------------------------------- | ------------------------------------------ |
/// | `arg` | \f$(T_1,\dots,T_{n-1},T_n,T_{n+1},\dots,T_m)~(m \geq 1)\f$ | An input tuple with at least `n` elements. |
///
/// ## Output
///
/// | Type | Description |
/// | --------- | ------------------------------------- |
/// | \f$T_n\f$ | The `n`th element of the input tuple. |
/// \brief Operation to get an output from a node.
class GetOutputElement : public Node
{
public:
......
......@@ -23,28 +23,6 @@ namespace ngraph
namespace op
{
/// \brief Max-reduction operation.
///
/// Reduces the tensor, eliminating the specified reduction axes by taking the maximum element.
///
/// This is equivalent to Reduce where `arg_init` = -inf and `reduction_function` is \f$f(x,y) = max(x,y)\f$.
///
/// ## Parameters
///
/// | | Description |
/// | -------------------- | -------------------------------------------- |
/// | `reduction_axes` | The axes to eliminate through max-reduction. |
///
/// ## Inputs
///
/// | | Type | Description |
/// | ----- | --------------------------------- | ------------------------------------------------------ |
/// | `arg` | \f$N[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape and numeric element type. |
///
/// ## Output
///
/// | Type | Description |
/// | ----------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
/// | \f$N[\textit{delete}(A,d_1,\dots,d_n)]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the `reduction_axes` \f$A\f$ eliminated by taking the maximum element. |
class Max : public util::ArithmeticReduction
{
public:
......
......@@ -24,24 +24,6 @@ namespace ngraph
namespace op
{
/// \brief Batched max pooling operation, with optional padding and window stride.
///
/// (TODO: add an account of the optional padding to this comment.)
///
/// Max pooling takes as its input a data batch tensor of shape \f$(N,C,d_1,\dots,d_n)\f$ where \f$n > 0\f$, every \f$d_i > 0\f$, and where \f$N\f$ is
/// the batch size, and \f$C > 0\f$ is the number of channels (sometimes called features). The dimensions \f$(d_1,\dots,d_n)\f$ correspond to the shape of
/// an \f$n\f$-dimensional data item in a batch. For example, where \f$n=2\f$, the data may represent a two-dimensional image. It also takes two parameters:
///
/// 1. <i>(the window shape)</i> a size vector \f$(w_1,\dots,w_n)\f$ where every \f$w_i \le d_i\f$; and
/// 2. <i>(the window movement strides, optional)</i> a vector of positive integers \f$(s_1,\dots,s_n)\f$.
///
/// The output has the shape \f$(N,C,d'_1,\dots,d'_n)\f$, where \f$d'_n = \lceil \frac{d_i - w_i + 1}{s_i} \rceil\f$.
///
/// Given an input data batch tensor \f$T_\textit{in}\f$, the output tensor is defined by the equation
///
/// \f[
/// T_\textit{out}[a,c,i_1,\dots,i_n] = \max_{j_1 = s_1 i_1, \dots, j_n = s_n i_n}^{j_1 = s_1 i_1 + w_1 - 1, \dots, j_n = s_n i_n + w_n - 1} (T_\textit{in}[a,c,j_1,\dots,j_n])
/// \f]
///
class MaxPool : public util::RequiresTensorViewArgs
{
public:
......
......@@ -23,28 +23,6 @@ namespace ngraph
namespace op
{
/// \brief Min-reduction operation.
///
/// Reduces the tensor, eliminating the specified reduction axes by taking the minimum element.
///
/// This is equivalent to Reduce where `arg_init` = -inf and `reduction_function` is \f$f(x,y) = min(x,y)\f$.
///
/// ## Parameters
///
/// | | Description |
/// | -------------------- | -------------------------------------------- |
/// | `reduction_axes` | The axes to eliminate through min-reduction. |
///
/// ## Inputs
///
/// | | Type | Description |
/// | ----- | --------------------------------- | ------------------------------------------------------ |
/// | `arg` | \f$N[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape and numeric element type. |
///
/// ## Output
///
/// | Type | Description |
/// | ----------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
/// | \f$N[\textit{delete}(A,d_1,\dots,d_n)]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the `reduction_axes` \f$A\f$ eliminated by taking the minimum element. |
class Min : public util::ArithmeticReduction
{
public:
......
......@@ -23,39 +23,6 @@ namespace ngraph
namespace op
{
/// \brief Generic constant-padding operation.
///
/// Takes an input tensor of shape \f$(d_1,\dots,d_n)\f$ and pads by inserting a scalar \f$x\f$ supplied as input, in three possible ways:
///
/// 1. <i>(exterior padding)</i> inserts copies of \f$x\f$ <i>below or above</i> the bounds of existing rows, columns, etc.,
/// 2. <i>(interior padding)</i> inserts copies of \f$x\f$ <i>between</i> rows, columns, etc., or
/// 3. both of the above.
///
/// The number and position of elements to be inserted along a given axis is determined by three parameters:
///
/// 1. <i>(the padding-below sizes)</i> a vector of non-negative integers \f$(p_1,\dots,p_n)\f$,
/// 2. <i>(the padding-above sizes)</i> a vector of non-negative integers \f$(q_1,\dots,q_n)\f$, and
/// 3. <i>(the interior padding sizes)</i> a vector of non-negative integers \f$(r_1,\dots,r_n)\f$.
///
/// The output tensor will have the shape \f$(d'_1,\dots,d'_n)\f$ where \f$d'_i = p_i + (d_i - 1)(r_i + 1) + 1 + q_i\f$ if \f$d_i > 0\f$, and \f$d'_i = p_i + q_i\f$ if \f$d_i = 0\f$.
///
/// Example: given a 3x3 tensor, with interior-padding sizes of `{1,2}`, padding-below of `{1,2}`, padding-above of `{1,0}`, and a pad-value of `42`, we obtain:
///
/// ```
/// 42 42 42 42 42 42 42 42 42
/// 42 42 1 42 42 2 42 42 3
/// 1 2 3 42 42 42 42 42 42 42 42 42
/// 4 5 6 --> 42 42 4 42 42 5 42 42 6
/// 7 8 9 42 42 42 42 42 42 42 42 42
/// 42 42 7 42 42 8 42 42 9
/// 42 42 42 42 42 42 42 42 42
/// ```
///
/// In other words we have inserted one new row between each pair of adjacent rows, two new columns between each pair of adjacent columns, one new row at
/// the top and two new columns on the left, and one new row at the bottom and zero new columns on the right; then filled the new rows and columns with `42`.
///
/// (Note that `below` and `above` here refer respectively to lower- or higher-numbered coordinate indices, and numbering starts at the upper-left corner;
/// thus inserting a row "below" actually inserts it at the "top" of the matrix.)
///
class Pad : public util::RequiresTensorViewArgs
{
public:
......