Unverified Commit fd3147bc authored by Jayaram Bobba, committed by GitHub

Merge branch 'master' into jmenon/cpu_layout_infra

parents 98445357 88bc37f1
.. broadcast.rst:
#########
Broadcast
#########
Description
===========
Operation that broadcasts ``arg`` to the output ``shape``: values are
replicated along the axes listed in ``broadcast_axes``, so the value of
``output`` at a coordinate does not depend on that coordinate's components
along those axes.
Inputs
------
+-----------------+-------------------------+--------------------------------+
| Name            | Element Type            | Shape                          |
+=================+=========================+================================+
| ``arg``         | Any                     | Any                            |
+-----------------+-------------------------+--------------------------------+
Attributes
----------
+---------------------+---------------+------------------------------------+
| Name                | Type          | Notes                              |
+=====================+===============+====================================+
| ``shape``           | ``Shape``     | The shape of the output.           |
+---------------------+---------------+------------------------------------+
| ``broadcast_axes``  | ``AxisSet``   | Axis positions in ``shape`` that   |
|                     |               | are broadcast.                     |
+---------------------+---------------+------------------------------------+
Outputs
-------
+-----------------+-------------------------+--------------------------------+
| Name            | Element Type            | Shape                          |
+=================+=========================+================================+
| ``output``      | Same as ``arg``         | Same as ``shape``.             |
+-----------------+-------------------------+--------------------------------+
The shape of ``arg`` must be the same as ``shape`` with the axes at the positions in ``broadcast_axes`` deleted.
For example, if ``arg`` is :math:`[a, b, c]` then

.. math::

   \texttt{Broadcast(arg, Shape\{2, 3\}, AxisSet\{0\})} &=
   \begin{bmatrix}
   a & b & c\\
   a & b & c
   \end{bmatrix}\\
   \texttt{Broadcast(arg, Shape\{3, 2\}, AxisSet\{1\})} &=
   \begin{bmatrix}
   a & a\\
   b & b\\
   c & c
   \end{bmatrix}
Mathematical Definition
=======================
For a coordinate :math:`C`, let :math:`p(C)` be a coordinate with the
axes in ``broadcast_axes`` removed. For example, if
:math:`\texttt{broadcast_axes}=\{1,3\}` then :math:`p([d_0, d_1,
d_2, d_3, d_4]) = [d_0, d_2, d_4]`. Then

.. math::

   \texttt{output}_C = \texttt{arg}_{p(C)}.
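
The coordinate mapping above can be spelled out in plain C++ on row-major
buffers. The sketch below illustrates the semantics only; the helper names
``delete_axes``, ``flat_index``, and ``broadcast`` are ours and are not part
of the nGraph API.

.. code-block:: cpp

   #include <cstddef>
   #include <set>
   #include <vector>

   // del()/p(): delete the axes listed in `axes` from a shape or coordinate.
   std::vector<size_t> delete_axes(const std::vector<size_t>& coord, const std::set<size_t>& axes)
   {
       std::vector<size_t> out;
       for (size_t i = 0; i < coord.size(); ++i)
           if (axes.count(i) == 0)
               out.push_back(coord[i]);
       return out;
   }

   // Row-major offset of a coordinate within a shape.
   size_t flat_index(const std::vector<size_t>& coord, const std::vector<size_t>& shape)
   {
       size_t idx = 0;
       for (size_t i = 0; i < shape.size(); ++i)
           idx = idx * shape[i] + coord[i];
       return idx;
   }

   // output_C = arg_{p(C)} for every coordinate C of `shape`.
   std::vector<float> broadcast(const std::vector<float>& arg,
                                const std::vector<size_t>& shape,
                                const std::set<size_t>& broadcast_axes)
   {
       std::vector<size_t> arg_shape = delete_axes(shape, broadcast_axes);
       size_t count = 1;
       for (size_t d : shape)
           count *= d;
       std::vector<float> output(count);
       std::vector<size_t> C(shape.size(), 0);
       for (size_t n = 0; n < count; ++n)
       {
           output[flat_index(C, shape)] =
               arg[flat_index(delete_axes(C, broadcast_axes), arg_shape)];
           // Advance C to the next coordinate in row-major order.
           for (size_t i = shape.size(); i-- > 0;)
               if (++C[i] < shape[i])
                   break;
               else
                   C[i] = 0;
       }
       return output;
   }

With the first example above, broadcasting :math:`[a, b, c]` to ``Shape{2, 3}``
with ``AxisSet{0}`` yields :math:`[a, b, c, a, b, c]` in row-major order, i.e.
the :math:`2 \times 3` matrix shown in the Description.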
Backprop
========

.. math::

   \overline{\texttt{arg}} \leftarrow \texttt{Sum}(\Delta, \texttt{broadcast_axes}).
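
As an illustration (not part of the original description), for the
``Shape{2, 3}``, ``AxisSet{0}`` example above the adjoint of ``arg`` is the
sum of :math:`\Delta` over the broadcast axis:

.. math::

   \Delta = \begin{bmatrix}
   \delta_{00} & \delta_{01} & \delta_{02}\\
   \delta_{10} & \delta_{11} & \delta_{12}
   \end{bmatrix}
   \quad\Longrightarrow\quad
   \overline{\texttt{arg}} =
   [\delta_{00}+\delta_{10},\;
   \delta_{01}+\delta_{11},\;
   \delta_{02}+\delta_{12}]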
C++ Interface
=============
.. doxygenclass:: ngraph::op::Broadcast
:members:
.. ceiling.rst:
#######
Ceiling
#######
Description
===========
Elementwise ceiling operation.
Produces a single output tensor of the same element type and shape as ``arg``,
where the value at each coordinate of ``output`` is the ceiling of the value
at the corresponding coordinate of ``arg``.
Inputs
------
+-----------------+-------------------------+--------------------------------+
| Name            | Element Type            | Shape                          |
+=================+=========================+================================+
| ``arg``         | Any                     | Any                            |
+-----------------+-------------------------+--------------------------------+
Outputs
-------
+-----------------+-------------------------+--------------------------------+
| Name            | Element Type            | Shape                          |
+=================+=========================+================================+
| ``output``      | Same as ``arg``         | Same as ``arg``.               |
+-----------------+-------------------------+--------------------------------+
Mathematical Definition
=======================

.. math::

   \mathtt{output}_{i_0, \ldots, i_{n-1}} = \lceil \mathtt{arg}_{i_0, \ldots, i_{n-1}}\rceil
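
The elementwise definition corresponds to applying ``std::ceil`` per element.
The snippet below is an illustrative sketch on a flat ``float`` buffer, not
the nGraph kernel:

.. code-block:: cpp

   #include <algorithm>
   #include <cmath>
   #include <vector>

   // output_i = ceil(arg_i); element type and shape are unchanged.
   std::vector<float> ceiling(const std::vector<float>& arg)
   {
       std::vector<float> output(arg.size());
       std::transform(arg.begin(), arg.end(), output.begin(),
                      [](float x) { return std::ceil(x); });
       return output;
   }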
Backprop
========
Not defined by nGraph.
The backprop would be zero for non-integer input and undefined for integer
input; since a zero value contributes nothing to the gradient of ``arg``,
there is no need for ``Ceiling`` to define a backprop.
C++ Interface
=============
.. doxygenclass:: ngraph::op::Ceiling
:members:
.. concatenate.rst:
###########
Concatenate
###########
Description
===========
Produces a single output tensor whose element type matches that of the tensors
in ``args``, formed by concatenating the ``args`` tensors along
``concatenation_axis``.
Inputs
------
+-----------------+-----------------+-------------------------------------------------------+
| Name            | Type            | Notes                                                 |
+=================+=================+=======================================================+
| ``args``        | ``Nodes``       | All element types the same.                           |
|                 |                 | All shapes the same except on ``concatenation_axis``. |
+-----------------+-----------------+-------------------------------------------------------+
Attributes
----------
+-------------------------+------------------------------------+
| Name                    | Notes                              |
+=========================+====================================+
| ``concatenation_axis``  | Less than the rank of the inputs.  |
+-------------------------+------------------------------------+
Outputs
-------
+-----------------+-------------------------+---------------------------------------------------+
| Name            | Element Type            | Shape                                             |
+=================+=========================+===================================================+
| ``output``      | Same as ``args``        | Same as ``args`` on non-``concatenation_axis``    |
|                 |                         | axes; sum of ``concatenation_axis`` lengths       |
|                 |                         | of ``args``.                                      |
+-----------------+-------------------------+---------------------------------------------------+
Mathematical Definition
=======================
We map each tensor in ``args`` to a segment of ``output`` based on the
coordinate at ``concatenation_axis``.
Let

.. math::

   s(i) &= \sum_{j<i} \texttt{args}[j].\texttt{shape}\left[\texttt{concatenation_axis}\right]\\
   t(i) &= \text{the greatest }j\text{ such that }i \ge s(j)\\
   p(C)_i &= \begin{cases}
   C_i-s(t(C_i))&\text{if }i=\texttt{concatenation_axis}\\
   C_i&\text{otherwise}
   \end{cases}\\
   \texttt{output}_C &= \texttt{args}\left[t(C_{\texttt{concatenation_axis}})\right]_{p(C)}
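
The :math:`s`, :math:`t`, and :math:`p` mappings translate directly to plain
C++ on row-major buffers. The sketch below is illustrative only; the names
``flat_index`` and ``concatenate`` are ours, not the nGraph implementation.

.. code-block:: cpp

   #include <cstddef>
   #include <vector>

   // Row-major offset of a coordinate within a shape.
   size_t flat_index(const std::vector<size_t>& coord, const std::vector<size_t>& shape)
   {
       size_t idx = 0;
       for (size_t i = 0; i < shape.size(); ++i)
           idx = idx * shape[i] + coord[i];
       return idx;
   }

   // Concatenate tensors (shapes equal except at `axis`) along `axis`.
   std::vector<float> concatenate(const std::vector<std::vector<float>>& args,
                                  const std::vector<std::vector<size_t>>& shapes,
                                  size_t axis)
   {
       // Output shape: same as the inputs except at `axis`, where lengths are summed.
       std::vector<size_t> out_shape = shapes[0];
       out_shape[axis] = 0;
       for (const auto& s : shapes)
           out_shape[axis] += s[axis];

       size_t count = 1;
       for (size_t d : out_shape)
           count *= d;
       std::vector<float> output(count);

       std::vector<size_t> C(out_shape.size(), 0);
       for (size_t n = 0; n < count; ++n)
       {
           // t(C_axis): which input owns this position; `offset` is s(t) for that input.
           size_t i = 0, offset = 0;
           while (C[axis] - offset >= shapes[i][axis])
           {
               offset += shapes[i][axis];
               ++i;
           }
           std::vector<size_t> p = C; // p(C): same coordinate, shifted back into
           p[axis] -= offset;         // args[i] along `axis`.
           output[flat_index(C, out_shape)] = args[i][flat_index(p, shapes[i])];
           // Advance C to the next coordinate in row-major order.
           for (size_t k = out_shape.size(); k-- > 0;)
               if (++C[k] < out_shape[k])
                   break;
               else
                   C[k] = 0;
       }
       return output;
   }

Concatenating two matrices with ``axis = 0`` stacks them top to bottom; with
``axis = 1`` it places them side by side.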
Backprop
========
We slice the backprop value into the backprops associated with the inputs.
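
Concretely (our notation), input :math:`\texttt{args}[i]` receives the slice
of :math:`\Delta` between offsets :math:`s(i)` and :math:`s(i+1)` along
``concatenation_axis``:

.. math::

   \overline{\texttt{args}[i]}_C = \Delta_{C'},\quad
   C'_j = \begin{cases}
   C_j + s(i)&\text{if }j=\texttt{concatenation_axis}\\
   C_j&\text{otherwise}
   \end{cases}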
C++ Interface
=============
.. doxygenclass:: ngraph::op::Concatenate
:members:
......@@ -55,4 +55,7 @@ Not currently a comprehensive list.
atan.rst
avg_pool.rst
avg_pool_backprop.rst
broadcast.rst
ceiling.rst
concatenate.rst
convolution.rst
......@@ -23,34 +23,6 @@ namespace ngraph
namespace op
{
/// \brief Operation which "adds" axes to an input tensor, replicating elements from the input as needed along the new axes.
///
/// Informally, a broadcast "adds" axes to the input tensor, replicating elements from the input tensor as needed to fill the new dimensions.
/// The parameter `m_broadcast_axes` indicates which of the output axes is being so added. For example, an output shape of `{2,5,6,2,8}` and
/// broadcast axes of `{1,3,4}` means that the input must have shape `{2,6}`.
///
/// Formally, given a shape or coordinate \f$S = [d_1,\dots,d_n]\f$ and a set of axis indices \f$A\f$, define \f$\textit{del}(S,A)\f$ to be
/// the shape or coordinate obtained by deleting the \f$(a + 1)\f$th dimension from \f$S\f$ for each \f$a \in A\f$. Then given an input
/// tensor \f$T\f$ of shape \f$\textit{del}(S,A)\f$ with element type \f$E\f$, broadcasting axes \f$A\f$ produces a tensor \f$T'\f$ of shape
/// \f$S\f$ with element type \f$E\f$, where \f$T'[i_1,\dots,i_n] = T[del([i_1,\dots,i_n],A)]\f$.
///
/// ## Parameters
///
/// | | Description |
/// | ---------------- | ------------------------------------------------------------------------ |
/// | `shape` | The shape \f$[d_1,\dots,d_n]\f$ of the broadcasted output. |
/// | `broadcast_axes` | The indices \f$A\f$ in the `shape` of each broadcasted (i.e., new) axis. |
///
/// ## Inputs
///
/// | | Type | Description |
/// | ----- | --------------------------------------------------- | --------------------------------------- |
/// | `arg` | \f$E[\mathit{del}([d_1,\dots,d_n],A)]~(n \geq 0)\f$ | A tensor of any shape and element type. |
///
/// ## Output
///
/// | Type | Description |
/// | ---------------------- | ------------------------------------------------------------------------------- |
/// | \f$E[d_1,\dots,d_n]\f$ | The tensor \f$T'\f$, where \f$T'[i_1,\dots,i_n] = T[del([i_1,\dots,i_n],A)]\f$. |
class Broadcast : public RequiresTensorViewArgs
{
public:
......
......@@ -23,18 +23,6 @@ namespace ngraph
namespace op
{
/// \brief Elementwise ceiling operation.
///
/// ## Inputs
///
/// | | Type | Description |
/// | ----- | --------------------------------- | ----------------------------------------------- |
/// | `arg` | \f$N[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of any shape and numeric element type. |
///
/// ## Output
///
/// | Type | Description |
/// | ---------------------- | -------------------------------------------------------------------------------------------- |
/// | \f$N[d_1,\dots,d_n]\f$ | The tensor \f$T\f$, where \f$T[i_1,\dots,i_n] = \lceil \texttt{arg}[i_1,\dots,i_n] \rceil\f$ |
class Ceiling : public UnaryElementwiseArithmetic
{
public:
......
......@@ -25,40 +25,6 @@ namespace ngraph
namespace op
{
/// \brief Concatenation operation.
///
/// Given an axis index \f$a\f$ and a rank \f$r \geq 1\f$ where \f$0 \leq a \lt r\f$, and one or more \f$r\f$-tensors
/// with the same element type whose shapes are the same except possibly at axis \f$a\f$, the tensors are
/// concatenated along axis \f$a\f$.
///
/// For example:
/// 1. Concatenating matrices on axis 0 (the row axis) stacks the matrices from top to bottom.
/// The number of rows in the resulting matrix is the sum of the number of rows for each
/// input matrix.
/// 2. Concatenating matrices on axis 1 (the column axis) concatenates them from left to right.
/// The number of columns in the resulting matrix is the sum of the number of columns for each
/// input matrix.
/// 3. Concatenating 3-tensors on axis 2 (the depth axis) stacks them from front to back.
/// The depth of the resulting tensor is the sum of the total depth for each input tensor.
///
/// The resulting tensor will have the same rank as the input tensors.
///
/// ## Parameters
///
/// | | Description |
/// | -------------------- | -------------------------------------------------------------- |
/// | `concatenation_axis` | The axis \f$a\f$ along which to concatenate the input tensors. |
///
/// ## Inputs
///
/// | | Type | Description |
/// | --------------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
/// | `args`[\f$i\f$] | \f$E[d_1,\dots,d_{a-1},d^i_a,d_{a+1},\dots,d_n]~(n \geq 1)\f$ | One or more input tensors, all of which have the same element type, and the same shape, except possibly at axis \f$a\f$. |
///
/// ## Output
///
/// | Type | Description |
/// | ------------------------------------------------------------ | ----------------------------------------------------------------------------------------------- |
/// | \f$E[d_1,\dots,d_{a-1},\Sigma_i(d^i_a),d_{a+1},\dots,d_n]\f$ | The tensor \f$T\f$, where \f$T\f$ is the concatenation of the input tensors along axis \f$a\f$. |
class Concat : public RequiresTensorViewArgs
{
public:
......