Unverified Commit 8d938e32 authored by Scott Cyphers, committed by GitHub

Doc for abs, update doxygen for ops with .rst doc (#481)

* Doc for abs, update doxygen for ops with .rst doc

* Review
parent 19802062
@@ -63,6 +63,7 @@ Further overview details can be found on our :doc:`about` page.
   :maxdepth: 1
   :caption: Ops

   ops/abs.rst
   ops/convolution.rst

.. toctree::
@@ -86,4 +87,4 @@ Indices and tables
* :ref:`search`
* :ref:`genindex`
\ No newline at end of file
.. abs.rst:
###
Abs
###

Elementwise absolute value operation.

Produces a single output tensor of the same element type and shape as the input,
where the value at each coordinate of the output is the absolute value of the
value at the corresponding input coordinate.

+-----------------+-------------------------+--------------------------------+
| Input Name      | Element Type            | Shape                          |
+=================+=========================+================================+
| ``input``       | Any                     | Any                            |
+-----------------+-------------------------+--------------------------------+

+------------------+-------------------------+----------------------------------------------------+
| Output Name      | Element Type            | Shape                                              |
+==================+=========================+====================================================+
| ``output``       | Same as ``input``       | Same as ``input``                                  |
+------------------+-------------------------+----------------------------------------------------+

Mathematical Definition
=======================

.. math::

   output_{i_0, \ldots, i_{n-1}} = \mathrm{abs}(input_{i_0, \ldots, i_{n-1}})

Backprop
========

.. math::

   \overline{input} \leftarrow \mathrm{sgn}(input)\Delta
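
For example (an illustrative case, not part of the original document): for
:math:`input = (-1, 0, 2)` the forward output is :math:`(1, 0, 2)`, and given an output
adjoint :math:`\Delta = (\delta_0, \delta_1, \delta_2)` the backprop rule yields
:math:`\overline{input} = (-\delta_0, 0, \delta_2)`.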

C++ Interface
=============

.. doxygenclass:: ngraph::op::Abs
   :members:
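
A minimal usage sketch of constructing this op from C++ (the ``op::Parameter`` input, the
element type, and the shape below are illustrative assumptions, not taken from this document):

.. code-block:: cpp

   #include <memory>
   #include "ngraph/ngraph.hpp"

   // Create an input node and take its elementwise absolute value.
   // Element type and shape are chosen only for illustration.
   auto arg = std::make_shared<ngraph::op::Parameter>(ngraph::element::f32,
                                                      ngraph::Shape{2, 3});
   auto abs = std::make_shared<ngraph::op::Abs>(arg);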

Python Interface
================

The Python interface is not merged yet, but could go here!
@@ -26,23 +26,15 @@ namespace ngraph
{
/// \brief Elementwise absolute value operation.
///
/// ## Inputs
///
/// | | Type | Description |
/// | ----- | --------------------------------- | ----------------------------------------------- |
/// | `arg` | \f$N[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of any shape and numeric element type. |
///
/// ## Output
///
/// | Type | Description |
/// | ---------------------- | -------------------------------------------------------------------------------- |
/// | \f$N[d_1,\dots,d_n]\f$ | The tensor \f$T\f$, where \f$T[i_1,\dots,i_n] = |\texttt{arg}[i_1,\dots,i_n]|\f$ |
class Abs : public UnaryElementwiseArithmetic
{
public:
/// \brief Constructs an absolute value operation.
///
/// \param arg Node that produces the input tensor.
/// Output `[d1, ...]`
///
/// \param arg Node that produces the input tensor.<br>
/// `[d1, ...]`
Abs(const std::shared_ptr<Node>& arg)
: UnaryElementwiseArithmetic("Abs", arg)
{
......
@@ -24,48 +24,27 @@ namespace ngraph
{
/// \brief Batched convolution operation, with optional window dilation and stride.
///
/// Convolution takes two inputs:
///
    /// 1. <i>(the data batch)</i> a tensor of shape \f$(N,C_\textit{in},d_1,\dots,d_n)\f$ where \f$n > 0\f$, every \f$d_i > 0\f$, and where \f$N\f$ is the batch size
/// (number of data items) and \f$C_\textit{in} > 0\f$ is the number of input channels (sometimes called features); and
/// 2. <i>(the filters)</i> a tensor of shape \f$(C_\textit{out},C_\textit{in},d^f_1,\dots,d^f_n)\f$, where \f$C_\textit{out} > 0\f$ is the number of output channels
    /// (sometimes called features) and \f$(d^f_1,\dots,d^f_n)\f$ are the filter dimensions. It is required that for all \f$i\f$, \f$0 < l_i(d^f_i - 1) + 1 \le (d_i - 1)*t_i + 1\f$.
/// (See below for the definition of the window dilation \f$l_i\f$ and the data dilation \f$t_i\f$);
///
/// and five optional parameters:
///
/// 3. <i>(the window movement strides)</i> a vector of positive integers \f$(s_1,\dots,s_n)\f$ (default is all ones),
/// 4. <i>(the window dilation strides)</i> a vector of positive integers \f$(l_1,\dots,l_n)\f$ (default is all ones),
/// 5. <i>(the padding below)</i> a vector of (possibly negative) integers \f$(p_1,\dots,p_n)\f$ (default is all zeros),
/// 6. <i>(the padding above)</i> a vector of (possibly negative) integers \f$(q_1,\dots,q_n)\f$ (default is all zeros), and
    /// 7. <i>(the data dilation strides)</i> a vector of positive integers \f$(t_1,\dots,t_n)\f$ (default is all ones).
///
    /// The output has the shape \f$(N,C_\textit{out},d'_1,\dots,d'_n)\f$, where \f$d'_i = \lceil \frac{(d_i - 1) * t_i + 1 + p_i + q_i - l_i(d^f_i - 1)}{s_i} \rceil\f$.
///
/// Given an input data batch tensor \f$T_\textit{in}\f$, first define the <i>transformed input tensor</i> \f$T_\textit{trans}\f$, with shape \f$(N,C_\textit{in},(d_1 - 1)*t_1+1+p_1+q_1,\dots,(d_n - 1)*t_n+1+p_n+q_n)\f$, as follows:
///
    /// \f[
    ///      T_\textit{trans}[a,c,i_1,\dots,i_n] = T_\textit{in}[a,c,(i_1 - p_1)/t_1,\dots,(i_n - p_n)/t_n] \text{ if for all } k \text{, } t_k \text{ evenly divides } (i_k - p_k) \text{ and } p_k \le i_k < p_k + (d_k - 1)*t_k + 1 \text{, else } 0
    /// \f]
///
    /// Then, given an input filter tensor \f$T_\textit{filt}\f$, the output tensor \f$T_\textit{out}\f$ is defined by the equation:
///
/// \f[
/// T_\textit{out}[a,c_\textit{out},i_1,\dots,i_n] = \sum_{c_\textit{in}=0,j_1=0,\dots,j_n=0}^{c_\textit{in}=C_\textit{in}-1,j_1=d^f_1-1,\dots,j_n=d^f_n-1} (T_\textit{filt}[c_\textit{out},c_\textit{in},j_1,\dots,j_n] \cdot T_\textit{trans}[a,c_\textit{in},s_1i_1+l_1j_1,\dots,s_ni_n+l_nj_n])
/// \f]
///
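    /// As a worked example (illustrative values, not part of the original header): for a data
    /// batch of shape \f$(1,3,32,32)\f$ and filters of shape \f$(8,3,5,5)\f$, with unit window
    /// movement strides, unit window/data dilation strides, and zero padding, each output spatial
    /// dimension is \f$\lceil \frac{(32 - 1) * 1 + 1 + 0 + 0 - 1(5 - 1)}{1} \rceil = 28\f$, so the
    /// output shape is \f$(1,8,28,28)\f$.
    ///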
class Convolution : public RequiresTensorViewArgs
{
public:
/// \brief Constructs a batched convolution operation.
///
/// \param data_batch The node producing the input data batch tensor.
/// \param filters The node producing the filters tensor.
/// \param window_movement_strides The window movement strides.
/// \param window_dilation_strides The window dilation strides.
/// \param padding_below The padding-below sizes.
/// \param padding_above The padding-above sizes.
/// \param data_dilation_strides The data dilation strides.
/// Output `[N, C_OUT, R1, ... Rf]`
///
/// \param data_batch The node producing the input data batch tensor.<br>
/// `[N, C_IN, D1, ... Df]`
/// \param filters The node producing the filters tensor.<br>
/// `[C_OUT, C_IN, F1, ... Ff]`
/// \param window_movement_strides The window movement strides.<br>
/// `[f]`
/// \param window_dilation_strides The window dilation strides.<br>
/// `[f]`
/// \param padding_below The padding-below sizes.<br>
/// `[f]`
/// \param padding_above The padding-above sizes.<br>
/// `[f]`
/// \param data_dilation_strides The data dilation strides.<br>
/// `[f]`
Convolution(const std::shared_ptr<Node>& data_batch,
const std::shared_ptr<Node>& filters,
const Strides& window_movement_strides,
@@ -76,12 +55,18 @@ namespace ngraph
/// \brief Constructs a batched convolution operation with no data dilation (i.e., all data dilation strides are 1).
///
/// \param data_batch The node producing the input data batch tensor.
/// \param filters The node producing the filters tensor.
/// \param window_movement_strides The window movement strides.
/// \param window_dilation_strides The window dilation strides.
/// \param padding_below The padding-below sizes.
/// \param padding_above The padding-above sizes.
/// \param data_batch The node producing the input data batch tensor.<br>
/// `[N, C_IN, D1, ... Df]`
/// \param filters The node producing the filters tensor.<br>
/// `[C_OUT, C_IN, F1, ... Ff]`
/// \param window_movement_strides The window movement strides.<br>
/// `[f]`
/// \param window_dilation_strides The window dilation strides.<br>
/// `[f]`
/// \param padding_below The padding-below sizes.<br>
/// `[f]`
/// \param padding_above The padding-above sizes.<br>
/// `[f]`
Convolution(const std::shared_ptr<Node>& data_batch,
const std::shared_ptr<Node>& filters,
const Strides& window_movement_strides,
@@ -91,10 +76,14 @@ namespace ngraph
/// \brief Constructs a batched convolution operation with no padding or data dilation (i.e., padding above and below are 0 everywhere, and all data dilation strides are 1).
///
/// \param data_batch The node producing the input data batch tensor.
/// \param filters The node producing the filters tensor.
/// \param window_movement_strides The window movement strides.
/// \param window_dilation_strides The window dilation strides.
/// \param data_batch The node producing the input data batch tensor.<br>
/// `[N, C_IN, D1, ... Df]`
/// \param filters The node producing the filters tensor.<br>
/// `[C_OUT, C_IN, F1, ... Ff]`
/// \param window_movement_strides The window movement strides.<br>
/// `[f]`
/// \param window_dilation_strides The window dilation strides.<br>
/// `[f]`
Convolution(const std::shared_ptr<Node>& data_batch,
const std::shared_ptr<Node>& filters,
const Strides& window_movement_strides,
@@ -102,17 +91,22 @@ namespace ngraph
/// \brief Constructs a batched convolution operation with no window dilation, padding, or data dilation (i.e., padding above and below are 0 everywhere, and all window/data dilation strides are 1).
///
/// \param data_batch The node producing the input data batch tensor.
/// \param filters The node producing the filters tensor.
/// \param window_movement_strides The window movement strides.
/// \param data_batch The node producing the input data batch tensor.<br>
/// `[N, C_IN, D1, ... Df]`
/// \param filters The node producing the filters tensor.<br>
/// `[C_OUT, C_IN, F1, ... Ff]`
/// \param window_movement_strides The window movement strides.<br>
/// `[f]`
Convolution(const std::shared_ptr<Node>& data_batch,
const std::shared_ptr<Node>& filters,
const Strides& window_movement_strides);
/// \brief Constructs a batched convolution operation with no window dilation or movement stride (i.e., padding above and below are 0 everywhere, and all window/data dilation strides and window movement strides are 1).
///
/// \param data_batch The node producing the input data batch tensor.
/// \param filters The node producing the filters tensor.
/// \param data_batch The node producing the input data batch tensor.<br>
/// `[N, C_IN, D1, ... Df]`
/// \param filters The node producing the filters tensor.<br>
/// `[C_OUT, C_IN, F1, ... Ff]`
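    ///
    /// A minimal usage sketch (the element type and shapes here are illustrative assumptions,
    /// not taken from this header):
    /// \code
    /// // Shapes and element type chosen only for illustration.
    /// auto data    = std::make_shared<op::Parameter>(element::f32, Shape{1, 3, 32, 32});
    /// auto filters = std::make_shared<op::Parameter>(element::f32, Shape{8, 3, 5, 5});
    /// auto conv    = std::make_shared<op::Convolution>(data, filters);
    /// // With unit strides/dilations and zero padding, conv's output shape is {1, 8, 28, 28}.
    /// \endcode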
Convolution(const std::shared_ptr<Node>& data_batch,
const std::shared_ptr<Node>& filters);
......