Commit d603d3e3 authored by Adam Procter's avatar Adam Procter Committed by Scott Cyphers

clang-format comments: /src/ngraph/builder (#3472)

* Plant an updated .clang-format in src/ngraph, with overrides in each subdir thereof

* Whoops, forgot to commit the actual .clang-formats

* Remove src/ngraph/type/.clang-format (it was having no effect)

* Remove src/ngraph/distributed/.clang-format (it was having no effect)

* Remove src/ngraph/codegen/.clang-format (only one file was affected, so it's a wash)

* Remove src/ngraph/autodiff/.clang-format (only one file was affected, so it's a wash)

* Un-relax comment wrapping in src/ngraph/state

* Revert "Un-relax comment wrapping in src/ngraph/state"

This reverts commit 41fc50fb92bffb7f5aca4126eb1267f00dcca727.

* Un-relax comment wrapping in src/ngraph/builder

* Remove .clang-format
parent 1475a9c3
#
# OVERRIDE TO STYLE: Comments do *not* wrap.
#
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
Language: Cpp
Standard: Cpp11
AccessModifierOffset: -4
AlignConsecutiveDeclarations: false
AlignConsecutiveAssignments: false
AlignTrailingComments: true
AllowShortBlocksOnASingleLine: true
AllowShortCaseLabelsOnASingleLine: true
AllowShortFunctionsOnASingleLine: Inline
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: true
BinPackArguments: false
BinPackParameters: false
BreakBeforeBraces: Allman
BreakConstructorInitializersBeforeComma: true
ColumnLimit: 100
CommentPragmas: '.*'
IndentCaseLabels: false
IndentWrappedFunctionNames: true
KeepEmptyLinesAtTheStartOfBlocks: false
NamespaceIndentation: All
PointerAlignment: Left
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesInAngles: false
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
SortIncludes: false
ReflowComments: true
IncludeCategories:
- Regex: '^".*'
Priority: 3
- Regex: '^<.*'
Priority: 2
SortIncludes: true
@@ -68,11 +68,12 @@ namespace ngraph
ngraph::Shape m_final_shape;
};
/// \brief Compute the details regarding what reshape and/or broadcast operations must be applied to
/// arg1 and/or arg2, as well as what the final resulting shape shall be.
/// \brief Compute the details regarding what reshape and/or broadcast operations must be
/// applied to arg1 and/or arg2, as well as what the final resulting shape shall
/// be.
///
/// If this algorithm cannot handle the particular combination of shapes supplied as inputs, throw
/// an ngraph::builder::autobroadcast_incompatible_shapes exception.
/// If this algorithm cannot handle the particular combination of shapes supplied as
/// inputs, throw an ngraph::builder::autobroadcast_incompatible_shapes exception.
///
/// \exception ngraph::builder::autobroadcast_incompatible_shapes
static Autobroadcast_plan
......
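The NumPy-style rule these comments describe can be sketched independently of ngraph: the two shapes are right-aligned, each dimension pair must be equal or contain a 1, and the result takes the larger of each pair. A minimal standalone sketch (the `Shape` alias and `numpy_broadcast_shape` name here are illustrative, not ngraph's API):

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

using Shape = std::vector<size_t>;

// Right-align the two shapes by left-padding the shorter one with 1s, then
// take the larger of each dimension pair; a pair that is unequal with neither
// side 1 is an error (analogous to autobroadcast_incompatible_shapes).
Shape numpy_broadcast_shape(Shape a, Shape b)
{
    while (a.size() < b.size())
        a.insert(a.begin(), 1);
    while (b.size() < a.size())
        b.insert(b.begin(), 1);

    Shape result(a.size());
    for (size_t i = 0; i < a.size(); ++i)
    {
        if (a[i] != b[i] && a[i] != 1 && b[i] != 1)
            throw std::invalid_argument("incompatible shapes");
        result[i] = std::max(a[i], b[i]);
    }
    return result;
}
```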
@@ -48,8 +48,8 @@ namespace ngraph
/// The elements in the std::pair returned by this function correspond to those supplied
/// in the std::pair provided via \p args.
///
/// If \p args.first and \p args.second produce identical shapes, then the returned std::pair
/// will have the same value as \p args.
/// If \p args.first and \p args.second produce identical shapes, then the returned
/// std::pair will have the same value as \p args.
///
/// If \p args.first and \p args.second produce different shapes, then this function creates
/// new ngraph::op::Reshape and/or ngraph::op::Broadcast nodes, as needed, to wrap
@@ -73,14 +73,16 @@ namespace ngraph
std::pair<std::shared_ptr<Node>, std::shared_ptr<Node>>
numpy_broadcast(const std::pair<Output<Node>, Output<Node>>& args);
/// Create a new \p NodeType node, and any additional nodes required to simulate NumPy-style autobroadcast
/// semantics. Intended for binary operations such as "Add".
/// Create a new \p NodeType node, and any additional nodes required to simulate NumPy-style
/// autobroadcast semantics. Intended for binary operations such as "Add".
///
/// \param [in] operand1_reshapeable The first operand to supply to the \p NodeType constructor. Subject to
/// being wrapped with additional nodes required for autobroadcasting. Must not be null.
/// \param [in] operand1_reshapeable The first operand to supply to the \p NodeType
/// constructor. Subject to being wrapped with additional
/// nodes required for autobroadcasting. Must not be null.
///
/// \param [in] operand2_reshapeable The second operand to supply to the \p NodeType constructor. Subject to
/// being wrapped with additional nodes required for autobroadcasting. Must not be null.
/// \param [in] operand2_reshapeable The second operand to supply to the \p NodeType
/// constructor. Subject to being wrapped with additional
/// nodes required for autobroadcasting. Must not be null.
///
/// \return The sink node of any/all nodes created by this function. Will never be null.
///
@@ -94,18 +96,20 @@ namespace ngraph
return std::make_shared<NodeType>(shaped_op1_op2.first, shaped_op1_op2.second);
}
/// Create a new \p NodeType node, and any additional nodes required to simulate NumPy-style autobroadcast
/// semantics. Intended for non-binary operations such as "Select", where precisely the second and third
/// operands are subject to autobroadcast semantics.
/// Create a new \p NodeType node, and any additional nodes required to simulate NumPy-style
/// autobroadcast semantics. Intended for non-binary operations such as "Select", where
/// precisely the second and third operands are subject to autobroadcast semantics.
///
/// \param [in] operand1 This operand is not subject to autobroadcast logic, and will be passed as-is as
/// the first argument to the \p NodeType constructor.
/// \param [in] operand1 This operand is not subject to autobroadcast logic, and will be
/// passed as-is as the first argument to the \p NodeType constructor.
///
/// \param [in] operand2_reshapeable The second operand to supply to the \p NodeType constructor. Subject to
/// being wrapped with additional nodes required for autobroadcasting. Must not be null.
/// \param [in] operand2_reshapeable The second operand to supply to the \p NodeType
/// constructor. Subject to being wrapped with additional
/// nodes required for autobroadcasting. Must not be null.
///
/// \param [in] operand3_reshapeable The third operand to supply to the \p NodeType constructor. Subject to
/// being wrapped with additional nodes required for autobroadcasting. Must not be null.
/// \param [in] operand3_reshapeable The third operand to supply to the \p NodeType
/// constructor. Subject to being wrapped with additional
/// nodes required for autobroadcasting. Must not be null.
///
/// \return The sink node of any/all nodes created by this function. Will never be null.
///
......
@@ -23,6 +23,7 @@ namespace ngraph
{
namespace builder
{
// clang-format off
/// \brief Implements Numpy's multidimensional transpose op. Doubles as DimShuffle.
///
/// If `order` is empty, the vector is transposed by reversing its axes, i.e.
@@ -30,8 +31,8 @@ namespace ngraph
/// shape [1,2,4] becomes shape [4,2,1]
///
/// If `order` is provided, it should be a vector of unique axis positions ranging
/// from 0 to N-1, where N is the length of the input shape. In this case, numpy_transpose acts
/// like dimshuffle, so
/// from 0 to N-1, where N is the length of the input shape. In this case, numpy_transpose
/// acts like dimshuffle, so
///
/// shape [1,2,4] with order [1,2,0] becomes shape [2,4,1]
///
@@ -45,6 +46,7 @@ namespace ngraph
/// | Type | Description |
/// | ---------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
/// | \f$E[d_{n-1},\dots,d_0)]\textit{ or }E[d_{order[0]},\dots,d_{order[n-1]}]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the axes reordered via Numpy Transpose rules |
// clang-format on
std::shared_ptr<Node> numpy_transpose(const Output<Node>& value, AxisVector order = {});
} // namespace builder
} // namespace ngraph
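The shape arithmetic the comment describes (reverse the axes when `order` is empty, dimshuffle otherwise) can be sketched on its own; `transposed_shape` is a hypothetical helper for illustration, not part of the builder API:

```cpp
#include <cstddef>
#include <vector>

using Shape = std::vector<size_t>;
using AxisVector = std::vector<size_t>;

// With an empty order, reverse the axes; otherwise pick the input dimensions
// in the given order (dimshuffle), matching the examples in the comment.
Shape transposed_shape(const Shape& in, const AxisVector& order = {})
{
    if (order.empty())
        return Shape(in.rbegin(), in.rend());
    Shape out;
    for (size_t axis : order)
        out.push_back(in.at(axis));
    return out;
}
```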
@@ -23,6 +23,7 @@ namespace ngraph
{
namespace builder
{
// clang-format off
/// \brief Sum-based L2 Norm of a Tensor.
///
/// Calculates
@@ -33,18 +34,20 @@ namespace ngraph
///
/// ## Inputs
///
/// | | Type | Description |
/// | ---------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------- |
/// | `value` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
/// | | Type | Description |
/// | ---------------- | --------------------------------- | ---------------------------------------------------- |
/// | `value` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape |
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
///
/// ## Output
///
/// | Type | Description |
/// | ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
/// | \f$E[\textit{delete}(A,d_1,\dots,d_n)]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the `reduction_axes` \f$A\f$ eliminated by reduction. |
// clang-format on
std::shared_ptr<Node> l2_norm(const Output<Node>& value, const AxisSet& reduction_axes);
// clang-format off
/// \brief Sum-based Mean of a Tensor.
///
/// Calculates
@@ -55,18 +58,20 @@ namespace ngraph
///
/// ## Inputs
///
/// | | Type | Description |
/// | ---------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------- |
/// | `node` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
/// |                  | Type                              | Description                                          |
/// | ---------------- | --------------------------------- | ---------------------------------------------------- |
/// | `node` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape |
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
///
/// ## Output
///
/// | Type | Description |
/// | ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
/// | \f$E[\textit{delete}(A,d_1,\dots,d_n)]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the `reduction_axes` \f$A\f$ eliminated by reduction. |
// clang-format on
std::shared_ptr<Node> mean(const Output<Node>& node, const AxisSet& reduction_axes);
// clang-format off
/// \brief Sum-based Standard Deviation of a Tensor.
///
/// If bessel_correction is true, calculates
@@ -81,21 +86,23 @@ namespace ngraph
///
/// ## Inputs
///
/// | | Type | Description |
/// | ------------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------- |
/// | `value` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
/// | `bessel_correction` | bool (default = false) | Enable Bessel's correction to std_dev for small sample sizes |
/// | | Type | Description |
/// | ------------------- | --------------------------------- | ------------------------------------------------------------ |
/// | `value` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape |
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
/// | `bessel_correction` | bool (default = false)            | Enable Bessel's correction to std_dev for small sample sizes |
///
/// ## Output
///
/// | Type | Description |
/// | ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
/// | \f$E[\textit{delete}(A,d_1,\dots,d_n)]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the `reduction_axes` \f$A\f$ eliminated by reduction. |
// clang-format on
std::shared_ptr<Node> std_dev(const Output<Node>& value,
const AxisSet& reduction_axes,
const bool bessel_correction = false);
// clang-format off
/// \brief Sum-based Variance of a Tensor.
///
/// If bessel_correction is true, calculates
@@ -110,17 +117,18 @@ namespace ngraph
///
/// ## Inputs
///
/// | | Type | Description |
/// | ------------------- | --------------------------------- | ----------------------------------------------------------------------------------------------------- |
/// | `value`             | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
/// | `bessel_correction` | bool (default = false) | Enable Bessel's correction to std_dev for small sample sizes |
/// | | Type | Description |
/// | ------------------- | --------------------------------- | ------------------------------------------------------------ |
/// | `value`             | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | An input tensor of any shape                                 |
/// | `reduction_axes` | AxesSet | The axes to eliminate through reduction (0 indexed). |
/// | `bessel_correction` | bool (default = false)            | Enable Bessel's correction to variance for small sample sizes |
///
/// ## Output
///
/// | Type | Description |
/// | ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
/// | \f$E[\textit{delete}(A,d_1,\dots,d_n)]\f$ | The tensor \f$T\f$, where \f$T\f$ is the input tensor with the `reduction_axes` \f$A\f$ eliminated by reduction. |
// clang-format on
std::shared_ptr<Node> variance(const Output<Node>& value,
const AxisSet& reduction_axes,
const bool bessel_correction = false);
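The formulas behind these reduction builders can be checked with a scalar sketch for the full-reduction case (every axis in `reduction_axes`); this is plain arithmetic for illustration, not the ngraph graph construction:

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// mean = sum / N; variance = sum of squared deviations / N (or N-1 when
// Bessel's correction is enabled); std_dev = sqrt(variance).
double mean_of(const std::vector<double>& v)
{
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

double variance_of(const std::vector<double>& v, bool bessel_correction = false)
{
    double m = mean_of(v);
    double sq = 0.0;
    for (double x : v)
        sq += (x - m) * (x - m);
    return sq / (bessel_correction ? v.size() - 1 : v.size());
}

double std_dev_of(const std::vector<double>& v, bool bessel_correction = false)
{
    return std::sqrt(variance_of(v, bessel_correction));
}
```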
......
@@ -64,8 +64,9 @@ shared_ptr<Node> builder::flatten(const Output<Node>& value, int axis)
{
auto data_shape = value.get_shape();
// First dimension of output tensor is the product of [d_0, ... d_{axis-1}] dimensions of input tensor.
// The last dimension is the product of the rest of input tensor dimensions: [d_{axis}, ..., d_n]
// First dimension of output tensor is the product of [d_0, ... d_{axis-1}] dimensions of input
// tensor. The last dimension is the product of the rest of input tensor dimensions:
// [d_{axis}, ..., d_n]
size_t first_dim_size =
accumulate(begin(data_shape), next(begin(data_shape), axis), 1UL, multiplies<size_t>());
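The two products this comment describes can be sketched in isolation (a hypothetical `flatten_dims` helper, not ngraph's `builder::flatten`):

```cpp
#include <cstddef>
#include <functional>
#include <iterator>
#include <numeric>
#include <utility>
#include <vector>

// first = d_0 * ... * d_{axis-1}, last = d_{axis} * ... * d_n, so flatten
// reshapes any tensor into a 2-D [first, last] matrix.
std::pair<size_t, size_t> flatten_dims(const std::vector<size_t>& shape, int axis)
{
    auto split = std::next(shape.begin(), axis);
    size_t first =
        std::accumulate(shape.begin(), split, size_t{1}, std::multiplies<size_t>());
    size_t last =
        std::accumulate(split, shape.end(), size_t{1}, std::multiplies<size_t>());
    return {first, last};
}
```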
......