1. 19 Oct, 2018 2 commits
    • Partial Shapes and Types, Part 4d: Broadcast (#1783) · 34aae47c
      Adam Procter authored
      * Adapt Tensor class to have partial shapes
      
      * Add PartialShapes to Input, Output, Function, Node classes
      
      * Terminological cleanup
      
      * Add PartialShape propagation for Parameter and Result
      
      * Implement partial-shape propagation for elementwise ops
      
      * More comments
      
      * One more comment tweak
      
      * Add tests for the merge functions
      
      * Add merging of undetermined element types
      
      * Fix a goof-up in the deserializer implementation
      
      * Implement fallback for ops that do not support partial shape/type validation
      
      * Updates for some older unit tests, now that operator[] exists
      
      * Add missing validate_punt_if_incomplete to AllReduce
      
      * Implement partial shape/type propagation for AllReduce
      
      * Implement partial shape/type propagation for Reshape
      
      * Remove unneeded validate_punt from Result
      
      * Implement partial shape/type propagation for Reverse
      
      * Implement partial shape/type validation for ReverseSequence
      
      * Implement partial shape/type validation for ArithmeticReduction
      
      * Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
      
      * One more docstring thing I forgot to save
      
      * Switch terminology from 'determined/undetermined' to 'static/dynamic'
      
      * Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
      
      * Fix overzealous edits from the last commit
      
      * Rename one test that escaped the Great Renaming
      
      * Remove unnecessary validate_punt_if_dynamic from Reshape
      
      * Fix comment typo
      
      * Rewrite operator+ and operator* for Dimension as members, not friends
      
      * Formatting tweak
      
      * Show argument types/shapes in long NodeDescription; tank unit tests to block merge
      
      * Fix dynamic element type propagation for elementwise ops, add some unit tests for same
      
      * Fix error message
      
      * Roll 'Not' back to existing behavior (non-boolean input types allowed)
      
      * Add a TODO tag to a todo item
      
      * Add unit tests for partial shape/type propagation with ReverseSequence
      
      * Add unit tests for partial-shape/type propagation for ArithmeticReduction (via Sum)
      
      * Implement partial shape/type validation for Broadcast; implement unit tests for same
      
      * Remove inapplicable TODO
      
      * Implement partial type/shape propagation for GetOutputElement
      
      * Function signatures
      
      * Add implementations, unit tests for relaxes/refines functions
      
      * Generalize project/reduce/inject functions to cover PartialShape, move to shape_util.[ch]pp
      
      * Deal with std::find_if #include issues
      
      * Fix more include madness
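The "merge functions" and the static/dynamic terminology above can be sketched concisely. This is an illustrative Python model only (not the actual nGraph C++ `Dimension`/`PartialShape` API), with `None` standing in for a dynamic dimension:

```python
def merge_dim(d1, d2):
    """Merge two dimensions; None means dynamic (unknown).

    A dynamic dimension is compatible with anything; two static
    dimensions merge only if they are equal.
    """
    if d1 is None:
        return d2
    if d2 is None or d1 == d2:
        return d1
    raise ValueError(f"inconsistent dimensions: {d1} vs {d2}")

def merge_shapes(s1, s2):
    """Merge two partial shapes of equal rank, dimension by dimension."""
    if len(s1) != len(s2):
        raise ValueError(f"rank mismatch: {len(s1)} vs {len(s2)}")
    return [merge_dim(a, b) for a, b in zip(s1, s2)]

# Two partially known shapes can jointly pin down every dimension:
print(merge_shapes([2, None, 4], [None, 3, 4]))  # [2, 3, 4]
```

Elementwise ops then validate by merging all input shapes into one output shape, which is exactly where inconsistent static dimensions are caught.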
    • Partial Shapes and Types, Part 4b: Concat (#1778) · 9aba28dc
      Adam Procter authored
      * Adapt Tensor class to have partial shapes
      
      * Add PartialShapes to Input, Output, Function, Node classes
      
      * Terminological cleanup
      
      * Add PartialShape propagation for Parameter and Result
      
      * Implement partial-shape propagation for elementwise ops
      
      * More comments
      
      * One more comment tweak
      
      * Add tests for the merge functions
      
      * Add merging of undetermined element types
      
      * Fix a goof-up in the deserializer implementation
      
      * Implement fallback for ops that do not support partial shape/type validation
      
      * Updates for some older unit tests, now that operator[] exists
      
      * Add missing validate_punt_if_incomplete to AllReduce
      
      * Implement partial shape/type propagation for AllReduce
      
      * Implement partial shape/type propagation for Reshape
      
      * Remove unneeded validate_punt from Result
      
      * Implement partial shape/type propagation for Reverse
      
      * Implement partial shape/type validation for ReverseSequence
      
      * Implement partial shape/type validation for ArithmeticReduction
      
      * Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
      
      * One more docstring thing I forgot to save
      
      * Switch terminology from 'determined/undetermined' to 'static/dynamic'
      
      * Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
      
      * Fix overzealous edits from the last commit
      
      * Rename one test that escaped the Great Renaming
      
      * Remove unnecessary validate_punt_if_dynamic from Reshape
      
      * Fix comment typo
      
      * Rewrite operator+ and operator* for Dimension as members, not friends
      
      * Formatting tweak
      
      * Show argument types/shapes in long NodeDescription; tank unit tests to block merge
      
      * Fix dynamic element type propagation for elementwise ops, add some unit tests for same
      
      * Fix error message
      
      * Roll 'Not' back to existing behavior (non-boolean input types allowed)
      
      * Add a TODO tag to a todo item
      
      * Add unit tests for partial shape/type propagation with ReverseSequence
      
      * Add unit tests for partial-shape/type propagation for ArithmeticReduction (via Sum)
      
      * Implement partial shape/type validation for concat
      
      * Fix for a corner case in concat propagation of dynamic shapes; unit tests for concat propagation of dynamic shapes
      
      * Implement partial type/shape propagation for GetOutputElement
      
      * Function signatures
      
      * Add implementations, unit tests for relaxes/refines functions
      
      * Generalize project/reduce/inject functions to cover PartialShape, move to shape_util.[ch]pp
      
      * Deal with std::find_if #include issues
      
      * Fix more include madness
      
      * Remove validate-punt-if-dynamic test because it uses Concat
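The Concat corner case mentioned above comes from the concatenation axis: along that axis the input lengths add (and any dynamic input makes the sum dynamic), while every other axis must merge consistently. A hedged sketch, again modeling a dynamic dimension as `None` rather than using the real C++ types:

```python
def concat_shape(shapes, axis):
    """Infer the output partial shape of Concat (None = dynamic dim)."""
    rank = len(shapes[0])
    out = list(shapes[0])
    for s in shapes[1:]:
        if len(s) != rank:
            raise ValueError("all Concat inputs must have the same rank")
        for i in range(rank):
            if i == axis:
                # Concat axis: lengths add; any dynamic input makes the
                # sum dynamic.
                out[i] = None if (out[i] is None or s[i] is None) else out[i] + s[i]
            elif out[i] is None:
                out[i] = s[i]
            elif s[i] is not None and s[i] != out[i]:
                raise ValueError(f"dimension mismatch at axis {i}")
    return out

print(concat_shape([[2, 3], [2, None], [2, 4]], axis=1))  # [2, None]
print(concat_shape([[2, 3], [None, 4]], axis=1))          # [2, 7]
```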
  2. 18 Oct, 2018 3 commits
    • Partial Shapes and Types, Part 4f: Pad (#1799) · de462cba
      Adam Procter authored
      * Adapt Tensor class to have partial shapes
      
      * Add PartialShapes to Input, Output, Function, Node classes
      
      * Terminological cleanup
      
      * Add PartialShape propagation for Parameter and Result
      
      * Implement partial-shape propagation for elementwise ops
      
      * More comments
      
      * One more comment tweak
      
      * Add tests for the merge functions
      
      * Add merging of undetermined element types
      
      * Fix a goof-up in the deserializer implementation
      
      * Implement fallback for ops that do not support partial shape/type validation
      
      * Updates for some older unit tests, now that operator[] exists
      
      * Add missing validate_punt_if_incomplete to AllReduce
      
      * Implement partial shape/type propagation for AllReduce
      
      * Implement partial shape/type propagation for Reshape
      
      * Remove unneeded validate_punt from Result
      
      * Implement partial shape/type propagation for Reverse
      
      * Implement partial shape/type validation for ReverseSequence
      
      * Implement partial shape/type validation for ArithmeticReduction
      
      * Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
      
      * One more docstring thing I forgot to save
      
      * Switch terminology from 'determined/undetermined' to 'static/dynamic'
      
      * Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
      
      * Fix overzealous edits from the last commit
      
      * Rename one test that escaped the Great Renaming
      
      * Remove unnecessary validate_punt_if_dynamic from Reshape
      
      * Fix comment typo
      
      * Rewrite operator+ and operator* for Dimension as members, not friends
      
      * Formatting tweak
      
      * Show argument types/shapes in long NodeDescription; tank unit tests to block merge
      
      * Fix dynamic element type propagation for elementwise ops, add some unit tests for same
      
      * Fix error message
      
      * Roll 'Not' back to existing behavior (non-boolean input types allowed)
      
      * Add a TODO tag to a todo item
      
      * Add unit tests for partial shape/type propagation with ReverseSequence
      
      * Add unit tests for partial-shape/type propagation for ArithmeticReduction (via Sum)
      
      * Implement partial shape/type validation for Pad; add unit tests for same
      
      * Implement partial type/shape propagation for GetOutputElement
      
      * Function signatures
      
      * Add implementations, unit tests for relaxes/refines functions
      
      * Generalize project/reduce/inject functions to cover PartialShape, move to shape_util.[ch]pp
      
      * Deal with std::find_if #include issues
      
      * Fix more include madness
      
      * Formatting
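Pad's partial-shape rule is per-axis arithmetic: a static input extent grows by the below/above padding (plus interior padding between elements, which nGraph's Pad of this era supported), and a dynamic extent stays dynamic. A minimal sketch, assuming `None` models a dynamic dimension:

```python
def pad_dim(d, below, above, interior=0):
    """Output extent of one padded axis; None = dynamic.

    `interior` elements are inserted between each adjacent pair of
    input elements, so an input of n elements contributes
    n + (n - 1) * interior positions before edge padding is added.
    """
    if d is None:
        return None  # dynamic in, dynamic out
    inner = d + max(d - 1, 0) * interior
    return below + inner + above

print(pad_dim(4, below=1, above=2))              # 7
print(pad_dim(3, below=0, above=0, interior=1))  # 5
print(pad_dim(None, below=1, above=1))           # None
```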
    • CMake cleanup (#1838) · c4018000
      Robert Kimball authored
      * remove unused setting
      
      * cleanup
      
      * cleanup
      
      * cleanup
      
      * more cleanup
    • Partial Shapes and Types, Part 4a: Implement partial shape/type validation for some existing ops (#1756) · 7b4be37e
      Adam Procter authored
  3. 17 Oct, 2018 2 commits
    • Partial Shapes and Types, Part 3: Framework for partial shape/element type validation (#1728) · 0563a3cf
      Adam Procter authored
      * Adapt Tensor class to have partial shapes
      
      * Add PartialShapes to Input, Output, Function, Node classes
      
      * Terminological cleanup
      
      * Add PartialShape propagation for Parameter and Result
      
      * Implement partial-shape propagation for elementwise ops
      
      * More comments
      
      * One more comment tweak
      
      * Add tests for the merge functions
      
      * Add merging of undetermined element types
      
      * Fix a goof-up in the deserializer implementation
      
      * Implement fallback for ops that do not support partial shape/type validation
      
      * Updates for some older unit tests, now that operator[] exists
      
      * Add missing validate_punt_if_incomplete to AllReduce
      
      * Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
      
      * One more docstring thing I forgot to save
      
      * Switch terminology from 'determined/undetermined' to 'static/dynamic'
      
      * Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
      
      * Fix overzealous edits from the last commit
      
      * Rename one test that escaped the Great Renaming
      
      * Remove unnecessary validate_punt_if_dynamic from Reshape
      
      * Show argument types/shapes in long NodeDescription; tank unit tests to block merge
      
      * Fix dynamic element type propagation for elementwise ops, add some unit tests for same
      
      * Roll 'Not' back to existing behavior (non-boolean input types allowed)
      
      * Add a TODO tag to a todo item
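The "punt" fallback referenced above (`validate_punt_if_incomplete`, later `validate_punt_if_dynamic`) lets ops that have not yet implemented partial-shape validation bail out safely. The idea, sketched under assumed names (the real framework is C++ and structured differently):

```python
def validate_with_punt(input_shapes, static_validator):
    """Fallback for ops without partial-shape support (hypothetical names).

    If every input shape is fully static, run the op's ordinary static
    validator; otherwise "punt" and declare the output fully dynamic.
    """
    def is_static(shape):
        return shape is not None and None not in shape

    if all(is_static(s) for s in input_shapes):
        return static_validator(input_shapes)
    return None  # fully dynamic output shape

# e.g. an elementwise op whose static rule is "all input shapes identical":
def elementwise(shapes):
    assert all(s == shapes[0] for s in shapes)
    return list(shapes[0])

print(validate_with_punt([[2, 3], [2, 3]], elementwise))     # [2, 3]
print(validate_with_punt([[2, None], [2, 3]], elementwise))  # None
```

This is why later commits can remove the punt from individual ops one at a time: each op graduates from the fallback to a real partial-shape rule.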
    • Constant folding with Quantize (#1833) · c6550bc0
      VINOD KUMAR DEVARAMPATI authored
      * Constant folding with Quantize
      
      * updated with review comments
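Constant folding of Quantize means evaluating the op at compile time when its input is a Constant. An illustrative sketch only; the real op's round mode, offset handling, and type ranges may differ:

```python
def quantize(x, scale, zero_point=0, qmin=-128, qmax=127):
    """Quantize one float to an integer code (illustrative rule only)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

# Constant folding: when Quantize's input is a Constant, the pass can
# precompute the values and substitute a new Constant node:
const_input = [1.0, -2.3, 100.0]
folded = [quantize(v, scale=0.5) for v in const_input]
print(folded)  # [2, -5, 127]
```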
  4. 16 Oct, 2018 2 commits
    • Refactor convolution and pooling type prop (#1817) · 61be3814
      Adam Procter authored
      * WIP
      
      * More WIP
      
      * More chiseling
      
      * Move conv validation utils to a separate file; update unit tests
      
      * Fix invalid attributes in pattern containing ConvolutionBackpropFilters
      
      * Remove zero_const_conv test (it's no longer possible to construct the graph being tested)
      
      * Rename infer_convolution_output_item_shape to infer_windowed_reduction_output_shape and add a boolean flag to control whether window-all-in-padding is allowed
      
      * Add generalized function for inferring pooling fprop, use it in AvgPool/AvgPoolBackprop
      
      * Update MaxPool to use new utility functions
      
      * Fix comment
      
      * Remove faulty and redundant check for window shape relative to pre-padding data shape
      
      * Revert change to pattern construction in cpu_fusion
      
      * Update unit test for maxpool
      
      * Restore unjustly eliminated tests; move some computation to ptrdiff_t for safety; fix wording on some error messages
      
      * Formatting
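The shared helper renamed above, `infer_windowed_reduction_output_shape`, exists because convolution and pooling use the same per-axis arithmetic: pad the data, slide a (possibly dilated) window with a stride, count positions. A sketch of that rule for a single axis, under the usual formula (not the exact C++ signature):

```python
def windowed_reduction_output_dim(data, pad_below, pad_above,
                                  window, stride, window_dilation=1):
    """Output extent of one spatial axis for a strided, dilated window
    (covers both convolution and pooling in this sketch)."""
    padded = data + pad_below + pad_above
    effective_window = (window - 1) * window_dilation + 1
    if effective_window > padded:
        raise ValueError("window does not fit in padded data")
    return (padded - effective_window) // stride + 1

# ResNet-style first conv: 224 input, 3+3 padding, 7-wide window, stride 2:
print(windowed_reduction_output_dim(224, 3, 3, window=7, stride=2))  # 112
```

The "window-all-in-padding" flag mentioned above adds one more check on top of this: whether a window position that overlaps no actual data (only padding) should be rejected.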
    • reset buffer size, use original input size for memcpy (#1786) · 05aa1be8
      Fenglei authored
      * reset buffer size, use original input size for memcpy
      
      * resolve comment and add test
      
      * update comment
  5. 15 Oct, 2018 5 commits
  6. 14 Oct, 2018 1 commit
  7. 13 Oct, 2018 3 commits
  8. 12 Oct, 2018 6 commits
  9. 11 Oct, 2018 1 commit
  10. 10 Oct, 2018 1 commit
    • Reshape Sinking (#1701) · f642bc4c
      Nick Korovaiko authored
      * reshape sinking working on mnist_conv
      
      * forgot to add reshape_sinking files
      
      * refactoring of binary case
      
      * Quantize/Dequantize case, fix add case, add assert
      
      * address bob and scott's feedback
      
      * debug
      
      * fix a bug where reshapes are removed too early
  11. 09 Oct, 2018 1 commit
  12. 08 Oct, 2018 3 commits
    • Update pad on nvgpu (#1759) · 40ff77bd
      Chris Sullivan authored
      * Add pad with fill operator using the outward-in index pattern.
      
      * Remove static pad and rename build_pad_dynamic -> build_pad. Update maxpool 1d padding.
      
      * Formatting.
      
      * Split build_pad_dynamic into build_pad and build_pad_fill.
      
      * Add test coverage for fixed bug in op::Pad for gpu.
    • IAT: Skip reshapes that are removing or adding size-1 dimensions (#1684) · 519b18ac
      Jayaram Bobba authored
      * Reshape optimizations for when unit-sized dimensions are added/removed from tensors
      
      * Added unit tests for eliminating squeeze and expand_dims operations
      
      * Bug fix to expand dims layout
      
      * Style fix
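The squeeze/expand_dims elimination above rests on a simple shape test: a reshape that only inserts or deletes size-1 dimensions never moves data in a row-major layout, so it can be skipped. A minimal sketch of that predicate (illustrative; the actual pass works on layout descriptors, not bare tuples):

```python
def is_squeeze_or_expand(in_shape, out_shape):
    """True iff a reshape only adds or removes size-1 dimensions,
    i.e. the non-unit dimensions agree in order."""
    return [d for d in in_shape if d != 1] == [d for d in out_shape if d != 1]

print(is_squeeze_or_expand((2, 1, 3), (2, 3)))     # True  (squeeze)
print(is_squeeze_or_expand((2, 3), (1, 2, 1, 3)))  # True  (expand_dims)
print(is_squeeze_or_expand((2, 3), (3, 2)))        # False (real data movement)
```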
    • IAT: More convolution folding optimizations (#1712) · 00b4453d
      Jayaram Bobba authored
      * Check output shape when setting memory layout for slice op.
      
      * Miscellaneous fusion and other optimizations for inception-resnetv2
      - ConvBias Batchnorm folding
      - ConvBias Affine folding
      - Check if MKLDNN can slice a given layout and select layouts
        appropriately
      
      * Fixed unit test and bug in conv bias pattern
      
      * Addressed PR feedback
      
      * Addressed PR feedback
  13. 06 Oct, 2018 1 commit
  14. 05 Oct, 2018 3 commits
    • Jaikrishnan Menon · c8858ef2
    • RNN fusion (inference) (#1459) · 4df5ea8b
      Chris Sullivan authored
      * Add op::Sigmoid to nvgpu.
      
      * Bring rnn fusion and concat passes over into GPU from IA. This is a temporary move until generalization and gpu specification can occur.
      
      * Add LSTM fusion and cudnn inference kernel. Next need recurrent fusion and layer fusion.
      
      * Formatting
      
      * Removed unnecessary extra output from LSTM op (RNN with seq. length = 1, so y = hy).
      
      * Add RNN fusion of LSTM cells within a recurrent layer.
      
      * Formatting.
      
      * Add fusion across RNN layers.
      
      * Formatting.
      
      * Add algebraic simplification.
      
      * Added rnn fusion tests.
      
      * Updated conditional on LSTM fusion to better distinguish bound nodes as ht vs xt.
      
      * Formatting.
      
      * Removed print statements.
      
      * Formatting.
      
      * Committing missing file.
      
      * Remove concat inputs pass and mkldnn references.
      
      * fix cmake paths
      
      * conflict resolution with merge from master.
      
      * Remove explicit LSTM op support; bare LSTM ops are converted to RNN ops for emission.
      
      * Formatting.
      
      * Use NGRAPH_ASSERT. Formatting of intel copyright.
      
      * Add check on the feature size (shape) of the recurrent (hidden) input and cell state, to ensure they are the same size.
      
      * fix wrong rnn header
      
      * Formatting.
      
      * Add back lstm op to dispatch table.
      
      * Added RNN test which shows cudnn rnn kernel is not producing correct results.
      
      * With the update to AlgSimpl. to simplify concat-reshape-slice, the check modified in this commit needed to be relaxed.
      
      * Bug fix in parameter tensor packing.
      
      * Alias third output element of RNN for cell state (bug fix).
      
      * Resolve numerical correctness issue with negative values in RNN (bug fix).
      Add minimal test to evaluate LSTM and compare with values calculated by hand.
      
      * Add tensor parameter sizes to kernel hash, as they are kernel-specific.
      
      * Add 2 layer lstm fusion test against by-hand solution.
      
      * Export param concatenation to graph for cudnn kernel at both the single rnn layer and multi-layer.
      
      * Formatting.
      
      * Finishing touches after merge: add support for macro-expanded dispatch via op_tbl.
      
      * Simplify macro support for gpu ops.
      
      * Add CUDNN_VERSION >= 7200 defguards for RNN fusion.
      Need to decide how to notify user of increased performance with >= 7200.
      
      * Revert lstm_analytic test to explicitly copy data to tensor params.
      
      * Removed namespace arg from NGRAPH_GPU_OP.
      
      * Refactored macros to different header so op_tbl only contains op list.
      
      * Defguard on cudnn_descriptor<cudnnRNNDataDescriptor_t>.
      
      * doubles -> floats
      
      * Reorg. pass asserts, prepare to replace with non-throwing pass failures.
      
      * Remove Lstm op and replace it with Rnn.
      
      * Format
      
      * Utilize RETURN_IF_FALSE in rnn pass to avoid any RT asserts.
      Note that falling back to raw (no passes) graph for 2rnn_3lstm json from mxnet models
      results in a double free inside of the memory layout pass. Appears to be a bug
      in Reshape pass through.
      
      * Removed print statements. Add check on input data and recurrent data.
      
      * Don't reuse memory for non-destructive ops.
      
      * Add back Rnn test.
      
      * Formatting.
      
      * Clean up comments.
      
      * Update test per review comments.
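The "compare with values calculated by hand" tests above are feasible because a single LSTM step is just a few closed-form equations. A scalar sketch of one cell, with hypothetical weight names (the real kernels are vectorized cuDNN calls, not this code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h, c, w):
    """One scalar LSTM step; w maps each gate to (w_x, w_h, bias).

    i = input gate, f = forget gate, g = candidate, o = output gate.
    """
    pre = {k: wx * x + wh * h + b for k, (wx, wh, b) in w.items()}
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = math.tanh(pre["g"])
    c_next = f * c + i * g          # new cell state
    h_next = o * math.tanh(c_next)  # new hidden state
    return h_next, c_next

# With all-zero weights every gate is sigmoid(0) = 0.5 and g = tanh(0) = 0,
# so the cell state is simply halved each step: an easy hand check.
w0 = {k: (0.0, 0.0, 0.0) for k in ("i", "f", "g", "o")}
h1, c1 = lstm_cell(x=1.0, h=0.0, c=2.0, w=w0)
print(c1)  # 1.0
```

Degenerate weight settings like this make it practical to distinguish a kernel bug (e.g. the sign issue fixed above) from a fusion bug.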
    • Partial Shapes, Part 2: Adapt Tensor class to have partial shapes (#1718) · a0be5231
      Adam Procter authored
      * Adapt Tensor class to have partial shapes
      
      * Add PartialShapes to Input, Output, Function, Node classes
      
      * Terminological cleanup
  15. 04 Oct, 2018 2 commits
  16. 02 Oct, 2018 2 commits
    • shssf · 0e008cc5
    • Pruthvi/rnn fusion (#1677) · 18e41513
      Pruthvi authored
      * WIP input * weights rnn optimization
      
      * concat + slicing + replacing new node works
      
      * WIP unit test case of fusing rnn inputs
      
      * - Added unit test case for fusing rnn input weights
      - registered CPURnnMatFusion_v1/v2 in codegen and DEX
      
      * fixed redeclaration of a variable
      
      * Refactored RNN input transformation passes into a single pass
      
      * Refactored CPURnnMatFusion call back functions
      
      * change random generator range to include negative values in unit test
      
      * address PR comments
      
      * don't fuse if the shapes of the data slices don't match
  17. 01 Oct, 2018 1 commit
  18. 29 Sep, 2018 1 commit