- 19 Oct, 2018 2 commits
-
Adam Procter authored
* Adapt Tensor class to have partial shapes
* Add PartialShapes to Input, Output, Function, Node classes
* Terminological cleanup
* Add PartialShape propagation for Parameter and Result
* Implement partial-shape propagation for elementwise ops
* More comments
* One more comment tweak
* Add tests for the merge functions
* Add merging of undetermined element types
* Fix a goophup in deserializer implementation
* Implement fallback for ops that do not support partial shape/type validation
* Updates for some older unit tests, now that operator[] exists
* Add missing validate_punt_if_incomplete to AllReduce
* Implement partial shape/type propagation for AllReduce
* Implement partial shape/type propagation for Reshape
* Remove unneeded validate_punt from Result
* Implement partial shape/type propagation for Reverse
* Implement partial shape/type validation for ReverseSequence
* Implement partial shape/type validation for ArithmeticReduction
* Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
* One more docstring thing I forgot to save
* Switch terminology from 'determined/undetermined' to 'static/dynamic'
* Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
* Fix overzealous edits from the last commit
* Rename one test that escaped the Great Renaming
* Remove unnecessary validate_punt_if_dynamic from Reshape
* Fix comment typo
* Rewrite operator+ and operator* for Dimension as members, not friends
* Formatting tweak
* Show argument types/shapes in long NodeDescription; tank unit tests to block merge
* Fix dynamic element type propagation for elementwise ops, add some unit tests for same
* Fix error message
* Roll 'Not' back to existing behavior (non-boolean input types allowed)
* Add a TODO tag to a todo item
* Add unit tests for partial shape/type propagation with ReverseSequence
* Add unit tests for partial-shape/type propagation for ArithmeticReduction (via Sum)
* Implement partial shape/type validation for Broadcast; implement unit tests for same
* Remove inapplicable TODO
* Implement partial type/shape propagation for GetOutputElement
* Function signatures
* Add implementations, unit tests for relaxes/refines functions
* Generalize project/reduce/inject functions to cover PartialShape, move to shape_util.[ch]pp
* Deal with std::find_if #include issues
* Fix more include madness
-
Adam Procter authored
* Adapt Tensor class to have partial shapes
* Add PartialShapes to Input, Output, Function, Node classes
* Terminological cleanup
* Add PartialShape propagation for Parameter and Result
* Implement partial-shape propagation for elementwise ops
* More comments
* One more comment tweak
* Add tests for the merge functions
* Add merging of undetermined element types
* Fix a goophup in deserializer implementation
* Implement fallback for ops that do not support partial shape/type validation
* Updates for some older unit tests, now that operator[] exists
* Add missing validate_punt_if_incomplete to AllReduce
* Implement partial shape/type propagation for AllReduce
* Implement partial shape/type propagation for Reshape
* Remove unneeded validate_punt from Result
* Implement partial shape/type propagation for Reverse
* Implement partial shape/type validation for ReverseSequence
* Implement partial shape/type validation for ArithmeticReduction
* Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
* One more docstring thing I forgot to save
* Switch terminology from 'determined/undetermined' to 'static/dynamic'
* Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
* Fix overzealous edits from the last commit
* Rename one test that escaped the Great Renaming
* Remove unnecessary validate_punt_if_dynamic from Reshape
* Fix comment typo
* Rewrite operator+ and operator* for Dimension as members, not friends
* Formatting tweak
* Show argument types/shapes in long NodeDescription; tank unit tests to block merge
* Fix dynamic element type propagation for elementwise ops, add some unit tests for same
* Fix error message
* Roll 'Not' back to existing behavior (non-boolean input types allowed)
* Add a TODO tag to a todo item
* Add unit tests for partial shape/type propagation with ReverseSequence
* Add unit tests for partial-shape/type propagation for ArithmeticReduction (via Sum)
* Implement partial shape/type validation for concat
* Fix for a corner case in concat propagation of dynamic shapes; unit tests for concat propagation of dynamic shapes
* Implement partial type/shape propagation for GetOutputElement
* Function signatures
* Add implementations, unit tests for relaxes/refines functions
* Generalize project/reduce/inject functions to cover PartialShape, move to shape_util.[ch]pp
* Deal with std::find_if #include issues
* Fix more include madness
* Remove validate-punt-if-dynamic test because it uses Concat
-
- 18 Oct, 2018 3 commits
-
Adam Procter authored
* Adapt Tensor class to have partial shapes
* Add PartialShapes to Input, Output, Function, Node classes
* Terminological cleanup
* Add PartialShape propagation for Parameter and Result
* Implement partial-shape propagation for elementwise ops
* More comments
* One more comment tweak
* Add tests for the merge functions
* Add merging of undetermined element types
* Fix a goophup in deserializer implementation
* Implement fallback for ops that do not support partial shape/type validation
* Updates for some older unit tests, now that operator[] exists
* Add missing validate_punt_if_incomplete to AllReduce
* Implement partial shape/type propagation for AllReduce
* Implement partial shape/type propagation for Reshape
* Remove unneeded validate_punt from Result
* Implement partial shape/type propagation for Reverse
* Implement partial shape/type validation for ReverseSequence
* Implement partial shape/type validation for ArithmeticReduction
* Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
* One more docstring thing I forgot to save
* Switch terminology from 'determined/undetermined' to 'static/dynamic'
* Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
* Fix overzealous edits from the last commit
* Rename one test that escaped the Great Renaming
* Remove unnecessary validate_punt_if_dynamic from Reshape
* Fix comment typo
* Rewrite operator+ and operator* for Dimension as members, not friends
* Formatting tweak
* Show argument types/shapes in long NodeDescription; tank unit tests to block merge
* Fix dynamic element type propagation for elementwise ops, add some unit tests for same
* Fix error message
* Roll 'Not' back to existing behavior (non-boolean input types allowed)
* Add a TODO tag to a todo item
* Add unit tests for partial shape/type propagation with ReverseSequence
* Add unit tests for partial-shape/type propagation for ArithmeticReduction (via Sum)
* Implement partial shape/type validation for Pad; add unit tests for same
* Implement partial type/shape propagation for GetOutputElement
* Function signatures
* Add implementations, unit tests for relaxes/refines functions
* Generalize project/reduce/inject functions to cover PartialShape, move to shape_util.[ch]pp
* Deal with std::find_if #include issues
* Fix more include madness
* Formatting
-
Robert Kimball authored
* remove unused setting
* cleanup
* cleanup
* cleanup
* more cleanup
-
Adam Procter authored
Partial Shapes and Types, Part 4a: Implement partial shape/type validation for some existing ops (#1756)
-
- 17 Oct, 2018 2 commits
-
Adam Procter authored
* Adapt Tensor class to have partial shapes
* Add PartialShapes to Input, Output, Function, Node classes
* Terminological cleanup
* Add PartialShape propagation for Parameter and Result
* Implement partial-shape propagation for elementwise ops
* More comments
* One more comment tweak
* Add tests for the merge functions
* Add merging of undetermined element types
* Fix a goophup in deserializer implementation
* Implement fallback for ops that do not support partial shape/type validation
* Updates for some older unit tests, now that operator[] exists
* Add missing validate_punt_if_incomplete to AllReduce
* Better docstrings for the stuff introduced in #1692; remove prototype for unimplemented, unused PartialShape::append()
* One more docstring thing I forgot to save
* Switch terminology from 'determined/undetermined' to 'static/dynamic'
* Switch terminology from 'complete/incomplete' to 'static/dynamic' for shapes; fix up some mushily worded comments
* Fix overzealous edits from the last commit
* Rename one test that escaped the Great Renaming
* Remove unnecessary validate_punt_if_dynamic from Reshape
* Show argument types/shapes in long NodeDescription; tank unit tests to block merge
* Fix dynamic element type propagation for elementwise ops, add some unit tests for same
* Roll 'Not' back to existing behavior (non-boolean input types allowed)
* Add a TODO tag to a todo item
-
VINOD KUMAR DEVARAMPATI authored
* Constant folding with Quantize
* Updated with review comments
-
- 16 Oct, 2018 2 commits
-
Adam Procter authored
* WIP
* More WIP
* More chiseling
* Move conv validation utils to a separate file; update unit tests
* Fix invalid attributes in pattern containing ConvolutionBackpropFilters
* Remove zero_const_conv test (it's no longer possible to construct the graph being tested)
* Rename infer_convolution_output_item_shape to infer_windowed_reduction_output_shape and add a boolean flag to control whether window-all-in-padding is allowed
* Add generalized function for inferring pooling fprop, use it in AvgPool/AvgPoolBackprop
* Update MaxPool to use new utility functions
* Fix comment
* Remove faulty and redundant check for window shape relative to pre-padding data shape
* Revert change to pattern construction in cpu_fusion
* Update unit test for maxpool
* Restore unjustly eliminated tests; move some computation to ptrdiff_t for safety; fix wording on some error messages
* Formatting
-
Fenglei authored
* reset buffer size, use original input size for memcpy
* resolve comment and add test
* update comment
-
- 15 Oct, 2018 5 commits
-
Nick Korovaiko authored
-
Adam Procter authored
* Add type_prop unit tests for Quantize
* Fix naming consistency; add test for unsupported RoundMode fail
* Add type-prop unit tests for Dequantize
-
Nishant Patel authored
* Switch to scale and offset design from min and max for Quantization
* Remove offset and make the quantize ops a single-output op
* move cpu QuantOps to core and create builders
* rebase to HEAD
* remove convbias and convbiasrelu ctors which take conv
* remove mistakenly added quantize.rst
* remove offset
* Compute scale, move quantization ops to experimental dir and some PR feedback
* Normalize license headers
-
Michał Karzyński authored
* [ONNX] Assert all op types supported
* Apply clang-format
* Address code review comments
* Fix #include statements
-
Adam Rogowiec authored
* Update ONNX Squeeze Op implementation to conform with doc. Add unit test.
* Apply code-format.
* Correct attribute value type.
* Change used loop structure.
* Modified version of loops: without erase and with minimal computation time complexity.
* Run CI
-
- 14 Oct, 2018 1 commit
-
gcwenger authored
* Improved AvgPool unit test coverage. Fixed small bug that was revealed.
* Renamed disabled unit tests to reflect new names.
* Ran clang-format on backend_test.in.cpp to fix format.
* Renamed cpu_results->backend_results in two unit tests.
-
- 13 Oct, 2018 3 commits
-
Robert Kimball authored
-
Robert Kimball authored
-
Nick Korovaiko authored
* unary, binary folding
* fix divide wrong template args
* add tests
* fix merge breaks
-
- 12 Oct, 2018 6 commits
-
Robert Kimball authored
* Why am I still needing to fix license headers?
* fix a few more in test
-
Ayan Moitra authored
* return nullptr when workspace size is zero; modify insert method to accept const lvalue ref
* Unit test added
-
Robert Kimball authored
* move backend specific test files to test/backend directory
* remove unit_test_control
* move tests back to test root
* fix comment
* wip
* fix manifest
-
Robert Kimball authored
* update test to verify all header files are complete, meaning they include what they use.
* disable
-
Ayan Moitra authored
* Project initialization commit
* Added unit tests for 3D tensors for argmax
* Refactored reduce to be used by argmax/argmin; argmax/argmin still has some issues. WIP
* [WIP] First working version of ArgMax ArgMin
* added reduce buffer for the cudnn api calls
* added reduce buffer for the cudnn api calls
* Further modifications. Using rvalues to pass enums to build reduce method
* more unit tests added
* Incorporate Fenglei's comments
* Incorporating Chris's first set of comments
* small change to test file
* Resolving clang issue that was causing argmin test to fail
* Incorporate Chris's comments
* clang format issue
-
Amy Zhuang authored
-
- 11 Oct, 2018 1 commit
-
Robert Kimball authored
* updated unit tests
* remove debug comments
-
- 10 Oct, 2018 1 commit
-
Nick Korovaiko authored
* reshape sinking working on mnist_conv
* forgot to add reshape_sinking files
* refactoring of binary case
* Quantize/Dequantize case, fix add case, add assert
* address Bob and Scott's feedback
* debug
* fix a bug where reshapes are removed too early
-
- 09 Oct, 2018 1 commit
-
Robert Kimball authored
-
- 08 Oct, 2018 3 commits
-
Chris Sullivan authored
* Add pad with fill operator using the outward-in index pattern.
* Remove static pad and rename build_pad_dynamic -> build_pad. Update maxpool 1d padding.
* Formatting.
* Split build_pad_dynamic into build_pad and build_pad_fill.
* Add test coverage for fixed bug in op::Pad for gpu.
-
Jayaram Bobba authored
* Reshape optimizations for when unit-sized dimensions are added/removed from tensors
* Added unit tests for eliminating squeeze and expand_dims operations
* Bug fix to expand dims layout
* Style fix
-
Jayaram Bobba authored
* Check output shape when setting memory layout for slice op.
* Miscellaneous fusion and other optimizations for inception-resnetv2:
  - ConvBias Batchnorm folding
  - ConvBias Affine folding
  - Check if MKLDNN can slice a given layout and select layouts appropriately
* Fixed unit test and bug in conv bias pattern
* Addressed PR feedback
* Addressed PR feedback
-
- 06 Oct, 2018 1 commit
-
VINOD KUMAR DEVARAMPATI authored
* added constant folding for dequantize
* modified as per review comments
-
- 05 Oct, 2018 3 commits
-
Jaikrishnan Menon authored
-
Chris Sullivan authored
* Add op::Sigmoid to nvgpu.
* Bring rnn fusion and concat passes over into GPU from IA. This is a temporary move until generalization and gpu specification can occur.
* Add LSTM fusion and cudnn inference kernel. Next need recurrent fusion and layer fusion.
* Formatting
* Removed unnecessary extra output from LSTM op (rnn with seq. length = 1, so y = hy).
* Add RNN fusion of LSTM cells within a recurrent layer.
* Formatting.
* Add fusion across RNN layers.
* Formatting.
* Add algebraic simplification.
* Added rnn fusion tests.
* Updated conditional on LSTM fusion to better distinguish bound nodes as ht vs xt.
* Formatting.
* Removed print statements.
* Formatting.
* Committing missing file.
* Remove concat inputs pass and mkldnn references.
* fix cmake paths
* conflict resolution with merge from master.
* remove explicit lstm op support. bare LSTM ops are converted to RNN ops for emission.
* Formatting.
* Use NGRAPH_ASSERT. Formatting of intel copyright.
* Add check on the feature size (shape) of the recurrent (hidden) input and cell state, to ensure they are the same size.
* fix wrong rnn header
* Formatting.
* Add back lstm op to dispatch table.
* Added RNN test which shows cudnn rnn kernel is not producing correct results.
* With update to AlgSimpl. to simplify concat-reshape-slice, the check modified in this commit needed to be relaxed.
* Bug fix in parameter tensor packing.
* Alias third output element of RNN for cell state (bug fix).
* Resolve numerical correctness issue with negative values in RNN (bug fix). Add minimal test to evaluate LSTM and compare with values calculated by hand.
* Add tensor parameter sizes to kernel hash as they are kernel-specific.
* Add 2 layer lstm fusion test against by-hand solution.
* Export param concatenation to graph for cudnn kernel at both the single rnn layer and multi-layer.
* Formatting.
* Finishing touches after merge: add support for macro-expanded dispatch via op_tbl.
* Simplify macro support for gpu ops.
* Add CUDNN_VERSION >= 7200 defguards for RNN fusion. Need to decide how to notify user of increased performance with >= 7200.
* Revert lstm_analytic test to explicitly copy data to tensor params.
* Removed namespace arg from NGRAPH_GPU_OP.
* Refactored macros to different header so op_tbl only contains op list.
* Defguard on cudnn_descriptor<cudnnRNNDataDescriptor_t>.
* doubles -> floats
* Reorg. pass asserts, prepare to replace with non-throwing pass failures.
* Remove Lstm op and replace it with Rnn.
* Format
* Utilize RETURN_IF_FALSE in rnn pass to avoid any RT asserts. Note that falling back to raw (no passes) graph for 2rnn_3lstm json from mxnet models results in a double free inside of the memory layout pass. Appears to be a bug in Reshape pass through.
* Removed print statements. Add check on input data and recurrent data.
* Don't reuse memory for non-destructive ops.
* Add back Rnn test.
* Formatting.
* Clean up comments.
* Update test per review comments.
-
Adam Procter authored
* Adapt Tensor class to have partial shapes
* Add PartialShapes to Input, Output, Function, Node classes
* Terminological cleanup
-
- 04 Oct, 2018 2 commits
-
Nishant Patel authored
* Add conv+bias
* Add test case for QuantizedConv2DWithBiasAndRelu and address feedback
-
Fenglei authored
* add a test that failed on gpu but passed on cpu
* fixed bug
* get datatype size
* add description for test
* update comment
* update comments and name
-
- 02 Oct, 2018 2 commits
-
shssf authored
-
Pruthvi authored
* WIP input*weights rnn optimization
* concat + slicing + replacing new node works
* WIP unit test case of fusing rnn inputs
* Added unit test case for fusing rnn input weights; registered CPURnnMatFusion_v1/v2 in codegen and DEX
* fixed redeclaration of a variable
* Refactored rnn input transformation passes into a single pass
* Refactored CPURnnMatFusion callback functions
* change random generator range to include negative values in unit test
* address PR comments
* don't fuse if the shapes of the data slices don't match
-
- 01 Oct, 2018 1 commit
-
Adam Procter authored
-
- 29 Sep, 2018 1 commit
-
Robert Kimball authored
* rename files
* rename runtime TensorView to Tensor
* rename HostTensorView to HostTensor
-