- 25 May, 2019 1 commit
Robert Kimball authored
* Update a few files to build on Windows
* More fixes
- 24 May, 2019 1 commit
Michał Karzyński authored
* [Fused] LeakyRelu op
* Add LeakyRelu to serializer
* Add unit tests
* Fix merge branch 'master' into mkarzyns/fused_leaky_relu
* Change broadcasting rules to NumPy style
* Remove std:: and ngraph:: prefixes
* Rename CPU Runtime LeakyRelu to CPULeakyRelu
* Style apply
* Fix cpu_fusion.fuse_leaky_relu test
* Use eigen's tanh in the fused sigmoid multiply kernel (#2946)
* Merge branch 'master' into mkarzyns/fused_leaky_relu
* Add LeakyRelu to Intel GPU backend op list
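For context, LeakyRelu passes positive values through unchanged and multiplies negative values by a small slope. A minimal NumPy sketch of the op's semantics (illustrative only, not the nGraph implementation; the alpha default is an assumption):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """LeakyRelu: x where x >= 0, alpha * x otherwise."""
    return np.where(x >= 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, -0.5, 0.0, 3.0])))  # -> [-0.02, -0.005, 0.0, 3.0]
```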
- 22 May, 2019 1 commit
Tomasz Dołbniak authored
* Split op skeleton
* Two ways to construct a fused Split to be able to use it in onnx importer
* refactor: move the util::split() helper functions to the core
* Split's decompose_op() implementation using a helper function
* Use fused Split in the onnx_importer
* Code formatting
* PR feedback
* Split helpers moved to ngraph/builder
* Basic UT - split a 1D tensor to 3 equal parts
* UT: Split 2D tensor into variable length parts
* Code formatting
* Catch the proper type of exception in the onnx_importer split()
* Initialize members in the correct order
* Type prop tests for Split
* Code formatting
* PR feedback
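The commit covers splitting a tensor either into equal parts or into variable-length parts along an axis. A rough NumPy equivalent of the two modes (illustrative only, not the fused op's API):

```python
import numpy as np

# Split a 1D tensor into 3 equal parts along axis 0
x = np.arange(6.0)
equal_parts = np.split(x, 3)                          # three arrays of length 2

# Split a 2D tensor into variable-length parts along axis 1
y = np.arange(12.0).reshape(2, 6)
lengths = [1, 2, 3]                                   # must sum to the axis length
variable_parts = np.split(y, np.cumsum(lengths)[:-1], axis=1)
print([p.shape for p in variable_parts])              # [(2, 1), (2, 2), (2, 3)]
```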
- 17 May, 2019 1 commit
Tomasz Dołbniak authored
* SquaredDifference implementation
* Broadcast input before using it
* Simple test of SquaredDifference
* SquaredDifference validation tests
* Formatting adjustments
* Docs correction
* Exclude the unit test on iGPU
* Keep the includes in a single group
* Update intelgpu_backend.cpp
* Update unit_test.manifest
* UT for the broadcasting path
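SquaredDifference computes the element-wise squared difference of its two inputs after broadcasting them (NumPy-style, per the commit). A minimal sketch:

```python
import numpy as np

def squared_difference(x, y):
    """Element-wise (x - y)^2 with NumPy-style broadcasting."""
    return np.square(x - y)

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])                 # broadcast across the rows of a
print(squared_difference(a, b))          # -> [[0, 1], [4, 9]]
```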
- 16 May, 2019 1 commit
Michał Karzyński authored
- 15 May, 2019 2 commits
Adam Rogowiec authored
* Extend lp-norm functions to take bias.
* Move lp-norm utilities to nGraph core op/util.
* Move norm files to builder directory.
* Apply clang-format.
* Skeleton for GRN operation.
* Add GRN implementation.
* Fix reshape utility function.
* Address review comments.
* Add using namespace std.
* Review comments.
* Few fixes in grn implementation.
* Clang format.
* Basic UT.
* Fix expected data.
* Add more UT and skip them on IGPU.
* Review comments: const correctness and remove using namespace std statement.
* Unblock GRN on IGPU.
* Get back GRN op case to switch.
* merge error
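GRN (Global Response Normalization) divides each value by the L2 norm taken across the channel axis, with a bias added under the square root. A sketch of that formula for an NCHW tensor (the layout is an assumption; this is not the nGraph kernel):

```python
import numpy as np

def grn(x, bias=1.0):
    """GRN over the channel axis of an NCHW tensor: x / sqrt(bias + sum_c(x^2))."""
    norm = np.sqrt(bias + np.sum(np.square(x), axis=1, keepdims=True))
    return x / norm

x = np.random.rand(2, 3, 4, 4).astype(np.float32)
print(grn(x).shape)                      # (2, 3, 4, 4) -- shape is preserved
```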
Michał Karzyński authored
* Add fused Squeeze op
* Use fused Squeeze op in ONNX importer
* Update serializer
* Add unit tests
* Add type prop tests
* Change Squeeze signature to accept a dynamic input for axes
* Update src/ngraph/op/fused/squeeze.cpp (Co-Authored-By: Adam Rogowiec <adam.rogowiec@intel.com>)
* Code review comments
* Fix failing unit test
* Code review comments
* style
* Add op to iGPU backend
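Squeeze removes size-1 dimensions from a shape, either all of them or only the requested axes. A small sketch of the semantics:

```python
import numpy as np

def squeeze(x, axes=None):
    """Drop size-1 dimensions; if axes is given, drop only those axes."""
    return np.squeeze(x) if axes is None else np.squeeze(x, axis=tuple(axes))

x = np.zeros((1, 3, 1, 2))
print(squeeze(x).shape)                  # (3, 2)
print(squeeze(x, [0]).shape)             # (3, 1, 2)
```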
- 13 May, 2019 1 commit
tsocha authored
* Add ScaleShift operator
* Add ScaleShift to serializer
* Add UT for ScaleShift
* Add type_prop tests for ScaleShift
* Style-fix
* Skip tests on Intel GPU
* Review fix 1
* Style fix
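ScaleShift multiplies the input by a scale tensor and adds a shift tensor, both broadcast against the data. A minimal sketch (the per-column layout below is just an example):

```python
import numpy as np

def scale_shift(x, scale, shift):
    """ScaleShift: y = x * scale + shift, with broadcasting."""
    return x * scale + shift

x = np.ones((2, 3))
scale = np.array([2.0, 3.0, 4.0])        # per-column scale
shift = np.array([0.5, 0.5, 0.5])
print(scale_shift(x, scale, shift))      # -> [[2.5, 3.5, 4.5], [2.5, 3.5, 4.5]]
```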
- 11 May, 2019 2 commits
Adam Rogowiec authored
* Extend lp-norm functions to take bias.
* Move lp-norm utilities to nGraph core op/util.
* Move norm files to builder directory.
* Normalize fused operator implementation.
* Fused op boilerplate.
* Fix node validation and normalization across spatial axes.
* Add UT normalize across CHW with scalar scale.
* Fix expanding input tensor to 4D.
* Add more UT for 3D and 2D.
* Add more UT, with scale and across HW.
* Update to new localization of l2_norm function.
* Add type_prop UT and update gpu/igpu manifests.
* Apply clang-format.
* Add positive UT for type_prop.
* Update unit test manifests.
* Address review comments.
* Add using namespace std.
* Remove unnecessary std prefixes.
* Remove blacklisted unittests for GPU.
* Apply clang-format.
* Review comments.
* Fix clang errors.
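The fused Normalize op rescales the input by its L2 norm, computed across the chosen axes, and multiplies by a scale. A rough sketch of the across-CHW, scalar-scale case mentioned in the tests (the epsilon handling and axis choice are assumptions):

```python
import numpy as np

def normalize_l2(x, scale=1.0, eps=1e-6):
    """L2-normalize each NCHW sample across C, H, W, then apply a scalar scale."""
    norm = np.sqrt(np.sum(np.square(x), axis=(1, 2, 3), keepdims=True) + eps)
    return scale * x / norm

x = np.random.rand(2, 3, 4, 4).astype(np.float32)
out = normalize_l2(x, scale=2.0)
print(np.sum(np.square(out / 2.0), axis=(1, 2, 3)))   # each sample ~1.0
```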
tsocha authored
* Basic mean normalization
* Add MVN to serializer
* Add test for mean normalization
* Add support for across_channel attribute
* Add test for mvn_mean_normalization split by channels
* Assume that data have n and c channels
* Add support for normalize_variance attribute
* Add test for full mean variance normalization
* Add type prop test
* Skip tests on GPU
* Use ngraph builder functions instead of my own
* Update mvn.cpp
* Change order in initializer list
* Review fix
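MVN (Mean Variance Normalization) subtracts the mean and, when normalize_variance is set, divides by the standard deviation; the across-channels attribute decides whether the channel axis is included in the statistics. A sketch for NCHW data (parameter names here are illustrative):

```python
import numpy as np

def mvn(x, across_channels=True, normalize_variance=True, eps=1e-9):
    """Mean Variance Normalization of an NCHW tensor."""
    axes = (1, 2, 3) if across_channels else (2, 3)
    out = x - np.mean(x, axis=axes, keepdims=True)
    if normalize_variance:
        out = out / np.sqrt(np.mean(np.square(out), axis=axes, keepdims=True) + eps)
    return out

x = np.random.rand(2, 3, 4, 4)
print(np.mean(mvn(x), axis=(1, 2, 3)))   # approximately [0, 0]
```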
- 10 May, 2019 2 commits
Tomasz Dołbniak authored
* Fused Clamp op implementation
* Basic clamp test with some edge cases
* Dump the expected and actual values for NgraphTestCase
* Validate the min and max params for Clamp
* Use clamp in clip
* Disable Clamp and its test on iGPU
* Use getters for Clamp's parameters
* Validate clamp's params in pre_validate_and_infer_types
* Unit tests for clamp op validation
* Revert "Dump the expected and actual values for NgraphTestCase" (reverts commit 3a029d70e62339ee84aadf2bf16e418281b85ff7)
* Clamp op docs
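Clamp limits every element to the [min, max] interval, which is also what lets the ONNX importer express Clip with it. A minimal sketch:

```python
import numpy as np

def clamp(x, min_value, max_value):
    """Clamp each element into [min_value, max_value]."""
    assert min_value <= max_value, "min must not exceed max"
    return np.clip(x, min_value, max_value)

print(clamp(np.array([-5.0, 0.2, 7.0]), 0.0, 1.0))    # -> [0.0, 0.2, 1.0]
```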
Adam Rogowiec authored
* Move HardSigmoid to nGraph fused operators.
* UT for HardSigmoid fused operator.
* Add type_prop UT.
* Reorder operations in implementation.
* Fix unit tests.
* Fix typo.
* Apply style-check.
* Switch to single-precision parameters.
* Disable unit test for IGPU.
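HardSigmoid is a piecewise-linear approximation of the sigmoid, max(0, min(1, alpha * x + beta)). A sketch using the common ONNX defaults for alpha and beta (chosen here for illustration only):

```python
import numpy as np

def hard_sigmoid(x, alpha=0.2, beta=0.5):
    """HardSigmoid: max(0, min(1, alpha * x + beta))."""
    return np.clip(alpha * x + beta, 0.0, 1.0)

print(hard_sigmoid(np.array([-10.0, -1.0, 0.0, 1.0, 10.0])))  # -> [0, 0.3, 0.5, 0.7, 1]
```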
- 07 May, 2019 1 commit
tsocha authored
* Move transpose and flatten to ngraph op utils dir
* Move gemm operator into ngraph fused ops
* Style fix
* Add Gemm to serializer
* Add type_prop test for gemm
* Use Gemm default values
* Add UT for Gemm
* Fix comments
* Little cleanup
* Remove artifact headers
* Fix gemm documentation
* Skip gemm test on GPU
* Add test for broadcasting input C
* Review fix pt. 1
* Fix typo
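Gemm follows the familiar ONNX-style definition Y = alpha * A' * B' + beta * C, where A and B may each be transposed and C is broadcast to the output shape (the commit adds a test for that broadcast). A minimal sketch:

```python
import numpy as np

def gemm(a, b, c, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    """Gemm: alpha * A' @ B' + beta * C, with C broadcast to the result shape."""
    a = a.T if trans_a else a
    b = b.T if trans_b else b
    return alpha * (a @ b) + beta * c

a = np.ones((2, 3))
b = np.ones((3, 4))
c = np.ones((1, 4))                      # broadcast across the rows of the result
print(gemm(a, b, c).shape)               # (2, 4)
```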
- 29 Apr, 2019 1 commit
Jayaram Bobba authored
* Moving GroupConvolution to ngraph fused opset
* style fix
* remove unused function
* IntelGPU backend: Add GroupConvolution operation into main switch
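GroupConvolution splits the input channels and the filters into groups, convolves each group independently, and concatenates the per-group outputs. A naive sketch of that channel bookkeeping (NCHW data, OIHW filters, stride 1, no padding; nothing like the optimized backend kernels):

```python
import numpy as np

def group_conv2d(x, filters, groups):
    """Naive grouped 2D convolution (NCHW data, OIHW filters, stride 1, no padding)."""
    n, c_in, h, w = x.shape
    c_out, c_in_g, kh, kw = filters.shape
    assert c_in % groups == 0 and c_out % groups == 0 and c_in_g == c_in // groups
    c_out_g = c_out // groups
    oh, ow = h - kh + 1, w - kw + 1
    out = np.zeros((n, c_out, oh, ow), dtype=x.dtype)
    for g in range(groups):
        xg = x[:, g * c_in_g:(g + 1) * c_in_g]            # this group's input channels
        wg = filters[g * c_out_g:(g + 1) * c_out_g]       # this group's filters
        for i in range(oh):
            for j in range(ow):
                patch = xg[:, :, i:i + kh, j:j + kw]      # (n, c_in_g, kh, kw)
                out[:, g * c_out_g:(g + 1) * c_out_g, i, j] = np.tensordot(
                    patch, wg, axes=([1, 2, 3], [1, 2, 3]))
    return out

x = np.random.rand(1, 4, 5, 5)
w = np.random.rand(6, 2, 3, 3)             # 6 output channels, 2 groups of 2 input channels
print(group_conv2d(x, w, groups=2).shape)  # (1, 6, 3, 3)
```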
- 27 Apr, 2019 2 commits
Michał Karzyński authored
* Refactor get_default_axis_vector to use std:: functions
* Move get_default_axis_vector to ngraph::op
* Move reorder_axes to ngraph::op::util
* Move reshape helper to ngraph::op::util
* Move DepthToSpace to fused ops
* Add DepthToSpace docstrings
* Move SpaceToDepth to fused ops
* Remove redundant ngraph::op::util::get_default_axis_vector function
* Add ops to serializer
* Change block_size to size_t
* Add fused ops tests
* Add type prop tests
* Add ops to list of ops unsupported on iGPU
* Disable tests in iGPU manifest
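DepthToSpace rearranges blocks of channel data into spatial blocks, growing H and W by block_size and shrinking C by block_size squared; SpaceToDepth is the inverse. A sketch of one common NCHW variant (exact element-ordering conventions differ between frameworks, so treat this as illustrative):

```python
import numpy as np

def depth_to_space(x, block_size):
    """DepthToSpace (NCHW): (N, C*b*b, H, W) -> (N, C, H*b, W*b)."""
    n, c, h, w = x.shape
    b = block_size
    assert c % (b * b) == 0
    x = x.reshape(n, b, b, c // (b * b), h, w)
    x = x.transpose(0, 3, 4, 1, 5, 2)            # N, C', H, b, W, b
    return x.reshape(n, c // (b * b), h * b, w * b)

x = np.arange(1 * 8 * 2 * 2, dtype=np.float32).reshape(1, 8, 2, 2)
print(depth_to_space(x, 2).shape)                # (1, 2, 4, 4)
```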
Michał Karzyński authored
* Add fused version of Elu op
* Refactor ONNX importer prelu function to use fused op
* Style check
* Add docstrings
* Move make_constant_node to op/util
* Use make_constant_node helper
* Remove unneeded std:: prefixes
* Remove make_constant_node function, use builder::make_constant
* Remove redundant includes
* Add Elu to serializer
* Add Elu tests
* Add Elu tests to type prop
* Add Elu to list of ops unsupported on iGPU
* Disable tests in iGPU manifest
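Elu acts as the identity for positive inputs and saturates smoothly toward -alpha for negative inputs. A minimal sketch:

```python
import numpy as np

def elu(x, alpha=1.0):
    """Elu: x where x > 0, alpha * (exp(x) - 1) otherwise."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu(np.array([-2.0, 0.0, 3.0])))   # -> roughly [-0.865, 0.0, 3.0]
```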
- 16 Apr, 2019 1 commit
Jayaram Bobba authored
* Moves some fused convolution ops to core FusedOps
  - Adds support for decomposing and replacing multi-output FusedOps
  - Adds query callbacks to FusedOpDecomposition to check if a FusedOp is supported by a backend
  - Adds core fusion patterns for FusedOps
* style fix
* Added comments on FOP_FUSIONS
* gpu convolution 1d bug fix (#2741)
* Fix bug with dex-only compilation and addressed PR comments
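The decomposition-with-fallback idea described above can be pictured as: keep a fused op when the backend reports native support, otherwise replace it with the primitive sub-graph it decomposes into. The sketch below is purely conceptual; the class and function names are made up and are not nGraph's FusedOpDecomposition API.

```python
class ToyFusedOp:
    """Stand-in for a fused op that knows its primitive decomposition."""
    is_fused = True

    def decompose(self):
        # May expand into several nodes (the multi-output case noted in the commit).
        return ["primitive_a", "primitive_b"]

def decompose_unsupported(nodes, backend_supports):
    """Replace fused ops the backend cannot run natively with their primitives."""
    result = []
    for node in nodes:
        if getattr(node, "is_fused", False) and not backend_supports(node):
            result.extend(node.decompose())
        else:
            result.append(node)
    return result

print(decompose_unsupported(["conv", ToyFusedOp()], backend_supports=lambda n: False))
# -> ['conv', 'primitive_a', 'primitive_b']
```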