- 18 Feb, 2020 4 commits
-
-
Adam Osewski authored
* Helper function get_axes_mapping.
* Enhance Broadcast:v1 NUMPY broadcasting.
  - Enable the NUMPY broadcasting mechanism to work in both directions: target_shape <-> arg_shape
* Add opset1:squeeze and fix bug in reading squeezed axis idx.
* Fix and enhance downgrade pass for Broadcast:v1
* Use Broadcast:v1 in ONNX Expand operator.
* Replace Broadcast:v0 with v1 in some helper functions.
* Remove call to deprecated legacy_broadcasting helper function.
* Add helper get_axes_mapping_output function.
* Use Broadcast:v1 directly instead of helper function.
* Get back operators from v0 in helper function.
* Use helper function and some refactoring.
* Add legacy-style broadcast helper function for opset1.
* Use helper broadcasting function for arithmetic operators.
* Add empty axis only if its size is equal to one.
* Apply review remarks:
  - Rename broadcasting function, deleting the _values_ infix
  - Remove variables used only once.
  - Use the STL library where possible.
  - Remove unnecessary conditions.
* Add helper for Broadcast:v1.
* Fix merge artifact and force unsigned type for argument.
* Review: add additional check for static output.
* Apply clang-format.
* Fix: call v0 ops in ngraph::builder namespace.
* Move opset1 broadcasting helpers from util/broadcasting.hpp
* Use the autobroadcast built-in mechanism for arithmetic operators in RNN.
* Move helper functions to autobroadcast.hpp file.
  - Update calls with new namespace ngraph::builder
  - Remove calls using shared_ptr<ngraph::Node> and replace them with one accepting Output<ngraph::Node>
  - Some small formatting (remove unnecessary namespace prefix)
* Remove unused function.
* Rename error class to reflect that it's NumPy-related.
* Fix thrown error name in autobroadcast UT.
* Code refactoring.
  - Use one set of helpers to broadcast a node according to the NumPy scheme
* Documentation formatting.
* Remove include of deleted header.
* Apply style format.
* Remove std:: prefix.
* Do reshape and/or broadcast only when necessary.
* Remove std:: and ngraph:: prefixes.
* UT for numpy_broadcast_for_matmul and legacy broadcast.
* Rename helper function.
* UT for opset1 legacy broadcast helper function.
* Add more UT for get_axes_mapping and style-format.
* Review comments.
* Restrict if with NGRAPH_WARN to NGRAPH_CHECK.

Co-authored-by: Michał Karzyński <postrational@users.noreply.github.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
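The work above revolves around two ideas: computing a NumPy-style broadcast output shape and deriving an axes mapping for Broadcast:v1. The following is a minimal, self-contained C++ sketch of both; the function names and types are illustrative only and are not the actual ngraph::builder API.

```cpp
#include <algorithm>
#include <iostream>
#include <stdexcept>
#include <vector>

using Shape = std::vector<size_t>;

// NumPy-style broadcasting: align the shapes on their trailing dimensions;
// each pair of dimensions must be equal, or one of them must be 1.
Shape numpy_broadcast_shape(const Shape& a, const Shape& b)
{
    Shape out(std::max(a.size(), b.size()));
    auto ai = a.rbegin(), bi = b.rbegin();
    for (auto oi = out.rbegin(); oi != out.rend(); ++oi)
    {
        size_t da = (ai != a.rend()) ? *ai++ : 1;
        size_t db = (bi != b.rend()) ? *bi++ : 1;
        if (da != db && da != 1 && db != 1)
            throw std::runtime_error("shapes are not NumPy-broadcastable");
        *oi = std::max(da, db);
    }
    return out;
}

// Broadcast:v1-style axes mapping: which target axes the argument's axes land
// on when the argument shape is right-aligned against the target shape.
std::vector<size_t> get_axes_mapping(const Shape& target, const Shape& arg)
{
    if (arg.size() > target.size())
        throw std::runtime_error("argument rank exceeds target rank");
    std::vector<size_t> mapping(arg.size());
    size_t offset = target.size() - arg.size();
    for (size_t i = 0; i < arg.size(); ++i)
        mapping[i] = offset + i;
    return mapping;
}

int main()
{
    Shape out = numpy_broadcast_shape({8, 1, 6, 1}, {7, 1, 5}); // {8, 7, 6, 5}
    auto axes = get_axes_mapping({8, 7, 6, 5}, {7, 1, 5});      // {1, 2, 3}
    std::cout << out.size() << " dims, " << axes.size() << " mapped axes\n";
}
```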
-
Diego Caballero authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com> Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Michał Karzyński authored
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Michał Karzyński authored
* Use default_opset in Gemm
* Use default_opset in Cos and Cosh
* Use default_opset in Slice
* Use default_opset in Log
* Use explicit opset0 in ScatterND
* Use default_opset in Hardmax
* Use default_opset in utils
* Remove redundant headers
* Style check
-
- 16 Feb, 2020 1 commit
-
-
Tomasz Dołbniak authored
* Constify the onnx importer conv
* Extract and fix the groups attribute validation for Conv
* Check if the convolution's data input rank is static
* Validate the groups attribute against channels and filters
* Validate the conv operation in a separate function
* Dynamically broadcast the conv bias if needed
* Import a test model with dynamic batch conv op
* Run a conv test with dynamic batch
* Cleanup of conv bias handling code
* Use a proper Broadcast constructor for bias in onnx conv
* Handle dynamic ReduceMean with statically defined rank
* Use the target shape rank to construct the default output shape for Broadcast
* Handle ONNX Squeeze with dynamic input and static rank
* Handle ONNX Shape with dynamic input and static rank
* Handle the dynamic target shape in ONNX Reshape
* Fix for the ONNX Shape input validation
* Handle ONNX Softmax with dynamic input and static rank
* Fix the failing Broadcast type prop test
* Code formatting
* Don't broadcast bias before adding it to the conv node
* Drop the conv node validation and rely on the core op implementation checks
* Code review feedback
* Revert the Broadcast op changes
* More code review feedback
* Dynamic conv test using ng test case
* Obsolete headers removal
* Code formatting
* Variable names refactor
* Disable model_conv_with_dynamic_batch test on GPU
* Code formatting

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
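For the "validate the groups attribute against channels and filters" step, the check is essentially a divisibility test when the channel dimensions are statically known. A hedged C++ sketch follows; the function name and error messages are illustrative, not the importer's actual code.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>

// The ONNX "group" attribute must evenly divide both the input channel count
// and the number of filters (output channels).
void validate_conv_groups(int64_t groups, int64_t data_channels, int64_t filter_count)
{
    if (groups <= 0)
        throw std::invalid_argument("Conv 'group' attribute must be positive");
    if (data_channels % groups != 0)
        throw std::invalid_argument("data channels not divisible by groups");
    if (filter_count % groups != 0)
        throw std::invalid_argument("filter count not divisible by groups");
}

int main()
{
    validate_conv_groups(2, 8, 16); // OK: 8 % 2 == 0 and 16 % 2 == 0
    std::cout << "groups attribute is valid\n";
}
```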
-
- 14 Feb, 2020 4 commits
-
-
Amy Zhuang authored
Co-authored-by: Chris Sullivan <chris.sullivan@intel.com> Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Robert Kimball authored
* Start of rework
* Faster multiply simplification
* Fix uniform constant check
* Support Add op
* Fix build error

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
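The "uniform constant check" refers to deciding whether a constant tensor holds a single repeated value, which is what lets a multiply or add against it be simplified (x * 1 -> x, x + 0 -> x). This is a conceptual sketch under that assumption, not the pass itself.

```cpp
#include <iostream>
#include <vector>

// A constant is "uniform" when every element holds the same value.
bool is_uniform(const std::vector<float>& values, float& value_out)
{
    if (values.empty())
        return false;
    for (float v : values)
        if (v != values.front())
            return false;
    value_out = values.front();
    return true;
}

int main()
{
    std::vector<float> ones(16, 1.0f);
    float v = 0.0f;
    if (is_uniform(ones, v) && v == 1.0f)
        std::cout << "multiply by this constant can be elided\n";
    if (is_uniform(ones, v) && v == 0.0f)
        std::cout << "add of this constant can be elided\n";
}
```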
-
Ilya Churaev authored
* Fixed memory blob * Fixed code style Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Katarzyna Mitrus authored
* Use test case in quant tests
* Reshape test refactor
* Reshape tests refactor
* Test case refactor onnx_import
* onnx import_in test case refactor
* onnx reshape test case refactor
* onnx convpool test case refactor
* onnx import in test case refactor
* onnx reshape test case refactor
* Style apply
* Style apply
* Removed commented code
* Infinity check softplus test refactor

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
- 13 Feb, 2020 7 commits
-
-
Diego Caballero authored
* Fix warnings: control reaches end of non-void function
* Format
* Revert changes + add throw at EOF
* Alternative fix

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Pruthvi authored
* Add fused_op.td to CMakeLists
  - Define pattern to fuse Wx + b and replace it with MatMulBias
* Remove table-gen LLVM_TARGET_DEFINITIONS for fused_ops_pattern.td and fused_ops.td
  - Fix build issues
* Change pattern to match MatMul instead of Dot
  - CMake support to register the MatMulBias fused-op pattern
* Change fusion pattern to match Add(Dot(op1, op2), bias) for MatMulBias
  - Use applyPatternsGreedily instead of applyFullConversion in the graph pass
  - Add unit test comparing interpreter vs. CPU for MatMulBias
* Affine lowering and verifier logic for NgMatMulBiasOp
* Add missing header file
* WIP: use NGGemm instead of NGMatMulBias
* Undo unintended changes
* Addressed PR comments
* Refactor the ctor of the NgDialectFusion pass
  - Register the NgDialectFusion pass with the PassRegistration
* Address PR comments
* Add lit test for matmul+bias fusion
* Style fix for lit test

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
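The fusion matched here is Add(Dot(op1, op2), bias) rewritten into a single fused node. The sketch below shows that rewrite against a toy node structure rather than the MLIR/nGraph dialects, so all names (Node, try_fuse_matmul_bias) are hypothetical; only the matching logic is meant to mirror the pattern described above.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Node
{
    std::string op;
    std::vector<std::shared_ptr<Node>> inputs;
};
using NodePtr = std::shared_ptr<Node>;

NodePtr make(const std::string& op, std::vector<NodePtr> inputs = {})
{
    return std::make_shared<Node>(Node{op, std::move(inputs)});
}

// Matches Add(Dot(a, b), bias) in either operand order and builds the fused op.
NodePtr try_fuse_matmul_bias(const NodePtr& node)
{
    if (node->op != "Add" || node->inputs.size() != 2)
        return nullptr;
    for (int i = 0; i < 2; ++i)
    {
        NodePtr dot = node->inputs[i];
        NodePtr bias = node->inputs[1 - i];
        if (dot->op == "Dot" && dot->inputs.size() == 2)
            return make("MatMulBias", {dot->inputs[0], dot->inputs[1], bias});
    }
    return nullptr;
}

int main()
{
    NodePtr w = make("Param"), x = make("Param"), b = make("Param");
    NodePtr add = make("Add", {make("Dot", {w, x}), b});
    NodePtr fused = try_fuse_matmul_bias(add);
    std::cout << (fused ? fused->op : "no match") << "\n"; // prints: MatMulBias
}
```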
-
Scott Cyphers authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com> Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Leona C authored
* Update sitemap to not use a page title * Update doc for DLDT layers and remove references to PlaidML
-
Evgenya Stepyreva authored
* Set output shape of Deconv with dynamic input * Trigger CI
-
Scott Cyphers authored
-
Chris Sullivan authored
-
- 12 Feb, 2020 11 commits
-
-
Fabian Boemer authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Chris Sullivan authored
* Add ngraph visibility exports for the NVIDIA GPU backend.
* Remove GroupConvTranspose emitter.
* Disable failing tests that have appeared since removal of the GPU backend in v0.25
* Update src/ngraph/runtime/gpu/unit_test.manifest
* Add more disabled tests.
* Add more disabled tests.

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
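"Visibility exports" follow the usual shared-library macro pattern shown below. This is a generic sketch; the actual macro names used by the nGraph GPU backend may differ.

```cpp
#if defined(_WIN32)
#define BACKEND_API_EXPORT __declspec(dllexport)
#define BACKEND_API_IMPORT __declspec(dllimport)
#else
// With -fvisibility=hidden, only symbols marked "default" are exported.
#define BACKEND_API_EXPORT __attribute__((visibility("default")))
#define BACKEND_API_IMPORT
#endif

// BUILDING_GPU_BACKEND is a hypothetical flag defined while compiling the
// backend itself; consumers of the library get the import side.
#ifdef BUILDING_GPU_BACKEND
#define GPU_BACKEND_API BACKEND_API_EXPORT
#else
#define GPU_BACKEND_API BACKEND_API_IMPORT
#endif

// Example: a class callable from outside the shared library.
class GPU_BACKEND_API ExportedThing
{
public:
    int value() const { return 42; }
};

int main() { return ExportedThing().value() == 42 ? 0 : 1; }
```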
-
Robert Kimball authored
* Always use control deps in sorting * Fix build * Deprecate old topological sort Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
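"Always use control deps in sorting" means control dependencies constrain execution order just like data edges. A minimal Kahn-style sketch of that idea follows; the graph representation is illustrative, not nGraph's.

```cpp
#include <deque>
#include <iostream>
#include <vector>

struct Graph
{
    // For node i, data_deps[i] and control_deps[i] list nodes that must run first.
    std::vector<std::vector<size_t>> data_deps;
    std::vector<std::vector<size_t>> control_deps;
};

std::vector<size_t> topological_sort(const Graph& g)
{
    size_t n = g.data_deps.size();
    std::vector<size_t> indegree(n, 0);
    std::vector<std::vector<size_t>> users(n); // dependency -> dependents
    auto add_edges = [&](const std::vector<std::vector<size_t>>& deps) {
        for (size_t node = 0; node < n; ++node)
            for (size_t dep : deps[node])
            {
                users[dep].push_back(node);
                ++indegree[node];
            }
    };
    add_edges(g.data_deps);
    add_edges(g.control_deps); // control edges always participate in the order

    std::deque<size_t> ready;
    for (size_t node = 0; node < n; ++node)
        if (indegree[node] == 0) ready.push_back(node);

    std::vector<size_t> order;
    while (!ready.empty())
    {
        size_t node = ready.front();
        ready.pop_front();
        order.push_back(node);
        for (size_t user : users[node])
            if (--indegree[user] == 0) ready.push_back(user);
    }
    return order; // fewer than n entries would indicate a cycle
}

int main()
{
    // 0 -> 1 (data), 1 -> 2 (data), 0 -> 2 (control only)
    Graph g{{{}, {0}, {1}}, {{}, {}, {0}}};
    for (size_t node : topological_sort(g)) std::cout << node << ' ';
    std::cout << '\n'; // prints: 0 1 2
}
```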
-
Evgenya Stepyreva authored
* ConvBackPropData dynamic output shape input * Adam comments Co-authored-by: aslepko <44713115+aslepko@users.noreply.github.com> Co-authored-by: Robert Kimball <robert.kimball@intel.com> Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Robert Kimball authored
* Remove Event tracing and replace with chrome trace * style Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
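The "chrome trace" target is the Chrome tracing JSON format (complete events with ph = "X", timestamps and durations in microseconds), which can be loaded into chrome://tracing. The sketch below only illustrates that format; it is not the tracer added by this commit.

```cpp
#include <chrono>
#include <fstream>
#include <string>
#include <vector>

struct TraceEvent
{
    std::string name;
    long long start_us;
    long long duration_us;
};

// Writes a minimal {"traceEvents": [...]} file with one complete event per entry.
void write_chrome_trace(const std::string& path, const std::vector<TraceEvent>& events)
{
    std::ofstream out(path);
    out << "{\"traceEvents\":[";
    for (size_t i = 0; i < events.size(); ++i)
    {
        const TraceEvent& e = events[i];
        out << (i ? "," : "")
            << "{\"name\":\"" << e.name << "\",\"cat\":\"op\",\"ph\":\"X\""
            << ",\"ts\":" << e.start_us << ",\"dur\":" << e.duration_us
            << ",\"pid\":0,\"tid\":0}";
    }
    out << "]}";
}

int main()
{
    using namespace std::chrono;
    long long now =
        duration_cast<microseconds>(steady_clock::now().time_since_epoch()).count();
    write_chrome_trace("trace.json",
                       {{"Convolution", now, 1500}, {"Relu", now + 1500, 200}});
}
```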
-
Scott Cyphers authored
* Add some attribute visitors
* Need to keep size_t for opset1
* Fix condition
* Misso
* Missed int64_t
* Revert std changes
* Header reorg

Co-authored-by: aslepko <44713115+aslepko@users.noreply.github.com>
-
Robert Kimball authored
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Katarzyna Mitrus authored
* Asinh dynamic shape support prototype
* Asinh static shape test
* Asinh dynamic shapes test
* Unified asinh implementation for static and dynamic shapes
* Atanh dynamic shapes support
* Style apply
* Acosh dynamic shape support
* Tests update
* Style apply
* Update PlaidML manifest
* Use default tolerance bits
* Style apply
* Def tolerance bits for dynamic tests
* Rename dim_param
* Update acosh.prototxt
* Update acosh_dyn_shape.prototxt

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
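A unified implementation for static and dynamic shapes is possible because these inverse hyperbolic functions decompose into elementwise formulas that do not depend on the tensor shape: asinh(x) = ln(x + sqrt(x^2 + 1)), acosh(x) = ln(x + sqrt(x^2 - 1)) for x >= 1, and atanh(x) = 0.5 * ln((1 + x) / (1 - x)) for |x| < 1. The sketch below is math-only and uses plain vectors, not nGraph tensors.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Elementwise asinh via its logarithmic decomposition.
std::vector<double> asinh_elementwise(const std::vector<double>& x)
{
    std::vector<double> y(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        y[i] = std::log(x[i] + std::sqrt(x[i] * x[i] + 1.0));
    return y;
}

int main()
{
    std::vector<double> in{0.0, 1.0, 2.0};
    std::vector<double> out = asinh_elementwise(in);
    for (size_t i = 0; i < in.size(); ++i)
        std::printf("%f (std::asinh reference: %f)\n", out[i], std::asinh(in[i]));
}
```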
-
Robert Kimball authored
* Start * Move Input * Move Output * Move ostream Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Mateusz Bencer authored
* Extend normalization. Part 1
* Normalize axis. Part 2
* Code review remarks introduced
* Fixed normalized ranges type
* Trigger CI

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
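Axis normalization in importers conventionally maps a negative axis to count from the end of the shape and rejects anything outside [0, rank). A small sketch of that convention; the helper name here is illustrative, not the one added by this change.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>

// Negative axes count from the end; the normalized axis must be in [0, rank).
int64_t normalize_axis(int64_t axis, int64_t rank)
{
    int64_t normalized = (axis < 0) ? axis + rank : axis;
    if (normalized < 0 || normalized >= rank)
        throw std::out_of_range("axis out of range for the given rank");
    return normalized;
}

int main()
{
    std::cout << normalize_axis(-1, 4) << '\n'; // 3
    std::cout << normalize_axis(2, 4) << '\n';  // 2
}
```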
-
aslepko authored
* Update onnx_import_provenance.in.cpp
* Update onnx_import_dyn_shapes.in.cpp
* Update constant_folding_one_hot.cpp
* Update constant.hpp
* Update constant.hpp
* Update constant_folding_one_hot.cpp
* Update onnx_import_provenance.in.cpp
* Update onnx_import_dyn_shapes.in.cpp

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
- 11 Feb, 2020 7 commits
-
-
Adam Osewski authored
* Update table. * More updates. Co-authored-by: Michał Karzyński <postrational@users.noreply.github.com> Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com> Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Diego Caballero authored
This PR disables treating warnings as errors by default so that nGraph can be built with compiler versions that are not currently tested in CI. We could enable Werror in CI for those compiler versions we want to target for warning-free builds. Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Evgenya Stepyreva authored
* Squeeze/Unsqueeze dynamic input type/rank infer
* Unit-tests
* Style
* Removed squeeze Rank propagation
* Fixed comment
* Revert comment back
* Comment resolved
* Style fixes
* Moved unsqueeze axis check
* Style
* Discussion resolved
* Style
* Assert in decompose_op if output shape is not static
* Style

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
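With a static rank, Squeeze can infer its output shape even when individual dimensions are unknown, because the squeezed axes can simply be dropped as long as none of them is known to differ from 1. A toy shape-inference sketch under that assumption (plain vectors with -1 as "unknown", not nGraph's PartialShape):

```cpp
#include <cstdint>
#include <iostream>
#include <set>
#include <stdexcept>
#include <vector>

constexpr int64_t DYN = -1; // unknown dimension

std::vector<int64_t> squeeze_shape(const std::vector<int64_t>& in,
                                   const std::set<size_t>& axes)
{
    std::vector<int64_t> out;
    for (size_t i = 0; i < in.size(); ++i)
    {
        if (axes.count(i))
        {
            if (in[i] != 1 && in[i] != DYN)
                throw std::invalid_argument("cannot squeeze an axis with size != 1");
            continue; // drop the squeezed axis
        }
        out.push_back(in[i]);
    }
    return out;
}

int main()
{
    // Dynamic batch, static rank: {?, 1, 3, 1} squeezed on axes {1, 3} -> {?, 3}
    for (int64_t d : squeeze_shape({DYN, 1, 3, 1}, {1, 3})) std::cout << d << ' ';
    std::cout << '\n';
}
```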
-
Robert Kimball authored
* Cleanup headers * Fix more headers * Fix compile error * Fix plaid build error Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Denise Kutnick authored
-
Chris Sullivan authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Michał Karzyński authored
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com> Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
- 10 Feb, 2020 4 commits
-
-
Ilya Churaev authored
* Fixed warning "control reaches end of non-void function" * Fixed codestyle Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Ashok Emani authored
Co-authored-by: asemx <998264+asemx@users.noreply.github.com> Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Robert Kimball authored
This reverts commit 43172068. Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Robert Kimball authored
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
- 07 Feb, 2020 2 commits
-
-
Diego Caballero authored
* [MLIR] MLIR repo update
* Revert test
* More MLIR repo forward
* Move MLIR repo forward
* Add Loop-to-Std lowering pass to nGraph pipeline

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Robert Kimball authored
* Add replaceable topological sort to Function
* Cleanup
* Cleanup unit test
* Address review comment
* Fix missed item in merge

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-