- 21 Feb, 2020 6 commits
-
mbencer authored
-
mbencer authored
-
Sang Ik Lee authored
Avoid the tautological-constant-out-of-range-compare error
-
Sang Ik Lee authored
-
Ilya Churaev authored
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Diego Caballero authored
* [MLIR] Update MLIR repo
* Nagy's fix
* Changes related to mlir-opt
* Update MLIR commit
* Update MLIR commit and callbacks.
* Disable 'noalias' attribute. It will be re-introduced in a follow-up commit.
* Remove '__mlir' prefix in callback test
* Address feedback
* Fix EDSC includes
* Move MLIR repo forward
* Update type converter code
* Address feedback

Co-authored-by: Amy Zhuang <amyzhuang97@gmail.com>
-
- 20 Feb, 2020 2 commits
-
Mateusz Bencer authored
* Switch to PartialShape in onnx_importer ValueInfo
* Construct dynamic dimensions out of ONNX dimensions defined as dim_param
* Validate the PartialShape of inputs created from an ONNX model with dynamic shapes
* Validate the output shape inference for a dynamic ONNX model
* Test the execution of an ONNX model with dynamic dimensions
* Test the Ax+B with more than one batch size
* Provenance tagging adjustments - PartialShape instead of Shape
* Correct translation of ONNX shapes to nG shapes
* Test the shape of Constant produced by scalar initializers
* Review comments & more strict assertions in UT
* UT checking a dynamic rank input
* Fully dynamic input inference test
* first dynamic version
* modified UTs
* Added assert checks
* Added specialised methods
* first version of AvgPool
* code review remarks introduced
* Changed tests to use default BackendMode value
* Reverted unrelated changes
* first version of AvgPool; code review remarks introduced; changed tests to use default BackendMode value
* first version of maxpool
* Changed PoolingFactory to support dynamic shapes
* fixed Pad op
* Added UTs to global ops
* Code review remarks introduced
* code review remarks introduced
* Code refactor
* Code review remarks introduced

Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
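The dynamic-dimension handling above maps ONNX `dim_param` entries to dynamic nGraph dimensions. A minimal standalone sketch of that idea, assuming ngraph's public `PartialShape`/`Dimension` API; the concrete shape values are illustrative, not taken from the commit:

```cpp
#include <iostream>

#include "ngraph/ngraph.hpp"

int main()
{
    using namespace ngraph;

    // An ONNX input dimension declared via dim_param (e.g. "batch") maps to a
    // dynamic Dimension; the remaining dimensions stay static.
    PartialShape input_shape{Dimension::dynamic(), 3, 224, 224};
    std::cout << "rank static:  " << input_shape.rank().is_static() << "\n"; // 1
    std::cout << "shape static: " << input_shape.is_static() << "\n";        // 0

    // A fully dynamic input (rank unknown as well):
    PartialShape fully_dynamic = PartialShape::dynamic();
    std::cout << "rank static:  " << fully_dynamic.rank().is_static() << "\n"; // 0
    return 0;
}
```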
-
Ewa Tusień authored
* Added Round op to onnx importer.
* Added a new line.
* Excluded test from plaidml.
* Code formatting.
* Added header.
* Unabled test on gpu.

Co-authored-by: Chris Sullivan <chris.sullivan@intel.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
- 19 Feb, 2020 5 commits
-
Amy Zhuang authored
* Update MKLDNN to v1.2.
* Change class to struct.
* Change to struct.

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Tomasz Dołbniak authored
-
Dmitry Kurtaev authored
-
mbencer authored
-
mbencer authored
-
- 18 Feb, 2020 6 commits
-
Adam Osewski authored
* Helper function get_axes_mapping.
* Enhance Broadcast:v1 NUMPY broadcasting.
  - Enable the NUMPY broadcasting mechanism to work in both directions: target_shape <-> arg_shape
* Add opset1:squeeze and fix bug in reading squeezed axis idx.
* Fix and enhance downgrade pass for Broadcast:v1
* Use Broadcast:v1 in ONNX Expand operator.
* Replace Broadcast:v0 with v1 in some helper functions.
* Remove call to deprecated legacy_broadcasting helper function.
* Add helper get_axes_mapping_output function.
* Use Broadcast:v1 directly instead of helper function.
* Get back operators from v0 in helper function.
* Use helper function and some refactoring.
* Add legacy-style broadcast helper function for opset1.
* Use helper broadcasting function for arithmetic operators.
* Add empty axis only if its size is equal to one.
* Apply review remarks:
  - Rename broadcasting function, deleting _values_ infix
  - Remove variables used only once.
  - Use STL library where possible.
  - Remove unnecessary conditions.
* Add helper for Broadcast:v1.
* Fix merge artifact and force unsigned type for argument.
* Review. Add additional check for static output.
* Apply clang-format.
* Fix: call v0 ops in ngraph::builder namespace.
* Move opset1 broadcasting helpers from util/broadcasting.hpp
* Use autobroadcast built-in mechanism for arithmetic operators in RNN.
* Move helper functions to autobroadcast.hpp file.
  - Update calls with new namespace ngraph::builder
  - Remove calls using shared_ptr<ngraph::Node> and replace them with one accepting Output<ngraph::Node>
  - Some small formatting (remove unnecessary namespace prefix)
* Remove unused function.
* Rename error class to reflect that it's NumPy-related.
* Fix thrown error name in autobroadcast UT.
* Code refactoring.
  - Use one set of helpers to broadcast node according to NumPy scheme
* Documentation formatting.
* Remove include of deleted header.
* Apply style format.
* Remove std:: prefix.
* Do reshape and/or broadcast only when necessary.
* Remove std:: and ngraph:: prefixes.
* UT for numpy_broadcast_for_matmul and legacy broadcast.
* Rename helper function.
* UT for opset1 legacy broadcast helper function.
* Add more UT for get_axes_mapping and style-format.
* Review comments.
* Restrict if with NGRAPH_WARN to NGRAPH_CHECK.

Co-authored-by: Michał Karzyński <postrational@users.noreply.github.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
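Several of the helpers above implement the NumPy broadcasting scheme. Below is a standalone sketch of that rule only; it is not ngraph's actual helper, and the function name and use of plain std::vector are illustrative:

```cpp
#include <algorithm>
#include <iostream>
#include <stdexcept>
#include <utility>
#include <vector>

// NumPy-style broadcasting of two static shapes: align them on the trailing
// axes; each output dimension is the larger of the pair, provided the smaller
// one is 1 or the two dimensions match.
std::vector<size_t> numpy_broadcast_shape(std::vector<size_t> a, std::vector<size_t> b)
{
    if (a.size() < b.size())
        std::swap(a, b);
    std::vector<size_t> result = a;
    const size_t offset = a.size() - b.size();
    for (size_t i = 0; i < b.size(); ++i)
    {
        const size_t da = a[offset + i], db = b[i];
        if (da == db || db == 1)
            result[offset + i] = da;
        else if (da == 1)
            result[offset + i] = db;
        else
            throw std::runtime_error("shapes are not NumPy-broadcastable");
    }
    return result;
}

int main()
{
    // {8, 1, 6, 1} broadcast against {7, 1, 5} -> {8, 7, 6, 5}
    for (size_t d : numpy_broadcast_shape({8, 1, 6, 1}, {7, 1, 5}))
        std::cout << d << " ";
    std::cout << "\n";
    return 0;
}
```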
-
Diego Caballero authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com> Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Michał Karzyński authored
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Michał Karzyński authored
* Use default_opset in Gemm
* Use default_opset in Cos and Cosh
* Use default_opset in Slice
* Use default_opset in Log
* Use explicit opset0 in ScatterND
* Use default_opset in Hardmax
* Use default_opset in utils
* Remove redundant headers
* Style check
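The default_opset pattern referenced above amounts to a namespace alias, so ONNX operator implementations do not hard-code an opset version. A minimal sketch under that assumption; the alias spelling, header path, and make_log helper are assumptions for illustration, and Log is used only as an example op:

```cpp
#include <memory>

#include "ngraph/opsets/opset1.hpp"

// Assumed alias: importer code refers to default_opset::X everywhere, so
// bumping the default opset later is a one-line change.
namespace default_opset = ngraph::opset1;

std::shared_ptr<ngraph::Node> make_log(const ngraph::Output<ngraph::Node>& input)
{
    // Builds an opset-agnostic Log node from a single input.
    return std::make_shared<default_opset::Log>(input);
}
```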
-
mbencer authored
Merge branch 'mbencer/BuilderSplitV1' of https://github.com/NervanaSystems/ngraph into mbencer/BuilderSplitV1
-
mbencer authored
-
- 16 Feb, 2020 1 commit
-
Tomasz Dołbniak authored
* Constify the onnx importer conv
* Extract and fix the groups attribute validation for Conv
* Check if the convolution's data input rank is static
* Validate the groups attribute against channels and filters
* Validate the conv operation in a separate function
* Dynamically broadcast the conv bias if needed
* Import a test model with dynamic batch conv op
* Run a conv test with dynamic batch
* Cleanup of conv bias handling code
* Use a proper Broadcast constructor for bias in onnx conv
* Handle dynamic ReduceMean with statically defined rank
* Use the target shape rank to construct the default output shape for Broadcast
* Handle ONNX Squeeze with dynamic input and static rank
* Handle ONNX Shape with dynamic input and static rank
* Handle the dynamic target shape in ONNX Reshape
* Fix for the ONNX Shape input validation
* Handle ONNX Softmax with dynamic input and static rank
* Fix the failing Broadcast type prop test
* Code formatting
* Don't broadcast bias before adding it to the conv node
* Drop the conv node validation and rely on the core op implementation checks
* Code review feedback
* Revert the Broadcast op changes
* More code review feedback
* Dynamic conv test using ng test case
* Obsolete headers removal
* Code formatting
* Variable names refactor
* Disable model_conv_with_dynamic_batch test on GPU
* Code formatting

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
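Most of the items above rely on the same check: an input shape may be dynamic as long as its rank is static. A minimal sketch of that check with ngraph's PartialShape API; the parameter shape is illustrative, not the test model from the commit:

```cpp
#include <iostream>
#include <memory>

#include "ngraph/ngraph.hpp"

int main()
{
    using namespace ngraph;

    // A conv-like data input with a dynamic batch dimension but a static rank of 4.
    auto data = std::make_shared<op::Parameter>(
        element::f32, PartialShape{Dimension::dynamic(), 3, 32, 32});

    PartialShape shape = data->get_output_partial_shape(0);

    // Importer handlers (Squeeze, Shape, Softmax, ReduceMean, ...) can still
    // reason about such an input as long as the rank is static.
    if (shape.rank().is_static())
    {
        std::cout << "batch dim static:    " << shape[0].is_static() << "\n"; // 0
        std::cout << "channels dim static: " << shape[1].is_static() << "\n"; // 1
    }
    return 0;
}
```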
-
- 14 Feb, 2020 4 commits
-
Amy Zhuang authored
Co-authored-by: Chris Sullivan <chris.sullivan@intel.com> Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Robert Kimball authored
* Start of rework
* Faster multiply simplification
* Fix uniform constant check
* Support Add op
* Fix build error

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Ilya Churaev authored
* Fixed memory blob
* Fixed code style

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Katarzyna Mitrus authored
* Use test case in quant tests
* Reshape test refactor
* Reshape tests refactor
* Test case refactor onnx_import
* onnx import_in test case refactor
* onnx reshape test case refactor
* onnx convpool test case refactor
* onnx import in test case refactor
* onnx reshape test case refactor
* Style apply
* Style apply
* Removed commented code
* Infinity check softplus test refactor

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
- 13 Feb, 2020 7 commits
-
Diego Caballero authored
* Fix warnings: control reaches end of non-void function
* Format
* Revert changes + add throw at EOF
* Alternative fix

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Pruthvi authored
* Add fused_op.td to CMakeLists; define a pattern to fuse Wx + b and replace it with MatMulBias
* Remove TableGen LLVM_TARGET_DEFINITION for fused_ops_pattern.td and fused_ops.td; fix build issues
* Change pattern to match MatMul instead of Dot; CMake support to register the MatMulBias fused-op pattern
* Change the fusion pattern to match Add(Dot(op1, op2), bias) for MatMulBias; use applyPatternsGreedily instead of applyFullConversion in the graph pass; add a unit test comparing INTERPRETER vs. CPU for MatMulBias
* Affine lowering and verifier logic for NgMatMulBiasOp
* Add missing header file
* WIP: use NGGemm instead of NGMatMulBias
* Undo unintended changes
* Addressed PR comments
* Refactor the ctor of the NgDialectFusion pass; register the NgDialectFusion pass with PassRegistration
* Address PR comments
* Add lit test for MatMul+bias fusion
* Style fix for lit test

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Scott Cyphers authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com> Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
-
Leona C authored
* Update sitemap to not use a page title
* Update doc for DLDT layers and remove references to PlaidML
-
Evgenya Stepyreva authored
* Set output shape of Deconv with dynamic input
* Trigger CI
-
Scott Cyphers authored
-
Chris Sullivan authored
-
- 12 Feb, 2020 9 commits
-
Fabian Boemer authored
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Chris Sullivan authored
* Add ngraph visibility exports for the NVIDIA GPU backend.
* Remove GroupConvTranspose emitter.
* Disable failing tests that have appeared since removal of GPU backend in v0.25
* Update src/ngraph/runtime/gpu/unit_test.manifest
* Add more disabled tests.
* Add more disabled tests.

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Robert Kimball authored
* Always use control deps in sorting
* Fix build
* Deprecate old topological sort

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
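ngraph's own sorting code is not shown in this log; the following is a generic sketch of a topological sort (Kahn's algorithm) in which control dependencies are simply extra edges, which is the effect of always including them in the ordering. The node ids and edges are made up for illustration:

```cpp
#include <cstddef>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

// Kahn's algorithm over a graph whose edge list already merges data edges and
// control-dependency edges, so control deps always constrain the order.
std::vector<int> topological_sort(int num_nodes,
                                  const std::vector<std::pair<int, int>>& edges)
{
    std::vector<std::vector<int>> succ(num_nodes);
    std::vector<int> in_degree(num_nodes, 0);
    for (const auto& e : edges)
    {
        succ[e.first].push_back(e.second);
        ++in_degree[e.second];
    }

    std::queue<int> ready;
    for (int n = 0; n < num_nodes; ++n)
        if (in_degree[n] == 0)
            ready.push(n);

    std::vector<int> order;
    while (!ready.empty())
    {
        const int n = ready.front();
        ready.pop();
        order.push_back(n);
        for (int s : succ[n])
            if (--in_degree[s] == 0)
                ready.push(s);
    }
    return order; // shorter than num_nodes => the graph has a cycle
}

int main()
{
    // 0 -> 1 (data), 0 -> 2 (data), 2 -> 1 (control dependency)
    for (int n : topological_sort(3, {{0, 1}, {0, 2}, {2, 1}}))
        std::cout << n << " ";
    std::cout << "\n"; // prints: 0 2 1
    return 0;
}
```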
-
Evgenya Stepyreva authored
* ConvBackPropData dynamic output shape input
* Adam comments

Co-authored-by: aslepko <44713115+aslepko@users.noreply.github.com>
Co-authored-by: Robert Kimball <robert.kimball@intel.com>
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Robert Kimball authored
* Remove Event tracing and replace with chrome trace
* style

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
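The trace writer itself is not shown in this log; below is a minimal sketch of the Chrome trace-event JSON format the commit switches to, i.e. the kind of file chrome://tracing can load. The event name, output path, and pid/tid values are illustrative:

```cpp
#include <chrono>
#include <fstream>

int main()
{
    using namespace std::chrono;

    auto now_us = [] {
        return duration_cast<microseconds>(steady_clock::now().time_since_epoch()).count();
    };

    const auto start = now_us();
    // ... timed work would go here ...
    const auto dur = now_us() - start;

    // One complete ("ph":"X") event; timestamps and durations are in microseconds.
    std::ofstream trace("trace.json");
    trace << "[\n"
          << "  {\"name\": \"compile\", \"cat\": \"ngraph\", \"ph\": \"X\", "
          << "\"ts\": " << start << ", \"dur\": " << dur << ", "
          << "\"pid\": 1, \"tid\": 1}\n"
          << "]\n";
    return 0;
}
```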
-
Scott Cyphers authored
* Add some attribute visitors
* Need to keep size_t for opset1
* Fix condition
* Misso
* Missed int64_t
* Revert std changes
* Header reorg

Co-authored-by: aslepko <44713115+aslepko@users.noreply.github.com>
-
Robert Kimball authored
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-
Katarzyna Mitrus authored
* Asinh dynamic shape support prototype
* Asinh static shape test
* Asinh dynamic shapes test
* Unified asinh implementation for static and dynamic shapes
* Atanh dynamic shapes support
* Style apply
* Acosh dynamic shape support
* Tests update
* Style apply
* Update PlaidML manifest
* Use default tolerance bits
* Style apply
* Def tolerance bits for dynamic tests
* Rename dim_param
* Update acosh.prototxt
* Update acosh_dyn_shape.prototxt

Co-authored-by: Robert Kimball <robert.kimball@intel.com>
-
Robert Kimball authored
* Start
* Move Input
* Move Output
* Move ostream

Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
-