- 07 Jan, 2020 2 commits
Robert Kimball authored
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
Amy Zhuang authored
* Fix maxpoolbprop layout pass (#4122)
  * Fix maxpoolbprop layout pass
  * Use default format for max pooling bprop.
  * Fix MaxPoolWithIndicesBackprop unit test.
  * Fix CODEGEN.
  * Modify MaxPoolWithIndicesBackprop unit test.
  Co-authored-by: Amy Zhuang <amyzhuang97@gmail.com>
  Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
  (cherry picked from commit b5e030d9)
* Disable bfloat16 tests for MLIR.
Co-authored-by: Nishant Patel <nishant.b.patel@intel.com>
- 14 Dec, 2019 1 commit
Scott Cyphers authored
* [ONNX] Add GatherND op to ONNX importer (#3963)
  * [ONNX] Added gatherND op to ONNX importer.
  * Added tests.
  * Removed new line.
  * Update onnx_import.in.cpp
  * Changed tests.
* Fix serialization of op's type (#4019)
  * Fix serializer op name
  * cleanup
* Make sure examples compile (#3981)
  * Make sure examples compile
  * Resolve doc build error due to opset versioning and align dynamic tensor doc to cpp example
  * Add latest rc
  * Remove deprecated API
  * Update brief link summary
  * Dist example
  * update doc for cpp code examples folder
  * Fix typo and toc index
  * Build config for example, deprecation in dist test
  * style
  * Update jenkins-trigger.groovy: moving to GitLab.
* Fix layernorm flatten issue (#4032)
  * fix layernorm flatten issue
  * update ut
  * checkout output val
  * fix style
  * apply tolerance
* [MKLDNN] Emit dgemm for 2D DP FP Dot op (#3990)
  * [MLIR] Update MLIR/LLVM repos
  * Move MLIR/LLVM repos forward
    This includes a fix to the affine fusion algorithm.
  * Fix issues after merge
  * Fix lit test
  * [MKLDNN] Emit dgemm for 2D DP FP Dot op
    Add support for emitting MKLDNN's double-precision FP gemm from a 2D double-precision floating-point Dot operation.
  * Removed unnecessarily duplicated pattern
  * Add f64 matmul support to CPU Emitter + unit test
  * Add check for DP unsupported bias in cpu_fusion.cpp
* Remove GOE from Adjoints class (#3973)
  * Change generate_adjoints to take an OutputVector instead of a NodeVector for deltas.
  * Cleanup
  * Convert Adjoints class to use Output<Node>
  * More cleanup
  * More cleanup
  * Post-merge build issues
  * Don't push initial bprops multiple times
  * Eliminate GOE correctly
  * back-compatibility, unit test
* Helper in Constant to allow casting values to a different type (#4000)
  * Helper in Constant to allow casting values to a different type
    Simplify the logic needed to extract values from a Constant node when the expected data type is specified only as integral or floating point.
  * Review comment
  * Review comment
    Co-Authored-By: Tomasz Socha <tomasz.socha@intel.com>
  * Style apply
* TensorIterator: reshape support (#4038)
* Add second decompose pass to INTERPRETER backend (#4036)
* Update MKLDNN to v1.0.4. (#3951)
  * Update MKLDNN to v1.0.4. Build MKLDNN-v1 by default.
  * Add bf16 support check.
  * Modify visibility.
* Tested doc build for 0.27 with sitemap for ngraph.ai endpoint (#4014)
  * Make sure examples compile
  * Resolve doc build error due to opset versioning and align dynamic tensor doc to cpp example
  * Add latest rc
  * Remove deprecated API
  * Update brief link summary
  * Dist example
  * update doc for cpp code examples folder
  * Fix typo and toc index
  * Build config for example, deprecation in dist test
  * style
  * Sitemap for ngraph.ai doc content
  * Add title to sitemap
  * resolve docbuild warnings resultant from sitemap link labeling
  * Doc tag for 0.27.1
* Matmul dyshape_fused_forward_fluid_fix (#4023)
* use constructor_validate_and_infer_types() in CumSum ctor (#4044)
  * use constructor_validate_and_infer_types() in CumSum ctor
  * remove unused variable; relax rank check
  * Warning
* Fix tolerance for all_close_f (#4042)
  * Fix tolerance for all_close_f
  * Lower tolerance
  * use all_close
* Use v1::Gather in ONNX Importer (#4037)
* Add upgrade and downgrade pass for GroupConvolutionBackpropData ops (#4035)
  * Add up/downgrade passes for GroupConvolutionBackpropData operators
  * Improve decompose operation of v0::GroupConvolutionBackpropData to support N-dimensional data
  * Add UT for up/downgrade passes.
  * Remove unused variable
* Fixed constant operation for u1 format (#4045)
  * Fixed bin constant ops
  * Added export
  * Fixed buffer size
  * Fixed code style
* Fix broken serialize and deserialize for Sum and Product (#4050)
* v1::Reshape downgrade pass + onnx_importer adjustments (#4046)
* Update ONNX importer to use nGraph ops from new opset header (#3994)
* Fix NNP-T naming in README (#4048)
- 10 Dec, 2019 1 commit
baojun authored
* fix layernorm flatten issue
* update ut
* checkout output val
* fix style
* apply tolerance
- 09 Dec, 2019 2 commits
- 08 Dec, 2019 1 commit
Robert Kimball authored
* Fix serialization of op's type (#4019)
  * Fix serializer op name
  * cleanup
  * Kick off new build
  * style
- 06 Dec, 2019 11 commits
Geoffrey Wenger authored
baojun authored
* add ut to reproduce issue from PR#3931
* disable ut for plaidml
Katarzyna Mitrus authored
Tomasz Socha authored
Mateusz Bencer authored
Mateusz Bencer authored
Michał Karzyński authored
Diego Caballero authored
* [MLIR] Update MLIR/LLVM repos
* Move MLIR/LLVM repos forward
  This includes a fix to the affine fusion algorithm.
* Fix issues after merge
* Fix lit test
Geoffrey Wenger authored
Scott Cyphers authored
* Add exports
* Work-around windows issues
* windows
* Avoid vectors
Diego Caballero authored
- 05 Dec, 2019 5 commits
Nagy Mostafa authored
* disable group conv if data dilation is not 1s
* Enable group conv decomposition if data or window dilation is enabled. Re-enable a unit test.
* PR fix
* PR fix
* Fix fails
Tomasz Dołbniak authored
* fused::v1::Split op skeleton & serialization
* Shape inference for Split and upgrade/downgrade passes
* Upgrade pass for the Split op
* Disable the failing onnx_importer Split op tests on interpreter
* Opset1 test correction for v1::Split
* Disable the failing Split op unit tests on PlaidML
* Fix of the upgrade pass for Split
* Remove one of the obsolete v0::Split constructors
Robert Kimball authored
Don't imbue locale to cout, imbue a stringstream instead so that it does not contaminate ngraph output (#3989)
Jayaram Bobba authored
* Added v1::Select op with support for implicit broadcasting
* Addressed PR feedback
* Constant folding support for v1::Select op
* Remove commented-out code
* More shape inference tests
- 04 Dec, 2019 7 commits
Scott Cyphers authored
* Fix some opset bugs
  Typo in opset0. Include ops.hpp rather than ngraph.hpp.
* Opset op insertion
* Make opsets extendable, able to create instances
* Update like replacement
* Review comments
Gleb Kazantaev authored
* Fixed group convolution infer function
* style
baojun authored
* Add Fluid reduce_sum and reduce_sum_grad
* Temp save.
* Try cmake target_sources
* Make files compile.
* add fluid pool op placeholder
* can compile
* add global pooling attribute
* implement fprop
* implement bprop
* remove unused
Sang Ik Lee authored
* Add Fluid test.
* Fix typo.
* Fix typo.
Robert Kimball authored
Ilya Churaev authored
* Added reproducer for specialize_function
* Remove GOEs that might have been introduced while cloning nodes in function specialization
* Address PR feedback
* Fix incorrect merge
Scott Cyphers authored
Typo in opset0. Include ops.hpp rather than ngraph.hpp.
- 03 Dec, 2019 10 commits
Tomasz Dołbniak authored
* NonMaxSuppression op skeleton
* Validation of the NonMaxSuppression op
* Correct last 'boxes' dimension check
* onnx_importer support for NonMaxSuppression
* Code formatting
* Type and shape inference for NonMaxSuppression
* Different initialization of NMS inputs in onnx_importer
* Code formatting
* Basic type_prop tests for NonMaxSuppression
* More type_prop validation for NMS
Tomasz Socha authored
* [SPEC] Add new v1::GroupConvolution and v1::GroupConvolutionBackpropData ops.
* Sort downgrade passes
* Add downgrade pass for GroupConvolution
* WIP I
* Fix GroupConvolution validate and infer types
* Add upgrade pass for GroupConvolution
* Remove unnecessary get_static_groups() method
* Disable pass for cases when groups are in filters
* Review Fix I
* Move ops to fused/group_conv.hpp
* Use Op instead of FusedOp again
* Move v1::GroupConvolution and v1::GroupConvolutionBackpropData to FusedOp but temporarily disable decomposition
Ilya Lavrenov authored
Michał Karzyński authored
* Enable multiple int types for OneHot 'depth' param
* Move validation logic out of helper
* Make helper more generic
Gleb Kazantaev authored
* Convert slope to data type
* Updated Mod decomposition to use v1 Subtract
Gleb Kazantaev authored
* Fixed Selu op decomposition
* Updated Selu op
Tomasz Dołbniak authored
Mateusz Bencer authored
Jayaram Bobba authored
baojun authored
* Add Fluid reduce_sum and reduce_sum_grad
* Temp save.
* Try cmake target_sources
* Make files compile.
* Add fprop and build Fluid ops by default.
* Some fixes.
* Fix shape inference issue.
* Add bprop.
* Fix bprop.
* Bug fixes.