- 31 May, 2019 1 commit

Rob Earhart authored

- 25 May, 2019 1 commit

Scott Cyphers authored
* nbench: special handling for the "NNP_XXX" op name (#2883)
  * remove throw from dtor (#2854)
  * add special handling for the 'NNP_xx' op name
  * style
  * use description as suggested by bob
* Remove parent from PlaidML tensor initializer (#2923)
  * remove PlaidML tensor parent plumbing
  * style
* Add support for move semantics to AlignedBuffer (#2956)
  * add move operations to AlignedBuffer
  * unit test
* Create mkldnn primitives at first iteration for CODEGEN, part 2 (#2859)
  * ops: add, lstm, and rnn
  * ops: batchnorm
  * ops: concat and lrn; remove dead code
  * skip in-place concat, relu, reshape, and slice when building the node_primitive_string_deps_index map
  * change NGRAPH_ASSERT to NGRAPH_CHECK
  * ops: convertlayout, relu, leakyrelu, boundedrelu, sigmoid, softmax, slice
  * fix bugs
  * ops: quantizedconcat; check that descriptors exist before emitting code to read desc_file
  * ops: convolution backward; use a macro to write mkldnn memory dims to the generated file
  * ops: MaxPoolWithIndices and MaxPoolWithIndicesBackprop; add unit tests for MaxPoolWithIndices, MaxPoolWithIndicesBackprop, and MaxPoolBackprop
  * fix style error
  * ops: AvgPoolBackprop and MaxPoolBackprop; add unit test for AvgPoolBackprop
  * ops: DeconvolutionBias
  * ops: Quantize and Dequantize
  * ops: QuantizedDot and QuantizedDotBias
  * use a reference kernel for QuantizedConvolution in CODEGEN when mkldnn does not support the parameter types; get scales for quantization ops in cpu_emitter
  * fix Windows build error: add CPU_BACKEND_API
  * use a template for quantization ops
  * ops: QuantizedMatmul; emit a reference kernel for QuantizedDot in CODEGEN
  * remove QuantizedDot from get_scale_index
  * address PR feedback
* [FusedOps] Split (#2951)
  * Split op skeleton
  * two ways to construct a fused Split so it can be used in the onnx importer
  * refactor: move the util::split() helper functions to the core
  * Split's decompose_op() implementation using a helper function
  * use the fused Split in the onnx importer
  * Split helpers moved to ngraph/builder
  * basic UT: split a 1D tensor into 3 equal parts
  * UT: split a 2D tensor into variable-length parts
  * catch the proper type of exception in the onnx importer split()
  * initialize members in the correct order
  * type prop tests for Split
  * code formatting; PR feedback
* Add more infrastructure for specialization of cloned graphs (#2949)
  * virtualize some things that crash when the layout descriptor is missing
  * more shape specialization
  * (very bare) skeleton for dyn elimination
  * lift the i32->int64-only restriction on constant folding for Convert
  * add constant folding for ShapeOf, and some tests for new constant folders
  * tests for DynElimination
  * rename specialize_shapes to specialize_function, and add a unit test for value substitution
  * roll back overeager API change in dyn slice bprop (it has to handle right-indexed axes)
  * add a test for dynamic usage of the transpose op
  * fix warning/error about variable shadowing
  * strengthen checks in apply_permutation
  * propagate Constant shapes through Transpose
  * add CHANGE_DYNAMIC_STATE where appropriate
  * comments for helper funcs
  * remove the unique_ptr indirection for the AlignedBuffers
  * fix incorrect indexing of the AlignedBuffer vector (whoops!)
  * remove unnecessary CHANGE_DYNAMIC_STATEs
  * de-update the pass property unit test for const folding
  * replace mystery runes with all_pass_property_off
  * PR feedback; fix unit test failure; fix PR reference in comment
* Change FusionType to enum class and use EnumMask (#2957)
  * constexpr ctor for EnumMask
  * added pass properties to core passes
  * change FusionType to have better type safety
  * refactor to use EnumMask
  * remove extra code
  * added constants for FusionType backward compatibility
  * spelling and grammar fixes
* update visualize tree file extensions and output formats (#2954)
  * fix runtime error
* Update version, clean up ToC, add more detail to the section on inspecting graphs (#2947)
  * minor adjustments to the version module
  * move the distributed-training page to extras since there's not much there, and fix links that break when doing that
  * consistent casing on section titles
  * orphan the governance page so we don't have blank/empty links
  * update release notes with the new version module structure
  * PR feedback
* Allow NGRAPH_VISUALIZE_TREE_OUTPUT_SHAPES to output partial shapes (#2959)
* Remove functions from cpu which were moved to core (#2962)
  * fix a typo; remove an unused function
* Move zero-padded conv fusions from CPUFusion to CoreFusion (#2969)
  * address PR feedback: move unit tests to core_fusion
* Fix Convert for boolean output type in CODEGEN (#2958)
* Create tensor for the primary backend (#2970)
  * move private objects to protected
* [Fused] LeakyRelu op (#2919)
  * add LeakyRelu to the serializer
  * add unit tests
  * change broadcasting rules to NumPy style
  * remove std:: and ngraph:: prefixes
  * rename the CPU runtime LeakyRelu to CPULeakyRelu
  * fix the cpu_fusion.fuse_leaky_relu test
  * use Eigen's tanh in the fused sigmoid multiply kernel (#2946)
  * add LeakyRelu to the Intel GPU backend op list
* Make private members protected in hybrid classes (#2975)
  * allow overriding the passes
* [ONNX] Unit tests for QLinearMatMul (#2706)
  * unit test models for QLinearMatMul
  * extended type support for NgraphTestCase
  * move the value comparators to the NgraphTestCase class
  * add test cases and shape checking
  * disable GPU tests
* IntelGPU backend: switch to a clDNN version that is compatible with gcc 4.8 (#2961)
* Added accessor methods for layer op attributes (#2964)
  * style fixes and addressed PR feedback
* Add save/load API to runtime (#2955)
  * API defined
  * add a unit test for save/load with INTERPRETER
  * update per review comments; fix compiler error
* Backport fix from #2973 (#2976)
* CTCGreedyDecoder layer op (#2965)
  * added a comment on the seq_len validation checks
* Switch some get_inputs uses to use the newer inputs (#2968)
  * review comments
* update a few files to build on windows (#2974)
  * more fixes
- 24 May, 2019 1 commit

Scott Cyphers authored

- 21 May, 2019 1 commit

Rob Earhart authored

- 17 May, 2019 8 commits

Mohammad Mahbubuzzaman authored

Sang Ik Lee authored

Nishant Patel authored
* Add builders for QLinearMatmul for onnxruntime
* Generic Dot
* Change the onnx bridge and address PR feedback
* Fix date
* Fix CI failure
* Change variable filter to input1
* const & reference
* update branch
* Comment
* Introduce QuantizedMatmul
* change pattern
* QDot tests passing
* style
* PR feedback
* Fix pattern
* style

Sang Ik Lee authored
* Don't use `#pragma GCC diagnostic error "-Wswitch"` or `#pragma GCC diagnostic error "-Wswitch-enum"` for GCC 4.8.x
* Fix typo.
* Style.
Adam Procter authored

Tomasz Dołbniak authored
* SquaredDifference implementation
* Broadcast input before using it
* Simple test of SquaredDifference
* SquaredDifference validation tests
* Formatting adjustments
* Docs correction
* Exclude the unit test on iGPU
* Keep the includes in a single group
* Update intelgpu_backend.cpp
* Update unit_test.manifest
* UT for the broadcasting path
tsocha authored
* Add test for i32 gather
* Add support for ints to Gather op
* Move helper function to anonymous namespace
* Add more types
* Use static_cast instead of the old one
* Style fix
* Skip tests on GPU
* Add more tests
* Change bool to char
Scott Cyphers authored

- 16 May, 2019 2 commits

Jayaram Bobba authored
* Supports more cases for Convolution Bias fusion and also removes redundant namespace qualifiers in core_fusion.cpp
* added unit test
* select all fusions
Michał Karzyński authored

- 15 May, 2019 9 commits

Louis Feng authored
* constexpr ctor for EnumMask
* added pass properties to core passes.
* added unit tests.
* minor fixes.
Adam Rogowiec authored
* Extend lp-norm functions to take bias.
* Move lp-norm utilities to nGraph core op/util.
* Move norm files to builder directory.
* Apply clang-format.
* Skeleton for GRN operation.
* Add GRN implementation.
* Fix reshape utility function.
* Address review comments.
* Add using namespace std.
* Few fixes in GRN implementation.
* Clang format.
* Basic UT.
* Fix expected data.
* Add more UT and skip them on iGPU.
* Review comments: const correctness and remove the using namespace std statement.
* Unblock GRN on iGPU.
* Get the GRN op case back into the switch.
* Fix merge error.
Dmitry Yershov authored
IntelGPU backend: Mark the ScaleShift op as not supported in the callback of the FusedOpDecomposition pass (#2940)
* style
* fix merge error
* style was already correct; restore the 0965fb5 changes
* fix merge error
Michał Karzyński authored
This PR removes compiler warnings like this one:

```
src/ngraph/frontend/onnx_import/utils/reduction.hpp:73:28: warning: prior to the resolution of a defect report against ISO C++11, local variable 'op_node' would have been copied despite being returned by name, due to its not matching the function return type ('shared_ptr<ngraph::Node>' vs 'shared_ptr<ngraph::op::ArgMin>') [-Wreturn-std-move-in-c++11]
    return op_node;
           ^~~~~~~
```
Dmitry Yershov authored
IntelGPU backend: Change clDNN mode of Relu op to allow ConvolutionBiasAdd fusing inside clDNN (#2939)
Scott Cyphers authored

Leona C authored
* Add placeholder for doc versioning
* Finalize module to include tags of verified releases only
Michał Karzyński authored
* Add fused Squeeze op
* Use fused Squeeze op in ONNX importer
* Update serializer
* Add unit tests
* Add type prop tests
* Change Squeeze signature to accept a dynamic input for axes
* Update src/ngraph/op/fused/squeeze.cpp (Co-Authored-By: Adam Rogowiec <adam.rogowiec@intel.com>)
* Code review comments
* Fix failing unit test
* style
* Add op to iGPU backend
Dmitry Yershov authored

- 14 May, 2019 4 commits

Scott Cyphers authored
* Fix clang compiler warnings
* Remove unintended file.
* style
* Not part of PR
* Another extra closure ref
* More warnings from merges
* Lambda arg was used
* Remove remaining osx compiler warnings
* Try to avoid compiler warning
* Same for the other test
Dmitry Yershov authored
* IntelGPU backend: Enable FusedOpDecomposition pass
* PR 2912: address comments.
Adam Procter authored
I had renamed the directory at one point during development but forgot to update this.
Robert Kimball authored

- 13 May, 2019 9 commits

Sang Ik Lee authored
* Temp save (several intermediate checkpoints).
* Fix compile errors.
* Fix incorrect index.
* Fix UT typo.
* Interpreter passes UT.
* Fix more bugs.
* Apply style.
* Add shape check for the updates tensor.
* Fix merge typo.
Scott Cyphers authored
* Fix clang compiler warnings
* Remove unintended file.
* style
* Not part of PR
* Another extra closure ref
* More warnings from merges
* Lambda arg was used
Robert Kimball authored

Robert Kimball authored

Dmitry Yershov authored

tsocha authored
* Add ScaleShift operator
* Add ScaleShift to serializer
* Add UT for ScaleShift
* Add type_prop tests for ScaleShift
* Style fix
* Skip tests on Intel GPU
* Review fix 1
Sang Ik Lee authored

Adam Procter authored

Anna Alberska authored
* add gemm operation
* style apply
* erase if statement
* fix the test
* enable tests
- 11 May, 2019 4 commits

Chris Sullivan authored
Change CPUTensorRole to TensorRole in ngraph core and use it in the GPU backend instead of a local enum. It is now also used in the NNP backend. (#2900)
Jayaram Bobba authored

Tomasz Dołbniak authored
* Dump the expected and actual values for NgraphTestCase
* Adapt to changes in master
* Some docs and API unification
Adam Rogowiec authored
* Extend lp-norm functions to take bias.
* Move lp-norm utilities to nGraph core op/util.
* Move norm files to builder directory.
* Normalize fused operator implementation.
* Fused op boilerplate.
* Fix node validation and normalization across spatial axes.
* Add UT normalize across CHW with scalar scale.
* Fix expanding input tensor to 4D.
* Add more UT for 3D and 2D.
* Add more UT, with scale and across HW.
* Update to new localization of the l2_norm function.
* Add type_prop UT and update gpu/igpu manifests.
* Apply clang-format.
* Add positive UT for type_prop.
* Update unit test manifests.
* Address review comments.
* Add using namespace std.
* Remove unnecessary std prefixes.
* Remove blacklisted unit tests for GPU.
* Review comments.
* Fix clang errors.