- 02 Jun, 2019 1 commit
-
-
Jayaram Bobba authored
Avoid negative values in int64 initialization for cases where int64 parameters are used as indices (#3004)
-
- 31 May, 2019 5 commits
-
-
Robert Kimball authored
* handle case where a node's output is connected to multiple inputs of another node
* fix creation of the FunctionCall to have the correct outputs
* fix per review comment
-
Sang Ik Lee authored
* Clean up how compile flags are set and used by nGraph and external projects.
  Set C++11 through CMake and pass it down to external projects. Prefer CMake
  variables such as CMAKE_POSITION_INDEPENDENT_CODE and CMAKE_CXX_STANDARD
  instead of explicitly setting compiler-dependent flags. Create a json
  compilation database for external projects. CMAKE_CXX_FLAGS is used as common
  global options for nGraph and external projects. add_compile_options() is used
  for local options for the current and sub directories. add_definitions() is
  used for setting definitions for the current and sub directories.
  Note: Global options are not passed down to some external projects.
  Note: mkl-dnn resets CMAKE_CXX_FLAGS internally.
  Note: TBB and MLSL are not CMake based.
  Note: Eigen and json are header-only libraries.
* Fix error.
* Fix error. (second attempt)
* Cleanup code.
* Allow check for undefined macro.
* Try to fix cldnn issue.
* Set type for CMake arguments.
* Pass C++ standard to protobuf.
* Pass C++ standard down to TBB.
* Change how Clang specific flags are handled.
* Fix error.
* Workaround for compile error on Baidu's PDPD docker.
* Fix windows build error.
-
Chris Sullivan authored
-
Rob Earhart authored
-
Sang Ik Lee authored
-
- 30 May, 2019 2 commits
-
-
Jayaram Bobba authored
* Initial implementation of implicit broadcasting for eltwise ops. Only Add supported
* Addressed PR feedback
* cleanup
* Rename Bcast to Broadcast
* Autobroadcasting support for rest of elementwise ops
* Serializer support for autobroadcast
* Added missing autob serialization for Minimum
* Added execution unit tests and more op types to implicit broadcast elimination
* Addressed PR feedback
* Fixes windows build issue
* RVO optimization per PR feedback
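For context, the commit above adds NumPy-style implicit broadcasting between elementwise operands. A minimal standalone sketch of the broadcast shape rule it relies on (plain C++, not nGraph's actual API; names are illustrative):

```cpp
#include <iostream>
#include <stdexcept>
#include <utility>
#include <vector>

// NumPy-style broadcast shape of two operand shapes: align trailing
// dimensions; a dimension of 1 stretches to match the other operand.
std::vector<size_t> broadcast_shape(std::vector<size_t> a, std::vector<size_t> b)
{
    if (a.size() < b.size()) std::swap(a, b);
    std::vector<size_t> result(a);
    size_t offset = a.size() - b.size();
    for (size_t i = 0; i < b.size(); ++i)
    {
        size_t da = a[offset + i], db = b[i];
        if (da == db || db == 1)      result[offset + i] = da;
        else if (da == 1)             result[offset + i] = db;
        else throw std::runtime_error("shapes are not broadcast-compatible");
    }
    return result;
}

int main()
{
    // {8, 1, 6, 1} combined with {7, 1, 5} broadcasts to {8, 7, 6, 5}.
    for (size_t d : broadcast_shape({8, 1, 6, 1}, {7, 1, 5})) std::cout << d << ' ';
    std::cout << '\n';
}
```

Mismatched dimensions where neither side is 1 are rejected, which is why the elimination pass can assume compatible shapes downstream.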
-
Robert Kimball authored
* serialize constant faster
* more speedup
-
- 29 May, 2019 7 commits
-
-
Adam Rogowiec authored
* Draft of FakeQuantize operation along with UTs.
* Add FakeQuantize to implemented operators on IGPU.
* Get back FakeQuantize op case to switch.
* Fix compilation errors.
* Skip test for INTERPRETER backend and disable type_prop tests.
* Initial implementation covering the most basic case
* Cleanup of fake_quantize_with_clip UT
* Reformat the cpu unit tests manifest and unlock another fake quant UT
* Handle the clipping case by subtracting input_low from quantization input
* Clip the input data before quantization to avoid Selects
* UT manifest fix
* Obsolete comment removed
* Code formatting
* Broadcast input data for non-scalar in/out params
* Code formatting
* Enable the type prop tests for FakeQuantize
* Dequant the data without using the Dequantize op (fixes an edge case)
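As background, a FakeQuantize-style op clips each value to an input range, snaps it to one of `levels` evenly spaced steps, and rescales the result to an output range. A rough sketch of that element-wise math (an illustration of the general semantics, not the nGraph kernel):

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

// Element-wise fake quantization: clip x to [in_lo, in_hi], snap it to one of
// `levels` evenly spaced steps, then map the result into [out_lo, out_hi].
float fake_quantize(float x, float in_lo, float in_hi,
                    float out_lo, float out_hi, int levels)
{
    x = std::min(std::max(x, in_lo), in_hi);                             // clip
    float q = std::round((x - in_lo) / (in_hi - in_lo) * (levels - 1));  // quantize
    return q / (levels - 1) * (out_hi - out_lo) + out_lo;                // rescale
}

int main()
{
    for (float x : {-2.0f, 0.3f, 0.8f, 5.0f})
        std::cout << x << " -> " << fake_quantize(x, 0.f, 1.f, 0.f, 1.f, 4) << '\n';
}
```

Clipping the data up front (as one of the bullets above describes) removes the need for element-wise Select logic in the decomposed graph.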
-
Ilya Churaev authored
-
Adam Rogowiec authored
* Move reshape from utils to builder.
* Add aliases to functions in old place and describe changes.
-
gcwenger authored
-
Robert Kimball authored
-
Tomasz Dołbniak authored
* ShuffleChannels implementation
* Validation of ShuffleChannels params
* Implementation of ShuffleChannels decompose_op()
* Formatting adjustments
* Corrected implementation and validation of op params
* Basic test of ShuffleChannels
* Negative axis value test
* Default params for the ShuffleChannels op
* ShuffleChannels test with floats
* ShuffleChannels validation unit tests
* PR comments
* Compilation error fix
* PR feedback and cleanup
* Code formatting adjustment
* Negative axis value documentation
* Docs update (PR feedback)
* PR feedback: shape and axis validation
* Modify axis semantics on shuffle op
* Revert "PR feedback: shape and axis validation"
  This reverts commit 21b708e710b91da2a7e37a69c0da1f31c7743b47.
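The shuffle itself amounts to viewing the channel axis as (group, C/group), swapping those two sub-axes, and flattening back. A self-contained sketch on a flat NCHW buffer (illustrative only, not the op's decompose_op() code):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Channel shuffle on a flat NCHW buffer: reinterpret the C axis as
// (group, C/group), swap those sub-axes, and flatten back. Spatial data
// (H*W elements) moves as whole channels.
std::vector<float> shuffle_channels(const std::vector<float>& in,
                                    size_t N, size_t C, size_t HW, size_t group)
{
    std::vector<float> out(in.size());
    size_t channels_per_group = C / group;
    for (size_t n = 0; n < N; ++n)
        for (size_t g = 0; g < group; ++g)
            for (size_t c = 0; c < channels_per_group; ++c)
            {
                size_t src = (n * C + g * channels_per_group + c) * HW;
                size_t dst = (n * C + c * group + g) * HW;
                std::copy(in.begin() + src, in.begin() + src + HW, out.begin() + dst);
            }
    return out;
}

int main()
{
    std::vector<float> data{0, 1, 2, 3, 4, 5}; // N=1, C=6, H=W=1, group=3
    for (float v : shuffle_channels(data, 1, 6, 1, 3)) std::cout << v << ' ';
    std::cout << '\n'; // channel order after shuffling: 0 2 4 1 3 5
}
```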
-
Dmitry Yershov authored
Switch to clDNN version with conformance fix for 3 ONNX models (DenseNet-121, Inception-v2, ResNet-50) (#2982)
-
- 28 May, 2019 2 commits
-
-
Tomasz Dołbniak authored
-
Leona C authored
* Cleanup section
* Add updated illustrations for pattern_matcher and tensor_descriptor
* Add subsection link to be consistent
-
- 25 May, 2019 1 commit
-
-
Robert Kimball authored
* update a few files to build on windows
* more fixes
-
- 24 May, 2019 10 commits
-
-
Scott Cyphers authored
* Switch some get_inputs uses to use the newer inputs
* Review comments
-
Jayaram Bobba authored
* Added CTCGreedyDecoder layer op
* Added comment on seq_len validation checks
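For context, greedy CTC decoding takes the argmax class at every time step, collapses consecutive repeats, and drops the blank label. A standalone sketch of that procedure (not the CTCGreedyDecoder op's actual interface; the blank index and names are illustrative):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Greedy CTC decoding for one sequence: take the argmax class at each time
// step, collapse consecutive repeats, then remove the blank label.
std::vector<int> ctc_greedy_decode(const std::vector<std::vector<float>>& probs, int blank)
{
    std::vector<int> decoded;
    int prev = -1;
    for (const auto& t : probs)
    {
        int best = static_cast<int>(std::max_element(t.begin(), t.end()) - t.begin());
        if (best != prev && best != blank) decoded.push_back(best);
        prev = best;
    }
    return decoded;
}

int main()
{
    // 5 time steps over 3 classes; class 2 is the blank.
    std::vector<std::vector<float>> probs = {
        {0.1f, 0.8f, 0.1f}, {0.1f, 0.8f, 0.1f}, {0.1f, 0.1f, 0.8f},
        {0.7f, 0.2f, 0.1f}, {0.7f, 0.2f, 0.1f}};
    for (int c : ctc_greedy_decode(probs, 2)) std::cout << c << ' '; // prints: 1 0
    std::cout << '\n';
}
```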
-
Adam Procter authored
-
Robert Kimball authored
* API defined
* add unit test for save/load with INTERPRETER
* Update per review comments
* fix compiler error
-
Jayaram Bobba authored
* Added accessor methods for layer op attributes
* style fixes and addressed PR feedback
-
Dmitry Yershov authored
-
Michał Karzyński authored
* [ONNX] Unit test models for QLinearMatMul
* [ONNX] Extended types support for NgraphTestCase
* [ONNX] Move the value comparators to the NgraphTestCase class
* Add test cases
* Add shape checking
* disable GPU tests
-
Robert Kimball authored
* make private members protected in hybrid classes
* allow overriding the passes
-
Michał Karzyński authored
* [Fused] LeakyRelu op
* Add LeakyRelu to serializer
* Add unit tests
* Fix merge branch 'master' into mkarzyns/fused_leaky_relu
* Change broadcasting rules to NumPy style
* Remove std:: and ngraph:: prefixes
* Rename CPU Runtime LeakyRelu to CPULeakyRelu
* Style apply
* Fix cpu_fusion.fuse_leaky_relu test
* Use eigen's tanh in the fused sigmoid multiply kernel (#2946)
* Merge branch 'master' into mkarzyns/fused_leaky_relu
* Add LeakyRelu to Intel GPU backend op list
* Add LeakyRelu to Intel GPU backend op list
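LeakyRelu is the usual piecewise function: positive inputs pass through and negative inputs are scaled by a small alpha. A tiny reference sketch (generic C++, not the fused op's implementation; the alpha value here is arbitrary):

```cpp
#include <iostream>

// Leaky ReLU: pass positive values through, scale negative values by alpha.
float leaky_relu(float x, float alpha)
{
    return x >= 0.0f ? x : alpha * x;
}

int main()
{
    for (float x : {-2.0f, -0.5f, 0.0f, 3.0f})
        std::cout << x << " -> " << leaky_relu(x, 0.1f) << '\n';
}
```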
-
Robert Kimball authored
* create tensor for the primary backend
* move private objects to protected
-
- 23 May, 2019 6 commits
-
-
Amy Zhuang authored
-
Amy Zhuang authored
* Move zero padded conv fusions from CPUFusion to CoreFusion.
* Address PR feedback: move unit tests to core_fusion.
-
gaurides authored
* Remove functions from cpu which were moved to core
* Fix a typo
* Remove unused function
-
Adam Procter authored
-
Leona C authored
* Update version, clean up ToC, add more detail to section on inspecting graphs...
* Minor adjustments to version module
* Move distributed training page to extras since there's not much there
* Fix links that break when doing that
* Consistent casing on section titles
* Orphan governance page so we don't have blank/empty links
* Update release notes with new version module structure
* PR feedback
-
Robert Kimball authored
* update visualize tree file extensions and output formats
* fix runtime error
-
- 22 May, 2019 3 commits
-
-
Louis Feng authored
* constexpr ctor for EnumMask
* added pass properties to core passes.
* change fusion type to have better type safety.
* refactor to use enum mask.
* remove extra code.
* added constants for FusionType backward compatibility.
* spelling.
* grammar fix.
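The idea behind an enum-backed bitmask with a constexpr constructor is that pass properties can be combined and tested at compile time without giving up type safety. A simplified sketch of such a mask (hypothetical names; this is not nGraph's EnumMask class or its actual pass-property enum):

```cpp
#include <cstdint>
#include <iostream>
#include <type_traits>

// Simplified type-safe bitmask over an enum: each enumerator is a single bit,
// and because the mask is templated per enum type, masks of unrelated enums
// cannot be mixed accidentally.
template <typename Enum>
class EnumMask
{
public:
    using value_type = typename std::underlying_type<Enum>::type;

    constexpr EnumMask() = default;
    constexpr EnumMask(Enum e) : m_value(static_cast<value_type>(e)) {}

    constexpr EnumMask operator|(EnumMask other) const { return EnumMask(m_value | other.m_value); }
    constexpr EnumMask operator&(EnumMask other) const { return EnumMask(m_value & other.m_value); }
    constexpr bool is_set(EnumMask bits) const { return (m_value & bits.m_value) == bits.m_value; }

private:
    constexpr explicit EnumMask(value_type v) : m_value(v) {}
    value_type m_value = 0;
};

// Example properties a compiler pass might advertise (illustrative only).
enum class PassProperty : uint32_t
{
    REGULAR_FUSIONS      = 1 << 0,
    REQUIRE_STATIC_SHAPE = 1 << 1,
};

int main()
{
    constexpr EnumMask<PassProperty> props =
        EnumMask<PassProperty>(PassProperty::REGULAR_FUSIONS) | PassProperty::REQUIRE_STATIC_SHAPE;
    std::cout << std::boolalpha << props.is_set(PassProperty::REQUIRE_STATIC_SHAPE) << '\n'; // true
}
```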
-
Adam Procter authored
* Virtualize some things that crash when layout descriptor is missing
* More shape specialization
* (very bare) skeleton for dyn elimination
* Miscellaneous
* Lift i32->int64-only restriction on constant folding for Convert
* Add constant folding for ShapeOf, and some tests for new constant folders
* Tests for DynElimination
* Rename specialize_shapes to specialize_function, and add a unit test for value substitution
* Roll back overeager API change in dyn slice bprop (it has to handle right-indexed axes; bummer)
* Add a test for dynamic usage of transpose op
* Fix warning/error about variable shadowing
* Strengthen checks in apply_permutation
* Propagate Constant shapes through Transpose
* Add CHANGE_DYNAMIC_STATE where appropriate
* PR feedback, and fix unit test failure
* Fix PR reference in comment
* PR comments
* Comments for helper funcs
* Remove unique_ptr indirection for the AlignedBuffers
* Fix incorrect indexing of AlignedBuffer vector (whoops!)
* Remove unnecessary CHANGE_DYNAMIC_STATEs
* De-update pass property unit test for const folding
* Replace mystery runes with all_pass_property_off
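One of the helpers touched above, apply_permutation, reorders a shape the way Transpose does while validating the axis order. A standalone re-sketch of that behavior (illustrative only; the library's actual helper signature may differ):

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <stdexcept>
#include <vector>

// Apply an axis order to a shape the way Transpose does: output dim i comes
// from input dim perm[i]. The order must be a true permutation of 0..rank-1.
std::vector<size_t> apply_permutation(const std::vector<size_t>& shape,
                                      std::vector<size_t> perm)
{
    if (perm.size() != shape.size())
        throw std::invalid_argument("permutation rank does not match shape rank");
    std::vector<size_t> sorted(perm);
    std::sort(sorted.begin(), sorted.end());
    std::vector<size_t> expected(perm.size());
    std::iota(expected.begin(), expected.end(), 0);
    if (sorted != expected)
        throw std::invalid_argument("axis order is not a permutation");

    std::vector<size_t> out(shape.size());
    for (size_t i = 0; i < perm.size(); ++i) out[i] = shape[perm[i]];
    return out;
}

int main()
{
    // NCHW {1, 3, 224, 224} transposed with order {0, 2, 3, 1} gives NHWC.
    for (size_t d : apply_permutation({1, 3, 224, 224}, {0, 2, 3, 1})) std::cout << d << ' ';
    std::cout << '\n'; // 1 224 224 3
}
```

The same shape arithmetic is what lets Constant shapes be propagated through Transpose during constant folding.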
-
Tomasz Dołbniak authored
* Split op skeleton
* Two ways to construct a fused Split to be able to use it in onnx importer
* refactor: move the util::split() helper functions to the core
* Split's decompose_op() implementation using a helper function
* Use fused Split in the onnx_importer
* Code formatting
* PR feedback
* Split helpers moved to ngraph/builder
* Basic UT - split a 1D tensor to 3 equal parts
* UT: Split 2D tensor into variable length parts
* Code formatting
* Catch the proper type of exception in the onnx_importer split()
* Initialize members in the correct order
* Type prop tests for Split
* Code formatting
* PR feedback
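Splitting comes down to turning a list of part lengths into consecutive slice offsets. A minimal 1-D sketch of that helper logic (plain C++, not the builder/util::split() code; names are illustrative):

```cpp
#include <iostream>
#include <numeric>
#include <stdexcept>
#include <vector>

// Split a 1-D vector into consecutive parts of the given lengths.
// The lengths must sum to the input size.
std::vector<std::vector<int>> split(const std::vector<int>& data,
                                    const std::vector<size_t>& lengths)
{
    if (std::accumulate(lengths.begin(), lengths.end(), size_t{0}) != data.size())
        throw std::invalid_argument("part lengths do not sum to the input length");

    std::vector<std::vector<int>> parts;
    size_t offset = 0;
    for (size_t len : lengths)
    {
        parts.emplace_back(data.begin() + offset, data.begin() + offset + len);
        offset += len;
    }
    return parts;
}

int main()
{
    std::vector<int> data{0, 1, 2, 3, 4, 5};
    auto parts = split(data, {2, 1, 3}); // variable-length split, as in the UTs above
    for (const auto& p : parts)
    {
        for (int v : p) std::cout << v << ' ';
        std::cout << '\n';
    }
}
```

An equal split of a length-6 tensor into 3 parts is just the special case {2, 2, 2}.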
-
- 21 May, 2019 3 commits
-
-
Amy Zhuang authored
* Create mkldnn primitives at first iteration for CODEGEN. OPs: add, lstm, and rnn.
* OPs: batchnorm.
* OPs: concat and lrn. Remove dead code.
* Skip in place concat, relu, reshape, and slice when building node_primitive_string_deps_index map.
* Change NGRAPH_ASSERT to NGRAPH_CHECK.
* Address PR Feedback.
* Create mkldnn primitives at first iteration for CODEGEN. OPs: convertlayout, relu, leakyrelu, boundedrelu, sigmoid, softmax, slice.
* Fix bugs.
* OPs: quantizedconcat. Check if there are descriptors before emitting code to read desc_file.
* OPs: convolution backward. Use macro to write mkldnn memory dims to generated file.
* OPs: MaxPoolWithIndices and MaxPoolWithIndicesBackprop. Add unit tests for MaxPoolWithIndices, MaxPoolWithIndicesBackprop, and MaxPoolBackprop.
* Fix style error.
* OPs: AvgPoolBackprop and MaxPoolBackprop. Add unit test for AvgPoolBackprop.
* OPs: DeconvolutionBias.
* OPs: Quantize and Dequantize.
* OPs: QuantizedDot and QuantizedDotBias.
* Use reference kernel for QuantizedConvolution for CODEGEN when mkldnn does not support the parameter types. Get scales for quantization ops in cpu_emitter.
* Fix Windows build error: add CPU_BACKEND_API.
* Use template for quantization ops.
* OPs: QuantizedMatmul. Emit reference kernel for QuantizedDot in CODEGEN.
* Remove QuantizedDot from get_scale_index.
* Address PR feedback.
-
Robert Kimball authored
* Add move operations to AlignedBuffer
* unit test
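Adding move operations to a buffer class means transferring ownership of the allocation instead of copying it. A simplified sketch of the pattern (not nGraph's AlignedBuffer; the alignment bookkeeping is omitted for brevity):

```cpp
#include <cstddef>
#include <iostream>
#include <utility>

// Simplified buffer with move semantics: moving transfers ownership of the
// allocation and leaves the source empty, so there is no copy and no double free.
class Buffer
{
public:
    explicit Buffer(size_t size) : m_size(size), m_data(new char[size]) {}

    Buffer(Buffer&& other) noexcept : m_size(other.m_size), m_data(other.m_data)
    {
        other.m_size = 0;
        other.m_data = nullptr;
    }

    Buffer& operator=(Buffer&& other) noexcept
    {
        if (this != &other)
        {
            delete[] m_data;
            m_size = other.m_size;
            m_data = other.m_data;
            other.m_size = 0;
            other.m_data = nullptr;
        }
        return *this;
    }

    Buffer(const Buffer&) = delete;            // ownership stays unique
    Buffer& operator=(const Buffer&) = delete;

    ~Buffer() { delete[] m_data; }

    size_t size() const { return m_size; }

private:
    size_t m_size = 0;
    char* m_data = nullptr;
};

int main()
{
    Buffer a(1024);
    Buffer b(std::move(a)); // ownership moves; `a` is left empty
    std::cout << a.size() << ' ' << b.size() << '\n'; // prints: 0 1024
}
```

Movable buffers can be stored directly in containers without the unique_ptr indirection mentioned in the earlier commit.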
-
Rob Earhart authored
* Remove parent from PlaidML tensor initializer
* Remove plaidml tensor parent plumbing
* style
-