1. 28 May, 2019 1 commit
    • Leona/doc v0.20 (#2971) · 14f16bc1
      Leona C authored
      * Cleanup section
      
      * Add updated illustrations for pattern_matcher and tensor_descriptor
      
      * Add subsection link to be consistent
  2. 25 May, 2019 1 commit
  3. 24 May, 2019 10 commits
  4. 23 May, 2019 6 commits
  5. 22 May, 2019 3 commits
    • Change FusionType to enum class and use EnumMask (#2957) · a65b5155
      Louis Feng authored
      * constexpr ctor for EnumMask
      
      * added pass properties to core passes.
      
      * change fusion type to have better type safety.
      
      * refactor to use enum mask.
      
      * remove extra code.
      
      * added constants for FusionType backward compatibility.
      
      * spelling.
      
      * grammar fix.
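
      A rough sketch of the pattern this commit moves to (hypothetical, simplified types and values; the real EnumMask and FusionType live in the nGraph sources): a scoped enum plus a small constexpr mask wrapper, so fusion flags can only be combined with other FusionType values rather than raw integers.

      ```
      #include <cstdint>
      #include <type_traits>

      // Hypothetical, simplified sketch -- not the actual nGraph definitions.
      enum class FusionType : uint32_t
      {
          DIFFERENTIABLE_FUSIONS = 0x1,
          REGULAR_FUSIONS = 0x2,
          FOP_FUSIONS = 0x4
      };

      template <typename T>
      class EnumMask
      {
      public:
          using value_type = typename std::underlying_type<T>::type;

          constexpr EnumMask() : m_value{0} {}
          constexpr EnumMask(T enum_value) : m_value{static_cast<value_type>(enum_value)} {}

          constexpr bool is_set(T enum_value) const
          {
              return (m_value & static_cast<value_type>(enum_value)) != 0;
          }
          constexpr EnumMask operator|(EnumMask other) const
          {
              return EnumMask{static_cast<value_type>(m_value | other.m_value)};
          }

      private:
          constexpr explicit EnumMask(value_type raw) : m_value{raw} {}
          value_type m_value;
      };

      // Only FusionType values can form the mask; plain integers no longer compile.
      constexpr auto mask = EnumMask<FusionType>{FusionType::REGULAR_FUSIONS} | FusionType::FOP_FUSIONS;
      static_assert(mask.is_set(FusionType::FOP_FUSIONS), "FOP_FUSIONS should be set");
      ```
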
    • Add more infrastructure for specialization of cloned graphs (#2949) · da1cacde
      Adam Procter authored
      * Virtualize some things that crash when layout descriptor is missing
      
      * More shape specialization
      
      * (very bare) skeleton for dyn elimination
      
      * Miscellaneous
      
      * Lift i32->int64-only restriction on constant folding for Convert
      
      * Add constant folding for ShapeOf, and some tests for new constant folders
      
      * Tests for DynElimination
      
      * Rename specialize_shapes to specialize_function, and add a unit test for value substitution
      
      * Roll back overeager API change in dyn slice bprop (it has to handle right-indexed axes; bummer)
      
      * Add a test for dynamic usage of transpose op
      
      * Fix warning/error about variable shadowing
      
      * Strengthen checks in apply_permutation
      
      * Propagate Constant shapes through Transpose
      
      * Add CHANGE_DYNAMIC_STATE where appropriate
      
      * PR feedback, and fix unit test failure
      
      * Fix PR reference in comment
      
      * PR comments
      
      * Comments for helper funcs
      
      * Remove unique_ptr indirection for the AlignedBuffers
      
      * Fix incorrect indexing of AlignedBuffer vector (whoops!)
      
      * Remove unnecessary CHANGE_DYNAMIC_STATEs
      
      * De-update pass property unit test for const folding
      
      * Replace mystery runes with all_pass_property_off
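
      One of the bullets above, constant folding for ShapeOf, is easy to picture in isolation: if the input to a shape-of node has a fully static shape, the node can be replaced outright by a constant holding that shape. A framework-free sketch of that idea (stand-in types, not the actual nGraph pass):

      ```
      #include <cstdint>
      #include <memory>
      #include <vector>

      // Minimal stand-ins for graph nodes; the real pass works on ngraph::Node subclasses.
      struct Node
      {
          virtual ~Node() = default;
      };

      struct ConstantI64 : Node
      {
          explicit ConstantI64(std::vector<int64_t> v) : values(std::move(v)) {}
          std::vector<int64_t> values;
      };

      struct ShapeOf : Node
      {
          bool input_shape_is_static = false;
          std::vector<int64_t> input_shape; // meaningful only when the shape is static
      };

      // Replace ShapeOf with a constant when the input shape is known at compile time;
      // otherwise leave the node in place for later (dynamic) handling.
      std::shared_ptr<Node> try_fold_shape_of(const std::shared_ptr<ShapeOf>& node)
      {
          if (!node->input_shape_is_static)
          {
              return node;
          }
          return std::make_shared<ConstantI64>(node->input_shape);
      }
      ```
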
    • [FusedOps] Split (#2951) · ba546455
      Tomasz Dołbniak authored
      * Split op skeleton
      
      * Two ways to construct a fused Split to be able to use it in onnx importer
      
      * refactor: move the util::split() helper functions to the core
      
      * Split's decompose_op() implementation using a helper function
      
      * Use fused Split in the onnx_importer
      
      * Code formatting
      
      * PR feedback
      
      * Split helpers moved to ngraph/builder
      
      * Basic UT - split a 1D tensor into 3 equal parts
      
      * UT: Split 2D tensor into variable length parts
      
      * Code formatting
      
      * Catch the proper type of exception in the onnx_importer split()
      
      * Initialize members in the correct order
      
      * Type prop tests for Split
      
      * Code formatting
      
      * PR feedback
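
      At its core, the fused Split above decomposes into a set of slices along one axis, either into equal parts or into parts of caller-specified lengths. A hedged, framework-free sketch of that slicing arithmetic (illustrative helpers, not the nGraph builder API):

      ```
      #include <cstddef>
      #include <stdexcept>
      #include <utility>
      #include <vector>

      // Given the dimension size along the split axis and the requested part
      // lengths, return (begin, end) index pairs for each output slice.
      std::vector<std::pair<size_t, size_t>>
          split_bounds(size_t axis_dim, const std::vector<size_t>& part_lengths)
      {
          std::vector<std::pair<size_t, size_t>> bounds;
          size_t begin = 0;
          for (size_t len : part_lengths)
          {
              bounds.emplace_back(begin, begin + len);
              begin += len;
          }
          if (begin != axis_dim)
          {
              throw std::invalid_argument("part lengths do not add up to the axis dimension");
          }
          return bounds;
      }

      // Equal split: axis_dim must be divisible by num_splits.
      std::vector<std::pair<size_t, size_t>> split_equal(size_t axis_dim, size_t num_splits)
      {
          if (num_splits == 0 || axis_dim % num_splits != 0)
          {
              throw std::invalid_argument("axis dimension is not divisible by the number of splits");
          }
          return split_bounds(axis_dim, std::vector<size_t>(num_splits, axis_dim / num_splits));
      }
      ```
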
  6. 21 May, 2019 3 commits
    • Create mkldnn primitives at first iteration for codegen - part2 (#2859) · 9335e41c
      Amy Zhuang authored
      * Create mkldnn primitives at first iteration for CODEGEN. OPs: add, lstm, and rnn.

      * OPs: batchnorm.

      * OPs: concat and lrn. Remove dead code.
      
      * Skip in place concat, relu, reshape, and slice when building node_primitive_string_deps_index map.
      
      * Change NGRAPH_ASSERT to NGRAPH_CHECK.
      
      * Address PR Feedback.
      
      * Create mkldnn primitives at first iteration for CODEGEN. OPs: convertlayout, relu, leakyrelu, boundedrelu, sigmoid, softmax, slice.
      
      * Fix bugs.
      
      * OPs: quantizedconcat. Check if there are descriptors before emitting code to read desc_file.
      
      * OPs: convolution backward. Use macro to write mkldnn memory dims to generated file.
      
      * OPs: MaxPoolWithIndices and MaxPoolWithIndicesBackprop. Add unit tests for MaxPoolWithIndices, MaxPoolWithIndicesBackprop, and MaxPoolBackprop.
      
      * Fix style error.
      
      * OPs: AvgPoolBackprop and MaxPoolBackprop. Add unit test for AvgPoolBackprop.
      
      * OPs: DeconvolutionBias.

      * OPs: Quantize and Dequantize.

      * OPs: QuantizedDot and QuantizedDotBias.
      
      * Use reference kernel for QuantizedConvolution for CODEGEN when mkldnn does not support the parameter types. Get scales for quantization ops in cpu_emitter.
      
      * Fix Windows build error: add CPU_BACKEND_API.
      
      * Use template for quantization ops.
      
      * OPs: QuantizedMatmul. Emit reference kernel for QuantizedDot in CODEGEN.
      
      * Remove QuantizedDot from get_scale_index.
      
      * Address PR feedback.
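
      The recurring theme in the commit above, creating mkldnn primitives at the first iteration, can be pictured as the generated code building an expensive primitive only on its first call and caching it for all later calls. A hand-wavy sketch of that caching pattern (stand-in types, not the actual emitted CPU backend code):

      ```
      #include <functional>
      #include <unordered_map>

      // Stand-in for an mkldnn primitive; in the real backend this would be an
      // mkldnn primitive plus its memory descriptors.
      struct Primitive
      {
          // ... handles, descriptors ...
      };

      class PrimitiveCache
      {
      public:
          // Build the primitive on the first call for a given slot, reuse it afterwards.
          Primitive& get_or_create(size_t slot, const std::function<Primitive()>& build)
          {
              auto it = m_primitives.find(slot);
              if (it == m_primitives.end())
              {
                  it = m_primitives.emplace(slot, build()).first;
              }
              return it->second;
          }

      private:
          std::unordered_map<size_t, Primitive> m_primitives;
      };

      // The generated function would then look roughly like:
      //   static PrimitiveCache cache;
      //   Primitive& p = cache.get_or_create(42, [] { return Primitive{/* descriptors */}; });
      //   execute(p, inputs, outputs);
      ```
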
    • Add support for move semantics to AlignedBuffer (#2956) · 30f3634e
      Robert Kimball authored
      * Add move operations to AlignedBuffer
      
      * unit test
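
      A minimal sketch of what adding move semantics to a buffer class like this usually amounts to (illustrative only; the real AlignedBuffer manages aligned storage in the nGraph runtime): the move operations steal the pointer and null out the source, so ownership transfers without copying or double-freeing.

      ```
      #include <cstddef>

      class AlignedBuffer
      {
      public:
          AlignedBuffer() = default;
          explicit AlignedBuffer(size_t byte_size)
              : m_size{byte_size}, m_data{new char[byte_size]}
          {
          }
          ~AlignedBuffer() { delete[] m_data; }

          // Move constructor: take ownership and leave the source empty.
          AlignedBuffer(AlignedBuffer&& other) noexcept
              : m_size{other.m_size}, m_data{other.m_data}
          {
              other.m_size = 0;
              other.m_data = nullptr;
          }

          // Move assignment: release our storage, then take ownership.
          AlignedBuffer& operator=(AlignedBuffer&& other) noexcept
          {
              if (this != &other)
              {
                  delete[] m_data;
                  m_size = other.m_size;
                  m_data = other.m_data;
                  other.m_size = 0;
                  other.m_data = nullptr;
              }
              return *this;
          }

          // Copying is disabled in this sketch; the buffer has unique ownership.
          AlignedBuffer(const AlignedBuffer&) = delete;
          AlignedBuffer& operator=(const AlignedBuffer&) = delete;

          size_t size() const { return m_size; }
          char* data() { return m_data; }

      private:
          size_t m_size = 0;
          char* m_data = nullptr;
      };
      ```
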
    • Remove parent from PlaidML tensor initializer (#2923) · 5ffb0665
      Rob Earhart authored
      * Remove parent from PlaidML tensor initializer
      
      * Remove plaidml tensor parent plumbing
      
      * style
  7. 20 May, 2019 1 commit
  8. 17 May, 2019 8 commits
  9. 16 May, 2019 2 commits
  10. 15 May, 2019 5 commits
    • [Dynamic Shape] Added Pass Properties to Core Passes (#2935) · b2ca3e79
      Louis Feng authored
      * constexpr ctor for EnumMask
      
      * added pass properties to core passes.
      
      * added unit tests.
      
      * minor fixes.
    • [Fused Op] GRN (#2905) · 4fb4be5e
      Adam Rogowiec authored
      * Extend lp-norm functions to take bias.
      
      * Move lp-norm utilities to nGraph core op/util.
      
      * Move norm files to builder directory.
      
      * Apply clang-format.
      
      * Skeleton for GRN operation.
      
      * Add GRN implementation.
      
      * Fix reshape utility function.
      
      * Address review comments.
      
      * Add using namespace std.
      
      * Review comments.
      
      * Few fixes in grn implementation.
      
      * Clang format.
      
      * Basic UT.
      
      * Fix expected data.
      
      * Add more UT and skip them on IGPU.
      
      * Review comments: const correctness and remove using namespace std statement.
      
      * Unblock GRN on IGPU.
      
      * Add the GRN op case back to the switch.
      
      * merge error
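
      GRN (global response normalization) from the commit above is essentially an L2 normalization across the channel axis with a bias term added under the square root: out = x / sqrt(sum_c x^2 + bias). A plain reference sketch over an NCHW buffer (illustrative, not the fused op's actual decompose_op()):

      ```
      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Normalize across channels: out = x / sqrt(sum_over_c(x^2) + bias).
      // Data is laid out NCHW in a flat vector.
      std::vector<float> grn(const std::vector<float>& x,
                             size_t n, size_t c, size_t h, size_t w,
                             float bias)
      {
          std::vector<float> out(x.size());
          for (size_t in = 0; in < n; ++in)
          {
              for (size_t y = 0; y < h; ++y)
              {
                  for (size_t xw = 0; xw < w; ++xw)
                  {
                      float sum_sq = 0.0f;
                      for (size_t ic = 0; ic < c; ++ic)
                      {
                          size_t idx = ((in * c + ic) * h + y) * w + xw;
                          sum_sq += x[idx] * x[idx];
                      }
                      const float denom = std::sqrt(sum_sq + bias);
                      for (size_t ic = 0; ic < c; ++ic)
                      {
                          size_t idx = ((in * c + ic) * h + y) * w + xw;
                          out[idx] = x[idx] / denom;
                      }
                  }
              }
          }
          return out;
      }
      ```
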
    • IntelGPU backend: Mark ScaleShift op as not supported in callback of FusedOpDecomposition pass (#2940) · bf869655
      Dmitry Yershov authored
      
      * IntelGPU backend: Mark ScaleShift op as not supported in callback of FusedOpDecomposition pass
      
      * style
      
      * merge error
      
      * Style was already correct; restore the 0965fb5 changes
      
      * merge error
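
      For context on the ScaleShift change above: FusedOpDecomposition is driven by a backend callback that returns true for fused ops the backend executes natively, and everything else is decomposed into simpler ops. A rough, hypothetical sketch of that control flow (simplified stand-ins, not the actual nGraph pass or the IntelGPU callback):

      ```
      #include <functional>
      #include <memory>
      #include <string>
      #include <vector>

      // Simplified stand-in for a graph node.
      struct Node
      {
          std::string type_name;
          bool is_fused = false;
      };

      using FusionCallback = std::function<bool(const Node&)>;

      // Decompose fused ops unless the backend claims native support for them.
      void decompose_fused_ops(std::vector<std::shared_ptr<Node>>& graph,
                               const FusionCallback& backend_supports)
      {
          for (auto& node : graph)
          {
              if (!node->is_fused)
              {
                  continue;
              }
              if (backend_supports && backend_supports(*node))
              {
                  continue; // backend runs this fused op directly; keep it
              }
              // ... otherwise replace the node with its decomposed subgraph ...
          }
      }

      // An IntelGPU-style callback that no longer claims ScaleShift, so the pass
      // decomposes it (op names here are illustrative).
      bool backend_supports_fused_op(const Node& node)
      {
          if (node.type_name == "ScaleShift")
          {
              return false;
          }
          return node.type_name == "ConvolutionBias";
      }
      ```
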
    • [ONNX] Remove return-std-move-in-c++11 warning (#2937) · 31dc7234
      Michał Karzyński authored
      This PR removes compiler warnings like this one:
      
      ```
      src/ngraph/frontend/onnx_import/utils/reduction.hpp:73:28: warning: prior to the resolution
        of a defect report against ISO C++11, local variable 'op_node' would have been copied
        despite being returned by name, due to its not matching the function return type
        ('shared_ptr<ngraph::Node>' vs 'shared_ptr<ngraph::op::ArgMin>') [-Wreturn-std-move-in-c++11]
                        return op_node;
                               ^~~~~~~
      ```
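
      The warning quoted above fires when a local whose smart-pointer type differs from the function's return type is returned by name: under the original C++11 wording such a return would have been a copy rather than a move. One common fix is to give the local the function's return type (or return the expression directly); a hedged sketch of the before/after pattern (simplified types, not the actual onnx_import code):

      ```
      #include <memory>

      struct Node
      {
          virtual ~Node() = default;
      };
      struct ArgMin : Node
      {
      };

      // Before: 'op_node' has a different shared_ptr type than the return type,
      // which is what -Wreturn-std-move-in-c++11 complains about.
      std::shared_ptr<Node> make_argmin_warns()
      {
          std::shared_ptr<ArgMin> op_node = std::make_shared<ArgMin>();
          return op_node; // warning: would have been copied prior to the defect-report fix
      }

      // After: declare the local with the function's return type (or return the
      // expression directly), so no copy-vs-move ambiguity remains.
      std::shared_ptr<Node> make_argmin_clean()
      {
          std::shared_ptr<Node> op_node = std::make_shared<ArgMin>();
          return op_node;
      }
      ```
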
    • IntelGPU backend: Change clDNN mode of Relu op to allow ConvolutionBiasAdd fusing inside clDNN (#2939) · c11c7068
      Dmitry Yershov authored
      