1. 25 Mar, 2020 1 commit
• [MLIR] Use mkldnn callback for ConvBias. (#4205) · a8a9bcb5
  Amy Zhuang authored
      * [MLIR] Use mkldnn callback for ConvBias.
      
      * Add try catch.
      
      Fix opAttrsVec.
      
      Add rank check for Gemm and MatMul.
      
      * Fix merge error.
      
      * Fix a bug.
      
      * Fix lit test.
      
      * Modify unit test.
      
      * Fix merge error.
      
      * Address PR feedback.
      
      * Address PR feedback.
      
      * Insert callback_init function to module.
      
      * Fix lit tests.
      
      * Fix a bug.
      
      * Use a set of GlobalOps for attributes.
      
      * Address PR feedback.
      
      * Address PR feedback.
      
      * Fix merge error.
      
      * Fix style error.
      
      * Fix style error.
Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
  2. 18 Mar, 2020 1 commit
• GetOutputElement removal preparation (#4425) · 0af33226
  Scott Cyphers authored
      * GetOutputElement removal preparation
      
Not all outputs are used, so don't force them to be connected in replace.
Add a pattern that matches on any output.
Remove GOEs by default, but allow disabling the removal.
Fix failing core passes/tests with GOE dependency.
      
      * Fix MLIR call
      
      * Fix value handle assignment
      
      * Cleanup
      
      * style
      
      * review comments
      
      * Fix onnx tests
      
* Allow simplifications to work on multi-value nodes
      
      * Disable goe removal for MLIR test
      
      * null init of Output
Co-authored-by: nmostafa <nagy.h.mostafa@intel.com>
  3. 12 Mar, 2020 1 commit
  4. 04 Mar, 2020 1 commit
  5. 13 Feb, 2020 1 commit
• [MLIR] MatMulBias Fused Op support in MLIR (#4104) · 925087ba
  Pruthvi authored
* - add fused_op.td to CMakeLists
- define pattern to fuse Wx + b and replace it with MatMulBias
      
* - remove table-gen LLVM_TARGET_DEFINITION for fused_ops_pattern.td,
fused_ops.td
- fix build issues
      
* - change pattern to match MatMul instead of Dot
- support in CMake to register the MatMulBias fused-op pattern
      
* - made changes to the fusion pattern to match Add(Dot(op1, op2), bias) for
MatMulBias
- use applyPatternsGreedily instead of applyFullConversion in the graph
pass
- add unit test comparing INTERPRETER vs. CPU for MatMulBias
      
* - Add affine lowering and verifier logic to NgMatMulBiasOp
      
      * add missing header file
      
      * - WIP, use NGGemm instead of NGMatMulBias
      
      * -undo unintended changes
      
      * Addressed PR comments
      
      * - refactor the ctor of the NgDialectFusion pass
      - register NgDialectFusion pass with the PassRegistration
      
      * Address PR comments
      
      * -add lit test for matmul+bias fusion
      
      * -style fix lit test
Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
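The fusion above rewrites the graph pattern Add(Dot(op1, op2), bias) into a single MatMulBias op. A minimal NumPy sketch of the algebraic identity the rewrite relies on (function name hypothetical, not the repo's API):

```python
import numpy as np

def matmul_bias(x, w, b):
    # Fused form: one op computing Dot(x, w) followed by the bias add.
    return x @ w + b

x = np.arange(6.0).reshape(2, 3)
w = np.ones((3, 4))
b = np.full(4, 0.5)
# The unfused graph Add(Dot(x, w), b) and the fused op agree elementwise,
# which is what the INTERPRETER-vs-CPU unit test checks end to end.
assert np.allclose(matmul_bias(x, w, b), (x @ w) + b)
```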
  6. 29 Jan, 2020 1 commit
  7. 23 Jan, 2020 1 commit
• Cyphers/pattern (#4095) · 3bffe536
  Scott Cyphers authored
      * Make pattern matcher node-based
      
      Simplify implementation
      Add support for Or, Branch
      Start of support for recurrent pattern
      
      * Only save state at branch points
      
      * Factor Or out of label
      
      * Documentation
      
      * Review
      
      * Only ops need to match on shape/output index
  8. 01 Jan, 2020 1 commit
  9. 11 Dec, 2019 2 commits
• Remove GOE from Adjoints class (#3973) · f803feb7
  Robert Kimball authored
      * Change generate_adjoints to take an OutputVector instead of a NodeVector for deltas.
      
      * Cleanup
      
      * Adjoints class convert to use Output<Node>
      
      * More cleanup
      
      * More cleanup
      
      * Post-merge build issues
      
      * Don't push initial bprops multiple times
      
      * Eliminate GOE correctly
      
      * back-compatibility, unit test
• [MKLDNN] Emit dgemm for 2D DP FP Dot op (#3990) · fee3d1a7
  Diego Caballero authored
      * [MLIR] Update MLIR/LLVM repos
      
      * Move MLIR/LLVM repos forward
      
      This includes fix to affine fusion algorithm.
      
      * Fix issues after merge
      
      * Fix lit test
      
      * [MKLDNN] Emit dgemm for 2D DP FP Dot op
      
      Add support for emitting MKLDNN's double precision FP gemm from a 2D double
      precision floating point Dot operation.
      
      * Removed unnecessarily duplicated pattern
      
      * Add f64 matmul support to CPU Emitter + unit test
      
      * Add check for DP unsupported bias in cpu_fusion.cpp
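The mapping in this commit is the standard one: a 2D double-precision Dot is exactly a BLAS dgemm with alpha = 1 and beta = 0. A hedged NumPy illustration of the GEMM contract being targeted (not the MKL-DNN call itself):

```python
import numpy as np

def gemm(alpha, a, b, beta, c):
    # BLAS GEMM contract: C = alpha * A @ B + beta * C
    return alpha * (a @ b) + beta * c

a = np.random.rand(2, 3)   # NumPy defaults to float64, i.e. "DP FP"
b = np.random.rand(3, 4)
c = np.zeros((2, 4))
# A plain 2D Dot is the special case alpha = 1.0, beta = 0.0.
assert np.allclose(gemm(1.0, a, b, 0.0, c), a @ b)
```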
  10. 22 Nov, 2019 1 commit
• Fused_op: BatchMatMulTranspose (#3871) · db5b11c8
  gaurides authored
      * Initial commit
      
      * Add decompose_op and unit-test
      
      * Style fix
      
      * Fix CI error
      
      * Address review comments
      
      * Remove CPUBatchFusion
      
      * Address review feedback
      
      * Address review feedback
      
      * Added type_prop tests
      
      * Moved 1 test from cpu to core to keep together
      
      * Address PR comments
      
      * Fix style
  11. 08 Nov, 2019 1 commit
  12. 31 Oct, 2019 1 commit
• CPU implementation of Gelu op (#3787) · 73fff9f4
  gaurides authored
      * Initial implementation
      
      * Fixed Gelu
      
      * Gelu backprop initial implementation
      
      * Add GeluBackprop fusion
      
      * Gelu and gelu backprop fusion test cases
      
* Prevent decompose_op() for Gelu/GeluBackpropFactor for some types
      
      * Fixes and cleanup
      
      * Enabled backprop fusion
      
      * Fixed some issues
      
      * Mostly cleanup
      
      * Some more cleanup
      
      * File permissions
      
      * Remove unused variable
      
      * Style check
      
      * Address PR feedback
      
      * Address PR feedback
      
      * Incorporate changes related to latest master
      
      * Style check
      
      * Some more PR feedback related changes
      
      * Remove comment
      
      * Check for relative error
      
      * Retrigger CI
      
      * corrected syntax
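For reference, Gelu is conventionally defined via the Gaussian CDF; a minimal sketch of the exact erf form (as opposed to the tanh approximation), consistent with the relative-error check the tests use:

```python
import math

def gelu(x):
    # Gelu(x) = x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

assert gelu(0.0) == 0.0
assert abs(gelu(10.0) - 10.0) < 1e-6   # large positive inputs pass through
```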
  13. 22 Oct, 2019 1 commit
  14. 21 Sep, 2019 1 commit
  15. 19 Sep, 2019 1 commit
  16. 05 Sep, 2019 1 commit
• Use mkl-dnn v1.0 or v0.x depending on compilation flag. (#3227) · e26d602a
  Amy Zhuang authored
      * Use mkl-dnn v1.0 or v0.x depending on compilation flag.
      
      * Change cpu builder files.
      
      * Modify cmake files.
      
      Use mkldnn-v1.0 for DEX if NGRAPH_USE_MKLDNN_V1 is set to true, otherwise use mkldnn-v0.x.
      
      CODEGEN only builds with mkldnn-v1.0.
      
      * Implement mkldnn utility functions for mkldnn-v1.0.
      
      User mode scratchpad management for mkldnn-v1.0.
      
      * Query scratchpad size and allocate a buffer of max scratchpad size.
      
      * Do not create mkldnn::memory when query scratchpad size of Reorder.
      
      Modify mkldnn utility functions.
      
      Fix convolution_forward_init and inner_product_forward_init.
      
      Modify CPURuntimeContextCG.
      
      * Add user mode scratchpad to CODEGEN.
      
      * mkldnn-v1.0 splits LSTM states. Update Rnn/Lstm Op accordingly.
      
      * Address PR feedback: use MKLDNN_MAJOR_VERSION.
      
      * Modify cpu rnn fusion pass and related unit tests.
      
      * Change Rnn/Lstm arg types to Output.
      
      * Fix Lstm for CODEGEN.
      
      * Set native layout for Slice when input format is blocked.
      
      * Do not print scratchpad size.
      
      * Change external_mkldnn_v1.cmake.
      
      Fix a typo.
      
      * Add mkldnn_v1.patch for mkldnn-v1.0.
      
      * Address PR feedback.
      
      * Define MKLDNN_ERROR_MESSAGE.
      
      * Address PR feedback: change to NGRAPH_USE_LEGACY_MKLDNN.
      
      * Fix a bug.
      
      * Remove unused variable.
      
      * Fix compiler warnings.
      
      * Fix a bug for CODEGEN.
      
      * Move variable only needed for mkldnn-v0.20 inside #if.
      
      * Remove unused variables.
      
      * No in place Reshape rotation for blocked data layout with mkldnn-v1.0.
      
      * Modify mkldnn_v1.patch to force mkldnn to link to libiomp.
      
      * Fix style.
      
      * Change path for find_library and find_file.
      
      * Do not insert ConvertLayout before/after Quantize/DeQuantize for blocked data layout.
      
      * Write strides information to visualized graph.
      
      * Move variables only needed for mkldnn-v0 under #if.
      
      * Move more variables in rnn fusion.
      
      * Fix ConvertLayout constant folding for mkldnn-v1.0.
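The scratchpad strategy described above (query each primitive's requirement, allocate one buffer of the maximum size, share it across primitives) can be sketched generically; `scratchpad_size` here is a hypothetical stand-in for the mkldnn-v1.0 query, not its real API:

```python
class Primitive:
    """Hypothetical stand-in for an mkldnn primitive descriptor."""
    def __init__(self, size):
        self._size = size

    def scratchpad_size(self):
        # In mkldnn-v1.0 user-mode scratchpad, each primitive reports
        # how many bytes of temporary workspace it needs.
        return self._size

def allocate_shared_scratchpad(primitives):
    # One buffer of the max scratchpad size serves every primitive in turn,
    # since primitives in a single-threaded executor run sequentially.
    max_size = max((p.scratchpad_size() for p in primitives), default=0)
    return bytearray(max_size)

buf = allocate_shared_scratchpad([Primitive(64), Primitive(4096), Primitive(256)])
assert len(buf) == 4096
```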
  17. 29 Aug, 2019 1 commit
• [MLIR] Fixes for cpu_fusion.validate_fuse_gru_inputs (#3511) · ef58667f
  Nagy Mostafa authored
      * WIP
      
      * Fix incorrect CK output adjustment
      
* Bug fix and enforce sanity check
      
      * Change cycle search depth, and fix sanity check
      
      * cpu_fusion.validate_fuse_gru_inputs passes.
      
      * Fix as_single_output to be able to always create a GOE
      
      * minor fix. style-apply
      
      * Clean up debug msgs
      
      * Switch to backward cycle check
      
      * Enable failing test
      
      * PR fixes
      
      * Address feedback: Add fwd cycle checks. Make cycle checking depth configurable
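The configurable-depth cycle check mentioned in the last item can be sketched as a depth-limited DFS: before merging a node into a subgraph, verify that no path of bounded length leads back to it. The node structure below is hypothetical, not nGraph's:

```python
def creates_cycle(src, dst, max_depth):
    # Would adding an edge src -> dst create a cycle? Walk forward from dst
    # up to max_depth hops, looking for src.
    stack = [(dst, 0)]
    visited = set()
    while stack:
        node, depth = stack.pop()
        if node is src:
            return True
        if depth >= max_depth or node in visited:
            continue
        visited.add(node)
        for user in node.users:
            stack.append((user, depth + 1))
    return False

class Node:
    def __init__(self):
        self.users = []

a, b, c = Node(), Node(), Node()
a.users = [b]; b.users = [c]                 # a -> b -> c
assert creates_cycle(c, a, max_depth=5)      # edge c -> a would close a loop
assert not creates_cycle(c, a, max_depth=1)  # too shallow to detect it
```

The depth bound is the trade-off this commit makes configurable: a larger depth catches longer cycles at the cost of more graph traversal.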
  18. 26 Aug, 2019 1 commit
• [MLIR] Disable three more tests with MLIR enabled. (#3496) · 1683e200
  Diego Caballero authored
      * [MLIR] Bump MLIR repo 8/20/2019
      
MLIR:
      commit 0cdb20a6add19bc96c20dad28589a1e54e4d8469
      Author: Lei Zhang <antiagainst@google.com>
      Date:   Tue Aug 20 13:33:41 2019 -0700
      
          Add spv.specConstant and spv._reference_of
      
      LLVM:
      commit 3b9a27b6908040881dad394022f8c472c15c0784
      Author: Simon Pilgrim <llvm-dev@redking.me.uk>
      Date:   Tue Aug 20 17:54:37 2019 +0000
      
          Fix typo in comment. NFCI.
      
      * [MLIR] Disable three more tests with MLIR enabled.
      
      This PR disables validate_fuse_gru_inputs, reshape_layout_optimizations4
      and reshape_layout_optimizations5:
        1. trivial_in_place_relu_fail: It checks tensors pool offset. There is
           no memory pool in MLIR atm.
        2. validate_fuse_gru_inputs: It creates an infinite cycle in
           MLIR subgraph extraction pass (under investigation).
        3. reshape_layout_optimizations4/5: They fail due to CompiledKernel
           being not expected by CPULayout pass.
      
      * Disable cpu_quant_fusion.qconcat
  19. 21 Aug, 2019 1 commit
  20. 15 Aug, 2019 2 commits
• LSTM MKLDNN integration for ONNX LSTM op (#3327) · e5d606b8
  Pruthvi authored
      * - Add graph pass method for onnx lstmcell rewrite with lstm cpu op
      - insert reshapes to keep the weights in ldigo format
      - test case for onnx LstmCell to CPU Lstm
      
      * fix typo
      
* - check LSTMCell for the fused op decomposition in the backend
      
      * - fix bug in onnx_lstm graph pass
      - passes unit test
      
      * style-fix
      
      * - fix compilation error
      - use IFCO gate ordering for bias
      
      *  - Skip LSTMCell to LSTM CPU fusion for peephole
      
      * - add comment && remove duplicate function
      
      * -use dynamic_pointer_cast to check for constant
      
* - ONNX bias is of shape (2 * gates_count * hidden_size): Wb and Rb are concatenated, so we split the bias, add the two halves, and rearrange in IFCO order
      
      * - Use most derived LSTM ctor for pattern matching
      
      * - Style Fix
      
      * style fix
      
      * Address PR comments
      
      * - add support for graph pass (MKLDNN version > 1) for mapping LSTMCell -> LSTM CPU op
      
      * fix unit test failure for MKLDNN V1.0
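The ONNX-bias handling described above (split the concatenated Wb/Rb, add the halves, reorder gates to IFCO) can be sketched with NumPy. The iofc source ordering is ONNX's documented LSTM gate order; the ifco target ordering is taken from the commit message:

```python
import numpy as np

def onnx_lstm_bias_to_ifco(b, hidden_size, gates_count=4):
    # ONNX packs the LSTM bias as [Wb; Rb], each of length
    # gates_count * hidden_size, with gates in iofc order.
    wb, rb = np.split(b, 2)
    summed = wb + rb
    i, o, f, c = np.split(summed, gates_count)
    # Rearrange iofc -> ifco for the CPU LSTM op.
    return np.concatenate([i, f, c, o])

hidden = 3
b = np.arange(2 * 4 * hidden, dtype=np.float64)
out = onnx_lstm_bias_to_ifco(b, hidden)
assert out.shape == (4 * hidden,)
# Every output element is the sum of matching Wb and Rb entries.
wb, rb = np.split(b, 2)
assert np.array_equal(np.sort(out), np.sort(wb + rb))
```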
• [MLIR] Disable CPU fusion + debug tracer tests in MLIR (#3442) · 5bbd199b
  Diego Caballero authored
      CPU fusion is disabled in MLIR since fused ops are not supported in
      nGraph dialect. CPU debug tracer test doesn't expect CompiledKernel ops
      generated for MLIR.
  21. 31 Jul, 2019 1 commit
  22. 23 Jul, 2019 2 commits
  23. 08 Jul, 2019 1 commit
  24. 03 Jul, 2019 1 commit
  25. 01 Jul, 2019 1 commit
  26. 19 Jun, 2019 1 commit
  27. 18 Jun, 2019 1 commit
  28. 14 Jun, 2019 1 commit
• Fuse Dropout (#3006) · 8c38db04
  gaurides authored
      * Initial implementation
      
      * Added test case
      
      * Bug fix; Dropout with 2 outputs, WIP
      
* Fixed in unit-test; WIP for model
      
      * Nothing is working
      
      * Revert "Nothing is working"
      
      This reverts commit d3ff09bb7a0d0519ab70ac85f2e7f30721afea96.
      
      * Fixed unit-test; fusion with 2 outputs
      
      * Fix style check, file permissions
      
      * Changed input arg to Node
      
      * Fix order of declaration
      
      * Improved performance
      
      * some cleanup
      
      * Fixed CI error
      
      * Fixed review comments
      
      * Fix CI error
      
      * Remove unused variable
      
      * Fix other CI errors
      
      * Changed type
      
      * Fix style check
      
      * Add codegen code for Dropout
      
      * addressed PR feedback; will add codegen support later
      
      * Cleanup; change variable name
      
      * Support for use_seed
      
      * Add setter for use_seed
      
      * Add setter for use_seed
      
      * Fix CI error
      
      * Make use_seed as arg
      
      * Fix CI error
      
      * Fix CI error
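The two-output Dropout being fused here conventionally returns both the scaled values and the binary mask; a minimal sketch, with use_seed modeled as an optional seed argument (an assumption for illustration):

```python
import numpy as np

def dropout(x, keep_prob, seed=None):
    # Two outputs: the inverted-dropout values and the mask itself,
    # which downstream ops (e.g. backprop) can consume separately.
    rng = np.random.default_rng(seed)
    mask = (rng.random(x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob, mask

x = np.ones((4, 4))
y, mask = dropout(x, keep_prob=1.0, seed=0)
assert np.array_equal(y, x)                    # keep_prob = 1.0 keeps everything
assert np.array_equal(mask, np.ones_like(x))
```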
  29. 12 Jun, 2019 1 commit
  30. 08 Jun, 2019 1 commit
  31. 05 Jun, 2019 1 commit
  32. 02 Jun, 2019 2 commits
  33. 24 May, 2019 1 commit
• [Fused] LeakyRelu op (#2919) · 5650e913
  Michał Karzyński authored
      * [Fused] LeakyRelu op
      
      * Add LeakyRelu to serializer
      
      * Add unit tests
      
      * Fix merge branch 'master' into mkarzyns/fused_leaky_relu
      
      * Change broadcasting rules to NumPy style
      
      * Remove std:: and ngraph:: prefixes
      
      * Rename CPU Runtime LeakyRelu to CPULeakyRelu
      
      * Style apply
      
      * Fix cpu_fusion.fuse_leaky_relu test
      
      * Use eigen's tanh in the fused sigmoid multiply kernel (#2946)
      
      * Merge branch 'master' into mkarzyns/fused_leaky_relu
      
      * Add LeakyRelu to Intel GPU backend op list
      
      * Add LeakyRelu to Intel GPU backend op list
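LeakyRelu itself is a one-liner, and the switch to NumPy-style broadcasting above means the alpha slope broadcasts against the input (scalar or per-channel); a minimal sketch:

```python
import numpy as np

def leaky_relu(x, alpha):
    # alpha broadcasts against x under NumPy rules.
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
assert np.allclose(leaky_relu(x, 0.1), [-0.2, -0.05, 0.0, 3.0])
```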
  34. 23 May, 2019 2 commits
  35. 22 May, 2019 1 commit
• Change FusionType to enum class and use EnumMask (#2957) · a65b5155
  Louis Feng authored
      * constexpr ctor for EnumMask
      
      * added pass properties to core passes.
      
      * change fusion type to have better type safety.
      
      * refactor to use enum mask.
      
      * remove extra code.
      
      * added constants for FusionType backward compatibility.
      
      * spelling.
      
      * grammar fix.
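The type-safety goal of this commit (an enum class plus a bitmask wrapper instead of raw integer flags) maps directly onto Python's `enum.Flag`; a sketch with hypothetical fusion-type names:

```python
from enum import Flag, auto

class FusionType(Flag):
    # Hypothetical members; the point is that only FusionType values
    # can be combined, unlike raw integer flags.
    NONE = 0
    DIFFERENTIABLE_FUSIONS = auto()
    REGULAR_FUSIONS = auto()
    FOP_FUSIONS = auto()

mask = FusionType.REGULAR_FUSIONS | FusionType.FOP_FUSIONS
assert FusionType.REGULAR_FUSIONS in mask
assert FusionType.DIFFERENTIABLE_FUSIONS not in mask
```

This mirrors the EnumMask design choice: combining members stays closed over the enum type, so passing an unrelated integer where a fusion mask is expected becomes a type error rather than a silent bug.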