1. 25 Feb, 2020 1 commit
  2. 13 Feb, 2020 1 commit
    • Pruthvi
      [MLIR] MatMulBias Fused Op support in MlIR (#4104) · 925087ba
      Pruthvi authored
      * - add fused_op.td to CMakeLists
      - define pattern to fuse Wx + b and to replace with MatMulBias
      
      * - remove table-gen LLVM_TARGET_DEFINITIONS for fused_ops_pattern.td,
      fused_ops.td
      - fix build issues
      
      * - change pattern to match MatMul instead of Dot
      - support in CMake to register MatMulBias fused Op pattern
      
      * - made changes to fusion pattern to match Add(Dot(op1, op2), bias) for
      MatMulBias
      - use applyPatternsGreedily instead of applyFullConversion in the graph
      pass
      - add unit test (INTERPRETER vs. CPU) for MatMulBias
      
      * - add affine lowering and verifier logic to NgMatMulBiasOp
      
      * add missing header file
      
      * - WIP, use NGGemm instead of NGMatMulBias
      
      * - undo unintended changes
      
      * Addressed PR comments
      
      * - refactor the ctor of the NgDialectFusion pass
      - register NgDialectFusion pass with the PassRegistration
      
      * Address PR comments
      
      * - add lit test for matmul+bias fusion
      
      * - style fix lit test
      Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
      925087ba
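      The fusion above matches Add(Dot(op1, op2), bias) and replaces it with a
      single fused op, later lowered through NGGemm. A rough sketch in
      nGraph-dialect IR; the op spellings and tensor types here are illustrative
      assumptions, not the actual dialect definitions:

      ```mlir
      // Before NgDialectFusion (illustrative op names and types):
      %dot = "ng.dot"(%w, %x)
          : (!ng.tensor<4x8xf32>, !ng.tensor<8x16xf32>) -> !ng.tensor<4x16xf32>
      %out = "ng.add"(%dot, %bias)
          : (!ng.tensor<4x16xf32>, !ng.tensor<4x16xf32>) -> !ng.tensor<4x16xf32>

      // After the pattern rewrite: one fused op, lowered via NGGemm per the commit.
      %out = "ng.gemm"(%w, %x, %bias)
          : (!ng.tensor<4x8xf32>, !ng.tensor<8x16xf32>, !ng.tensor<4x16xf32>)
          -> !ng.tensor<4x16xf32>
      ```

      Matching the Add/Dot pair (rather than either op alone) is what lets the
      greedy pattern application collapse the pair in one rewrite.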
  3. 16 Jan, 2020 1 commit
    • Amy Zhuang
      [MLIR] Use call back for MatMul. (#3838) · c737a573
      Amy Zhuang authored
      * [MLIR] Use call back for MatMul.
      
      * Use callback for Gemm.
      
      * Use mkldnn callback for Softmax.
      
      * Address PR feedback.
      
      * Fix merge errors.
      
      * Change to a tail-allocated struct.
      
      * Use mkldnn callback for AvgPool.
      
      * Add callbacks for AvgPoolBackprop, MaxPool, and MaxPoolBackprop.
      
      * Fix merge errors.
      
      * Use UnrankedMemRefType for callbacks.
      
      * Address PR feedback.
      
      * Cleanup.
      
      * Address PR feedback.
      
      * Fix a bug.
      
      * Use global variable to hold attributes.
      
      * Convert layout if needed for pooling.
      
      * Address PR feedback.
      
      * Add header.
      
      * Address PR feedback.
      
      * Update Copyright to 2017-2020.
      
      * Address PR feedback.
      Co-authored-by: Scott Cyphers <diyessi@users.noreply.github.com>
      c737a573
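      The "Use UnrankedMemRefType for callbacks" step above amounts to casting
      ranked buffers to rank-erased ones before handing them to an external,
      mkldnn-backed routine, so one callback signature serves all shapes. A
      minimal sketch in the MLIR syntax of that era; the callback symbol name is
      a hypothetical placeholder:

      ```mlir
      func @lowered_matmul(%a: memref<2x3xf32>, %b: memref<3x4xf32>,
                           %c: memref<2x4xf32>) {
        // Erase the static ranks before calling out of MLIR.
        %ua = memref_cast %a : memref<2x3xf32> to memref<*xf32>
        %ub = memref_cast %b : memref<3x4xf32> to memref<*xf32>
        %uc = memref_cast %c : memref<2x4xf32> to memref<*xf32>
        call @__mlir_mkldnn_matmul(%ua, %ub, %uc)
            : (memref<*xf32>, memref<*xf32>, memref<*xf32>) -> ()
        return
      }
      // External callback, resolved at JIT time against a C++ wrapper.
      func @__mlir_mkldnn_matmul(memref<*xf32>, memref<*xf32>, memref<*xf32>)
      ```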
  4. 01 Jan, 2020 1 commit
  5. 18 Nov, 2019 1 commit
  6. 11 Nov, 2019 1 commit
    • Pruthvi
      [MLIR] Graph pass to lower ngraph to ngraph dialect (#3835) · 6fbed3b9
      Pruthvi authored
      *  WIP graph pass to lower ngraph to ngraph dialect
      
      * resolved compiler errors
      
      * - refactor ngraph-dialect to graph pass
      
      * - fix compilation issue
      - unit test passes
      
      *  - style fix
      
      * Addressed PR comments
      
      * - move NgDialectConversionPass to anonymous namespace
      
      * - use getModule() to access module inside the graph pass
      - address PR comments
      
      * - fix failing unit test case for negative padding
      - make the builder an object instead of a pointer to an object
      
      * Address PR Comments
      6fbed3b9
  7. 04 Nov, 2019 1 commit
    • Nagy Mostafa
      [MLIR] New Core Ops (V0) and Ops Versioning in NG dialect (#3764) · 8ef5b0ca
      Nagy Mostafa authored
      * Init commit to implement interface
      
      *  Add two op interfaces for v0 and v1. Add a unit-test
      
      * Add missing files
      
      * Move test to separate file
      
      * Add Fused Op interface
      
      * Missing files
      
      * style
      
      * fused ops
      
      * Remove V1 ops for now
      
      * Added enum attributes. WIP
      
      * Completed non-experimental non-fused ops
      
      * Add ops_attributes
      
      * Minor fixes
      
      * Minor fixes
      
      * Added enum setting/reading test
      
      * style-apply
      
      * Added attributes tests
      
      * Fix dialect init
      
      * style
      
      * fix typo
      
      * Fix merge errors
      
      * Include file only when MLIR is on
      8ef5b0ca
  8. 29 Oct, 2019 1 commit
    • Nagy Mostafa
      [MLIR] MLIR Compiler refactoring (#3786) · f143bb13
      Nagy Mostafa authored
      * Re-organize files. Create MLIR backend classes
      
      * WIP
      
      * Refactored. Code compiles
      
      * Moved context to Runtime class to outlive compilation and execution
      
      * style-apply
      
      * Base Runtime class. Few other modifications
      
      * Minor fixes
      
      * Fixed Runtime::run() to take type-erased pointer
      
      * renamed core compiler
      
      * rename backend compiler
      
      * rename runtime compiler
      
      * PR feedback
      
      * Fix build fails
      f143bb13
  9. 10 Oct, 2019 2 commits
  10. 09 Oct, 2019 2 commits
  11. 02 Oct, 2019 1 commit
    • Diego Caballero
      [MLIR] Pass optimization level to MLIR ExecutionEngine (#3703) · 1ddda541
      Diego Caballero authored
      An optional optimization level was added to ExecutionEngine, initialized
      to None by default. This caused fast-isel to be used in LLVM code
      generation, overriding all the flags related to not using fast-isel.
      
      In this PR we pass the right optimization level to ExecutionEngine.
      1ddda541
  12. 28 Sep, 2019 1 commit
    • Diego Caballero
      [MLIR] Enable nGraph dialect in ngraph-opt (#3657) · 9db8f874
      Diego Caballero authored
      * [MLIR] Add support for parsing nGraph tensor type
      
      Initial commit that enables nGraph parsing. It's needed for testing.
      
      * [MLIR] Enable nGraph dialect in ngraph-opt
      
      This PR registers the nGraph dialect in ngraph-opt and prepares the
      nGraph lowering pass for LIT testing, fixing all related issues.
      Among other things, the lowering pass was turned into a function pass,
      a dead argument in the constructor was removed, and the
      `convert-ngraph-to-affine` flag was added.
      
      * Fix issue with function name and multiple functions
      
      * Extend module_function.mlir lit test
      
      * Improve module_function.mlir test
      
      Remove ngraph to affine dialect conversion since we just need to verify
      that we can parse and print modules and functions.
      Add verification for parsing the printed code.
      
      * [MLIR] Add support for parsing nGraph element types (#3665)
      
      * [MLIR] Add support for parsing nGraph element types
      
      It introduces initial support for parsing nGraph signed/unsigned
      integer and floating point data types.
      
      * Improve LIT tests
      
      Test parsing and printing of types separately from lowering to affine
      since these tests will evolve differently, particularly for tensor
      types.
      
      * Missed file
      
      I left this file behind in the previous commit
      9db8f874
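      With the dialect registered and the `convert-ngraph-to-affine` flag in
      place, a LIT test can drive the lowering pass directly through
      ngraph-opt. A sketch of what such a test looks like; the ng op and type
      spellings are illustrative assumptions:

      ```mlir
      // RUN: ngraph-opt %s --convert-ngraph-to-affine | FileCheck %s

      // CHECK-LABEL: func @simple_add
      // CHECK: affine.for
      func @simple_add(%a: !ng.tensor<2x2xf32>, %b: !ng.tensor<2x2xf32>)
          -> !ng.tensor<2x2xf32> {
        %0 = "ng.add"(%a, %b)
            : (!ng.tensor<2x2xf32>, !ng.tensor<2x2xf32>) -> !ng.tensor<2x2xf32>
        "ng.return"(%0) : (!ng.tensor<2x2xf32>) -> ()
      }
      ```

      The parsing-only tests mentioned above are the same shape minus the
      lowering flag: parse, print, and re-parse the printed module.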
  13. 25 Sep, 2019 1 commit
  14. 24 Sep, 2019 1 commit
    • Diego Caballero
      [MLIR] Add `ngraph` prefix to MLIR flags (#3625) · 1c9a1996
      Diego Caballero authored
      * [MLIR] Add `ngraph` prefix to MLIR flags
      
      Some flags collide with MLIR flags.
      
      * [MLIR] Add support for nGraph tensor type in parser
      
      Initial commit that enables nGraph parsing. It's needed for testing.
      
      * Rename ngraph print flag
      
      * Rename ngraph dump mlir flags
      
      * Clang format
      
      * Revert "[MLIR] Add support for nGraph tensor type in parser"
      
      This reverts commit ae371d6a5c8ea590322d5d3b9ba110159d4bf5fa.
      1c9a1996
  15. 19 Sep, 2019 1 commit
  16. 17 Sep, 2019 1 commit
  17. 16 Sep, 2019 1 commit
  18. 10 Sep, 2019 2 commits
  19. 30 Aug, 2019 2 commits
  20. 27 Aug, 2019 1 commit
    • Diego Caballero
      [MLIR] Introduce flag -print-ngraph-ir-after-all. (#3457) · b71f2462
      Diego Caballero authored
      This PR is a stepping stone towards unifying nGraph MLIRCompiler printing
      flags with those used in MLIR. It enables flag -print-ir-after-all
      implemented in MLIR pass manager and adds flag -print-ngraph-ir-after-all
      to MLIRCompiler so that we can use it to dump IR for those transformations
      that we do directly in MLIRCompiler without using a proper pass. Eventually,
      everything should work as a pass and the nGraph variant of the
      flag shouldn't be needed. The NGRAPH_MLIR_DUMP_ALL macro is no longer
      needed.
      b71f2462
  21. 23 Aug, 2019 1 commit
    • Diego Caballero
      [mlir] Bump mlir repo 8/20/2019 (#3493) · 4fddf5ad
      Diego Caballero authored
      * [MLIR] Bump MLIR repo 8/20/2019
      
      MLIR:
      commit 0cdb20a6add19bc96c20dad28589a1e54e4d8469
      Author: Lei Zhang <antiagainst@google.com>
      Date:   Tue Aug 20 13:33:41 2019 -0700
      
          Add spv.specConstant and spv._reference_of
      
      LLVM:
      commit 3b9a27b6908040881dad394022f8c472c15c0784
      Author: Simon Pilgrim <llvm-dev@redking.me.uk>
      Date:   Tue Aug 20 17:54:37 2019 +0000
      
          Fix typo in comment. NFCI.
      
      * Address Bob's feedback
      4fddf5ad
  22. 14 Aug, 2019 1 commit
    • Diego Caballero
      [MLIR] Enable LLVM vectorization by initializing TTI (#3424) · 79283e3e
      Diego Caballero authored
      * [MLIR] Bump MLIR repo to commit c636f12, 08/09/2019
      
      MLIR Commit:
      commit c636f127ee412ef7279ec0d550f42740824cd9ea
      Author: Alex Zinenko <zinenko@google.com>
      Date:   Fri Aug 9 08:59:45 2019 -0700
      
          LLVM dialect and translation: support global strings
      
      LLVM Commit:
      commit c636f127ee412ef7279ec0d550f42740824cd9ea
      Author: Alex Zinenko <zinenko@google.com>
      Date:   Fri Aug 9 08:59:45 2019 -0700
      
          LLVM dialect and translation: support global strings
      
      * [MLIR] Set optimization level for LLVM optimizer and codegen
      
      Now both the LLVM optimizer and codegen are aligned with the
      "NGRAPH_MLIR_OPT_LEVEL" macro.
      
      * [MLIR] Enable LLVM vectorization by initializing TTI
      
      This is the final piece to enable LLVM vectorization for MLIR compiler.
      The PR refactors the creation of a target machine in MLIRCompiler so that
      we can use it to initialize TargetTransformInfo with the proper host
      features and LLVM Loop Vectorizer can get the right vector register
      information of the target CPU.
      79283e3e
  23. 13 Aug, 2019 1 commit
  24. 10 Aug, 2019 1 commit
    • Diego Caballero
      [MLIR] Bump MLIR repo to commit c636f12, 08/09/2019 (#3422) · c101dcce
      Diego Caballero authored
      MLIR Commit:
      commit c636f127ee412ef7279ec0d550f42740824cd9ea
      Author: Alex Zinenko <zinenko@google.com>
      Date:   Fri Aug 9 08:59:45 2019 -0700
      
          LLVM dialect and translation: support global strings
      
      LLVM Commit:
      commit c636f127ee412ef7279ec0d550f42740824cd9ea
      Author: Alex Zinenko <zinenko@google.com>
      Date:   Fri Aug 9 08:59:45 2019 -0700
      
          LLVM dialect and translation: support global strings
      c101dcce
  25. 09 Aug, 2019 2 commits
    • Nishant Patel
      [MLIR] Add unary op -- Negative (#3391) · 0273b716
      Nishant Patel authored
      * Add negative op
      
      * Add test case
      
      * Address feedback
      
      * Merge master
      
      * Consolidate to one routine for unary ops
      
      * Change from Negative to Neg
      0273b716
    • Diego Caballero
      [MLIR] Enable affine loop tiling (#3397) · 14624c03
      Diego Caballero authored
      * [MLIR] Enable affine loop tiling
      
      This PR enables loop tiling optimization in affine dialect. It
      introduces the following flags for configuration.
        - affine-loop-tile: enables/disables the optimization. Disabled by
          default.
        - loop-tile-cache-level: provides the cache level to apply loop
          tiling to. The cache level size is obtained from LLVM's TTI.
        - loop-tile-cache-size: provides a cache level size that overrides
          cache information from TTI.
      
      The current use of TTI is a bit hacky since we have to pass a fake
      LLVM function to make it work. However, this should be enough to get
      some basic target information until we have a target model in MLIR or
      find a better approach.
      
      * Address feedback
      
      * Rename flags
      14624c03
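      For reference, affine loop tiling rewrites a nest into inter-tile loops
      that step over tiles and intra-tile loops that walk one tile. A
      hand-written sketch of the shape of the transformation (tile size 32,
      bounds divisible by the tile size for simplicity), in the affine syntax
      of that time:

      ```mlir
      #lb = (d0) -> (d0)
      #ub = (d0) -> (d0 + 32)

      // Original nest:
      affine.for %i = 0 to 256 {
        affine.for %j = 0 to 256 {
          %v = load %A[%i, %j] : memref<256x256xf32>
          store %v, %B[%i, %j] : memref<256x256xf32>
        }
      }

      // After tiling by 32:
      affine.for %ii = 0 to 256 step 32 {
        affine.for %jj = 0 to 256 step 32 {
          affine.for %i = #lb(%ii) to #ub(%ii) {
            affine.for %j = #lb(%jj) to #ub(%jj) {
              %v = load %A[%i, %j] : memref<256x256xf32>
              store %v, %B[%i, %j] : memref<256x256xf32>
            }
          }
        }
      }
      ```

      Picking the tile size from a cache level's size (via TTI, as the flags
      above expose) aims to keep each tile's working set resident in that
      cache.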
  26. 07 Aug, 2019 1 commit
  27. 31 Jul, 2019 1 commit
  28. 30 Jul, 2019 1 commit
    • Diego Caballero
      [MLIR] Bump MLIR repo to commit 26c683c, 07/29/2019. (#3310) · 81597f3a
      Diego Caballero authored
      * [MLIR] Bump MLIR repo to commit 59167c2, 07/25/2019.
      
      MLIR commit:
      Author: River Riddle <riverriddle@google.com>
      Date:   Wed Jul 24 16:41:11 2019 -0700
      
          NFC: Use ValueOfRange instead of T in Diagnostic::appendRange.
      
              For iterator_range, T is often the name of another iterator type
              and not the value of the range.
      
      LLVM commit:
      Author: Marshall Clow <mclow.lists@gmail.com>
      Date:   Thu Jul 25 03:26:05 2019 +0000
      
          Implement change #4 of P1466: Change weekday to accept both 0 and 7
          as Sunday. Add accessors 'c_encoding' and 'iso_encoding' to provide
          different interpretations of the weekday. Remove 'operator unsigned'
      
      * style
      
      * Move MLIR/LLVM repos a bit more forward
      81597f3a
  29. 29 Jul, 2019 1 commit
    • Diego Caballero
      [MLIR] Enable affine dialect loop fusion (#3290) · aedd8c2e
      Diego Caballero authored
      * [MLIR] Enable affine dialect loop fusion
      
      Enable affine dialect loop fusion in nGraph pipeline. It also adds an
      opt flag to enable/disable it when ngraph-opt is in place. Fusion seems
      to work for simple cases. It wasn't able to fuse dot + add, though, at
      least in my test case. One example that worked:
      
      Input:
        %6 = alloc() : memref<2500x2500xf32>
        affine.for %i3 = 0 to 2500 {
          affine.for %i4 = 0 to 2500 {
            %7 = load %arg0[%i3, %i4] : memref<2500x2500xf32>
            %8 = load %0[%i3, %i4] : memref<2500x2500xf32>
            %9 = addf %8, %7 : f32
            store %9, %6[%i3, %i4] : memref<2500x2500xf32>
          }
        }
        %10 = alloc() : memref<2500x2500xf32>
        affine.for %i5 = 0 to 2500 {
          affine.for %i6 = 0 to 2500 {
            %11 = load %arg2[%i5, %i6] : memref<2500x2500xf32>
            %12 = load %0[%i5, %i6] : memref<2500x2500xf32>
            %13 = addf %12, %11 : f32
            store %13, %10[%i5, %i6] : memref<2500x2500xf32>
          }
        }
        %14 = alloc() : memref<2500x2500xf32>
        affine.for %i7 = 0 to 2500 {
          affine.for %i8 = 0 to 2500 {
            %15 = load %10[%i7, %i8] : memref<2500x2500xf32>
            %16 = load %6[%i7, %i8] : memref<2500x2500xf32>
            %17 = addf %16, %15 : f32
            store %17, %14[%i7, %i8] : memref<2500x2500xf32>
          }
        }
      
      Output:
        %8 = alloc() : memref<2500x2500xf32>
        affine.for %i3 = 0 to 2500 {
          affine.for %i4 = 0 to 2500 {
            %9 = load %arg2[%i3, %i4] : memref<2500x2500xf32>
            %10 = load %2[%i3, %i4] : memref<2500x2500xf32>
            %11 = addf %10, %9 : f32
            %12 = affine.apply #map2(%i3, %i4, %i3, %i4)
            %13 = affine.apply #map3(%i3, %i4, %i3, %i4)
            store %11, %0[%12, %13] : memref<1x1xf32>
            %14 = load %arg0[%i3, %i4] : memref<2500x2500xf32>
            %15 = load %2[%i3, %i4] : memref<2500x2500xf32>
            %16 = addf %15, %14 : f32
            %17 = affine.apply #map2(%i3, %i4, %i3, %i4)
            %18 = affine.apply #map3(%i3, %i4, %i3, %i4)
            store %16, %1[%17, %18] : memref<1x1xf32>
            %19 = affine.apply #map2(%i3, %i4, %i3, %i4)
            %20 = affine.apply #map3(%i3, %i4, %i3, %i4)
            %21 = load %0[%19, %20] : memref<1x1xf32>
            %22 = affine.apply #map2(%i3, %i4, %i3, %i4)
            %23 = affine.apply #map3(%i3, %i4, %i3, %i4)
            %24 = load %1[%22, %23] : memref<1x1xf32>
            %25 = addf %24, %21 : f32
            store %25, %8[%i3, %i4] : memref<2500x2500xf32>
          }
        }
      
      * Rename MLIR_LLVM_OPTIONS to NGRAPH_MLIR_OPTIONS
      
      Something like this works now:
      NGRAPH_MLIR_OPTIONS="--enable-affine-loop-fusion=false"
      
      * Disable loop fusion by default and fix typo
      aedd8c2e
  30. 26 Jul, 2019 3 commits
  31. 25 Jul, 2019 1 commit
    • Diego Caballero
      [MLIR] Fix naming convention in MLIR files (#3292) · a095c587
      Diego Caballero authored
      * [MLIR] Fix naming convention in MLIR files
      
      Add a per-file naming-convention note stating which files should use the
      nGraph naming convention and which the MLIR convention, and align the
      naming in those files accordingly.
      
      * Remove m-prefix
      a095c587
  32. 24 Jul, 2019 2 commits