1. 24 May, 2019 1 commit
    • [Fused] LeakyRelu op (#2919) · 5650e913
      Michał Karzyński authored
      * [Fused] LeakyRelu op
      
      * Add LeakyRelu to serializer
      
      * Add unit tests
      
      * Fix merge branch 'master' into mkarzyns/fused_leaky_relu
      
      * Change broadcasting rules to NumPy style
      
      * Remove std:: and ngraph:: prefixes
      
      * Rename CPU Runtime LeakyRelu to CPULeakyRelu
      
      * Style apply
      
      * Fix cpu_fusion.fuse_leaky_relu test
      
      * Use eigen's tanh in the fused sigmoid multiply kernel (#2946)
      
      * Merge branch 'master' into mkarzyns/fused_leaky_relu
      
      * Add LeakyRelu to Intel GPU backend op list
      
      * Add LeakyRelu to Intel GPU backend op list
      5650e913
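      Note: for reference, the LeakyRelu op added above passes positive values through and scales negative ones by alpha, with alpha broadcast NumPy-style against the data (per the commit notes). A minimal, illustrative NumPy sketch — not the nGraph API:

          import numpy as np

          def leaky_relu(data, alpha):
              # x for x > 0, alpha * x otherwise; alpha broadcasts against data.
              data = np.asarray(data, dtype=np.float32)
              return np.where(data > 0, data, np.asarray(alpha, dtype=np.float32) * data)

          print(leaky_relu([-2.0, -0.5, 0.0, 3.0], 0.1))   # [-0.2  -0.05  0.    3.  ]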
  2. 22 May, 2019 2 commits
    • Add more infrastructure for specialization of cloned graphs (#2949) · da1cacde
      Adam Procter authored
      * Virtualize some things that crash when layout descriptor is missing
      
      * More shape specialization
      
      * (very bare) skeleton for dyn elimination
      
      * Miscellaneous
      
      * Lift i32->int64-only restriction on constant folding for Convert
      
      * Add constant folding for ShapeOf, and some tests for new constant folders
      
      * Tests for DynElimination
      
      * Rename specialize_shapes to specialize_function, and add a unit test for value substitution
      
      * Roll back overeager API change in dyn slice bprop (it has to handle right-indexed axes; bummer)
      
      * Add a test for dynamic usage of transpose op
      
      * Fix warning/error about variable shadowing
      
      * Strengthen checks in apply_permutation
      
      * Propagate Constant shapes through Transpose
      
      * Add CHANGE_DYNAMIC_STATE where appropriate
      
      * PR feedback, and fix unit test failure
      
      * Fix PR reference in comment
      
      * PR comments
      
      * Comments for helper funcs
      
      * Remove unique_ptr indirection for the AlignedBuffers
      
      * Fix incorrect indexing of AlignedBuffer vector (whoops!)
      
      * Remove unnecessary CHANGE_DYNAMIC_STATEs
      
      * De-update pass property unit test for const folding
      
      * Replace mystery runes with all_pass_property_off
      da1cacde
    • [FusedOps] Split (#2951) · ba546455
      Tomasz Dołbniak authored
      * Split op skeleton
      
      * Two ways to construct a fused Split to be able to use it in onnx importer
      
      * refactor: move the util::split() helper functions to the core
      
      * Split's decompose_op() implementation using a helper function
      
      * Use fused Split in the onnx_importer
      
      * Code formatting
      
      * PR feedback
      
      * Split helpers moved to ngraph/builder
      
      * Basic UT - split a 1D tensor to 3 equal parts
      
      * UT: Split 2D tensor into variable length parts
      
      * Code formatting
      
      * Catch the proper type of exception in the onnx_importer split()
      
      * Initialize members in the correct order
      
      * Type prop tests for Split
      
      * Code formatting
      
      * PR feedback
      ba546455
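      Note: the fused Split above slices a tensor along one axis either into N equal parts or into parts of given lengths, as in the unit tests listed. An illustrative NumPy equivalent (shapes chosen arbitrarily):

          import numpy as np

          data = np.arange(6)
          print(np.split(data, 3))               # three equal 1D parts: [0 1], [2 3], [4 5]

          matrix = np.arange(12).reshape(2, 6)
          parts = np.split(matrix, [2], axis=1)  # variable-length split: widths 2 and 4
          print([p.shape for p in parts])        # [(2, 2), (2, 4)]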
  3. 17 May, 2019 1 commit
    • [FusedOps] SquaredDifference (#2918) · d99ac8ce
      Tomasz Dołbniak authored
      * SquaredDifference implementation
      
      * Broadcast input before using it
      
      * Simple test of SquaredDifference
      
      * SquaredDifference validation tests
      
      * Formatting adjustments
      
      * Docs correction
      
      * Exclude the unit test on iGPU
      
      * Keep the includes in a single group
      
      * Update intelgpu_backend.cpp
      
      * Update unit_test.manifest
      
      * UT for the broadcasting path
      d99ac8ce
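      Note: SquaredDifference computes (x - y)^2 element-wise after broadcasting the inputs together (the "broadcasting path" covered by the last unit test). Illustrative NumPy sketch:

          import numpy as np

          a = np.array([[1.0, 2.0], [3.0, 4.0]])
          b = np.array([1.0, 1.0])               # broadcast over the rows of a
          print(np.square(a - b))                # [[0. 1.] [4. 9.]]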
  4. 16 May, 2019 1 commit
  5. 15 May, 2019 2 commits
    • [Fused Op] GRN (#2905) · 4fb4be5e
      Adam Rogowiec authored
      * Extend lp-norm functions to take bias.
      
      * Move lp-norm utilities to nGraph core op/util.
      
      * Move norm files to builder directory.
      
      * Apply clang-format.
      
      * Skeleton for GRN operation.
      
      * Add GRN implementation.
      
      * Fix reshape utility function.
      
      * Address review comments.
      
      * Add using namespace std.
      
      * Review comments.
      
      * Few fixes in grn implementation.
      
      * Clang format.
      
      * Basic UT.
      
      * Fix expected data.
      
      * Add more UT and skip them on IGPU.
      
      * Review comments: const correctness and remove using namespace std statement.
      
      * Unblock GRN on IGPU.
      
      * Get back GRN op case to switch.
      
      * merge error
      4fb4be5e
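      Note: GRN (global response normalization) divides each pixel vector of an NCHW tensor by its L2 norm taken across the channel axis, with a bias term (the commits above extend the lp-norm builders to take a bias). A rough NumPy sketch; the exact placement of the bias in nGraph may differ:

          import numpy as np

          def grn(data, bias):
              data = np.asarray(data, dtype=np.float32)
              l2 = np.sqrt(np.sum(np.square(data), axis=1, keepdims=True) + bias)
              return data / l2                   # normalize across channels, per pixel

          x = np.ones((1, 4, 2, 2), dtype=np.float32)
          print(grn(x, bias=0.0)[0, :, 0, 0])    # every channel becomes 1 / sqrt(4) = 0.5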
    • [Fused] Squeeze op with dynamic input (#2896) · 092219ec
      Michał Karzyński authored
      * Add fused Squeeze op
      
      * Use fused Squeeze op in ONNX importer
      
      * Update serializer
      
      * Add unit tests
      
      * Add type prop tests
      
      * Change Squeeze signature to accept a dynamic input for axes
      
      * Update src/ngraph/op/fused/squeeze.cpp
      Co-Authored-By: Adam Rogowiec <adam.rogowiec@intel.com>
      
      * Code review comments
      
      * Fix failing unit test
      
      * Code review comments
      
      * Code review comments
      
      * style
      
      * Add op to iGPU backend
      
      * Add op to iGPU backend
      092219ec
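      Note: Squeeze removes size-1 dimensions, either all of them or only those named by the (now dynamic) axes input. Illustrative NumPy sketch:

          import numpy as np

          x = np.zeros((1, 3, 1, 2))
          print(np.squeeze(x, axis=(0,)).shape)  # (3, 1, 2) -- only the requested axis
          print(np.squeeze(x).shape)             # (3, 2)    -- all size-1 axes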
  6. 13 May, 2019 2 commits
    • scatter_add and scatter_nd_add (#2874) · 7e6c34cf
      Sang Ik Lee authored
      * Temp save.
      
      * Temp save.
      
      * Temp save.
      
      * Temp save.
      
      * Temp save.
      
      * Temp save.
      
      * Temp save.
      
      * Fix compile errors.
      
      * Fix incorrect index.
      
      * Fix UT typo.
      
      * Interpreter passes UT.
      
      * Fix more bugs.
      
      * Apply style.
      
      * Add shape check for updates tensor.
      
      * Merge typo
      7e6c34cf
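      Note: scatter_add returns a copy of the inputs tensor with the updates tensor added at the slices selected by indices; repeated indices accumulate. A small NumPy sketch of the basic first-axis case (shapes are illustrative):

          import numpy as np

          base = np.zeros((4, 2), dtype=np.float32)
          idx = np.array([0, 2, 0])              # row 0 is addressed twice
          upd = np.ones((3, 2), dtype=np.float32)

          out = base.copy()
          np.add.at(out, idx, upd)               # unbuffered add, so duplicates accumulate
          print(out)                             # row 0 -> [2. 2.], row 2 -> [1. 1.]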
    • [Fused Op] Add ScaleShift operator (#2892) · 6f961de3
      tsocha authored
      * Add ScaleShift operator
      
      * Add ScaleShift to serializer
      
      * Add UT for ScaleShift
      
      * Add type_prop tests for ScaleShift
      
      * Style-fix
      
      * Skip tests on Intel GPU
      
      * Review fix 1
      
      * Style fix
      6f961de3
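      Note: ScaleShift is an element-wise scale * data + shift, with scale and shift broadcast over the data. Illustrative NumPy sketch:

          import numpy as np

          x = np.array([[1.0, 2.0], [3.0, 4.0]])
          scale, shift = 2.0, -1.0
          print(scale * x + shift)               # [[1. 3.] [5. 7.]]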
  7. 11 May, 2019 2 commits
    • [Fused op] Normalize (#2888) · fffbaa89
      Adam Rogowiec authored
      * Extend lp-norm functions to take bias.
      
      * Move lp-norm utilities to nGraph core op/util.
      
      * Move norm files to builder directory.
      
      * Normalize fused operator implementation.
      
      * Fused op boilerplate.
      
      * Fix node validation and normalization across spatial axes.
      
      * Add UT normalize across CHW with scalar scale.
      
      * Fix expanding input tensor to 4D.
      
      * Add more UT for 3D and 2D.
      
      * Add more UT, with scale and across HW.
      
      * Update to new location of l2_norm function.
      
      * Add type_prop UT and update gpu/igpu manifests.
      
      * Apply clang-format.
      
      * Add positive UT for type_prop.
      
      * Update unit test manifests.
      
      * Address review comments.
      
      * Add using namespace std.
      
      * Remove unnecessary std prefixes.
      
      * Remove blacklisted unittests for GPU.
      
      * Apply clang-format.
      
      * Review comments.
      
      * Fix clang errors.
      fffbaa89
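      Note: Normalize here is an L2 normalization of an NCHW tensor multiplied by a scale, taken either across C, H, W together or per pixel across C only (the "across spatial axes" cases in the unit tests). A rough sketch under those assumptions; the epsilon handling is illustrative:

          import numpy as np

          def normalize_l2(data, scale, across_spatial=True, eps=1e-9):
              data = np.asarray(data, dtype=np.float32)
              axes = (1, 2, 3) if across_spatial else (1,)
              norm = np.sqrt(np.sum(np.square(data), axis=axes, keepdims=True) + eps)
              return scale * data / norm

          x = np.ones((1, 2, 2, 2), dtype=np.float32)
          print(normalize_l2(x, scale=1.0)[0, 0, 0, 0])   # 1 / sqrt(8) ~ 0.3536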
    • [Fused Op] Add new fused operator: MVN(Mean Variance Normalization) (#2887) · 58dc9d09
      tsocha authored
      * Basic mean normalization
      
      * Add MVN to serializer
      
      * Add test for mean normalization
      
      * Add support for across_channel attribute
      
      * Add test for mvn_mean_normalization split by channels
      
      * Assume that data have n and c channels
      
      * Add support for normalize_variance attribute
      
      * Add test for full mean variance normalization
      
      * Add type prop test
      
      * Skip tests on GPU
      
      * Use ngraph builder functions instead of my own
      
      * Update mvn.cpp
      
      * Change order in initializer list
      
      * Review fix
      58dc9d09
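      Note: MVN subtracts the mean and, when normalize_variance is set, divides by the standard deviation, reducing over H and W per channel or over C, H, W together when across_channels is set. A rough NumPy sketch; the eps value is illustrative:

          import numpy as np

          def mvn(data, across_channels=False, normalize_variance=True, eps=1e-9):
              data = np.asarray(data, dtype=np.float32)
              axes = (1, 2, 3) if across_channels else (2, 3)
              out = data - data.mean(axis=axes, keepdims=True)
              if normalize_variance:
                  out /= np.sqrt(np.mean(np.square(out), axis=axes, keepdims=True) + eps)
              return out

          x = np.random.rand(1, 3, 4, 4).astype(np.float32)
          print(np.allclose(mvn(x).mean(axis=(2, 3)), 0.0, atol=1e-5))   # True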
  8. 10 May, 2019 4 commits
    • Enable non-constructor use of shape inference (#2875) · 76fb19b0
      Jayaram Bobba authored
      * Enable non-constructor use of shape inference
      
      * Move GPU BatchNormTrainingWithStats shape inference out of the constructor
      
      * Addressed PR feedback
      76fb19b0
    • Added extra attributes to DynSlice (#2862) · 8b091114
      Jayaram Bobba authored
      * Add more slicing attributes to dynslice
      
      * Added output shape computation for dyn slice
      
      * Bug fixes and added unit tests
      
      * Style fix
      
      * Addressed PR feedback
      8b091114
    • [FusedOps] Clamp (#2886) · a006b5c4
      Tomasz Dołbniak authored
      * Fused Clamp op implementation
      
      * Basic clamp test with some edge cases
      
      * Dump the expected and actual values for NgraphTestCase
      
      * Validate the min and max params for Clamp
      
      * Use clamp in clip
      
      * Disable Clamp and its test on iGPU
      
      * Use getters for Clamp's parameters
      
      * Validate clamp's params in pre_validate_and_infer_types
      
      * Unit tests for clamp op validation
      
      * Revert "Dump the expected and actual values for NgraphTestCase"
      
      This reverts commit 3a029d70e62339ee84aadf2bf16e418281b85ff7.
      
      * Clamp op docs
      a006b5c4
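      Note: Clamp limits every element to the closed interval [min, max] (and the op validates the min/max parameters, per the commits above). NumPy illustration:

          import numpy as np

          x = np.array([-10.0, 0.2, 0.7, 10.0])
          print(np.clip(x, 0.0, 1.0))            # [0.  0.2 0.7 1. ]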
    • Add HardSigmoid to fused ops. (#2824) · fb0ae59c
      Adam Rogowiec authored
      * Move HardSigmoid to nGraph fused operators.
      
      * UT for HardSigmoid fused operator.
      
      * Add type_prop UT.
      
      * Reorder operations in implementation.
      
      * Fix unit tests.
      
      * Fix typo.
      
      * Apply style-check.
      
      * Switch to single-precision parameters.
      
      * Disable unit test for IGPU.
      fb0ae59c
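      Note: HardSigmoid is the piecewise-linear sigmoid approximation clip(alpha * x + beta, 0, 1), with single-precision parameters per the commit above. Illustrative NumPy sketch:

          import numpy as np

          def hard_sigmoid(data, alpha, beta):
              return np.clip(alpha * np.asarray(data, dtype=np.float32) + beta, 0.0, 1.0)

          print(hard_sigmoid(np.array([-5.0, 0.0, 5.0]), alpha=0.2, beta=0.5))  # [0.  0.5 1. ]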
  9. 09 May, 2019 1 commit
  10. 08 May, 2019 1 commit
  11. 07 May, 2019 1 commit
    • [Fused Op] Move Gemm operator from onnx import to ngraph fused ops (#2853) · 9244e45b
      tsocha authored
      * Move transpose and flatten to ngraph op utils dir
      
      * Move gemm operator into ngraph fused ops
      
      * Style fix
      
      * Add Gemm to serializer
      
      * Add type_prop test for gemm
      
      * Use Gemm default values
      
      * Add UT for Gemm
      
      * Fix comments
      
      * Little cleanup
      
      * Remove artifact headers
      
      * Fix gemm documentation
      
      * Skip gemm test on GPU
      
      * Add test for broadcasting input C
      
      * Review fix pt. 1
      
      * Fix typo
      9244e45b
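      Note: Gemm follows the ONNX-style alpha * A' * B' + beta * C, where A and B may be transposed first and C is broadcast to the product's shape (the "broadcasting input C" test above). Illustrative NumPy sketch, not the nGraph signature:

          import numpy as np

          def gemm(a, b, c, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
              a = np.asarray(a).T if trans_a else np.asarray(a)
              b = np.asarray(b).T if trans_b else np.asarray(b)
              return alpha * (a @ b) + beta * np.asarray(c)   # c broadcasts against a @ b

          A, B = np.ones((2, 3)), np.ones((3, 4))
          C = np.array([1.0, 2.0, 3.0, 4.0])                  # broadcast across both rows
          print(gemm(A, B, C))                                # each row: [4. 5. 6. 7.]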
  12. 29 Apr, 2019 1 commit
  13. 27 Apr, 2019 2 commits
    • [Fused] Add DepthToSpace and SpaceToDepth fused ops (#2811) · e5f489a2
      Michał Karzyński authored
      * Refactor get_default_axis_vector to use std:: functions
      
      * Move get_default_axis_vector to ngraph::op
      
      * Move reorder_axes to ngraph::op::util
      
      * Move reshape helper to ngraph::op::util
      
      * Move DepthToSpace to fused ops
      
      * Add DepthToSpace docstrings
      
      * Move SpaceToDepth to fused ops
      
      * Remove redundant ngraph::op::util::get_default_axis_vector function
      
      * Add ops to serializer
      
      * Change block_size to size_t
      
      * Add fused ops tests
      
      * Add type prop tests
      
      * Add ops to list of ops unsupported on iGPU
      
      * Disable tests in iGPU manifest
      e5f489a2
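      Note: DepthToSpace moves values from the channel dimension of an NCHW tensor into block_size x block_size spatial blocks (SpaceToDepth is the inverse). A rough NumPy sketch of one common block ordering; nGraph's exact ordering may differ:

          import numpy as np

          def depth_to_space(data, block_size):
              n, c, h, w = data.shape
              b = block_size
              tmp = data.reshape(n, b, b, c // (b * b), h, w)
              tmp = tmp.transpose(0, 3, 4, 1, 5, 2)            # N, C', H, b, W, b
              return tmp.reshape(n, c // (b * b), h * b, w * b)

          x = np.arange(16).reshape(1, 4, 2, 2)
          print(depth_to_space(x, 2).shape)                    # (1, 1, 4, 4)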
    • [Fused Ops] Add fused version of Elu (#2797) · d07e38e0
      Michał Karzyński authored
      * Add fused version of Elu op
      
      * Refactor ONNX importer prelu function to use fused op
      
      * Style check
      
      * Add docstrings
      
      * Move make_constant_node to op/util
      
      * Use make_constant_node helper
      
      * Remove unneeded std:: prefixes
      
      * Remove make_constant_node function, use builder::make_constant
      
      * Remove redundant includes
      
      * Add Elu to serializer
      
      * Add Elu tests
      
      * Add Elu tests to type prop
      
      * Add Elu to list of ops unsupported on iGPU
      
      * Add Elu to list of ops unsupported on iGPU
      
      * Disable tests in iGPU manifest
      d07e38e0
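      Note: Elu keeps positive values and maps negative ones to alpha * (exp(x) - 1). Illustrative NumPy sketch:

          import numpy as np

          def elu(data, alpha=1.0):
              data = np.asarray(data, dtype=np.float32)
              return np.where(data > 0, data, alpha * (np.exp(data) - 1.0))

          print(elu(np.array([-1.0, 0.0, 2.0])))   # [-0.6321  0.  2.]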
  14. 22 Apr, 2019 1 commit
    • Adding auto pad to convolution and pooling (#2743) · aa3692d2
      Jayaram Bobba authored
      * Adding auto pad to convolution
      
      * Added auto pad to pooling ops and moved auto pad computation to utility method
      
      * Added serializer support for autopadding. workaround for clang macro warning
      
      * Style fix
      
      * Addressed PR feedback
      
      * Fix docstrings for same_upper and same_lower
      aa3692d2
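      Note: auto-padding computes per-axis padding so the output keeps size ceil(input / stride); SAME_UPPER puts any odd leftover pixel at the end of the axis and SAME_LOWER at the start. A sketch of the usual convention, not the exact nGraph utility method:

          import math

          def same_padding(input_size, filter_size, stride, dilation=1, same_upper=True):
              effective_filter = (filter_size - 1) * dilation + 1
              output_size = math.ceil(input_size / stride)
              total = max((output_size - 1) * stride + effective_filter - input_size, 0)
              small, large = total // 2, total - total // 2
              return (small, large) if same_upper else (large, small)

          print(same_padding(5, 3, 1))   # (1, 1)
          print(same_padding(5, 2, 2))   # (0, 1) -- the extra pixel goes above for SAME_UPPER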
  15. 17 Apr, 2019 1 commit
    • gather, gather_nd (#2742) · 59632bac
      Sang Ik Lee authored
      * Temp.
      
      * Put all the dummy files.
      
      * Remove some compile errors.
      
      * WIP: Add gather and gather_nd kernels.
      
      * Temp save.
      
      * Update comments for gather.
      
      * Implement reference gather.
      
      * Validate and infer shape.
      
      * Style.
      
      * Fix compile issues.
      
      * Add serializer support.
      
      * Fix interpreter compilation issues.
      
      * WIP: Add UT
      
      * WIP: Add UT
      
      * gather_nd UT passing.
      
      * Fix gather with no axis.
      
      * Fix gather issue.
      
      * Update unit_test.manifest for backends and add gather, gather_nd support for generic cpu.
      
      * Add type_prop tests.
      
      * Add CPU builders.
      
      * Fix codegen.
      
      * Make some UT numbers more readable.
      
      * Style.
      
      * Update Copyright Year
      
      * Update Copyright Year
      
      * Fix Typo.
      
      * Remove unused variable.
      
      * fix nv gpu build error
      
      * Fix intel gpu compilation.
      
      * Add basic docstring.
      
      * Allow 1D indices for gather_nd.
      
      * Allow scalar indices for gather.
      
      * Update unit_test manifest files.
      
      * Style.
      
      * Add indices element type check and add failing type_prop checks.
      
      * Update docstring.
      
      * Fix incorrect test names in unit_test.manifest
      
      * Missing header
      59632bac
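      Note: gather picks slices of the params tensor along an axis using integer indices, while gather_nd treats each row of the indices tensor as a multi-dimensional coordinate. A NumPy sketch of the simple cases (for gather_nd, index depth equal to the params rank):

          import numpy as np

          p = np.array([[10, 11], [20, 21], [30, 31]])

          print(np.take(p, [2, 0], axis=0))       # gather: rows 2 and 0

          nd_idx = np.array([[0, 1], [2, 0]])     # full coordinates into p
          print(p[tuple(nd_idx.T)])               # gather_nd: [11 30]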
  16. 16 Apr, 2019 1 commit
    • Moves some fused convolution ops to core FusedOps (#2733) · 6b5016e5
      Jayaram Bobba authored
      * - Moves some fused convolution ops to core FusedOps
      - Adds support for decomposing and replacing multi-output FusedOps
      - Adds query callbacks to FusedOpDecomposition to check if a FusedOp is
        supported by a backend
      - Adds core fusion patterns for FusedOps
      
      * style fix
      
      * Added comments on FOP_FUSIONS
      
      * gpu convolution 1d bug fix (#2741)
      
      * Fix bug with dex-only compilation and addressed PR comments
      6b5016e5
  17. 11 Apr, 2019 1 commit
    • [Dynamic Shape] Moving BatchDot to Core Op (#2691) · cc8dd452
      Louis Feng authored
      * batch dot WIP.
      
      * cpu backend refactor and unit tests pass.
      
      * WIP.
      
      * batch dot interpreter implementation.
      
      * minor clean up.
      
      * more clean up.
      
      * patching the gpu backends.
      
      * added more tests, fixes, etc.
      
      * fixed compile error.
      
      * renamed batch dot to batch matmul.
      
      * refactor WIP.
      
      * fixes some tests and formatting.
      
      * more fixes.
      cc8dd452
  18. 05 Apr, 2019 2 commits
    • Adding support for fused ops that are decomposable to core ngraph ops (#2688) · 7775d49d
      Jayaram Bobba authored
      * Initial support for specification of fused ops and type inference
      
      * Added FusedOpDecomposition pass and execution test cases
      
      * Serializer support
      
      * style fix
      
      * Add FusedOpDecomposition to GPU and IGPU backends
      
      * Addressed PR feedback
      
      * Fix comment
      
      * Addressed PR feedback
      7775d49d
    • Change OneHot to accept only integral types (#2689) · 9fea22b2
      Adam Procter authored
      * Change OneHot to accept only non-real types
      
      * Update docstring
      
      * Update Python test
      
      * Add is_integral to element::Type
      
      * Update docs
      
      * Change is_integral to false for boolean
      
      * Revert "Change is_integral to false for boolean"
      
      This reverts commit 099ff378ae7fcbd1d9346665812f6b95e4886186.
      
      * Revert "Add is_integral to element::Type"
      
      This reverts commit 58fdf76fecaefdad10431f9a894523f326f3adca.
      
      * Change is_integral so it is, by definition, !is_real
      9fea22b2
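      Note: with this change OneHot accepts only integral (non-real) indices. A tiny NumPy illustration of the expansion itself (axis placement here is arbitrary):

          import numpy as np

          indices = np.array([0, 2, 1])           # must be an integral type
          depth = 3
          print((indices[:, None] == np.arange(depth)).astype(np.int64))
          # [[1 0 0]
          #  [0 0 1]
          #  [0 1 0]]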
  19. 28 Mar, 2019 1 commit
    • Pass for specialization of shape-relevant inputs (#2645) · beddf528
      Adam Procter authored
      * Skeleton for shape specialization
      
      * Change signature of as_constants to return the vector, with zero elements in case of failure
      
      * Add as_constants implementation for Concat
      
      * Some comments
      
      * Add check for element types of replacements; fix comment typo
      
      * Address PR feedback, and add some comments
      
      * An extra check, and a comment, ahead of the memcpy
      
      * Minor comment wording fix
      beddf528
  20. 25 Mar, 2019 1 commit
    • (Dynamic) Reshape and Slice (#2611) · 2bb5bd50
      Louis Feng authored
      * added dyn_reshape and dyn_slice.
      
      * style fix.
      
      * some fixes.
      
      * added dyn reshape type prop.
      
      * fixed gpu build.
      
      * added headers to gpu emitter.
      2bb5bd50
  21. 24 Mar, 2019 1 commit
    • Dynamic Padding Implementation (#2641) · 81f33056
      Nagy Mostafa authored
      * Initial DynPad implementation
      
      * Initial DynPad implementation
      
      * Fixed DynPad validation. Added Unit-test
      
      * Nits and white-space fixes
      
      * - PR feedback.
      - Added padding rank check tests
      
      * Minor comment fix
      
      * Fix merge error
      81f33056
  22. 21 Mar, 2019 2 commits
    • Dyn broadcast initial (#2564) · 288e2ed4
      Mahbub Zaman authored
      * Adds new core op DynBroadcast
      
      * Adds new core op DynBroadcast
      
      * Fixes build error caused by recent changes in node validation API
      
      * Addresses code review comments.
      
      * Moves new op under experimental.
      
      * Fixes style errors.
      
      * Silee2/external project rpath (#2525)
      
      * Set rpath for mkl-dnn.
      
      * Set library rpath to  on Linux.
      
      * Use patchelf to set rpath of prebuilt mklml library.
      
      * Add patchelf to Linux Dockerfiles.
      
      * Revert "Add patchelf to Linux Dockerfiles."
      
      This reverts commit 1769505a866061552942e19467ddcc0dad0922e8.
      
      * Revert "Use patchelf to set rpath of prebuilt mklml library."
      
      This reverts commit 726f6553a0450520328607177d64baf48fa93dd2.
      
      * Copy cldnn runtime.
      
      * Copy mlsl libraries.
      
      * add unit tests for the two versions of Backend create_tensor (#2607)
      
      * add unit tests for the two versions of Backend create_tensor
      
      * disable new unit test on GPU until we have time to address it
      
      * Resolves merge conflicts
      
      * Addresses code review comments.
      
      * Fixes merge issues
      
      * Fixes style errors
      
      * Fixes type check to use compatible()
      
      * Reverts unintentional change
      
      * Reverts unintentional change
      
      * Fixes typo in comment
      
      * Addresses code review comments.
      288e2ed4
    • [ONNX] Enable Pad modes for ONNX pad operator (#2590) · f8146495
      tsocha authored
      * Add support for negative padding
      
      * Use std::bind in pad builder check
      
      * Add support for negative padding in CPU backend
      
      * Updated kernel to do pad+slice
      
      * Remove type conversion warnings
      
      * Fix review comments
      
      * Remove interior padding from core op and interpreter stuff
      
      * Update backends other than GPU for retirement of padding_interior
      
      * Skeleton of support for edge/reflect padding
      
      * Post-merge cleanup
      
      * Attempt reference implementation for EDGE.
      
      * Fix the edge-padding reference, and add some unit tests
      
      * Implement REFLECT padding ref; add tests
      
      * Fixes to the CPU stuff so it compiles now
      
      * Fix test
      
      * Add support for different pad modes
      
      * Restore a stub get_padding_interior function, and tweak some stale comments
      
      * Update ONNX importer to not supply interior padding value; add checks for padding-too-small for EDGE and REFLECT
      
      * Typo
      
      * Bop a warning
      
      * Attempt fix to INTELGPU backend
      
      * Attempt another fix to INTELGPU backend
      
      * Fix pyapi
      
      * Style apply
      
      * Add support for padding modes
      
      * Remove unnecessary node validation checks
      
      * Remove tests for minimal reflect and edge pad
      
      * Remove commented tests
      
      * Remove unnecessary Asserts
      
      * Little update of pad documentation
      
      * Monospace for pad_mode options
      
      * Revert "Remove tests for minimal reflect and edge pad"
      
      This reverts commit 81e4787ea47195b832cab1452dde698bc05776fe.
      
      * Revert "Remove unnecesary node validation checks"
      
      This reverts commit 7e68db7564f3c9b1fd40e7db1d1bda4e0677cad9.
      
      * Test only spatial dims
      
      * axis -> spatial axis
      
      * Fix typo
      
      * Style check
      
      * Update test
      
      * Add CoordinateDiff include
      
      * Remove pad_mode from tree visualization
      
      * Convert padding into NVShape
      
      * Skip failing tests on GPU
      
      * Revert mode change
      
      * Remove merge artifact
      
      * Rename pad kernel into pad_ref
      f8146495
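      Note: the pad modes wired through here (constant, edge, reflect) behave like NumPy's np.pad modes of the same names; EDGE repeats the border value and REFLECT mirrors interior values, which is why the input must be large enough for the requested padding. Illustration:

          import numpy as np

          x = np.array([1.0, 2.0, 3.0, 4.0])
          print(np.pad(x, (2, 1), mode="constant", constant_values=9.0))  # [9. 9. 1. 2. 3. 4. 9.]
          print(np.pad(x, (2, 1), mode="edge"))                           # [1. 1. 1. 2. 3. 4. 4.]
          print(np.pad(x, (2, 1), mode="reflect"))                        # [3. 2. 1. 2. 3. 4. 3.]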
  23. 15 Mar, 2019 1 commit
    • (Dynamic) Transpose op (#2594) · 37b95a02
      Adam Procter authored
      * Add construction API for Transpose
      
      * Add type propagation unit tests for Transpose
      
      * Add Transpose to op_tbl, add cases for serializer, stub out execution in INTERPRETER
      
      * Add docs for Transpose
      
      * Remove commented-out code
      
      * Add stub cases for op_tbl-dependent stuff
      
      * Fix missing FAIL()s in the transpose exception checks; fix validate_and_infer_types check
      37b95a02
  24. 05 Mar, 2019 1 commit
  25. 02 Mar, 2019 1 commit
    • New macros for faster node validation checks (#2537) · 6ffbd254
      Adam Procter authored
      * Skeleton for faster validation asserts
      
      * Switch to __VA_ARGS__ for compatibility, remove -Wno-variadic-macros
      
      * Add benchmarks for constructing Add and Convolution
      
      * Quick hack to avoid shadowing inside the CHECK macro
      
      * Quick hack to avoid inadvertent capture inside the macro
      
      * Update convolution, and change a bunch of tests to anticipate the new error class
      6ffbd254
  26. 17 Jan, 2019 1 commit
    • Retire FunctionCall, Reduce, ReduceWindow, SelectAndScatter (#2223) · 18d0993e
      Adam Procter authored
      * Retire Reduce, ReduceWindow, SelectAndScatter
      
      * Remove lingering AnyAllReplacement code
      
      * Remove apparently-now-unused macro
      
      * Remove lingering op/reduce.hpp includes
      
      * Remove FunctionCall
      
      * Update Python stuff to remove FunctionCall, Reduce
      
      * Add Any/All tests to GPU manifest
      
      * Remove deleted .hpp #include from gpu_compiled_function.cpp
      
      * Restore reduce_function.cpp since that is where the All/Any implementations ended up residing
      
      * Add reduce_function.cpp back into CMakeLists.txt
      
      * Remove #include of deleted reduce.hpp
      
      * Re-remove AnyAllReplacement from GPU passes
      
      * Remove deleted #includes from gpu_internal_function.cpp
      
      * Remove git conflict stuff (whoops)
      
      * Add newline at EOF, per review comment
      
      * Address flake8 complaint about unused import
      18d0993e
  27. 03 Jan, 2019 1 commit
  28. 12 Dec, 2018 1 commit
    • "Any" and "All" ops (#2217) · fc216f39
      Adam Procter authored
      * Skip --exclude-libs linker flag on macOS
      
      * Change test to if(LINUX)
      
      * Add "Any" op and AnyAllReplacement pass
      
      * Add AnyAllReplacement to IGPU backend
      
      * Stub (error-out) handlers for GPU and INTELGPU
      
      * Add 'All' op
      
      * Add AnyAllInsertion pass, deprecate deprecable ops, add stubs for INTELGPU
      
      * Add failing unit tests to INTELGPU manifest
      
      * Reduce boilerplate
      
      * Reduce more boilerplate
      
      * Add static keywords
      fc216f39
  29. 11 Dec, 2018 1 commit
    • Embedding fprop (#2053) · 16d88a7f
      Nick Korovaiko authored
      * embedding fprop
      
      * add a new line
      
      * type prop tests
      
      * rename
      
      * add a stub handler for embeddinglookup on intelgpu
      
      * rename embedding.* to embedding_lookup
      
      * rename tests in manifest files
      
      * move embeddinglookup to catchall case
      
      * fix test case breaks after merge
      
      * add a negative test, pull up an assertion
      
      * fix test failures
      16d88a7f
  30. 28 Nov, 2018 1 commit
    • Cyphers/bnorm back (#2129) · 403a09ce
      Scott Cyphers authored
      * Fix batchnorm argument order, cleanup some comments, fix backprop
      
      * Merge error
      
      * Clean up training function, organize inference test
      
      * BatchNormInference tests
      
      * Training case
      
      * Training test
      
      * Fix autodiff BatchNorm test
      
      * Cleanup
      
      * Move file to doc checkout
      
      * Update disabled test name in igpu manifest
      Fix unused variable
      
      * Unit tests disables
      
      * Review comments
      403a09ce