1. 26 Apr, 2018 5 commits
  2. 25 Apr, 2018 4 commits
    • CUDNN BatchNorm (inference/forward/backward) (#893) · 23ac5e5a
      Chris Sullivan authored
      * Added the cuDNN batch norm operation to the GPU transformer.
      Moved the batchnorm tests out of cpu_tests and into
      backend_tests. A JIRA ticket still needs to be filed for the
      interpreter SKIPS.
      
      * CUDNN batchnorm is implemented. In the ForwardTraining branch,
      CUDNN appears to calculate the batch mean correctly but the batch variance incorrectly.
      Currently the batchnorm output and mean are calculated correctly for tests:
      * GPU.batchnorm_fprop_b2c2h3w3_mean_var
      * GPU.batchnorm_fprop_b1c2h2w2
      * GPU.batchnorm_fprop_b2c2h2w1
      but the variance CUDNN calculates for the batches in these tests is incorrect.
      
      Also added an additional test and cleaned up some of the old tests.
      
      * MKLDNN internally utilizes the biased estimate of the population variance,
      and the tests have been crafted to suit MKLDNN. According to the original
      batchnorm publication (https://arxiv.org/pdf/1502.03167v3.pdf), population
      (unbiased) statistics should be used for inference, and mini-batch (biased)
      statistics should be used for training (forward/backward). For the variance this
      means utilizing the following equations, respectively:
      
        (biased)   Var[X] = 1/m * Sum_i(x_i-mu)^2      :: used in training
        (unbiased) Var[X] = 1/(m-1) * Sum_i(x_i-mu)^2  :: used in inference
      
        s.t. x_i are elements of X and m = N*D*H*W.
      
      For large batch sizes this mismatch may have little impact at inference time,
      since m >> 1, but for small batch sizes it will. CUDNN internally utilizes the
      unbiased variance.
      
      Changes:
      * Added a Multiply op to the forward pass of batchnorm to convert
        the unbiased variance to a biased one. The op utilizes the
        blending scaling factors to apply the bias factor (a sketch of the
        rescaling follows this list).
      * Added emission for the BatchNormBackprop kernel and cleaned up
        the emitter implementation.
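
      A minimal standalone sketch of the rescaling performed here, assuming a
      per-channel reduction over m = N*D*H*W elements (plain C++ for
      illustration, not the actual nGraph GPU emitter code; the function name
      is hypothetical):

        #include <cstddef>
        #include <numeric>
        #include <vector>

        // Both variance estimators over a flattened channel of m elements.
        double variance(const std::vector<double>& x, bool biased)
        {
            const std::size_t m = x.size();
            const double mu = std::accumulate(x.begin(), x.end(), 0.0) / m;
            double ss = 0.0;
            for (double v : x)
            {
                ss += (v - mu) * (v - mu);
            }
            return ss / (biased ? m : m - 1); // 1/m (training) vs 1/(m-1) (cuDNN)
        }

        // The forward pass therefore rescales cuDNN's unbiased result by (m - 1) / m.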
      
      * Added hashing to cudnn::batchnorm op.
      
      * Formatting.
      
      * Changed hashing of epsilon in cudnn batchnorm.
      
      * Remove implicit conversion and default case in switch for bn.
      
      * Added skips for IE transformer on batchnorm.
      
      * add cudnn include path to compiler.cpp
      
      * separate the two paths
      
      * PRs #892 and #825, which were recently merged, both omitted skips for the GPU backend.
      Adding them here since those ops are unimplemented on the GPU.
      
      * The allocation and deletion of primitives was occurring in separate
      translation units with raw C pointers. Because of this, it was not
      clear that these were being freed appropriately, nor did the code
      indicate ownership of the pointers.
      
      In this commit these raw pointers have been converted over to
      std::unique_ptrs such that the construction/destruction is managed
      automatically. Furthermore, GPUPrimitiveEmitter::insert now only
      takes an r-value reference, requiring move-semantics to indicate
      that when inserting a primitive, the GPUPrimitiveEmitter takes
      ownership of the pointer.
      
      All instances of primitive creation have been modified.
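
      A simplified sketch of this ownership pattern (the exact nGraph
      signatures are an assumption; only the shape of the API described above
      is shown):

        #include <cstddef>
        #include <memory>
        #include <utility>
        #include <vector>

        struct gpu_primitive  // hypothetical stand-in for the primitive type
        {
            // ...
        };

        class primitive_emitter_sketch
        {
        public:
            // Taking an r-value reference forces callers to std::move the
            // unique_ptr, making the transfer of ownership explicit.
            std::size_t insert(std::unique_ptr<gpu_primitive>&& primitive)
            {
                m_primitives.push_back(std::move(primitive));
                return m_primitives.size() - 1;
            }

        private:
            // Primitives are now destroyed automatically with the emitter.
            std::vector<std::unique_ptr<gpu_primitive>> m_primitives;
        };

        // Usage: emitter.insert(std::make_unique<gpu_primitive>());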
      
      * CUDNN_SAFE_CALL
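
      The macro definition itself is not shown in this log; a typical shape for
      such a wrapper (an assumption, not the exact nGraph definition) is:

        #include <cudnn.h>
        #include <sstream>
        #include <stdexcept>

        // Turn any non-success cuDNN status into an exception that carries the
        // failing expression, cuDNN's error string, and the call site.
        #define CUDNN_SAFE_CALL_SKETCH(func)                                   \
            do                                                                 \
            {                                                                  \
                cudnnStatus_t status = (func);                                 \
                if (status != CUDNN_STATUS_SUCCESS)                            \
                {                                                              \
                    std::ostringstream msg;                                    \
                    msg << #func << " failed: " << cudnnGetErrorString(status) \
                        << " (" << __FILE__ << ":" << __LINE__ << ")";         \
                    throw std::runtime_error(msg.str());                       \
                }                                                              \
            } while (0)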
      
      * Removed redundant comment and made variable names more verbose.
      
      * Changed from conditionals to a case switch in pooling to conform to
      batchnorm, per @fengleitian's suggestion.
    • add cudnn include path to compiler.cpp (#902) · b0421577
      Fenglei authored
      * add cudnn include path to compiler.cpp
      
      * separate the two paths
      
      * Skipping one_hot tests for CPU as
      CI is failing. JIRA bug report: https://jira01.devtools.intel.com/browse/NGRAPH-1682.
  3. 24 Apr, 2018 3 commits
  4. 23 Apr, 2018 6 commits
  5. 21 Apr, 2018 3 commits
  6. 20 Apr, 2018 4 commits
  7. 18 Apr, 2018 5 commits
    • remove obsolete classes (#867) · 37fca35c
      Robert Kimball authored
      * remove obsolete classes
    • Remove usage of CMAKE_MAKE_PROGRAM as it slows down parallel build (#871) · 392eeb3f
      Sang Ik Lee authored
      * Remove usage of CMAKE_MAKE_PROGRAM as it slows down parallel build
      
      * Make make properly propagate to child and add back targeted build.
      
      * Revert "Make make properly propagate to child and add back targeted build."
      
      This reverts commit b4b1d8db0f0d42850e53d4e0f773261c292ccaf6.
    • GPU Padding - add support for custom pad value and interior padding (#860) · 0be581c0
      Chris Sullivan authored
      * cuda_emitter::build_pad now utilizes pad_value.
      
      * Added TypeInfo class for dispatching c-type information from the underlying ngraph element::Type.
        Adjusted test to use all_close when comparing floating point values (max_pool_2d_1channel_1image_overpadded).
      
      * Refactored max_pool_1d into cuda_emitter so that numeric_limits<c_type>::lowest() could be used for initial max value.
      Test max_pool_2d_1channel_1image_padded_negative_values now enabled and passes.
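
      Initializing the running maximum with std::numeric_limits<T>::lowest()
      (rather than 0) is what makes the all-negative-values test pass. A small,
      backend-agnostic sketch of the idea (the helper name is hypothetical):

        #include <cstddef>
        #include <limits>
        #include <vector>

        // Max over one pooling window; T is the c-type selected for the
        // ngraph element::Type (e.g. float).
        template <typename T>
        T max_pool_window(const std::vector<T>& input, std::size_t start, std::size_t len)
        {
            T result = std::numeric_limits<T>::lowest();
            for (std::size_t i = start; i < start + len && i < input.size(); ++i)
            {
                result = input[i] > result ? input[i] : result;
            }
            return result;
        }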
      
      * Removed the old function and switched to size_t to match ngraph.
      
      * Added virtual dtor.
      
      * Added support for interior padding; all op::Pad functionality is now included
      (see the shape sketch after this commit message).
      
      * Added more information to the runtime_error for tensor-dimension checks. Removed commented-out code.
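
      A sketch of how interior padding changes the output shape (interior
      padding inserts pad elements between adjacent input elements along an
      axis; the helper below is illustrative, not the nGraph implementation):

        #include <cstddef>
        #include <vector>

        // Per-axis output extent: below + d + (d - 1) * interior + above,
        // where the interior term contributes nothing for axes with fewer
        // than two elements.
        std::vector<std::size_t> padded_shape(const std::vector<std::size_t>& in,
                                              const std::vector<std::size_t>& below,
                                              const std::vector<std::size_t>& above,
                                              const std::vector<std::size_t>& interior)
        {
            std::vector<std::size_t> out(in.size());
            for (std::size_t i = 0; i < in.size(); ++i)
            {
                std::size_t inner = (in[i] == 0) ? 0 : in[i] + (in[i] - 1) * interior[i];
                out[i] = below[i] + inner + above[i];
            }
            return out;
        }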
    • Weight Fusion (#853) · 8cb48d37
      Nick Korovaiko authored
      * CPU weight fusion initial version
      
      * add tests for weight_fusion
      
      * address @jbobba's feedback
      
      * before cleaning up convolution_weight_optimization.cpp
      
      * clean up, rename, fix perms, fix format
  8. 17 Apr, 2018 3 commits
  9. 16 Apr, 2018 7 commits