1. 31 Oct, 2018 5 commits
2. 30 Oct, 2018 4 commits
3. 29 Oct, 2018 5 commits
4. 28 Oct, 2018 1 commit
5. 27 Oct, 2018 3 commits
6. 26 Oct, 2018 8 commits
7. 25 Oct, 2018 5 commits
8. 24 Oct, 2018 9 commits
    • Rename two variables. · fe456412
      amy.zhuang authored
    • ArgReduce 64-bit indices (#1862) · 9f0589a8
      Chris Sullivan authored
      * Update ArgReduce to handle i64 indices.
      
      * Formatting.
      
      * Add throw for output types other than int32/64.
      
      * Add output type to hash.
      
      * Add type to throw.
      
      * The interpreter doesn't currently support 64-bit output indices for argmin/max, so this test is disabled [JIRA:NGRAPH-3183].
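The type check described in the bullets above can be sketched in a few lines. This is an illustrative pure-Python sketch, not nGraph's actual C++ ArgReduce implementation; the function name and parameters are hypothetical.

```python
# Hypothetical sketch mirroring the commit's rule that the ArgReduce
# output index type must be int32 or int64.

def argmax_index(values, index_type="int64"):
    if index_type not in ("int32", "int64"):
        # "Add throw for output types other than int32/64."
        raise TypeError("ArgReduce output type must be int32 or int64")
    best = 0
    for i, v in enumerate(values):
        if v > values[best]:
            best = i
    return best
```

Why 64-bit indices matter: int32 indices overflow once an axis exceeds 2**31 - 1 elements, so an i64 output type is needed for very large tensors.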
    • Partial Shapes and Types, Part 4λ: Convolution and backprops (#1890) · ccfcf4f9
      Adam Procter authored
      * Implement partial shape/type propagation for Convolution; fail for want of unit tests
      
      * Implement unit tests for partial shapes/types for Convolution
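As a rough illustration of what partial-shape propagation means for one convolution spatial axis, here is a hedged Python sketch in which None stands for an unknown (dynamic) dimension. The function and its parameters are hypothetical, not nGraph's API.

```python
def conv_output_dim(data_dim, filter_dim, stride=1,
                    pad_below=0, pad_above=0, dilation=1):
    """One spatial axis of a convolution output shape; None = unknown."""
    if data_dim is None or filter_dim is None:
        # An unknown input dimension propagates as unknown in the output.
        return None
    dilated_filter = dilation * (filter_dim - 1) + 1
    padded_data = data_dim + pad_below + pad_above
    return (padded_data - dilated_filter) // stride + 1
```

The static shape arithmetic is the standard convolution formula; the partial-shape part is simply that unknowns flow through rather than failing type checking.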
    • fix Klocwork warnings CPU part 1 (#1902) · 0d693fc3
      Nick Korovaiko authored
      * fix Klocwork warnings CPU part 1
      
      * fix spelling error
      
      * fix a typo
    • [ONNX] Gemm fix (#1877) · 92c1d504
      Adam Rogowiec authored
      * Fix gemm `input_c` broadcasting.
      
      * Comments.
      
      * Add comment
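For context, ONNX Gemm computes Y = alpha·(A·B) + beta·C, where input C is unidirectionally broadcast to the (M, N) result shape. A minimal pure-Python sketch of that broadcasting, illustrative only and not the nGraph importer code:

```python
def gemm(a, b, c, alpha=1.0, beta=1.0):
    """Y = alpha*(A @ B) + beta*C with C broadcast to (M, N)."""
    m, k = len(a), len(a[0])
    n = len(b[0])

    def c_at(i, j):
        # C may be a scalar, a row of length N, or a full (M, N) matrix.
        if isinstance(c, (int, float)):
            return c
        if isinstance(c[0], (int, float)):  # 1-D: broadcast across rows
            return c[j]
        return c[i][j]

    return [[alpha * sum(a[i][t] * b[t][j] for t in range(k)) + beta * c_at(i, j)
             for j in range(n)] for i in range(m)]
```

The bug class the commit addresses is exactly the kind of shape mismatch that arises when C is lower-rank than the (M, N) product and must be broadcast rather than added elementwise.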
    • Enable Trigonometric ops (#1879) · 835ecad9
      tsocha authored
    • [ONNX] Non-linear ops (#1864) · a804c3d7
      tsocha authored
      * [ONNX] Non-linear ops
      
      * Style check
    • Cache and use fprop stats in cudnn batchnorm bprop (#1841) · fbc3a940
      Chris Sullivan authored
      * Temp bn update commit.
      
      * Add CUDNNBatchNorm, which adds two additional outputs to batchnorm: the batch mean and the batch inverse variance.
      The batch mean is the same as the output mean if the cumulative average factor is 1.0. Add a BatchNormCache pass which replaces all BatchNorm ops that are inputs to BatchNormBackprop
      with CUDNNBatchNorm, which outputs the saved batch statistics directly to the backprop step.
      
      * Updated the BN cache pass, removed extra tests, and added a test checking that the provided stats are used in bprop instead of the batch stats.
      This test was disabled for the interpreter, as the reference kernel needs to be updated to use the provided statistics.
      
      * Formatting.
      
      * Update to new batch norm API.
      
      * CUDNNBatchNorm -> BatchNormTrainingWithStats
      
      * new line
      
      * Preprocess input variance into BN denominator for cudnn (#1885)
      
      * Add an explicit CUDA kernel to calculate what cuDNN describes as the inverse
      variance. In reality, the backward cuDNN kernel for BN requires 1.0f / sqrt(variance + eps),
      which is the batchnorm denominator for each channel (a numerically stable inverse stddev).
      
      This introduces op annotations for batch norm backprop and updates the cudnn_emitter to support inserting this CUDA kernel when required.
      
      * Disable second test on INTERPRETER.
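The quantity described in the last bullets can be written down directly: the "inverse variance" that cuDNN's batchnorm backward step actually consumes is the per-channel denominator 1/sqrt(variance + eps), i.e. a numerically stable inverse standard deviation. A small illustrative Python sketch (the function name is hypothetical, not the kernel itself):

```python
import math

def bn_denominator(variance, eps=1e-5):
    """Per-channel 1.0 / sqrt(var + eps), the BN backprop denominator."""
    return [1.0 / math.sqrt(v + eps) for v in variance]
```

The eps term is what keeps the value finite for channels whose batch variance is zero, which is why the commit precomputes this denominator rather than a bare reciprocal of the variance.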