1. 22 Mar, 2018 7 commits
    • 061dfe00
      tsocha authored
    • make sure deserializer doesn't add op::Result twice (#714) · 0cad670b
      Nick Korovaiko authored
      * make sure deserializer doesn't add op::Result twice
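      A minimal sketch of the idea behind this fix, assuming a hypothetical deserializer helper
      (names and types are illustrative, not the actual ngraph code): outputs that already came
      back as op::Result are reused rather than wrapped in a Result a second time.

        // Hypothetical sketch, not the actual ngraph deserializer code.
        #include <memory>
        #include <vector>

        struct Node { virtual ~Node() = default; };
        struct Result : Node { std::shared_ptr<Node> input; };

        std::vector<std::shared_ptr<Result>>
            make_results(const std::vector<std::shared_ptr<Node>>& outputs)
        {
            std::vector<std::shared_ptr<Result>> results;
            for (const auto& out : outputs)
            {
                // If the deserialized output is already a Result, reuse it
                // instead of wrapping it again (the duplication this commit fixes).
                if (auto existing = std::dynamic_pointer_cast<Result>(out))
                {
                    results.push_back(existing);
                    continue;
                }
                auto wrapped = std::make_shared<Result>();
                wrapped->input = out;
                results.push_back(wrapped);
            }
            return results;
        }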
    • Pruthvi/bn inference (#670) · 5394ad2d
      Pruthvi authored
      * Added a new ctor for bn which supports inference
      - added mkldnn emitter code for bn inference
      * Added a test case for bn inference
      - added support for layout propagation for bn inference
      * added sanity checks for gamma, beta, mean, and variance shapes in bn (sketched below)
      * added serializer support for bn inference
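      A minimal sketch of the kind of shape sanity check described above, assuming an NCHW input
      where axis 1 is the channel axis; the helper name and error handling are illustrative, not
      the actual ngraph validation code.

        // Hypothetical helper, not the actual ngraph validation code.
        #include <cstddef>
        #include <stdexcept>
        #include <vector>

        using Shape = std::vector<std::size_t>;

        void check_bn_inference_shapes(const Shape& input, const Shape& gamma,
                                       const Shape& beta, const Shape& mean,
                                       const Shape& variance)
        {
            if (input.size() < 2)
            {
                throw std::invalid_argument("bn input needs a channel axis");
            }
            const std::size_t channels = input[1]; // NCHW: axis 1 is channels
            for (const Shape* s : {&gamma, &beta, &mean, &variance})
            {
                // Each statistic must be a rank-1 vector of length `channels`.
                if (s->size() != 1 || (*s)[0] != channels)
                {
                    throw std::invalid_argument(
                        "gamma/beta/mean/variance must be 1-D of length C");
                }
            }
        }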
    • Dot op that can handle more than 2D on GPU (#645) · 6ebc3c8c
      Fenglei authored
      * general dot for gpu
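      A short sketch of the shape rule a dot op must follow once it handles more than 2-D inputs;
      this mirrors the usual tensor-contraction convention with k reduction axes and is not the
      GPU kernel from this commit: the output shape is arg0's shape without its last k axes,
      followed by arg1's shape without its first k axes.

        // Shape rule only; the contraction itself is omitted.
        #include <cassert>
        #include <cstddef>
        #include <vector>

        using Shape = std::vector<std::size_t>;

        Shape dot_output_shape(const Shape& arg0, const Shape& arg1, std::size_t k)
        {
            assert(arg0.size() >= k && arg1.size() >= k);
            for (std::size_t i = 0; i < k; ++i)
            {
                // The k contracted axes must agree pairwise.
                assert(arg0[arg0.size() - k + i] == arg1[i]);
            }
            Shape out(arg0.begin(), arg0.end() - k);
            out.insert(out.end(), arg1.begin() + k, arg1.end());
            return out;
        }
        // e.g. dot_output_shape({2, 3, 4}, {4, 5}, 1) yields {2, 3, 5}.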
    • Add reduce sum to the GPU transformer (op::Sum) (#671) · bae77590
      Chris Sullivan authored
      * Current cudnn implementations use only a single dimension for the
        ngraph tensor data (width). In this case the tensor format should be
        set to CUDNN_TENSOR_NCHW so that adjacent memory accesses are
        coalesced (stride=1 for width); see the sketch after this entry.
      * Added some kernel emitter helpers that are reused often.
      * Renamed EmitElementwise -> emit_elementwise to match emit<T>.
      * op::Sum now handles the trivial case of dim(input_tensor) =
        dim(output_tensor) by performing a memcpy, since no axes are reduced.
      * Added a general case for Nd descriptors, used when the tensor has
        more than 4 dimensions. Currently a naive reduce is performed; in the
        future a coordinate transformation could be performed to improve the
        memory layout for the reduction.
      * Switched to codegen::CodeWriter::block_begin/end. It appears that
        CodeWriter::block_begin/end is not frequently used for emitters (in
        the cpu and gpu transformers) because a block comment is often
        desired. To this end, prefix/suffix default parameters were added to
        CodeWriter::block_begin/end so that this functionality is captured.
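      A hedged host-side sketch of the cuDNN pattern the first bullet describes, not the actual
      GPU emitter output: the flat ngraph tensor is described as a 4-D NCHW tensor whose elements
      all live in the W axis (stride 1, so reads coalesce), and the sum collapses W via
      cudnnReduceTensor. The function name and workspace handling here are assumptions; the cuDNN
      calls themselves are real API.

        // Sketch only: error checking omitted; workspace_bytes would normally be
        // sized with cudnnGetReductionWorkspaceSize before calling this.
        #include <cudnn.h>
        #include <cstddef>

        void reduce_sum_all(cudnnHandle_t handle, const float* in, float* out,
                            int element_count, void* workspace, size_t workspace_bytes)
        {
            cudnnTensorDescriptor_t in_desc, out_desc;
            cudnnCreateTensorDescriptor(&in_desc);
            cudnnCreateTensorDescriptor(&out_desc);
            // N = C = H = 1; the whole buffer lives in W (stride 1 -> coalesced reads).
            cudnnSetTensor4dDescriptor(in_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                       1, 1, 1, element_count);
            cudnnSetTensor4dDescriptor(out_desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                       1, 1, 1, 1);

            cudnnReduceTensorDescriptor_t reduce_desc;
            cudnnCreateReduceTensorDescriptor(&reduce_desc);
            cudnnSetReduceTensorDescriptor(reduce_desc, CUDNN_REDUCE_TENSOR_ADD,
                                           CUDNN_DATA_FLOAT, CUDNN_NOT_PROPAGATE_NAN,
                                           CUDNN_REDUCE_TENSOR_NO_INDICES,
                                           CUDNN_32BIT_INDICES);

            const float alpha = 1.0f, beta = 0.0f;
            cudnnReduceTensor(handle, reduce_desc,
                              nullptr, 0, // no indices needed for ADD
                              workspace, workspace_bytes,
                              &alpha, in_desc, in,
                              &beta, out_desc, out);

            cudnnDestroyReduceTensorDescriptor(reduce_desc);
            cudnnDestroyTensorDescriptor(in_desc);
            cudnnDestroyTensorDescriptor(out_desc);
        }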
    • Add op::ReluBackprop to GPU transformer (#712) · 72f4d661
      Chris Sullivan authored
      * Added backprop op for relu and enabled tests.
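      A plain CPU reference sketch of the ReLU backprop rule the new GPU op implements (the op
      itself emits an elementwise GPU kernel; this loop is only for illustration): the incoming
      gradient passes through where the forward input was positive and is zero elsewhere.

        #include <cstddef>

        // out[i] = delta[i] if x[i] > 0 else 0, where x is the forward input.
        void relu_backprop(const float* x, const float* delta, float* out, std::size_t n)
        {
            for (std::size_t i = 0; i < n; ++i)
            {
                out[i] = x[i] > 0.0f ? delta[i] : 0.0f;
            }
        }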
    • Remove examples from pybind wrapper (#698) · 22d1e52b
      Nishant Patel authored
      * Remove examples from pybind wrapper
      * Adding back example (basic.py)
  2. 21 Mar, 2018 6 commits
  3. 20 Mar, 2018 16 commits
  4. 19 Mar, 2018 8 commits
  5. 20 Mar, 2018 2 commits
  6. 18 Mar, 2018 1 commit