- 29 Aug, 2018 1 commit
Robert Kimball authored
* use line comments instead of multiline comments for license header
* update more
* update new files
* more header updates
* style
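For illustration, the header style change described above looks roughly like this (the header text itself is hypothetical, not the project's actual license text):

    /***************************************************
    * Copyright 2018 ...   <- old multiline-comment style
    ***************************************************/

    //**************************************************
    // Copyright 2018 ...   <- new line-comment style
    //**************************************************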
- 27 Aug, 2018 1 commit
Robert Kimball authored
* normalize comments
* address review comments
- 22 Aug, 2018 1 commit
Nick Korovaiko authored
* argmax
* manifests and serializer
- 21 Aug, 2018 1 commit
Nick Korovaiko authored
* argmin
* address feedback on argmin
* add new lines
* add new lines
* address Adam's nitpicks
* Scott's feedback
* fix unit tests
- 13 Aug, 2018 2 commits
Robert Kimball authored
* enable parameter validation for all unit tests
Jayaram Bobba authored
* Remove validation checks from performance-critical code paths and skip layout propagation to inputs
* Add templated call method to backend for cases where users need input validation
* Added missing return
* fix python api compile error due to ngraph api change
* disable parameter validation in python api
* make validating call a separate call rather than templated
- 08 Aug, 2018 1 commit
Robert Kimball authored
- 02 Aug, 2018 1 commit
Nick Korovaiko authored
* lrn init
* fix comment
* mkldnn lrn (#1295)
* add serializer + fix compiler warnings
- 26 Jul, 2018 1 commit
shssf authored
* IntelGPUBackend: Broadcast operation
* IntelGPUBackend: more tests for Broadcast operation
* Move macro to static C function in Broadcast tests
- 18 Jul, 2018 2 commits
Robert Kimball authored
* make pool test check backends other than CPU
* more unit test cleanup
Jaikrishnan Menon authored
- 06 Jul, 2018 1 commit
Nishant Patel authored
* Usage of mkldnn reshape updated
* update reshape condition for mkldnn
* Add a test case and order in which conditions are checked
- 02 Jul, 2018 1 commit
Sandeep authored
* declare sigmoid for core fusion
* add simple test for sigmoid
* info fusion status
* cp op as main op
* builds as expected
* move sigmoid fusion code
* add reference kernel
* sigmoid bprop reference kernel and clang-format
* add delta to bprop
* fprop called
* compiles bprop
* move tests
* serializer support
* address comments in code
* add doc
* naming similar to core ops
* fix failing test
* fix failing test
* address clang issue
* more changes
* change test macro
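For reference, the math such fprop/bprop reference kernels compute is (a minimal plain-C++ sketch; the function names are illustrative, not the actual nGraph kernels):

    #include <cmath>
    #include <cstddef>

    // fprop: y = 1 / (1 + exp(-x)), applied elementwise
    void sigmoid_fprop(const float* x, float* y, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = 1.0f / (1.0f + std::exp(-x[i]));
    }

    // bprop: dx = delta * y * (1 - y), reusing the fprop output y
    void sigmoid_bprop(const float* y, const float* delta, float* dx, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dx[i] = delta[i] * y[i] * (1.0f - y[i]);
    }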
- 28 Jun, 2018 1 commit
Nishant Patel authored
* Reshape 4d
* Support dimshuffles/transpose with MKLDNN
* Addressing PR feedback
* Use Eigen for 3D dimshuffles
- 20 Jun, 2018 1 commit
Adam Procter authored
* Fix bug with concat for 0-size tensors
* Simplify test for zero-length axes, per PR comments
- 15 Jun, 2018 1 commit
Robert Kimball authored
- 12 Jun, 2018 1 commit
Chris Sullivan authored
* Added op::ReplaceSlice and enabled respective tests.
* div64 -> division_by_invariant_multiplication
* Added GPUMemoryManager for aggregating memory allocations and copies into a single operation for kernel arguments, and a reusable memory space for workspace allocations.
* Added GPUShape and reworked Shape helpers to be compatible with different shape types. Shape is now implicitly convertible to GPUShape.
* Updated shape helpers signature and added conversion operators/constructors for GPUShape.
* Removed several unnecessary static_casts now that GPUShape is utilized. GPUTensorViewWrapper had a few functions returning std::vector<size_t> instead of Shape/Strides. These were updated as well to take advantage of GPUShape conversion operators.
* Forgot to fix lambda for workspace allocations to match that of argspace allocations.
* Adjust row_major_strides to avoid reversed-copy.
* Moved declaration out of loop for clang.
* Moved gpu_shape to gpu transformer.
* Removed no longer necessary headers.
* Added stdexcept header to gpu_shape.hpp
* Coordinate -> GPUShape
* Refactored replace_slice into CudaKernelBuilder. Simplified allocations using new GPUAllocator and GPUMemoryManager.
* Refactored allocations to make use of primitive emitter. Now memory primitives are registered at compile time and the gpu memory address is resolved at runtime by invoking the primitive.
* Changed check on 64-bit shape to check if high bits are set.
* Added const qualifier to data being copied in GPUAllocator::reserve_argspace
* Replaced runtime host-to-device memcpys with GPUAllocator reservations in order to move them to compile time.
* Forgot to remove no longer necessary buffer freeing from op emitters.
* Removed replace_slice.
* Removed more replace_slice diffs.
* Updated replace_slice op to utilize GPUShape and GPUMemoryManager.
* Added back missing changes after timeline resolution.
* Added spacing between functions in GPUShape and boolean operators in shape.hpp.
* Template parameters are UPPER_SNAKE_CASE.
* Added unit tests for GPUMemoryManager and added checks that ensure the device memory is allocated prior to address resolution by the memory primitives. Also exposed the allocation size of the memory manager.
* Return type of shape_size should be large enough to encapsulate the full stride of the tensor. This should be 64 bits wide regardless of the underlying value_type of the ShapeType.
* Upstreaming changes to shape_size (which returns size_t).
* cuDNN softmax impl. for all-axis activation.
* Added catch for per-axis activations.
* Removed commented headers.
* Added explicit function for queueing kernel argument data rather than inline in the reservation function per @fengleitian's recommendation.
* Add softmax cuda kernel. It relies on atomic memory addition to global memory; this will add contention and should be optimized in the future. A multilevel reduction can be found in cs/gpu_softmax_cuda_shfl but it requires some further engineering.
* Refactored reduce coordinate transform code into a helper and applied it to broadcast. Broadcast added to CUDAEmitter; now supports multiple non-consecutive axes.
* Removed change to data_types variable and updated/removed comments.
* Refactored softmax into the emission of two fused elementwise collective ops. Added fused elementwise + collective kernels. Softmax is then just the combination of exp_sum_reduce + div_broadcast.
* Added default param to GPUAllocator::reserve_workspace to request memory initialization for each invocation of the memory primitive.
* GPU workspace memory is zero-initialized by default but can be turned off if desired.
* Added template parameter to CUDAEmitter::build_elementwise, REDUCE_OP_TYPE, to specify the ngraph op type to use for the reduction in the fused ew_collective kernel.
* Renamed variables and updated a comment.
* Removed outdated softmax kernel to avoid confusion. Can be added later when atomic reduce is replaced.
* Clang complained about lack of explicit destructor for AxisSet. Since cuda_emitter doesn't need AxisSet specifically, switch to std::set<size_t>. This also has the benefit that in the future, if we wish to emit kernels without ngraph core (for example in a standalone binary via a serialized graph manifest), we don't depend on AxisSet.
* softmax -> broadcast in build_broadcast.
* Separate elementwise and elementwise_collective.
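The exp_sum_reduce + div_broadcast decomposition mentioned above, sketched as plain C++ over the last axis of a rows x cols matrix (illustrative only; the actual implementation emits fused CUDA kernels, and names like softmax_rows are made up here):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    void softmax_rows(const float* in, float* out, size_t rows, size_t cols)
    {
        // pass 1 (exp_sum_reduce): exponentiate while reducing a per-row sum
        std::vector<float> sum(rows, 0.0f);
        for (size_t r = 0; r < rows; r++)
            for (size_t c = 0; c < cols; c++)
            {
                out[r * cols + c] = std::exp(in[r * cols + c]);
                sum[r] += out[r * cols + c];
            }
        // pass 2 (div_broadcast): divide by the row sum broadcast across the row
        for (size_t r = 0; r < rows; r++)
            for (size_t c = 0; c < cols; c++)
                out[r * cols + c] /= sum[r];
    }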
- 06 Jun, 2018 1 commit
Nishant Patel authored
* Support 3-D pool with mkldnn
* Move execute() to test_tools.hpp
- 04 Jun, 2018 1 commit
Robert Kimball authored
* Update cmake files to more modern approach
* disable building libraries that are not required
* handle more build cases
* add versions to backend libs; add start of package target
* add create_backend to backends
* temporary workaround to tbb not linking correctly with gcc
* install codegen lib
* force tbb to link to the cpu backend so that it is available for codegen
* fix clang build error
* fix warning for codegen build
* update cuda header paths
* change error message for opening backend shared library
* set lib path
- 02 Jun, 2018 1 commit
Yixing Lao authored
- 26 May, 2018 1 commit
Jayaram Bobba authored
* Bug fix to graph control logic to always compute output tensors
* Remove stale comments
- 25 May, 2018 2 commits
- 18 May, 2018 1 commit
Nick Korovaiko authored
* use reference kernel for reverse_sequence for int
* move tests
* resolve CI errors
* TEST to NGRAPH_TEST
- 10 May, 2018 2 commits
Yixing Lao authored
* test_control in util
Robert Kimball authored
* Add mechanism for disabling specific backend unit tests from a manifest file. Populate the test manifest files for CPU, GPU and INTERPRETER.
* update docs for new manifest-controlled transformer unit tests
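Roughly, the mechanism works as follows (the test and file names below are illustrative assumptions): shared unit tests are declared through a macro that carries the backend name, and each backend ships a plain-text manifest naming the tests it should skip.

    // In the shared test source, the backend name is substituted at build time:
    NGRAPH_TEST(${BACKEND_NAME}, my_disabled_test)
    {
        // ... test body, skipped for any backend whose manifest lists it ...
    }

    // A backend's unit test manifest then disables tests, one name per line:
    //     my_disabled_test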
- 09 May, 2018 2 commits
Chris Sullivan authored
* Moved emit_elementwise implementation into CUDAEmitter and added logical_and and logical_or ops.
* Updated comment and formatting.
* Added check for multi-output elementwise ops.
Chris Sullivan authored
* Added op::AvgPool cudnn impl. which works for 2-3 spatial dimensions and no/symmetric padding. Enabled tests.
* Added cuda-c implementation of average pool which handles 1-3 spatial dimensions as well as asymmetric padding. This commit also introduces several helper functions for performing fast integer division and fast constant memory access.
* Formatting. Removed bool that was used for testing to force the cuda impl. over cudnn.
* Added CUDNN AvgPoolBackprop implementation.
* Removed inline enum in preference of a helper struct. Removed instances of multiple declarations on a single line. Updated comments.
* Removed _prefix to helper functions in anonymous namespace.
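The fast integer division helpers are presumably in the spirit of division by an invariant multiplier (compare the div64 -> division_by_invariant_multiplication rename elsewhere in this log). A minimal sketch of that technique, assuming GCC/Clang's unsigned __int128 and a fixed divisor d >= 2; this is not the project's actual helper code:

    #include <cstdint>

    // Precompute M = ceil(2^64 / d) once for a fixed divisor d (2 <= d < 2^32).
    uint64_t magic(uint32_t d)
    {
        return UINT64_MAX / d + 1;
    }

    // Then n / d equals the high 64 bits of M * n for all 32-bit n:
    // one multiply and a shift instead of a hardware divide.
    uint32_t fast_div(uint32_t n, uint64_t M)
    {
        return (uint32_t)(((unsigned __int128)M * n) >> 64);
    }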
- 08 May, 2018 2 commits
Fenglei authored
* add concat op
* change to concat
* add more code for gpu concat
* compiles, but with a bug
* add emit_concat_op
* runnable, but with wrong result
* working version
* add some comments
* delete old comments.
* delete old comments.
* remove buggy doxygen comments
Jayaram Bobba authored
* Make temp memory pools static to avoid memory allocation overheads
* Initial implementation for graph control to enable caching and computation reuse
* Added sphinx documentation
* Turned off memory buffer reuse in CPU transformer to support computation reuse. Added unit test
* Change memoizable to cacheable
* Rename variables
- 05 May, 2018 1 commit
Fenglei authored
* add code to gpu reverse
* add reverse emitter and kernel builder
* working version
- 30 Apr, 2018 1 commit
varun-intel authored
* interpreter implementation and tests
* style
* correct
* tolerance
* skip
* type
* cast
* double
* types
* format
* add bn to the inference engine backend
- 27 Apr, 2018 1 commit
Fenglei authored
* add select op, pass data type for each operand
* fix bugs and apply clang-format
* fix index bug
- 25 Apr, 2018 2 commits
Chris Sullivan authored
* Added cudnn batch norm operation to GPU transformer. Brought batchnorm tests out of cpu_tests and into backend_tests. Need to add JIRA ticket for interpreter SKIPS.
* CUDNN batchnorm is implemented. In the ForwardTraining branch CUDNN seems to calculate the batch mean correctly but the batch variance incorrectly. Currently the batchnorm output and mean are calculated correctly for tests:
  * GPU.batchnorm_fprop_b2c2h3w3_mean_var
  * GPU.batchnorm_fprop_b1c2h2w2
  * GPU.batchnorm_fprop_b2c2h2w1
  but the variance calculated for the batches in these tests is incorrectly calculated by CUDNN. Also added an additional test and cleaned up some of the old tests.
* MKLDNN internally utilizes the biased estimate of the population variance and the tests have been crafted to suit MKLDNN. According to the original batchnorm publication (https://arxiv.org/pdf/1502.03167v3.pdf), population (unbiased) statistics should be used for inference, and mini-batch (biased) statistics should be used for training (forward/backward). For the variance this means utilizing the following equations, respectively:
  (biased)   Var[X] = 1/m * Sum_i(x_i - mu)^2     :: used in training
  (unbiased) Var[X] = 1/(m-1) * Sum_i(x_i - mu)^2 :: used in inference
  s.t. x_i are elements of X and m = N*D*H*W. For large batch sizes in inference this may not impact convergence as m >> 1, but for small batch sizes it will. CUDNN internally utilizes the unbiased variance.
* Added Multiply op to forward pass of batchnorm to convert the unbiased variance to a biased one. The op utilizes the blending scaling factors to apply the bias factor.
* Adds emission for the BatchNormBackprop kernel and cleans up the emitter implementation.
* Added hashing to cudnn::batchnorm op.
* Formatting.
* Changed hashing of epsilon in cudnn batchnorm.
* Remove implicit conversion and default case in switch for bn.
* Added skips for IE transformer on batchnorm.
* add cudnn include path to compiler.cpp
* separate two paths
* PR #892 and #825, which were recently merged, both forgot skips for the GPU backend. Adding them in as they are unimplemented ops.
* The allocation and deletion of primitives was occurring in separate translation units with raw C pointers. Because of this, it was not clear that these were being freed appropriately, nor did it indicate ownership of the pointers. In this commit these raw pointers have been converted over to std::unique_ptrs such that the construction/destruction is managed automatically. Furthermore, GPUPrimitiveEmitter::insert now only takes an r-value reference, requiring move semantics to indicate that when inserting a primitive, the GPUPrimitiveEmitter takes ownership of the pointer. All instances of primitive creation have been modified.
* CUDNN_SAFE_CALL
* Removed redundant comment and made variable names more verbose.
* Change from conditionals to case-switch in pooling to conform to batchnorm per @fengleitian's suggestion.
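Concretely, the two estimators above differ only by a scale factor, so biased = unbiased * (m-1)/m, which is the correction the added Multiply op applies. A plain C++ sketch of the reference math (illustrative only, not the nGraph code):

    #include <cstddef>

    // biased   (training):  Var[X] = 1/m     * Sum_i (x_i - mu)^2
    // unbiased (inference): Var[X] = 1/(m-1) * Sum_i (x_i - mu)^2
    double variance(const double* x, size_t m, double mu, bool biased)
    {
        double acc = 0.0;
        for (size_t i = 0; i < m; i++)
            acc += (x[i] - mu) * (x[i] - mu);
        return biased ? acc / m : acc / (m - 1);
    }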
Fenglei authored
* add cudnn include path to compiler.cpp
* separate two paths
* Skipping one_hot tests for CPU as CI is failing. JIRA bug report: https://jira01.devtools.intel.com/browse/NGRAPH-1682.
- 24 Apr, 2018 1 commit
Robert Kimball authored
* get all ops working
* enable autodiff tests for IE backend
- 23 Apr, 2018 2 commits
Adam Procter authored
* Add logical-and, logical-or ops
* Restore accidentally-deleted test
* add new ops to IE backend
Jayaram Bobba authored
* Enable users to request default/row-major layouts on result nodes
* copy default layout attribute when copying the result ops
* Result nodes cannot be replaced; use direct graph manipulation instead
* Add unit test to verify default layouts on result nodes when requested
- 21 Apr, 2018 2 commits
Adam Straw authored
* ie backend and manager with passing unit tests except for select/function
* fix function_call and select
* simplify implementation by removing support for convert and select
* remove manager
Nishant Patel authored
* Support Concat with mkldnn (two inputs)
* Support concat with mkldnn (multiple inputs)
* Address feedback
* Remove unused variable
* Allow rank-two tensor to mkldnn for concat & add a test case for 2D inputs
* Add mkldnn_any layout to concat
* Make API changes to get consistent with master