- 21 Apr, 2018 3 commits
-
Adam Straw authored
* ie backend and manager with passing unit tests except for select/function
* fix function_call and select
* simplify implementation by removing support for convert and select
* remove manager
-
Chris Sullivan authored
This better supports non-Ubuntu/Debian systems.
-
Nishant Patel authored
* Support Concat with mkldnn (two inputs)
* Support concat with mkldnn (multiple inputs)
* Address feedback
* Remove unused variable
* Allow rank two tensor to mkldnn for concat & add a test case for 2D inputs
* Add mkldnn_any layout to concat
* Make API changes to be consistent with master
-
- 20 Apr, 2018 4 commits
-
Sang Ik Lee authored
Apply -pie only to executables, not shared libraries. This change removes the following warning: `ld: warning: -pie being ignored. It is only used when linking a main executable`
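As an illustration only (a hypothetical sketch, not the actual commit diff), one way to restrict `-pie` to executable link lines in CMake is to add it to the executable linker flags rather than the shared-library ones:

```cmake
# Hypothetical sketch: pass -pie only when linking main executables.
# Adding it to CMAKE_SHARED_LINKER_FLAGS as well would trigger
# "ld: warning: -pie being ignored" for every shared library link.
if(APPLE)
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,-pie")
endif()
```

Because `CMAKE_SHARED_LINKER_FLAGS` is left untouched, shared-library links no longer see the flag at all.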
-
Michał Karzyński authored
Force test to fail if any env fails.
-
Robert Kimball authored
* Move runtime::Manager functionality into runtime::Backend
* Remove unused files
* remove obsolete function
-
L.S. Cook authored
-
- 18 Apr, 2018 5 commits
-
Robert Kimball authored
* remove obsolete classes
-
Sang Ik Lee authored
* Remove usage of CMAKE_MAKE_PROGRAM as it slows down parallel build
* Make make properly propagate to child and add back targeted build.
* Revert "Make make properly propagate to child and add back targeted build." This reverts commit b4b1d8db0f0d42850e53d4e0f773261c292ccaf6.
-
Chris Sullivan authored
* cuda_emitter::build_pad now utilizes pad_value.
* Added TypeInfo class for dispatching c-type information from the underlying ngraph element::Type. Adjusted test to use all_close when comparing floating point values (max_pool_2d_1channel_1image_overpadded).
* Refactored max_pool_1d into cuda_emitter so that numeric_limits<c_type>::lowest() could be used for initial max value. Test max_pool_2d_1channel_1image_padded_negative_values now enabled and passes.
* Removed old function and switched to size_t to match ngraph.
* Added virtual dtor.
* Adding support for interior padding. All op::Pad functionality is now included.
* More info in runtime_error for checking of tensor dimensions. Removed commented code.
-
Nick Korovaiko authored
* CPU weight fusion initial version
* add tests for weight_fusion
* address @jbobba's feedback
* before cleaning up convolution_weight_optimization.cpp
* clean up, rename, fix perms, fix format
-
Louis Feng authored
-
- 17 Apr, 2018 3 commits
-
Robert Kimball authored
* reenable unit test
-
arogowie-intel authored
- Set default input axes order.
-
arogowie-intel authored
-
- 16 Apr, 2018 8 commits
-
Robert Kimball authored
* remove tensor_call from backends
* remove obsolete methods
-
Adam Procter authored
-
Robert Kimball authored
-
Jaikrishnan Menon authored
* CMake: Allow build target arch to be overridden
* Add DNGRAPH_TARGET_ARCH option to install docs
-
Nick Korovaiko authored
* get_input_op -> get_argument
* more replacing
* more replacing2
-
Fenglei authored
-
tsocha authored
* Update default paths in setup.py
* Update default arguments in tox
-
Robert Kimball authored
* remove obsolete
* change to use new Backend API
* rename parameter
-
- 13 Apr, 2018 7 commits
-
Robert Kimball authored
* remove deprecated
* remove all legacy Backend API usage, remove deprecated files
* pull in changes from master
* fix GPU calls
* disable tests in convolution generator
* update per PR comments. Enable performance counter feature.
* update per PR comments
* fix build error
* fix conditionally compiled test :(
-
Scott Cyphers authored
* BatchNorm documentation
* Fix typo, install URL
* Switch to desired BatchNorm
-
Nick Korovaiko authored
-
DawnStone authored
fixed variable settings in contrib/docker/make-dimage.sh script
-
arogowie-intel authored
* Add python wrapper for nGraph Reduce operation.
  - Add UT.
* Refactoring.
  - Add UT case with default reduction on all axes.
* Extend `reduce` operation signature to also accept `Function` object.
  - Add UT case.
* Fix formatting errors.
-
Robert Kimball authored
-
Chris Sullivan authored
* Begin prototype of cudnn_emitter.
* Added GPURuntimeContext to gpu_external_function for passing through to JIT functions.
* gpu_emitters now utilize gpu runtime context.
* Moved cublas and cudnn handles into GPURuntimeContext pointer and out of callframe EntryPoint.
* Added CUDNNEmitter, comparable to MKLDNNEmitter, which allows for cudnn kernels to be defined via lambda primitives that are emitted and subsequently called during graph execution. An example implementation is provided for op::Sum.
* Added GPURuntimeContext to gpu_external_function for passing through to JIT functions.
* gpu_emitters now utilize gpu runtime context.
* Moved cublas and cudnn handles into GPURuntimeContext pointer and out of callframe EntryPoint.
* GPURuntimeContext should be stored as unique_ptr in external function.
* GPURuntimeContext should be stored as unique_ptr in external function.
* Extract raw pointer from unique for cudnn_emitter.
* Removing unrelated code from PR.
* GPURuntimeContext needs to be a strict C interface in case the native compiler and clang are utilizing different glibc ABIs. Updated to reflect this.
* Added cudnn::primitive typedef for better readability.
* Moved allocation of CudaFunctionPool to external function so that it is available during gpu emission.
* Fixed too-late initialization of cudart.
* Fixed too-late initialization of cudart.
* CUDNNEmitter moved into superset class GPUPrimitiveEmitter. The GPUPrimitiveEmitter handles the emission of all gpu primitives, including cudnn, cuda, and cublas. CUBLASEmitter support not yet included.
* Added unordered_map for caching primitives in the gpu_emitter.
* Added dtor to GPUPrimitiveEmitter to clean up compiled functions.
* Adding back a serialized model graph that was accidentally removed.
* Added a few additional helpers to use ngraph::row_major_strides.
* added whitespace per @fengleitian's comment
* added whitespace per @fengleitian's comment
* Remove implicit type conversions from size_t to int.
* Add op::MaxPool, op::MaxPoolBackprop and op::Pad to GPU transformer (#817)
* Added pooling for 1 and 2 dimensions. 1d uses a cuda kernel and 2d utilizes cudnn. Padding is not yet supported.
* Normalized call signature on gpu emission for 1d max pool. Added a few comments.
* Max pool backprop impl. in progress. Amend this commit.
* Max pool backprop implemented. Note that cuDNN requests the output tensor for the maxpool operation but it is not required for computation.
* Formatting and invocation for maxpool changed.
* Fixed too-late initialization of cudart.
* Added padding kernel that is used with maxpool. Need to investigate remaining tests.
* Changed dimensionality check to correctly determine if data is 1d or not.
* Added 3d MaxPooling (forward), verified by forcing 2d case to use Nd pooling routines.
* Added 3d MaxPooling (backward), verified by forcing 2d case to use Nd pooling routines.
* Moved cudnn prologues for maxpool into ngraph runtime and out of primitive so that the only execution occurring on the JIT runtime is the evaluation of the op kernel.
* Refactored forward and backward pooling into single CUDNNEmitter::build_pooling interface with a runtime switch to determine if the op is forward or backward propagation.
* Cache preconstructed cudnn kernel for maxpool if it has already been constructed.
* Forgot to add padding arrays back into cudnn kernel for MaxPool in the 2d case.
* Fixed namespace issues and use join(...,'_')
* Refactored 4d/Nd tensor descriptor builder into single function.
* Changed conditionals and comments. Now throws if MaxPool on more than 3 spatial dimensions is requested.
* Fixed forward declare for GPURuntimeContext (class -> struct).
* Clang complains about missing braces on brace-initializer. Fixed implicit conversions.
* Fixed implicit conversions (clang).
* Reverting changes on autodiff test for maxpool. @Krovatkin will update later.
-
- 12 Apr, 2018 6 commits
-
Jaikrishnan Menon authored
-
Fenglei authored
* add slice op, first version
* change size to output size
* fix bugs
* working version
* use existing function for join and strides
* clang format
* revert accidental change
-
Nick Korovaiko authored
* add a getter for root node
* recurrent graph rewrite
* fix perms, rename match_root -> get_match_root
* fix comp errors
* make match_root return the topmost match; fix tests
-
Fenglei authored
* add convolution in progress
* enable 1 test
* convolution in progress
* use filter descriptor
* filter descriptor bug fix
* tensor format
* add missed dimension calculator
* forward convolution 4d without dilation and padding working
* data dilation (deconvolution) and enable some test
* add backprop convolution data and filter
* backprop can compile
* pass unit test, but still have problem on padding
* 2d, symmetric padding, no data dilation works now
* clean up code
* extend gpu convolution to nd
* fix some bugs
* working version for up to 3d convolution, code format.
* remove unnecessary changes
* add restriction for data dilation and asymmetric padding
* clang format
* support up to 3D convolution for now
* change comments to not implemented
* change comments to not implemented
* add query for additional GPU workspace for convolution
* clang format
* code format
* using row_major_strides
* using join
* fix bug for join
* refactor dimension calculation
-
tsocha authored
* Enable BatchNorm op
* Enable function call op
* Enable get output element op
-
Jaikrishnan Menon authored
-
- 10 Apr, 2018 4 commits
-
Yixing Lao authored
* new backend API in graph partition
* update API
-
Matthew Brookhart authored
-
Nick Korovaiko authored
* zero dimension tensor elimination init
* more ops + refactor + tests
* revert pattern.cpp
* add internal zero-length test
* address Scott's feedback
* fix comp errors
* proper static init
* get rid of unique-ptr
* refactor hashmap into virtual get_default_values on op classes
* fix formatting
-
Robert Kimball authored
* back out api change
-