- 21 Oct, 2019 1 commit
-
-
Yashas Samaga B L authored
CUDA backend for the DNN module

* stub cuda4dnn design
* minor fixes for tests and doxygen
* add csl public api directory to module headers
* add low-level CSL components
* add high-level CSL components
* integrate csl::Tensor into backbone code
* switch to CPU iff unsupported; otherwise, fail on error
* add fully connected layer
* add softmax layer
* add activation layers
* support arbitrary rank TensorDescriptor
* pass input wrappers to `initCUDA()`
* add 1d/2d/3d-convolution
* add pooling layer
* reorganize and refactor code
* fixes for gcc, clang and doxygen; remove cxx14/17 code
* add blank_layer
* add LRN layer
* add rounding modes for pooling layer
* split tensor.hpp into tensor.hpp and tensor_ops.hpp
* add concat layer
* add scale layer
* add batch normalization layer
* split math.cu into activations.cu and math.hpp
* add eltwise layer
* add flatten layer
* add tensor transform api
* add asymmetric padding support for convolution layer
* add reshape layer
* fix rebase issues
* add permute layer
* add padding support for concat layer
* refactor and reorganize code
* add normalize layer
* optimize bias addition in scale layer
* add prior box layer
* fix and optimize normalize layer
* add asymmetric padding support for pooling layer
* add event API
* improve pooling performance for some padding scenarios
* avoid over-allocation of compute resources to kernels
* improve prior box performance
* enable layer fusion
* add const layer
* add resize layer
* add slice layer
* add padding layer
* add deconvolution layer
* fix channelwise ReLU initialization
* add vector traits
* add vectorized versions of relu, clipped_relu, power
* add vectorized concat kernels
* improve concat_with_offsets performance
* vectorize scale and bias kernels
* add support for multi-billion element tensors
* vectorize prior box kernels
* fix address alignment check
* improve bias addition performance of conv/deconv/fc layers
* restructure code for supporting multiple targets
* add DNN_TARGET_CUDA_FP64
* add DNN_TARGET_FP16
* improve vectorization
* add region layer
* improve tensor API, add dynamic ranks
  1. use ManagedPtr instead of a Tensor in backend wrapper
  2. add new methods to tensor classes
     - size_range: computes the combined size for a given axis range
     - tensor span/view can be constructed from a raw pointer and shape
  3. the tensor classes can change their rank at runtime (previously rank was fixed at compile-time)
  4. remove device code from tensor classes (as they are unused)
  5. enforce strict conditions on tensor class APIs to improve debugging ability
* fix parametric relu activation
* add squeeze/unsqueeze tensor API
* add reorg layer
* optimize permute and enable 2d permute
* enable 1d and 2d slice
* add split layer
* add shuffle channel layer
* allow tensors of different ranks in reshape primitive
* patch SliceOp to allow Crop Layer
* allow extra shape inputs in reshape layer
* use `std::move_backward` instead of `std::move` for insert in resizable_static_array
* improve workspace management
* add spatial LRN
* add nms (cpu) to region layer
* add max pooling with argmax (and a fix to limits.hpp)
* add max unpooling layer
* rename DNN_TARGET_CUDA_FP32 to DNN_TARGET_CUDA
* update supportBackend to be more rigorous
* remove stray include that was preventing the non-CUDA build
* include op_cuda.hpp outside the #if condition
* refactoring, fixes and many optimizations
* drop DNN_TARGET_CUDA_FP64
* fix gcc errors
* increase max. tensor rank limit to six
* add Interp layer
* drop custom layers; use BackendNode
* vectorize activation kernels
* fixes for gcc
* remove wrong assertion
* fix broken assertion in unpooling primitive
* fix build errors in non-CUDA build
* completely remove workspace from public API
* fix permute layer
* enable accuracy and perf. tests for DNN_TARGET_CUDA
* add asynchronous forward
* vectorize eltwise ops
* vectorize fill kernel
* fixes for gcc
* remove CSL headers from public API
* remove csl header source group from cmake
* update min. cudnn version in cmake
* add numerically stable FP32 log1pexp
* refactor code
* add FP16 specialization to cudnn based tensor addition
* vectorize scale1 and bias1 + minor refactoring
* fix doxygen build
* fix invalid alignment assertion
* clear backend wrappers before allocateLayers
* ignore memory lock failures
* do not allocate internal blobs
* integrate NVTX
* add numerically stable half precision log1pexp
* fix indentation, follow coding style, improve docs
* remove accidental modification of IE code
* Revert "add asynchronous forward"
  This reverts commit 1154b9da9da07e9b52f8a81bdcea48cf31c56f70.
* [cmake] throw error for unsupported CC versions
* fix rebase issues
* add more docs, refactor code, fix bugs
* minor refactoring and fixes
* resolve warnings/errors from clang
* remove haveCUDA() checks from supportBackend()
* remove NVTX integration
* changes based on review comments
* avoid exception when no CUDA device is present
* add color code for CUDA in Net::dump
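For context, a minimal sketch (not from the commit itself) of how the new backend and targets are selected from user code; the model file name and input shape are placeholder assumptions:

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

int main()
{
    // "model.onnx" is a placeholder; any format supported by readNet works.
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");

    // Route inference through the CUDA backend introduced by this change.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);        // FP32 target
    // net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16); // half-precision target

    // Placeholder NCHW input blob.
    std::vector<int> shape = {1, 3, 224, 224};
    cv::Mat blob(shape, CV_32F, cv::Scalar(0));
    net.setInput(blob);

    // Per the commit message, unsupported configurations fall back to CPU.
    cv::Mat out = net.forward();
    return 0;
}
```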
-
- 20 Oct, 2019 1 commit
-
-
Alexander Alekhin authored
-
- 19 Oct, 2019 1 commit
-
-
TH3CHARLie authored
-
- 18 Oct, 2019 1 commit
-
-
Dmitry Matveev authored
* G-API-NG/Streaming: Introduced a Streaming API
  Now a GComputation can be compiled in a special "streaming" way and then "played" on a video stream. Currently only VideoCapture is supported as an input source.
* G-API-NG/Streaming: added threading & real streaming
* G-API-NG/Streaming: Added tests & docs on Copy kernel
  - Added very simple pipeline tests; not all data types are covered yet (in fact, only GMat is tested now)
  - Started testing non-OCV backends in streaming mode
  - Added required fixes to the Fluid backend; it likely works OK now
  - Added required fixes to the OCL backend, which is now likely broken
  - Also added a UMat-based (OCL) version of the Copy kernel
* G-API-NG/Streaming: Added own concurrent queue class
  - Used only if TBB is not available
* G-API-NG/Streaming: Fixed various issues
  - Added missing header to CMakeLists.txt
  - Fixed various CI issues and warnings
* G-API-NG/Streaming: Fixed a compile-time GScalar queue deadlock
  - GStreamingExecutor blindly created island input queues for compile-time (value-initialized) GScalars which didn't have any producers, making island actor threads wait there forever
* G-API-NG/Streaming: Dropped own version of the Copy kernel
  One was added into master already
* G-API-NG/Streaming: Addressed GArray<T> review comments
  - Added tests on mov()
  - Removed unnecessary changes in garray.hpp
* G-API-NG/Streaming: Added Doxygen comments to new public APIs
  Also fixed some other comments in the code
* G-API-NG/Streaming: Removed debug info, added some comments & renamed vars
* G-API-NG/Streaming: Fixed own-vs-cv abstraction leak
  - Now every island is triggered with own:: (instead of cv::) data objects as inputs
  - Changes in the Fluid backend required to support cv::Mat/Scalar were reverted
* G-API-NG/Streaming: use holds_alternative<> instead of index/index_of test
  - Also fixed regression test comments
  - Also added metadata check comments for GStreamingCompiled
* G-API-NG/Streaming: Made start()/stop() more robust
  - Fixed various possible deadlocks
  - Unified the shutdown code
  - Added more tests covering different corner cases on start/stop
* G-API-NG/Streaming: Finally fixed Windows crashes
  In fact the problem wasn't Windows-only. The island thread popped data from queues without preserving the Cmd objects and without taking ownership of the acquired data, so by the time islands started to process the data it might already have been freed. The Linux version worked only by chance.
* G-API-NG/Streaming: Fixed (I hope so) Windows warnings
* G-API-NG/Streaming: fixed typos in internal comments
  - Also added some more explanation on the Streaming/OpenCL status
* G-API-NG/Streaming: Added more unit tests on streaming
  - Various start()/stop()/setSource() call flow combinations
* G-API-NG/Streaming: Added tests on own concurrent bounded queue
* G-API-NG/Streaming: Added more tests on various data types, + more
  - Vector/Scalar passed as input
  - Vector/Scalar passed in-between islands
  - Some more assertions
  - Also fixed a deadlock problem when inputs are mixed (1 constant, 1 stream)
* G-API-NG/Streaming: Added tests on output data type handling
  - Vector
  - Scalar
* G-API-NG/Streaming: Fixed test issues with IE + Windows warnings
* G-API-NG/Streaming: Decoupled G-API from videoio
  - The core G-API no longer uses a cv::VideoCapture directly; it comes in via an abstract interface
  - Polished the setSource()/start()/stop() semantics a little; setSource() is now mandatory before ANY call to start()
* G-API-NG/Streaming: Fix STANDALONE build (errors brought by render)
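For illustration, a minimal sketch (not taken from the commit) of the streaming compile/setSource/start/pull flow this change introduces; the graph, frame metadata, and "video.avi" path below are placeholder assumptions:

```cpp
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/streaming/cap.hpp>  // VideoCapture-based stream source

int main()
{
    // A trivial graph: blur the input frame.
    cv::GMat in;
    cv::GMat out = cv::gapi::blur(in, cv::Size(3, 3));
    cv::GComputation pipeline(cv::GIn(in), cv::GOut(out));

    // Compile in streaming mode for 768x576 BGR frames (placeholder metadata).
    cv::GStreamingCompiled cc =
        pipeline.compileStreaming(cv::GMatDesc{CV_8U, 3, cv::Size(768, 576)});

    // setSource() must be called before start().
    cc.setSource(cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>("video.avi"));
    cc.start();

    cv::Mat frame;
    while (cc.pull(cv::gout(frame)))
    {
        // process `frame` ...
    }
    cc.stop();
    return 0;
}
```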
-
- 17 Oct, 2019 1 commit
-
-
atalaman authored
G-API: Implement OpenCV render backend

* Implement OpenCV render backend
* Fix comment from review
* Add comment
* Add wrappers for kernels
* Fix comments from review
* Fix comment from review
-
- 15 Oct, 2019 1 commit
-
-
Alexander Alekhin authored
-
- 14 Oct, 2019 2 commits
-
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
- 12 Oct, 2019 2 commits
-
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
- 11 Oct, 2019 1 commit
-
-
Alexander Alekhin authored
-
- 10 Oct, 2019 2 commits
-
-
OrestChura authored
- unnecessary extra semicolon after member function definition removed
-
Maksim Shabunin authored
-
- 09 Oct, 2019 12 commits
-
-
Alexander Alekhin authored
-
Alexander Alekhin authored
OpenCV 4.1.2
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Alexander Alekhin authored
OpenCV 3.4.8
-
Maksim Shabunin authored
* Disable posix_memalign by default
* core: fix memalign parameter handling
-
Alexander Smorkalov authored
-
Talamanov, Anatoliy authored
-
Marcin Tolysz authored
* CUDA + OpenGL on ARM
  There are several ways to get OpenCV to compile on the Tegra (NVIDIA Jetson) platform, but they mostly modify the CUDA (8, 9, 10, ...) source code; this change fixes it for all installations (https://devtalk.nvidia.com/default/topic/1007290/jetson-tx2/building-opencv-with-opengl-support-/post/5141945/#5141945 et al.). The approach is exactly the same as the one proposed there, but the code change happens in OpenCV.
* Updated: the linked post mentions CUDA 8 + 9; I have CUDA 10 + 10.1 (and can confirm it is still defined this way). NVIDIA is probably using some other "secret" backend with Jetson.
-
Sean McBride authored
-
- 08 Oct, 2019 5 commits
-
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Alexander Alekhin authored
- _mm256_bslli_epi128() works in GCC 4.9.3+ only
- Android NDK r10 doesn't support this intrinsic
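As background, a small self-contained sketch (compile with -mavx2): _mm256_slli_si256 is the older, more widely supported spelling of the same per-lane byte shift (VPSLLDQ) as _mm256_bslli_epi128, so it is one portable alternative on toolchains that lack the newer alias. The helper name below is invented for illustration:

```cpp
#include <immintrin.h>
#include <cstdio>

// Shift each 128-bit lane of `v` left by 8 bytes.
static inline __m256i shift_lanes_left_8_bytes(__m256i v)
{
    return _mm256_slli_si256(v, 8);   // same operation as _mm256_bslli_epi128(v, 8)
}

int main()
{
    __m256i v = _mm256_set1_epi8(1);
    v = shift_lanes_left_8_bytes(v);

    alignas(32) unsigned char out[32];
    _mm256_store_si256(reinterpret_cast<__m256i*>(out), v);
    std::printf("%d %d\n", out[0], out[8]);  // prints "0 1": low 8 bytes of each lane cleared
    return 0;
}
```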
-
Pinaev Danil authored
* Add a few more tests on `any`
  Added tests:
  - get_ref_to_val_from_any
  - update_val_via_ref
* Style fix
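A rough sketch of what the new test names suggest is being exercised: taking a reference to the value stored in `cv::util::any` and mutating it through that reference. The header path and the reference-returning `any_cast` signature are assumptions, not quoted from the change:

```cpp
#include <opencv2/gapi/util/any.hpp>
#include <cassert>

int main()
{
    cv::util::any a = 42;

    // get_ref_to_val_from_any: obtain a reference to the stored int
    int& ref = cv::util::any_cast<int>(a);
    assert(ref == 42);

    // update_val_via_ref: writing through the reference updates the any
    ref = 7;
    assert(cv::util::any_cast<int>(a) == 7);
    return 0;
}
```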
-
- 07 Oct, 2019 9 commits
-
-
Nikita Shulga authored
Reads and writes to volatile bool are not guaranteed to be atomic.
-
Nikita Shulga authored
This ensures uniform behavior on any C++11-compliant compiler.
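An illustrative sketch (not the patch itself) of the pattern these two commits describe: replacing a `volatile bool` flag with `std::atomic<bool>` so cross-thread reads and writes are well-defined under C++11. The variable and function names are placeholders:

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> stop_requested{false};  // was: volatile bool stop_requested;

void worker()
{
    // An acquire load is guaranteed to observe the release store below.
    while (!stop_requested.load(std::memory_order_acquire))
    {
        // do work ...
    }
}

int main()
{
    std::thread t(worker);
    stop_requested.store(true, std::memory_order_release);
    t.join();
    return 0;
}
```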
-
Alexander Alekhin authored
-
Sayed Adel authored
* core: rework and optimize SIMD implementation of dotProd
  - add new universal intrinsics v_dotprod[int32], v_dotprod_expand[u&int8, u&int16, int32], v_cvt_f64(int64)
  - add a boolean param to all v_dotprod & _expand intrinsics that changes the addition order between pairs on some platforms, in order to reach the maximum optimization when only the sum across all lanes matters
  - fix clang build on ppc64le
  - support wide universal intrinsics for dotProd_32s
  - remove raw SIMD and activate universal intrinsics for dotProd_8
  - implement SIMD optimization for dotProd_s16 & u16
  - extend performance test data types of dotprod
  - fix GCC VSX workaround of vec_mule and vec_mulo (in little-endian mode they must be swapped)
  - optimize v_mul_expand(int32) on VSX
* core: remove the boolean param from v_dotprod & _expand and implement v_dotprod_fast & v_dotprod_expand_fast
  These changes were made based on "terfendail"'s review.
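For orientation, a minimal sketch of a dot product written with the OpenCV universal intrinsics this commit reworks; it uses the long-standing fixed-width types and the pre-existing v_dotprod(int16) rather than the new _expand/_fast variants, and the function name is invented for illustration:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/hal/intrin.hpp>

double dotProd16s(const short* a, const short* b, int n)
{
    double sum = 0;
    int i = 0;
#if CV_SIMD128
    for (; i <= n - 8; i += 8)
    {
        cv::v_int16x8 va = cv::v_load(a + i);
        cv::v_int16x8 vb = cv::v_load(b + i);
        // Multiply adjacent int16 pairs and add each pair into a 32-bit lane.
        cv::v_int32x4 prod = cv::v_dotprod(va, vb);
        sum += cv::v_reduce_sum(prod);  // horizontal sum of the four lanes
    }
#endif
    // Scalar tail for the remaining elements.
    for (; i < n; ++i)
        sum += (double)a[i] * b[i];
    return sum;
}
```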
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Alexander Alekhin authored
-
Marcin Tolysz authored
```
#define NPP_VER_MAJOR 10
#define NPP_VER_MINOR 2
#define NPP_VER_PATCH 0
#define NPP_VER_BUILD 243

#define NPP_VERSION (NPP_VER_MAJOR * 1000 + \
                     NPP_VER_MINOR * 100 + \
                     NPP_VER_PATCH)
```
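A self-contained illustration (not part of the patch) of why the composite value matters: NPP_VERSION packs major/minor/patch into one integer (10*1000 + 2*100 + 0 = 10200 above), which lets version-dependent NPP code be guarded at compile time. The macro values below simply mirror the quote; real builds take them from the NPP headers, and the 10.1 threshold is an arbitrary example:

```cpp
#include <cstdio>

#define NPP_VER_MAJOR 10
#define NPP_VER_MINOR 2
#define NPP_VER_PATCH 0
#define NPP_VERSION (NPP_VER_MAJOR * 1000 + \
                     NPP_VER_MINOR * 100 + \
                     NPP_VER_PATCH)

int main()
{
#if NPP_VERSION >= (10 * 1000 + 1 * 100)   // example threshold: NPP >= 10.1
    std::printf("NPP version %d: >= 10.1 code path\n", NPP_VERSION);
#else
    std::printf("NPP version %d: older code path\n", NPP_VERSION);
#endif
    return 0;
}
```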
-
Suleyman TURKMEN authored
-
- 05 Oct, 2019 1 commit
-
-
Alexander Alekhin authored
-