- 15 Jun, 2017 2 commits
  - dkurt authored
  - Vadim Pisarevsky authored
    * Rewrote the following layers to be [much] more efficient: convolution, fully connected, activations (ReLU, tanh, ...) and LRN. Optional AVX optimizations are used for the first two.
    * Eliminated trailing whitespace.
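To illustrate the kind of AVX optimization mentioned above, here is a minimal sketch of a vectorized dot product, the inner loop of a fully connected layer. It is a hypothetical example, not the actual OpenCV kernel; only the AVX intrinsics themselves are real.

```cpp
// Hypothetical sketch of an AVX-vectorized dot product, the inner loop of a
// fully connected layer (one output = dot(weights_row, input)). Not the
// actual OpenCV dnn kernel; compile with -mavx.
#include <immintrin.h>

float dotAVX(const float* w, const float* x, int n)
{
    __m256 acc = _mm256_setzero_ps();
    int i = 0;
    for (; i <= n - 8; i += 8)  // process 8 floats per AVX register
        acc = _mm256_add_ps(acc, _mm256_mul_ps(_mm256_loadu_ps(w + i),
                                               _mm256_loadu_ps(x + i)));
    float buf[8];
    _mm256_storeu_ps(buf, acc);  // spill the accumulator for a horizontal sum
    float sum = buf[0] + buf[1] + buf[2] + buf[3]
              + buf[4] + buf[5] + buf[6] + buf[7];
    for (; i < n; i++)           // scalar tail for n not divisible by 8
        sum += w[i] * x[i];
    return sum;
}
```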
- 09 Jun, 2017 1 commit
  - Aleksandr Rybnikov authored
    * Reuse deep learning output blobs.
    * Changed the order of iterating through blobs when searching for reusable memory; refactored a little.
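A minimal sketch of the blob-reuse idea described above, assuming a shape-keyed pool. The class and method names here are hypothetical; the real bookkeeping inside the dnn module is more involved.

```cpp
// Hypothetical sketch of shape-keyed blob reuse: outputs whose shape matches
// a previously released blob share its memory instead of allocating afresh.
#include <opencv2/core.hpp>
#include <map>
#include <vector>

typedef std::vector<int> MatShape;

class BlobPool
{
    std::multimap<MatShape, cv::Mat> free_;  // released blobs, keyed by shape
public:
    cv::Mat get(const MatShape& shape)
    {
        auto it = free_.find(shape);
        if (it != free_.end())               // reuse a released allocation
        {
            cv::Mat m = it->second;
            free_.erase(it);
            return m;
        }
        return cv::Mat((int)shape.size(), shape.data(), CV_32F);  // fresh blob
    }
    void release(const MatShape& shape, const cv::Mat& m)
    {
        free_.emplace(shape, m);             // make it available for reuse
    }
};
```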
- 06 Jun, 2017 1 commit
  - Aleksandr Rybnikov authored
- 05 Jun, 2017 1 commit
  - dkurt authored
- 26 May, 2017 1 commit
  - Dmitry Kurtaev authored
- 25 May, 2017 1 commit
  - Aleksandr Rybnikov authored
- 24 May, 2017 4 commits
  - dkurt authored
  - Vladislav Sovrasov authored
  - Vladislav Sovrasov authored
  - Aleksandr Rybnikov authored
- 23 May, 2017 2 commits
  - Aleksandr Rybnikov authored
    * Made separate functions for computing output shapes for all layers (see the sketch below).
    * Removed output blob allocation from the layers.
  - dkurt authored
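The 23 May shape-computation change above separates "what shape will the outputs have" from "allocate the outputs". A minimal sketch of such a per-layer shape function, using a hypothetical flatten layer (for the real interface this work evolved into, compare cv::dnn::Layer::getMemoryShapes in later OpenCV releases):

```cpp
// Sketch of a per-layer output-shape function in the spirit of this change.
// The function name and layer are hypothetical; no output blob is allocated.
#include <vector>

typedef std::vector<int> MatShape;

// A flatten layer: [N, C, H, W] -> [N, C*H*W].
static MatShape flattenOutputShape(const MatShape& input)
{
    int total = 1;
    for (size_t i = 1; i < input.size(); i++)
        total *= input[i];
    MatShape out(2);
    out[0] = input[0];  // batch size is preserved
    out[1] = total;     // remaining dims collapse into one
    return out;
}
```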
- 17 May, 2017 1 commit
  - dkurt authored
- 10 May, 2017 1 commit
  - Aleksandr Rybnikov authored
    * Fixed bugs introduced by the dnn refactoring; added new tests for layers.
    * Fixed the Torch tests.
- 25 Apr, 2017 5 commits
  - Vadim Pisarevsky authored
  - Vadim Pisarevsky authored
  - Aleksandr Rybnikov authored
  - Vladislav Sovrasov authored
  - Vadim Pisarevsky authored
    * The first commit in the merged dnn: converted some of the public API from Blob to Mat.
    * Temporarily or permanently removed the OpenCL optimizations, which are not always stable and usually not very efficient; we will likely use Halide instead.
    * Got rid of Blob and BlobShape completely; cv::Mat and std::vector<int> are used instead.
    * Fixed a few compile errors.
    * Got rid of the separate .hpp files with layer declarations; everything now resides in the respective .cpp files.
    * Normalized all the layers' constructors. Since we concentrate on loading deep network layers from files rather than constructing them from scratch, only the SomeLayer::SomeLayer(const LayerParams& params) constructors were retained.
    * Fixed sample compilation.
    * Suppressed doxygen warnings.
    * Tried to fix Python bindings generation for the dnn module; the bindings are temporarily disabled while the module is refactored.
    * Fixed win32/win64 compile errors; removed trailing whitespace.
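The retained constructor convention and the Mat-based API can be illustrated with a short sketch. The layer class below is hypothetical; cv::dnn::LayerParams, cv::Mat and cv::pow are real OpenCV types and functions.

```cpp
// Minimal sketch of the retained convention: a layer is constructed only
// from a LayerParams object parsed out of a network file, and forward()
// works on plain cv::Mat. The class itself is hypothetical.
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>

class PowerLayerSketch
{
public:
    PowerLayerSketch(const cv::dnn::LayerParams& params)
    {
        // Parameters come from the parsed model file, with defaults.
        power_ = params.get<float>("power", 1.0f);
        scale_ = params.get<float>("scale", 1.0f);
    }
    void forward(const cv::Mat& input, cv::Mat& output) const
    {
        cv::pow(input * scale_, power_, output);  // Mat in, Mat out; no Blob
    }
private:
    float power_, scale_;
};
```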
- 29 Mar, 2017 1 commit
  - Aleksandr Rybnikov authored
- 22 Feb, 2017 1 commit
  - Aleksandr Rybnikov authored
- 19 Feb, 2017 1 commit
  - Dmitry Kurtaev authored
    * Getting layer inputs (#1006)
- 15 Feb, 2017 3 commits
- 14 Feb, 2017 1 commit
  - Alexander Alekhin authored
- 09 Feb, 2017 1 commit
  - arrybn authored
- 08 Feb, 2017 1 commit
  - Aleksandr Rybnikov authored
- 26 Jan, 2017 1 commit
  - Vladislav Sovrasov authored
- 25 Jan, 2017 3 commits
  - Alexander Alekhin authored
  - Alexander Alekhin authored
  - Alexander Alekhin authored
- 19 Jan, 2017 2 commits
  - Alexander Alekhin authored
  - Vladislav Sovrasov authored
- 18 Jan, 2017 1 commit
  - arrybn authored
- 10 Jan, 2017 1 commit
  - arrybn authored
- 22 Dec, 2016 1 commit
  - Vadim Pisarevsky authored
    * Moved the BLAS/LAPACK detection scripts to the main repository. Preference is given to an external BLAS over OpenCV's own gemm, which improves the performance of the dnn module considerably.
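A sketch of the dispatch this preference implies, assuming a HAVE_CBLAS build flag. The flag and the wrapper function are illustrative assumptions; cblas_sgemm and cv::gemm are real APIs.

```cpp
// Sketch: use an external BLAS sgemm when the build detected one,
// fall back to OpenCV's built-in cv::gemm otherwise.
#include <opencv2/core.hpp>
#ifdef HAVE_CBLAS
#include <cblas.h>
#endif

static void dnnGemm(const cv::Mat& A, const cv::Mat& B, cv::Mat& C)
{
    CV_Assert(A.type() == CV_32F && B.type() == CV_32F &&
              A.isContinuous() && B.isContinuous() && A.cols == B.rows);
    C.create(A.rows, B.cols, CV_32F);
#ifdef HAVE_CBLAS
    // External BLAS path: typically much faster for the large, dense
    // products that dominate convolution and fully connected layers.
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                A.rows, B.cols, A.cols,
                1.0f, A.ptr<float>(), A.cols,
                B.ptr<float>(), B.cols,
                0.0f, C.ptr<float>(), C.cols);
#else
    cv::gemm(A, B, 1.0, cv::noArray(), 0.0, C);  // built-in fallback
#endif
}
```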
- 23 Nov, 2016 1 commit
  - Aleksandr Rybnikov authored
- 21 Sep, 2016 1 commit
  - Vadim Pisarevsky authored
    * Fixed some compile warnings in dnn and protobuf.
    * Improved convolution layer performance when no BLAS is available by parallelizing the gemmCPU() function in dnn.
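A minimal sketch of the parallelization idea, partitioning the output rows across threads with cv::parallel_for_ (a real OpenCV primitive). The naive triple loop stands in for the actual gemmCPU() body.

```cpp
// Sketch: parallelizing a naive C = A * B over rows of A with
// cv::parallel_for_. Illustrative only; not the actual gemmCPU() code.
#include <opencv2/core.hpp>

static void gemmRowsParallel(const cv::Mat& A, const cv::Mat& B, cv::Mat& C)
{
    CV_Assert(A.type() == CV_32F && B.type() == CV_32F && A.cols == B.rows);
    C.create(A.rows, B.cols, CV_32F);
    cv::parallel_for_(cv::Range(0, A.rows), [&](const cv::Range& r)
    {
        // Each worker computes an independent band of output rows.
        for (int i = r.start; i < r.end; i++)
            for (int j = 0; j < B.cols; j++)
            {
                float s = 0.f;
                for (int k = 0; k < A.cols; k++)
                    s += A.at<float>(i, k) * B.at<float>(k, j);
                C.at<float>(i, j) = s;
            }
    });
}
```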