Commit e340ff9c authored by pengli, committed by Alexander Alekhin

Merge pull request #9114 from pengli:dnn_rebase

add libdnn acceleration to dnn module  (#9114)

* import libdnn code
Signed-off-by: Li Peng <peng.li@intel.com>

* add convolution layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>

* add pooling layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>

* add softmax layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>

* add lrn layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>

* add innerproduct layer ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>

* add HAVE_OPENCL macro
Signed-off-by: Li Peng <peng.li@intel.com>

* fix for convolution ocl
Signed-off-by: Li Peng <peng.li@intel.com>

* enable getUMat() for multi-dimension Mat
Signed-off-by: Li Peng <peng.li@intel.com>

* use getUMat for ocl acceleration
Signed-off-by: Li Peng <peng.li@intel.com>

* use CV_OCL_RUN macro
Signed-off-by: Li Peng <peng.li@intel.com>

* set OPENCL target when it is available

and disable fuseLayer for OCL target for the time being
Signed-off-by: Li Peng <peng.li@intel.com>

* fix innerproduct accuracy test
Signed-off-by: Li Peng <peng.li@intel.com>

* remove trailing space
Signed-off-by: Li Peng <peng.li@intel.com>

* Fixed TensorFlow demo bug.

Root cause: TensorFlow uses a different algorithm than libdnn
to calculate the convolution output dimension.

libdnn no longer calculates the output dimension itself and just uses
the one passed in by the config.

* split gemm ocl file

split it into gemm_buffer.cl and gemm_image.cl
Signed-off-by: Li Peng <peng.li@intel.com>

* Fix compile failure
Signed-off-by: Li Peng <peng.li@intel.com>

* check env flag for auto tuning
Signed-off-by: Li Peng <peng.li@intel.com>

* switch to new ocl kernels for softmax layer
Signed-off-by: Li Peng <peng.li@intel.com>

* update softmax layer

On some platforms the subgroup extension may not work well;
fall back to non-subgroup OCL acceleration.
Signed-off-by: Li Peng <peng.li@intel.com>

* fall back to the CPU path for the FC layer with multiple outputs
Signed-off-by: Li Peng <peng.li@intel.com>

* update output message
Signed-off-by: Li Peng <peng.li@intel.com>

* update fully connected layer

fall back to the gemm API if libdnn returns false
Signed-off-by: Li Peng <peng.li@intel.com>

* Add ReLU OCL implementation

* disable layer fusion for now
Signed-off-by: Li Peng <peng.li@intel.com>

* Add OCL implementation for concat layer
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>

* libdnn: update license and copyrights

Also refine libdnn coding style
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>

* DNN: Don't link OpenCL library explicitly

* DNN: Make default preferableTarget to DNN_TARGET_CPU

Users should set it to DNN_TARGET_OPENCL explicitly if they want to
use OpenCL acceleration.

Also don't fuse layers when using DNN_TARGET_OPENCL

* DNN: refine coding style

* Add getOpenCLErrorString

* DNN: Use int32_t/uint32_t instead of aliases

* Use namespace ocl4dnn to include libdnn things

* remove extra copyTo in softmax ocl path
Signed-off-by: Li Peng <peng.li@intel.com>

* update ReLU layer ocl path
Signed-off-by: Li Peng <peng.li@intel.com>

* Add preferable target property for the Layer class

It is used to indicate the target for layer forwarding,
either the default CPU target or OCL target.
Signed-off-by: Li Peng <peng.li@intel.com>

* Add cl_event based timer for cv::ocl

* Rename libdnn to ocl4dnn
Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>

* use UMat for ocl4dnn internal buffer

Remove allocateMemory, which used clCreateBuffer directly
Signed-off-by: Li Peng <peng.li@intel.com>
Signed-off-by: wzw <zhiwen.wu@intel.com>

* enable buffer gemm in ocl4dnn innerproduct
Signed-off-by: Li Peng <peng.li@intel.com>

* replace int_tp globally for ocl4dnn kernels.
Signed-off-by: wzw <zhiwen.wu@intel.com>
Signed-off-by: Li Peng <peng.li@intel.com>

* create UMat for layer params
Signed-off-by: Li Peng <peng.li@intel.com>

* update sign ocl kernel
Signed-off-by: Li Peng <peng.li@intel.com>

* update image based gemm of inner product layer
Signed-off-by: Li Peng <peng.li@intel.com>

* remove buffer gemm of inner product layer

call cv::gemm API instead
Signed-off-by: Li Peng <peng.li@intel.com>

* change ocl4dnn forward parameter to UMat
Signed-off-by: Li Peng <peng.li@intel.com>

* Refine auto-tuning mechanism.

- Use OPENCV_OCL4DNN_KERNEL_CONFIG_PATH to set cache directory
  for fine-tuned kernel configuration.
  e.g. export OPENCV_OCL4DNN_KERNEL_CONFIG_PATH=/home/tmp,
  the cache directory will be /home/tmp/spatialkernels/ on Linux.

- Define environment OPENCV_OCL4DNN_ENABLE_AUTO_TUNING to enable
  auto-tuning.

- OPENCV_OPENCL_ENABLE_PROFILING is only used to enable profiling
  for the OpenCL command queue. This fixes the basic kernel reporting
  a wrong running time, i.e. 0 ms.

- If creating the cache directory fails, disable auto-tuning.

* Detect and create cache dir on windows
Signed-off-by: Li Peng <peng.li@intel.com>

* Refine gemm-like convolution kernel.
Signed-off-by: Li Peng <peng.li@intel.com>

* Fix redundant swizzleWeights calls when using a cached kernel config.

* Fix "out of resource" bug when auto-tuning too many kernels.

* replace cl_mem with UMat in ocl4dnnConvSpatial class

* OCL4DNN: reduce the tuning kernel candidates.

This patch reduces the tuning candidates by 75% with less
than 2% performance impact on the final result.
Signed-off-by: Zhigang Gong <zhigang.gong@intel.com>

* replace cl_mem with umat in ocl4dnn convolution
Signed-off-by: Li Peng <peng.li@intel.com>

* remove weight_image_ of ocl4dnn inner product

Actually it is unused in the computation
Signed-off-by: Li Peng <peng.li@intel.com>

* Various fixes for ocl4dnn

1. OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel())
2. Ptr<OCL4DNNInnerProduct<float> > innerProductOp
3. Code comments cleanup
4. ignore check on OCL cpu device
Signed-off-by: Li Peng <peng.li@intel.com>

* add build option for log softmax
Signed-off-by: Li Peng <peng.li@intel.com>

* remove unused ocl kernels in ocl4dnn
Signed-off-by: Li Peng <peng.li@intel.com>

* replace ocl4dnnSet with opencv setTo
Signed-off-by: Li Peng <peng.li@intel.com>

* replace ALIGN with cv::alignSize
Signed-off-by: Li Peng <peng.li@intel.com>

* check kernel build options
Signed-off-by: Li Peng <peng.li@intel.com>

* Handle program compilation failure properly.

* Use std::numeric_limits<float>::infinity() for large float numbers

* check ocl4dnn kernel compilation result
Signed-off-by: Li Peng <peng.li@intel.com>

* remove unused ctx_id
Signed-off-by: Li Peng <peng.li@intel.com>

* change clEnqueueNDRangeKernel to kernel.run()
Signed-off-by: Li Peng <peng.li@intel.com>

* change cl_mem to UMat in image based gemm
Signed-off-by: Li Peng <peng.li@intel.com>

* check intel subgroup support for lrn and pooling layer
Signed-off-by: Li Peng <peng.li@intel.com>

* Fix convolution bug if group is greater than 1
Signed-off-by: Li Peng <peng.li@intel.com>

* Set default layer preferableTarget to be DNN_TARGET_CPU
Signed-off-by: Li Peng <peng.li@intel.com>

* Add ocl perf test for convolution
Signed-off-by: Li Peng <peng.li@intel.com>

* Add more ocl accuracy test
Signed-off-by: Li Peng <peng.li@intel.com>

* replace cl_image with ocl::Image2D
Signed-off-by: Li Peng <peng.li@intel.com>

* Fix build failure in elementwise layer
Signed-off-by: Li Peng <peng.li@intel.com>

* use getUMat() to get blob data
Signed-off-by: Li Peng <peng.li@intel.com>

* replace cl_mem handle with ocl::KernelArg
Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(build): don't use C++11, OPENCL_LIBRARIES fix

* dnn(ocl4dnn): remove unused OpenCL kernels

* dnn(ocl4dnn): extract OpenCL code into .cl files

* dnn(ocl4dnn): refine auto-tuning

Auto-tuning is disabled by default; set the OPENCV_OCL4DNN_ENABLE_AUTO_TUNING
environment variable to enable it.

Use a set of pre-tuned configs as the default config when auto-tuning is
disabled. These configs are tuned for Intel GPUs with 48/72 EUs, and for
GoogLeNet, AlexNet and ResNet-50.

If the default config is not suitable, use the first available kernel config
from the candidates. Candidate priority from high to low is the gemm-like
kernel, the IDLF kernel, and the basic kernel.

* dnn(ocl4dnn): pooling doesn't use OpenCL subgroups

* dnn(ocl4dnn): fix perf test

OpenCV has a default 3-second time limit for each performance test.
Warm up the OpenCL backend outside of the perf measurement loop.

* use ocl::KernelArg as much as possible
Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): fix bias bug for gemm like kernel

* dnn(ocl4dnn): wrap cl_mem into UMat
Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): Refine signature of kernel config

- Use a more readable string as the signature of a kernel config
- Don't include the device name and vendor in the signature string
- Default kernel configurations are tuned for Intel GPUs with
  24/48/72 EUs, and for the GoogLeNet, AlexNet and ResNet-50 net models.

* dnn(ocl4dnn): swap width/height in configuration

* dnn(ocl4dnn): enable configs for Intel OpenCL runtime only

* core: make configuration helper functions accessible from non-core modules

* dnn(ocl4dnn): update kernel auto-tuning behavior

Avoid unwanted creation of directories

* dnn(ocl4dnn): simplify kernel to workaround OpenCL compiler crash

* dnn(ocl4dnn): remove redundant code

* dnn(ocl4dnn): Add a clearer message for SIMD size mismatch.

* dnn(ocl4dnn): add const to const argument
Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): force compiler use a specific SIMD size for IDLF kernel

* dnn(ocl4dnn): drop unused tuneLocalSize()

* dnn(ocl4dnn): specify OpenCL queue for Timer and convolve() method

* dnn(ocl4dnn): sanitize file names used for cache

* dnn(perf): enable Network tests with OpenCL

* dnn(ocl4dnn/conv): drop computeGlobalSize()

* dnn(ocl4dnn/conv): drop unused fields

* dnn(ocl4dnn/conv): simplify ctor

* dnn(ocl4dnn/conv): refactor kernelConfig localSize=NULL

* dnn(ocl4dnn/conv): drop unsupported double / untested half types

* dnn(ocl4dnn/conv): drop unused variable

* dnn(ocl4dnn/conv): alignSize/divUp

* dnn(ocl4dnn/conv): use enum values

* dnn(ocl4dnn): drop unused innerproduct variable
Signed-off-by: Li Peng <peng.li@intel.com>

* dnn(ocl4dnn): add a generic function to check cl option support

* dnn(ocl4dnn): run softmax subgroup version kernel first
Signed-off-by: Li Peng <peng.li@intel.com>
parent f646f61d
@@ -665,6 +665,7 @@ CV_EXPORTS const char* convertTypeStr(int sdepth, int ddepth, int cn, char* buf)
CV_EXPORTS const char* typeToStr(int t);
CV_EXPORTS const char* memopTypeToStr(int t);
CV_EXPORTS const char* vecopTypeToStr(int t);
CV_EXPORTS const char* getOpenCLErrorString(int errorCode);
CV_EXPORTS String kernelToStr(InputArray _kernel, int ddepth = -1, const char * name = NULL);
CV_EXPORTS void getPlatfomsInfo(std::vector<PlatformInfo>& platform_info);
@@ -731,6 +732,21 @@ protected:
Impl* p;
};
class CV_EXPORTS Timer
{
public:
Timer(const Queue& q);
~Timer();
void start();
void stop();
float milliSeconds();
float microSeconds();
float seconds();
protected:
struct Impl;
Impl* p;
};
CV_EXPORTS MatAllocator* getOpenCLAllocator();
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#ifndef OPENCV_CONFIGURATION_PRIVATE_HPP
#define OPENCV_CONFIGURATION_PRIVATE_HPP
namespace cv { namespace utils {
CV_EXPORTS bool getConfigurationParameterBool(const char* name, bool defaultValue);
CV_EXPORTS size_t getConfigurationParameterSizeT(const char* name, size_t defaultValue);
CV_EXPORTS cv::String getConfigurationParameterString(const char* name, const char* defaultValue);
}} // namespace
#endif // OPENCV_CONFIGURATION_PRIVATE_HPP
@@ -51,7 +51,10 @@
#include <inttypes.h>
#endif
#include <opencv2/core/utils/configuration.private.hpp>
#include "opencv2/core/ocl_genbase.hpp"
#include "opencl_kernels_core.hpp"
#define CV_OPENCL_ALWAYS_SHOW_BUILD_LOG 0
#define CV_OPENCL_SHOW_RUN_ERRORS 0
@@ -4718,6 +4721,102 @@ const char* convertTypeStr(int sdepth, int ddepth, int cn, char* buf)
return buf;
}
const char* getOpenCLErrorString(int errorCode)
{
switch (errorCode)
{
case 0: return "CL_SUCCESS";
case -1: return "CL_DEVICE_NOT_FOUND";
case -2: return "CL_DEVICE_NOT_AVAILABLE";
case -3: return "CL_COMPILER_NOT_AVAILABLE";
case -4: return "CL_MEM_OBJECT_ALLOCATION_FAILURE";
case -5: return "CL_OUT_OF_RESOURCES";
case -6: return "CL_OUT_OF_HOST_MEMORY";
case -7: return "CL_PROFILING_INFO_NOT_AVAILABLE";
case -8: return "CL_MEM_COPY_OVERLAP";
case -9: return "CL_IMAGE_FORMAT_MISMATCH";
case -10: return "CL_IMAGE_FORMAT_NOT_SUPPORTED";
case -11: return "CL_BUILD_PROGRAM_FAILURE";
case -12: return "CL_MAP_FAILURE";
case -13: return "CL_MISALIGNED_SUB_BUFFER_OFFSET";
case -14: return "CL_EXEC_STATUS_ERROR_FOR_EVENTS_IN_WAIT_LIST";
case -15: return "CL_COMPILE_PROGRAM_FAILURE";
case -16: return "CL_LINKER_NOT_AVAILABLE";
case -17: return "CL_LINK_PROGRAM_FAILURE";
case -18: return "CL_DEVICE_PARTITION_FAILED";
case -19: return "CL_KERNEL_ARG_INFO_NOT_AVAILABLE";
case -30: return "CL_INVALID_VALUE";
case -31: return "CL_INVALID_DEVICE_TYPE";
case -32: return "CL_INVALID_PLATFORM";
case -33: return "CL_INVALID_DEVICE";
case -34: return "CL_INVALID_CONTEXT";
case -35: return "CL_INVALID_QUEUE_PROPERTIES";
case -36: return "CL_INVALID_COMMAND_QUEUE";
case -37: return "CL_INVALID_HOST_PTR";
case -38: return "CL_INVALID_MEM_OBJECT";
case -39: return "CL_INVALID_IMAGE_FORMAT_DESCRIPTOR";
case -40: return "CL_INVALID_IMAGE_SIZE";
case -41: return "CL_INVALID_SAMPLER";
case -42: return "CL_INVALID_BINARY";
case -43: return "CL_INVALID_BUILD_OPTIONS";
case -44: return "CL_INVALID_PROGRAM";
case -45: return "CL_INVALID_PROGRAM_EXECUTABLE";
case -46: return "CL_INVALID_KERNEL_NAME";
case -47: return "CL_INVALID_KERNEL_DEFINITION";
case -48: return "CL_INVALID_KERNEL";
case -49: return "CL_INVALID_ARG_INDEX";
case -50: return "CL_INVALID_ARG_VALUE";
case -51: return "CL_INVALID_ARG_SIZE";
case -52: return "CL_INVALID_KERNEL_ARGS";
case -53: return "CL_INVALID_WORK_DIMENSION";
case -54: return "CL_INVALID_WORK_GROUP_SIZE";
case -55: return "CL_INVALID_WORK_ITEM_SIZE";
case -56: return "CL_INVALID_GLOBAL_OFFSET";
case -57: return "CL_INVALID_EVENT_WAIT_LIST";
case -58: return "CL_INVALID_EVENT";
case -59: return "CL_INVALID_OPERATION";
case -60: return "CL_INVALID_GL_OBJECT";
case -61: return "CL_INVALID_BUFFER_SIZE";
case -62: return "CL_INVALID_MIP_LEVEL";
case -63: return "CL_INVALID_GLOBAL_WORK_SIZE";
case -64: return "CL_INVALID_PROPERTY";
case -65: return "CL_INVALID_IMAGE_DESCRIPTOR";
case -66: return "CL_INVALID_COMPILER_OPTIONS";
case -67: return "CL_INVALID_LINKER_OPTIONS";
case -68: return "CL_INVALID_DEVICE_PARTITION_COUNT";
case -69: return "CL_INVALID_PIPE_SIZE";
case -70: return "CL_INVALID_DEVICE_QUEUE";
case -1000: return "CL_INVALID_GL_SHAREGROUP_REFERENCE_KHR";
case -1001: return "CL_PLATFORM_NOT_FOUND_KHR";
case -1002: return "CL_INVALID_D3D10_DEVICE_KHR";
case -1003: return "CL_INVALID_D3D10_RESOURCE_KHR";
case -1004: return "CL_D3D10_RESOURCE_ALREADY_ACQUIRED_KHR";
case -1005: return "CL_D3D10_RESOURCE_NOT_ACQUIRED_KHR";
case -1024: return "clBLAS: Functionality is not implemented";
case -1023: return "clBLAS: Library is not initialized yet";
case -1022: return "clBLAS: Matrix A is not a valid memory object";
case -1021: return "clBLAS: Matrix B is not a valid memory object";
case -1020: return "clBLAS: Matrix C is not a valid memory object";
case -1019: return "clBLAS: Vector X is not a valid memory object";
case -1018: return "clBLAS: Vector Y is not a valid memory object";
case -1017: return "clBLAS: An input dimension (M:N:K) is invalid";
case -1016: return "clBLAS: Leading dimension A must not be less than the "
"size of the first dimension";
case -1015: return "clBLAS: Leading dimension B must not be less than the "
"size of the second dimension";
case -1014: return "clBLAS: Leading dimension C must not be less than the "
"size of the third dimension";
case -1013: return "clBLAS: The increment for a vector X must not be 0";
case -1012: return "clBLAS: The increment for a vector Y must not be 0";
case -1011: return "clBLAS: The memory object for Matrix A is too small";
case -1010: return "clBLAS: The memory object for Matrix B is too small";
case -1009: return "clBLAS: The memory object for Matrix C is too small";
case -1008: return "clBLAS: The memory object for Vector X is too small";
case -1007: return "clBLAS: The memory object for Vector Y is too small";
default: return "Unknown OpenCL error";
}
}
template <typename T>
static std::string kerToStr(const Mat & k)
{
@@ -5134,4 +5233,175 @@ bool internal::isCLBuffer(UMat& u)
return true;
}
struct Timer::Impl
{
const Queue queue;
Impl(const Queue& q)
: queue(q)
, initted_(false)
, running_(false)
, has_run_at_least_once_(false)
{
init();
}
~Impl()
{
clWaitForEvents(1, &start_gpu_cl_);
clWaitForEvents(1, &stop_gpu_cl_);
clReleaseEvent(start_gpu_cl_);
clReleaseEvent(stop_gpu_cl_);
}
void start()
{
#ifdef HAVE_OPENCL
if (!running())
{
clWaitForEvents(1, &start_gpu_cl_);
clReleaseEvent(start_gpu_cl_);
ocl::Kernel kernel("null_kernel_float", ocl::core::benchmark_oclsrc);
float arg = 0;
clSetKernelArg((cl_kernel)kernel.ptr(), 0, sizeof(arg), &arg);
clEnqueueTask((cl_command_queue)queue.ptr(), (cl_kernel)kernel.ptr(), 0,
NULL, &start_gpu_cl_);
clFinish((cl_command_queue)queue.ptr());
running_ = true;
has_run_at_least_once_ = true;
}
#endif
}
void stop()
{
#ifdef HAVE_OPENCL
if (running())
{
clWaitForEvents(1, &stop_gpu_cl_);
clReleaseEvent(stop_gpu_cl_);
ocl::Kernel kernel("null_kernel_float", ocl::core::benchmark_oclsrc);
float arg = 0;
clSetKernelArg((cl_kernel)kernel.ptr(), 0, sizeof(arg), &arg);
clEnqueueTask((cl_command_queue)queue.ptr(), (cl_kernel)kernel.ptr(), 0,
NULL, &stop_gpu_cl_);
clFinish((cl_command_queue)queue.ptr());
running_ = false;
}
#endif
}
float microSeconds()
{
#ifdef HAVE_OPENCL
if (!has_run_at_least_once())
{
return 0;
}
if (running())
{
stop();
}
cl_ulong startTime, stopTime;
clWaitForEvents(1, &stop_gpu_cl_);
clGetEventProfilingInfo(start_gpu_cl_, CL_PROFILING_COMMAND_END,
sizeof startTime, &startTime, NULL);
clGetEventProfilingInfo(stop_gpu_cl_, CL_PROFILING_COMMAND_START,
sizeof stopTime, &stopTime, NULL);
double us = static_cast<double>(stopTime - startTime) / 1000.0;
elapsed_microseconds_ = static_cast<float>(us);
return elapsed_microseconds_;
#else
return 0;
#endif
}
float milliSeconds()
{
#ifdef HAVE_OPENCL
if (!has_run_at_least_once())
{
return 0;
}
if (running())
{
stop();
}
cl_ulong startTime = 0, stopTime = 0;
clGetEventProfilingInfo(start_gpu_cl_, CL_PROFILING_COMMAND_END,
sizeof startTime, &startTime, NULL);
clGetEventProfilingInfo(stop_gpu_cl_, CL_PROFILING_COMMAND_START,
sizeof stopTime, &stopTime, NULL);
double ms = static_cast<double>(stopTime - startTime) / 1000000.0;
elapsed_milliseconds_ = static_cast<float>(ms);
return elapsed_milliseconds_;
#else
return 0;
#endif
}
float seconds()
{
return milliSeconds() / 1000.f;
}
void init()
{
CV_Assert(queue.getImpl() && queue.getImpl()->isProfilingQueue_);
if (!initted())
{
start_gpu_cl_ = 0;
stop_gpu_cl_ = 0;
initted_ = true;
}
}
inline bool initted() { return initted_; }
inline bool running() { return running_; }
inline bool has_run_at_least_once() { return has_run_at_least_once_; }
bool initted_;
bool running_;
bool has_run_at_least_once_;
float elapsed_milliseconds_;
float elapsed_microseconds_;
cl_event start_gpu_cl_;
cl_event stop_gpu_cl_;
};
Timer::Timer(const Queue& q)
{
p = new Impl(q);
}
Timer::~Timer()
{
if(p)
{
delete p;
p = 0;
}
}
void Timer::start()
{
if(p)
p->start();
}
void Timer::stop()
{
if(p)
p->stop();
}
float Timer::microSeconds()
{ return p ? p->microSeconds() : 0; }
float Timer::milliSeconds()
{ return p ? p->milliSeconds() : 0; }
float Timer::seconds()
{ return p ? p->seconds() : 0; }
}}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void null_kernel_float(float arg) {
float out = arg;
}
@@ -297,12 +297,6 @@ TLSData<CoreTLSData>& getCoreTlsData();
#define CL_RUNTIME_EXPORT
#endif
namespace utils {
bool getConfigurationParameterBool(const char* name, bool defaultValue);
size_t getConfigurationParameterSizeT(const char* name, size_t defaultValue);
cv::String getConfigurationParameterString(const char* name, const char* defaultValue);
}
extern bool __termination; // skip some cleanups, because process is terminating
// (for example, if ExitProcess() was already called)
@@ -44,6 +44,7 @@
#include "precomp.hpp"
#include <iostream>
#include <opencv2/core/utils/configuration.private.hpp>
#include <opencv2/core/utils/trace.private.hpp>
namespace cv {
@@ -6,6 +6,7 @@
#include <opencv2/core/utils/trace.hpp>
#include <opencv2/core/utils/trace.private.hpp>
#include <opencv2/core/utils/configuration.private.hpp>
#include <cstdarg> // va_start
@@ -267,19 +267,22 @@ UMat Mat::getUMat(int accessFlags, UMatUsageFlags usageFlags) const
UMat hdr;
if(!data)
return hdr;
Size wholeSize;
Point ofs;
locateROI(wholeSize, ofs);
Size sz(cols, rows);
if (ofs.x != 0 || ofs.y != 0)
if (data != datastart)
{
Mat src = *this;
int dtop = ofs.y;
int dbottom = wholeSize.height - src.rows - ofs.y;
int dleft = ofs.x;
int dright = wholeSize.width - src.cols - ofs.x;
src.adjustROI(dtop, dbottom, dleft, dright);
return src.getUMat(accessFlags, usageFlags)(cv::Rect(ofs.x, ofs.y, sz.width, sz.height));
Size wholeSize;
Point ofs;
locateROI(wholeSize, ofs);
Size sz(cols, rows);
if (ofs.x != 0 || ofs.y != 0)
{
Mat src = *this;
int dtop = ofs.y;
int dbottom = wholeSize.height - src.rows - ofs.y;
int dleft = ofs.x;
int dright = wholeSize.width - src.cols - ofs.x;
src.adjustROI(dtop, dbottom, dleft, dright);
return src.getUMat(accessFlags, usageFlags)(cv::Rect(ofs.x, ofs.y, sz.width, sz.height));
}
}
CV_Assert(data == datastart);
@@ -21,6 +21,8 @@ ocv_warnings_disable(CMAKE_CXX_FLAGS -Wno-shadow -Wno-parentheses -Wmaybe-uninit
)
ocv_warnings_disable(CMAKE_CXX_FLAGS /wd4701 /wd4100)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src/ocl4dnn/include ${OPENCL_INCLUDE_DIRS})
if(MSVC)
add_definitions( -D_CRT_SECURE_NO_WARNINGS=1 )
ocv_warnings_disable(CMAKE_CXX_FLAGS /wd4244 /wd4267 /wd4018 /wd4355 /wd4800 /wd4251 /wd4996 /wd4146
@@ -297,6 +297,7 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
CV_PROP String name; //!< Name of the layer instance, can be used for logging or other internal purposes.
CV_PROP String type; //!< Type name which was used for creating layer by layer factory.
CV_PROP int preferableTarget; //!< prefer target for layer forwarding
Layer();
explicit Layer(const LayerParams &params); //!< Initializes only #name, #type and #blobs fields.
#include "../perf_precomp.hpp"
#include "opencv2/ts/ocl_perf.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#ifdef HAVE_OPENCL
namespace cvtest
{
namespace ocl
{
using std::tr1::tuple;
using std::tr1::get;
using std::tr1::make_tuple;
using std::make_pair;
using namespace perf;
using namespace testing;
using namespace cv;
using namespace cv::dnn;
enum {STRIDE_OFF = 1, STRIDE_ON = 2};
CV_ENUM(StrideSize, STRIDE_OFF, STRIDE_ON);
enum {GROUP_OFF = 1, GROUP_2 = 2};
CV_ENUM(GroupSize, GROUP_OFF, GROUP_2);
//Squared Size
#define SSZ(n) cv::Size(n, n)
typedef std::pair<MatShape, int> InpShapeNumOut;
typedef tuple<Size, InpShapeNumOut, GroupSize, StrideSize> ConvParam; //kernel_size, inp shape, groups, stride
typedef TestBaseWithParam<ConvParam> ConvolutionPerfTest;
static inline MatShape blobShape(int count, int nplanes, int height, int width)
{
int data[] = {count, nplanes, height, width};
return MatShape(data, data+4);
}
OCL_PERF_TEST_P( ConvolutionPerfTest, perf, Combine(
Values(Size(1, 1), Size(3, 3), Size(5, 5), Size(11, 11)),
Values(make_pair(blobShape(1, 4, 224, 224), 64),
make_pair(blobShape(1, 64, 112, 122), 128),
make_pair(blobShape(1, 256, 28, 28), 512)),
GroupSize::all(),
StrideSize::all())
)
{
RNG rng(0);
ConvParam params = GetParam();
int ksz = get<0>(params).width;
MatShape inpShape = get<1>(params).first;
int outCn = get<1>(params).second;
int groups = get<2>(params);
int stride = (ksz >= 11) ? 4 : (int)get<3>(params);
int inpCn = inpShape[1];
int wgtSize[] = { outCn, inpCn/groups, ksz, ksz };
int biasSize[] = { outCn, 1, 1, 1 };
const int wtype = CV_32F;
Mat wgtBlob(4, wgtSize, wtype), biasBlob(4, biasSize, wtype);
Mat inpBlob(4, &inpShape[0], wtype);
rng.fill(biasBlob, RNG::UNIFORM, -1, +1);
rng.fill(wgtBlob, RNG::UNIFORM, -1, +1);
rng.fill(inpBlob, RNG::UNIFORM, -1, +1);
LayerParams lp;
lp.set("num_output", outCn);
lp.set("group", groups);
lp.set("stride", stride);
lp.set("kernel_size", ksz);
lp.blobs.reserve(2);
lp.blobs.push_back(wgtBlob);
lp.blobs.push_back(biasBlob);
std::vector<Mat*> inpBlobs(1, &inpBlob);
std::vector<Mat> outBlobs, internalBlobs;
cv::setNumThreads(cv::getNumberOfCPUs());
Ptr<Layer> layer = cv::dnn::LayerFactory::createLayerInstance("Convolution", lp);
std::vector<MatShape> inputShapes(1, shape(inpBlob)), outShapes, internals;
layer->getMemoryShapes(inputShapes, 0, outShapes, internals);
for (int i = 0; i < outShapes.size(); i++)
{
outBlobs.push_back(Mat(outShapes[i], CV_32F));
}
for (int i = 0; i < internals.size(); i++)
{
internalBlobs.push_back(Mat());
if (total(internals[i]))
internalBlobs.back().create(internals[i], CV_32F);
}
layer->finalize(inpBlobs, outBlobs);
layer->preferableTarget = DNN_TARGET_OPENCL;
Mat inpBlob2D = inpBlob.reshape(1, outCn);
Mat wgtBlob2D = wgtBlob.reshape(1, outCn*(inpCn/groups));
Mat outBlob2D = outBlobs[0].reshape(1, outBlobs[0].size[0]);
declare.in(inpBlob2D, wgtBlob2D, WARMUP_RNG).out(outBlob2D).tbb_threads(cv::getNumThreads());
// warmup
layer->forward(inpBlobs, outBlobs, internalBlobs);
TEST_CYCLE()
{
layer->forward(inpBlobs, outBlobs, internalBlobs);
}
SANITY_CHECK_NOTHING();
}
}
}
#endif
@@ -40,7 +40,7 @@ public:
if (backend == DNN_BACKEND_DEFAULT && target == DNN_TARGET_OPENCL)
{
-#if 0 //defined(HAVE_OPENCL)
+#if defined(HAVE_OPENCL)
if (!cv::ocl::useOpenCL())
#endif
{
@@ -875,7 +875,7 @@ struct Net::Impl
if (preferableBackend == DNN_BACKEND_DEFAULT)
{
-CV_Assert(preferableTarget == DNN_TARGET_CPU);
+CV_Assert(preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL);
return;
}
@@ -1000,6 +1000,7 @@ struct Net::Impl
Ptr<Layer> layerPtr = ld.getLayerInstance();
{
layerPtr->finalize(ld.inputBlobs, ld.outputBlobs);
layerPtr->preferableTarget = preferableTarget;
#if 0
std::cout << "\toutputs:";
size_t noutputs = ld.outputBlobs.size();
@@ -1026,7 +1027,7 @@ struct Net::Impl
void fuseLayers(const std::vector<LayerPin>& blobsToKeep_)
{
-if( !fusion || preferableBackend == DNN_BACKEND_HALIDE )
+if( !fusion || !(preferableBackend == DNN_BACKEND_DEFAULT && preferableTarget == DNN_TARGET_CPU))
return;
CV_TRACE_FUNCTION();
@@ -1236,7 +1237,6 @@ struct Net::Impl
}
layersTimings.resize(lastLayerId + 1, 0);
fuseLayers(blobsToKeep_);
}
@@ -1402,7 +1402,7 @@ struct Net::Impl
}
else
{
-CV_Assert(preferableTarget == DNN_TARGET_CPU);
+CV_Assert(preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL);
}
return ld.outputBlobs[pin.oid];
}
@@ -1963,12 +1963,12 @@ int64 Net::getPerfProfile(std::vector<double>& timings)
Importer::~Importer() {}
-Layer::Layer() {}
+Layer::Layer() { preferableTarget = DNN_TARGET_CPU; }
Layer::Layer(const LayerParams &params)
: blobs(params.blobs), name(params.name), type(params.type)
{
preferableTarget = DNN_TARGET_CPU;
}
void Layer::setParamsFrom(const LayerParams &params)
@@ -43,6 +43,7 @@
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
namespace cv
{
@@ -174,11 +175,62 @@ public:
}
};
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
int cAxis = clamp(axis, inputs[0]->dims);
if (!(cAxis == 1 && outputs[0].dims == 4 && !padding))
return false;
int bottom_concat_axis;
int concat_size = inputs[0]->size[2] * inputs[0]->size[3];
int top_concat_axis = outputs[0].size[1];
int offset_concat_axis = 0;
UMat inpMat, outMat;
outMat = outputs[0].getUMat(ACCESS_WRITE);
ocl::Kernel kernel;
String buildopt = String("-DDtype=") + ocl::typeToStr(inputs[0]->type()) + String(" ");
if (!kernel.create("concat", ocl::dnn::concat_oclsrc, buildopt))
return false;
for (size_t i = 0; i < inputs.size(); i++)
{
inpMat = inputs[i]->getUMat(ACCESS_READ);
bottom_concat_axis = inputs[i]->size[1];
size_t nthreads = inputs[i]->total();
kernel.set(0, (int)nthreads);
kernel.set(1, ocl::KernelArg::PtrReadOnly(inpMat));
kernel.set(2, (int)inputs[i]->size[0]);
kernel.set(3, (int)concat_size);
kernel.set(4, (int)top_concat_axis);
kernel.set(5, (int)bottom_concat_axis);
kernel.set(6, (int)offset_concat_axis);
kernel.set(7, ocl::KernelArg::PtrWriteOnly(outMat));
if (!kernel.run(1, &nthreads, NULL, false))
return false;
offset_concat_axis += bottom_concat_axis;
}
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
int cAxis = clamp(axis, inputs[0]->dims);
Mat& outMat = outputs[0];
@@ -47,6 +47,10 @@
#include "opencv2/core/hal/intrin.hpp"
#include <iostream>
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
namespace dnn
@@ -150,6 +154,11 @@ public:
Ptr<BatchNormLayer> bnorm;
Ptr<ScaleLayer> scaleLayer;
#ifdef HAVE_OPENCL
Ptr<OCL4DNNConvSpatial<float> > convolutionOp;
std::vector<UMat> umat_blobs;
#endif
MatShape computeColRowShape(const MatShape &inpShape, const MatShape &outShape) const
{
Size out(outShape[3], outShape[2]);
@@ -636,6 +645,42 @@ public:
}
};
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
int group = inputs[0]->size[1] / umat_blobs[0].size[1];
if (convolutionOp.empty())
{
OCL4DNNConvConfig config;
config.in_shape = shape(*inputs[0]);
config.out_shape = shape(outputs[0]);
config.kernel = kernel;
config.pad = pad;
config.stride = stride;
config.dilation = dilation;
config.group = group;
config.bias_term = hasBias();
convolutionOp = Ptr<OCL4DNNConvSpatial<float> >(new OCL4DNNConvSpatial<float>(config));
}
for (size_t ii = 0; ii < outputs.size(); ii++)
{
UMat inpMat, outMat;
inpMat = inputs[ii]->getUMat(ACCESS_READ);
outMat = outputs[ii].getUMat(ACCESS_WRITE);
int batch_size = inpMat.size[0];
if (!convolutionOp->Forward(inpMat, umat_blobs[0], hasBias() ? umat_blobs[1] : UMat(),
outMat, batch_size))
return false;
}
return true;
}
#endif
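The `group` value above is derived as the ratio of input channels to the weight blob's per-group input channels. A minimal illustration of how grouped convolution partitions channels (helper names are ours, purely illustrative):

```cpp
#include <cassert>

// Sketch: in grouped convolution, output channel oc in [0, outCn) belongs to
// group g = oc / (outCn / group), and convolves only the input channel range
// [g * inCn/group, (g+1) * inCn/group). This mirrors how forward_ocl computes
// group = inputs[0]->size[1] / umat_blobs[0].size[1].
struct GroupSlice { int g, inStart, inEnd; };

static GroupSlice groupOf(int oc, int outCn, int inCn, int group)
{
    const int outPerGroup = outCn / group;
    const int inPerGroup  = inCn / group;
    GroupSlice s;
    s.g = oc / outPerGroup;
    s.inStart = s.g * inPerGroup;
    s.inEnd   = s.inStart + inPerGroup;
    return s;
}
```

With 8 input channels, 4 output channels, and 2 groups, output channel 3 reads only input channels 4..7.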
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
@@ -649,6 +694,10 @@ public:
int ngroups = inputs[0]->size[1]/blobs[0].size[1];
CV_Assert(outputs[0].size[1] % ngroups == 0);
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
int k, outCn = blobs[0].size[0];
if( weightsMat.empty() )
@@ -1203,8 +1252,17 @@ static void initConvDeconvLayerFromCaffe(Ptr<BaseConvolutionLayer> l, const Laye
Ptr<BaseConvolutionLayer> ConvolutionLayer::create(const LayerParams &params)
{
-    Ptr<BaseConvolutionLayer> l(new ConvolutionLayerImpl);
+    ConvolutionLayerImpl* conv_ptr = new ConvolutionLayerImpl;
+    Ptr<BaseConvolutionLayer> l(conv_ptr);
initConvDeconvLayerFromCaffe(l, params);
#ifdef HAVE_OPENCL
size_t n = params.blobs.size();
conv_ptr->umat_blobs.resize(n);
for (int i = 0; i < n; i++)
conv_ptr->umat_blobs[i] = params.blobs[i].getUMat(ACCESS_READ);
#endif
return l;
}
@@ -41,9 +41,12 @@
//M*/
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#include "opencl_kernels_dnn.hpp"
#include <iostream>
namespace cv
{
@@ -158,6 +161,10 @@ public:
{
CV_TRACE_FUNCTION();
CV_OCL_RUN((this->preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
func.applyOCL(inputs, outputs, internals))
for (size_t i = 0; i < inputs.size(); i++)
{
const Mat &src = *inputs[i];
@@ -191,6 +198,13 @@ public:
bool run_parallel;
};
#ifdef HAVE_OPENCL
static String oclGetTMacro(const UMat &m)
{
return String("-DT=") + ocl::typeToStr(m.type()) + String(" ");
}
#endif
struct ReLUFunctor
{
typedef ReLULayer Layer;
@@ -230,6 +244,46 @@ struct ReLUFunctor
}
}
#ifdef HAVE_OPENCL
bool initKernel(ocl::Kernel &ker, const UMat &src) const
{
const char *buildoptSlope = (slope == 0) ? "-DRELU_NO_SLOPE" : "";
String buildopt = oclGetTMacro(src) + buildoptSlope;
if (!ker.create("ReLUForward", ocl::dnn::activations_oclsrc, buildopt))
return false;
if (slope != 0)
ker.set(3, (float)slope);
return true;
}
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
size_t wgSize = ocl::Device::getDefault().maxWorkGroupSize();
for (size_t i = 0; i < inputs.size(); i++)
{
UMat src, dst;
inputs[i]->copyTo(src);
dst = outputs[i].getUMat(ACCESS_WRITE);
CV_Assert(src.isContinuous() && dst.isContinuous() && !src.offset && !dst.offset);
ocl::Kernel ker;
CV_Assert(initKernel(ker, src));
ker.set(0, (int)src.total());
ker.set(1, ocl::KernelArg::PtrReadOnly(src));
ker.set(2, ocl::KernelArg::PtrWriteOnly(dst));
size_t gSize = src.total();
CV_Assert(ker.run(1, &gSize, &wgSize, false));
}
return true;
}
#endif
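The `ReLUForward` kernel created in `initKernel` computes a leaky ReLU, with the `-DRELU_NO_SLOPE` build option compiling out the multiply when the slope is zero (and kernel argument 3 carrying the slope otherwise). The per-element math, in scalar form:

```cpp
#include <cassert>

// Scalar equivalent of the ReLUForward OpenCL kernel: standard leaky ReLU.
// With slope == 0 this degenerates to max(x, 0), which is what the
// RELU_NO_SLOPE variant computes without the extra multiply.
static float reluForward(float x, float slope)
{
    return x > 0.f ? x : slope * x;
}
```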
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -293,6 +347,14 @@ struct ReLU6Functor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -320,6 +382,14 @@ struct TanHFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -347,6 +417,14 @@ struct SigmoidFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -376,6 +454,14 @@ struct ELUFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -403,6 +489,14 @@ struct AbsValFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -430,6 +524,14 @@ struct BNLLFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -479,6 +581,14 @@ struct PowerFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -524,18 +634,18 @@ struct ChannelsPReLUFunctor
v_float32x4 s4 = v_setall_f32(s), z = v_setzero_f32();
for( ; i <= len - 16; i += 16 )
{
-                v_float32x4 x0 = v_load(ptr + i);
-                v_float32x4 x1 = v_load(ptr + i + 4);
-                v_float32x4 x2 = v_load(ptr + i + 8);
-                v_float32x4 x3 = v_load(ptr + i + 12);
+                v_float32x4 x0 = v_load(srcptr + i);
+                v_float32x4 x1 = v_load(srcptr + i + 4);
+                v_float32x4 x2 = v_load(srcptr + i + 8);
+                v_float32x4 x3 = v_load(srcptr + i + 12);
x0 = v_select(x0 >= z, x0, x0*s4);
x1 = v_select(x1 >= z, x1, x1*s4);
x2 = v_select(x2 >= z, x2, x2*s4);
x3 = v_select(x3 >= z, x3, x3*s4);
-                v_store(ptr + i, x0);
-                v_store(ptr + i + 4, x1);
-                v_store(ptr + i + 8, x2);
-                v_store(ptr + i + 12, x3);
+                v_store(dstptr + i, x0);
+                v_store(dstptr + i + 4, x1);
+                v_store(dstptr + i + 8, x2);
+                v_store(dstptr + i + 12, x3);
}
#endif
for( ; i < len; i++ )
@@ -546,6 +656,14 @@ struct ChannelsPReLUFunctor
}
}
#ifdef HAVE_OPENCL
bool applyOCL(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
// TODO: implement OCL version
return false;
}
#endif
#ifdef HAVE_HALIDE
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
@@ -43,8 +43,13 @@
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
#include <opencv2/dnn/shape_utils.hpp>
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
namespace dnn
@@ -55,6 +60,11 @@ class FullyConnectedLayerImpl : public InnerProductLayer
public:
enum { VEC_ALIGN = 8 };
#ifdef HAVE_OPENCL
Ptr<OCL4DNNInnerProduct<float> > innerProductOp;
std::vector<UMat> umat_blobs;
#endif
FullyConnectedLayerImpl(const LayerParams& params)
{
setParamsFrom(params);
@@ -84,6 +94,12 @@ public:
biasMat = blobs[1] = blobs[1].reshape(1, 1);
else
biasMat = Mat::zeros(1, numOutput, weightsMat.type());
#ifdef HAVE_OPENCL
size_t n = blobs.size();
umat_blobs.resize(n);
for (int i = 0; i < n; i++) umat_blobs[i] = blobs[i].getUMat(ACCESS_READ);
#endif
}
bool getMemoryShapes(const std::vector<MatShape> &inputs,
@@ -238,11 +254,78 @@ public:
bool useAVX2;
};
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &input, std::vector<Mat> &output)
{
int axisCan = clamp(axis, input[0]->dims);
int numOutput = blobs[0].size[0];
int innerSize = blobs[0].size[1];
int outerSize = input[0]->total(0, axisCan);
bool ret = true;
if (innerProductOp.empty())
{
OCL4DNNInnerProductConfig config;
config.num_output = numOutput;
config.bias_term = bias;
config.M = outerSize;
config.K = innerSize;
innerProductOp = Ptr<OCL4DNNInnerProduct<float> >(new OCL4DNNInnerProduct<float>(config));
}
UMat biasOnesMat = UMat::ones(outerSize, 1, umat_blobs[0].type());
for (size_t i = 0; i < input.size(); i++)
{
UMat srcMat, dstMat;
srcMat = input[i]->getUMat(ACCESS_READ);
dstMat = output[i].getUMat(ACCESS_WRITE);
dstMat.setTo(0.0f);
if (!innerProductOp->Forward(srcMat, umat_blobs[0], (bias) ? umat_blobs[1] : UMat(), dstMat))
{
ret = false;
break;
}
if (bias && (outerSize > 1))
{
UMat& biases = umat_blobs[1];
cv::gemm(biasOnesMat, biases, 1, dstMat, 1, dstMat, 0);
}
}
if (ret) return true;
UMat& weights = umat_blobs[0];
for (size_t i = 0; i < input.size(); i++)
{
UMat srcMat, dstMat;
srcMat = input[i]->reshape(1, outerSize).getUMat(ACCESS_READ);
dstMat = output[i].reshape(1, outerSize).getUMat(ACCESS_WRITE);
cv::gemm(srcMat, weights, 1, noArray(), 0, dstMat, GEMM_2_T);
if (bias)
{
UMat& biases = umat_blobs[1];
cv::gemm(biasOnesMat, biases, 1, dstMat, 1, dstMat, 0);
}
}
return true;
}
#endif
void forward(std::vector<Mat*> &input, std::vector<Mat> &output, std::vector<Mat> &)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(input, output))
int axisCan = clamp(axis, input[0]->dims);
int outerSize = input[0]->total(0, axisCan);
@@ -51,6 +51,10 @@
#include "layers/layers_common.simd_declarations.hpp"
#undef CV_CPU_OPTIMIZATION_DECLARATIONS_ONLY
#ifdef HAVE_OPENCL
#include "ocl4dnn.hpp"
#endif
namespace cv
{
namespace dnn
@@ -46,8 +46,13 @@
#include "opencv2/imgproc.hpp"
#include "opencv2/dnn/shape_utils.hpp"
#include "opencv2/core/hal/hal.hpp"
#include "opencl_kernels_dnn.hpp"
#include <algorithm>
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
namespace dnn
@@ -78,18 +83,64 @@ public:
normBySize = params.get<bool>("norm_by_size", true);
}
#ifdef HAVE_OPENCL
Ptr<OCL4DNNLRN<float> > lrnOp;
#endif
virtual bool supportBackend(int backendId)
{
return backendId == DNN_BACKEND_DEFAULT ||
backendId == DNN_BACKEND_HALIDE && haveHalide();
}
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
if (lrnOp.empty())
{
OCL4DNNLRNConfig config;
config.lrn_type = type == CHANNEL_NRM ?
LRNParameter_NormRegion_ACROSS_CHANNELS :
LRNParameter_NormRegion_WITHIN_CHANNEL;
CHECK_EQ(size % 2, 1)<< "LRN only supports odd values for local_size";
config.local_size = size;
config.alpha = alpha;
config.beta = beta;
config.k = bias;
CHECK_EQ(4, inputs[0]->dims) << "Input must have 4 axes, "
<< "corresponding to (num, channels, height, width)";
config.batch_size = inputs[0]->size[0];
config.channels = inputs[0]->size[1];
config.height = inputs[0]->size[2];
config.width = inputs[0]->size[3];
config.norm_by_size = normBySize;
lrnOp = Ptr<OCL4DNNLRN<float> >(new OCL4DNNLRN<float>(config));
}
UMat inpMat, outMat;
inpMat = inputs[0]->getUMat(ACCESS_READ);
outMat = outputs[0].getUMat(ACCESS_WRITE);
if (!lrnOp->Forward(inpMat, outMat))
return false;
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_Assert(inputs.size() == outputs.size());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
for (int i = 0; i < inputs.size(); i++)
{
CV_Assert(inputs[i]->dims == 4);
@@ -44,10 +44,14 @@
#include "layers_common.hpp"
#include "opencv2/core/hal/intrin.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
#include <float.h>
#include <algorithm>
using std::max;
using std::min;
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
@@ -81,6 +85,10 @@ public:
ceilMode = params.get<bool>("ceil_mode", true);
}
#ifdef HAVE_OPENCL
Ptr<OCL4DNNPool<float> > poolOp;
#endif
void finalize(const std::vector<Mat*> &inputs, std::vector<Mat> &outputs)
{
CV_Assert(inputs.size() == 1);
@@ -104,11 +112,59 @@ public:
type == PoolingLayer::AVE && !pad.width && !pad.height);
}
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
if (poolOp.empty())
{
OCL4DNNPoolConfig config;
config.in_shape = shape(*inputs[0]);
config.out_shape = shape(outputs[0]);
config.kernel = kernel;
config.pad = pad;
config.stride = stride;
config.channels = inputs[0]->size[1];
config.pool_method = type == MAX ? LIBDNN_POOLING_METHOD_MAX :
(type == AVE ? LIBDNN_POOLING_METHOD_AVE :
LIBDNN_POOLING_METHOD_STO);
poolOp = Ptr<OCL4DNNPool<float> >(new OCL4DNNPool<float>(config));
}
for (size_t ii = 0; ii < inputs.size(); ii++)
{
UMat inpMat, outMat, maskMat;
inpMat = inputs[ii]->getUMat(ACCESS_READ);
if (type == MAX)
{
outMat = outputs[2 * ii].getUMat(ACCESS_WRITE);
maskMat = outputs[2 * ii + 1].getUMat(ACCESS_WRITE);
} else {
outMat = outputs[ii].getUMat(ACCESS_WRITE);
maskMat = UMat();
}
CV_Assert(inpMat.offset == 0 && outMat.offset == 0);
if (!poolOp->Forward(inpMat, outMat, maskMat))
return false;
}
return true;
}
#endif
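For MAX pooling, `forward_ocl` above writes two blobs per input: the pooled values and an index mask recording which element won each window. A scalar sketch of one pooling window (function name and layout are ours, illustrative only):

```cpp
#include <cassert>
#include <cfloat>
#include <vector>

// Sketch of one MAX-pooling window over a single H x W plane: returns the
// maximum and records the linear index of the winning element, as the
// maskMat output of OCL4DNNPool does.
static float maxPoolWindow(const std::vector<float>& plane, int width,
                           int x0, int y0, int kw, int kh, int* maskIdx)
{
    float best = -FLT_MAX;
    *maskIdx = -1;
    for (int y = y0; y < y0 + kh; y++)
        for (int x = x0; x < x0 + kw; x++)
        {
            int idx = y * width + x;
            if (plane[idx] > best) { best = plane[idx]; *maskIdx = idx; }
        }
    return best;
}
```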
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
for (size_t ii = 0; ii < inputs.size(); ii++)
{
switch (type)
@@ -43,9 +43,13 @@
#include "../precomp.hpp"
#include "layers_common.hpp"
#include "op_halide.hpp"
#include "opencl_kernels_dnn.hpp"
#include <algorithm>
#include <stdlib.h>
using std::max;
#ifdef HAVE_OPENCL
using namespace cv::dnn::ocl4dnn;
#endif
namespace cv
{
@@ -63,6 +67,10 @@ public:
setParamsFrom(params);
}
#ifdef HAVE_OPENCL
Ptr<OCL4DNNSoftmax<float> > softmaxOp;
#endif
bool getMemoryShapes(const std::vector<MatShape> &inputs,
const int requiredOutputs,
std::vector<MatShape> &outputs,
@@ -82,11 +90,91 @@ public:
backendId == DNN_BACKEND_HALIDE && haveHalide() && axisRaw == 1;
}
#ifdef HAVE_OPENCL
bool forward_ocl(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
if (softmaxOp.empty())
{
OCL4DNNSoftmaxConfig config;
config.in_shape = shape(*inputs[0]);
config.axis = axisRaw;
config.channels = inputs[0]->size[axisRaw];
softmaxOp = Ptr<OCL4DNNSoftmax<float> >(new OCL4DNNSoftmax<float>(config));
}
UMat srcMat, dstMat;
srcMat = inputs[0]->getUMat(ACCESS_READ);
dstMat = outputs[0].getUMat(ACCESS_WRITE);
if (!logSoftMax && softmaxOp->Forward(srcMat, dstMat))
return true;
const Mat &src = *inputs[0];
UMat bufMat = internals[0].getUMat(ACCESS_WRITE);
srcMat.copyTo(dstMat);
int axis = clamp(axisRaw, src.dims);
size_t outerSize = src.total(0, axis);
size_t channels = src.size[axis];
size_t innerSize = src.total(axis + 1);
String buildOpts = String("-DT=") + ocl::typeToStr(src.type());
ocl::Kernel kmax, ksub, ksum, kdiv;
if (!kmax.create("kernel_channel_max", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
if (!ksub.create("kernel_channel_subtract", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
if (!ksum.create("kernel_channel_sum", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
if (logSoftMax) buildOpts += " -DLOG_SOFTMAX ";
if (!kdiv.create("kernel_channel_div", ocl::dnn::softmax_oclsrc, buildOpts))
return false;
size_t wgSize = ocl::Device::getDefault().maxWorkGroupSize();
size_t bufSize = internals[0].total();
size_t totalSize = src.total();
kmax.args((int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(dstMat), ocl::KernelArg::PtrReadWrite(bufMat));
if (!kmax.run(1, &bufSize, &wgSize, false))
return false;
ksub.args((int)totalSize, (int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(bufMat), ocl::KernelArg::PtrReadWrite(dstMat));
if (!ksub.run(1, &totalSize, &wgSize, false))
return false;
cv::exp(dstMat, dstMat);
ksum.args((int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(dstMat), ocl::KernelArg::PtrReadWrite(bufMat));
if (!ksum.run(1, &bufSize, &wgSize, false))
return false;
kdiv.args((int)totalSize, (int)outerSize, (int)channels, (int)innerSize,
ocl::KernelArg::PtrReadOnly(bufMat), ocl::KernelArg::PtrReadWrite(dstMat));
if (!kdiv.run(1, &totalSize, &wgSize, false))
return false;
return true;
}
#endif
void forward(std::vector<Mat*> &inputs, std::vector<Mat> &outputs, std::vector<Mat> &internals)
{
CV_TRACE_FUNCTION();
CV_TRACE_ARG_VALUE(name, "name", name.c_str());
CV_OCL_RUN((preferableTarget == DNN_TARGET_OPENCL) &&
OCL_PERFORMANCE_CHECK(ocl::Device::getDefault().isIntel()),
forward_ocl(inputs, outputs, internals))
const Mat &src = *inputs[0];
Mat &dst = outputs[0];
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _OPENCV_LIBDNN_COMMON_HPP_
#define _OPENCV_LIBDNN_COMMON_HPP_
#include "../../precomp.hpp"
#include "../../caffe/glog_emulator.hpp"
#include <opencv2/core/opencl/runtime/opencl_core.hpp>
#ifdef HAVE_OPENCL
// Macro to select the single (_float) or double (_double) precision kernel
#define CL_KERNEL_SELECT(kernel) kernel "_float"
#define OCL_CHECK(condition) \
do { \
cl_int error = (condition); \
CHECK_EQ(error, CL_SUCCESS) << " " << cv::ocl::getOpenCLErrorString(error); \
} while (0)
bool clOptionSupport(cv::String option);
#endif // HAVE_OPENCL
#endif
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _OPENCV_GREENTEA_MATH_FUNCTIONS_HPP_
#define _OPENCV_GREENTEA_MATH_FUNCTIONS_HPP_
#include "../../precomp.hpp"
#include "common.hpp"
namespace cv
{
namespace dnn
{
namespace ocl4dnn
{
#ifdef HAVE_OPENCL
enum CBLAS_TRANSPOSE {CblasNoTrans=111, CblasTrans=112, CblasConjTrans=113};
template<typename Dtype>
bool ocl4dnnGEMMCommon(const CBLAS_TRANSPOSE TransB,
const int32_t M, const int32_t N, const int32_t K,
const UMat A, const UMat B,
const UMat B_image, UMat C,
const size_t max_image_size);
template<typename Dtype>
ocl::Image2D ocl4dnnGEMMCopyBufferToImage(UMat buffer, int offset,
bool is_matrix_a, bool transpose,
bool padding, int padded_height,
int padded_width, int height,
int width, int ld);
template<typename Dtype>
bool ocl4dnnGEMV(const CBLAS_TRANSPOSE TransA,
const int32_t M, const int32_t N, const Dtype alpha,
const UMat A, const int32_t offA, const UMat x,
const int32_t offx, const Dtype beta, UMat y,
const int32_t offy);
template<typename Dtype>
bool ocl4dnnAXPY(const int32_t N, const Dtype alpha,
const UMat x, const int32_t offx, UMat y,
const int32_t offy);
#endif // HAVE_OPENCL
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "opencl_kernels_dnn.hpp"
using namespace cv;
#ifdef HAVE_OPENCL
bool clOptionSupport(cv::String option)
{
cv::String errmsg;
ocl::Program program = ocl::Context::getDefault().getProg(ocl::dnn::dummy_oclsrc, option, errmsg);
return program.ptr() != NULL;
}
#endif // HAVE_OPENCL
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "math_functions.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNInnerProduct<Dtype>::OCL4DNNInnerProduct(OCL4DNNInnerProductConfig config)
{
bias_term_ = config.bias_term;
transpose_ = config.transpose;
N_ = num_output_ = config.num_output;
M_ = config.M;
K_ = config.K;
phase_test_ = config.phase_test;
image_copied_ = false;
}
template<typename Dtype>
OCL4DNNInnerProduct<Dtype>::~OCL4DNNInnerProduct()
{
}
template<typename Dtype>
bool OCL4DNNInnerProduct<Dtype>::Forward(const UMat& bottom,
const UMat& weight,
const UMat& bias,
UMat& top)
{
bool ret;
if (M_ == 1)
{
ret = ocl4dnnGEMV<Dtype>(CblasNoTrans, N_, K_, (Dtype) 1.,
weight, 0, bottom, 0, (Dtype) 0., top, 0);
if (bias_term_ && ret)
ret = ocl4dnnAXPY<Dtype>(N_, 1, bias, 0, top, 0);
return ret;
}
else
{
ret = false;
size_t max_image_size = std::min(ocl::Device::getDefault().image2DMaxWidth(),
ocl::Device::getDefault().image2DMaxHeight());
if (M_ <= max_image_size &&
N_ <= max_image_size &&
K_ <= max_image_size &&
cv::traits::Depth<Dtype>::value == CV_32F &&
ocl::Device::getDefault().intelSubgroupsSupport())
{
ret = ocl4dnnGEMMCommon<Dtype>(transpose_ ? CblasNoTrans : CblasTrans,
M_, N_, K_, bottom, weight, UMat(), top,
max_image_size);
}
return ret;
}
}
template class OCL4DNNInnerProduct<float>;
} // namespace ocl4dnn
} // namespace dnn
} // namespace cv
#endif // HAVE_OPENCL
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "opencl_kernels_dnn.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNLRN<Dtype>::OCL4DNNLRN(OCL4DNNLRNConfig config)
{
lrn_type_ = config.lrn_type;
phase_test_ = config.phase_test;
size_ = config.local_size;
CHECK_EQ(size_ % 2, 1) << "LRN only supports odd values for local_size";
alpha_ = config.alpha;
beta_ = config.beta;
k_ = config.k;
norm_by_size_ = config.norm_by_size;
num_ = config.batch_size;
channels_ = config.channels;
height_ = config.height;
width_ = config.width;
}
template<typename Dtype>
bool OCL4DNNLRN<Dtype>::Forward(const UMat& bottom, UMat& top)
{
bool ret = true;
if (!ocl::Device::getDefault().intelSubgroupsSupport())
return false;
switch (lrn_type_)
{
case LRNParameter_NormRegion_ACROSS_CHANNELS:
ret = crossChannelForward(bottom, top);
break;
case LRNParameter_NormRegion_WITHIN_CHANNEL:
//TODO
//WithinChannelForward(bottom_data, top_data);
ret = false;
break;
default:
ret = false;
LOG(FATAL) << "Unknown normalization region.";
}
return ret;
}
template<typename Dtype>
bool OCL4DNNLRN<Dtype>::crossChannelForward(const UMat& bottom, UMat& top)
{
ocl::Queue queue = ocl::Queue::getDefault();
CHECK_EQ(phase_test_, true) << "Only forward inference is supported.";
cl_uint argIdx = 0;
int32_t n_threads = num_ * height_ * width_;
size_t global_work_size_[1] = {(size_t)n_threads};
String opts = clOptionSupport("-cl-no-subgroup-ifp") ? " -cl-no-subgroup-ifp " : "";
ocl::Kernel oclk_lrn_fill;
if (!oclk_lrn_fill.create(CL_KERNEL_SELECT("lrn_full_no_scale"), ocl::dnn::ocl4dnn_lrn_oclsrc, opts))
return false;
oclk_lrn_fill.set(argIdx++, n_threads);
oclk_lrn_fill.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_lrn_fill.set(argIdx++, num_);
oclk_lrn_fill.set(argIdx++, channels_);
oclk_lrn_fill.set(argIdx++, height_);
oclk_lrn_fill.set(argIdx++, width_);
oclk_lrn_fill.set(argIdx++, size_);
int size_norm_factor = norm_by_size_ ? size_ : 1;
oclk_lrn_fill.set(argIdx++, alpha_ / size_norm_factor);
oclk_lrn_fill.set(argIdx++, k_);
oclk_lrn_fill.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
oclk_lrn_fill.set(argIdx++, -beta_);
return oclk_lrn_fill.run(1, global_work_size_, NULL, false);
}
template class OCL4DNNLRN<float>;
} // namespace ocl4dnn
}
}
#endif // HAVE_OPENCL
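The `lrn_full_no_scale` kernel computes across-channel LRN in a single pass; the host code pre-scales alpha by `size_norm_factor` and negates beta before passing them in. A minimal scalar sketch of the same formula (`lrnAcrossChannels` is a hypothetical reference helper, not part of the patch):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Scalar reference for across-channel LRN at one spatial position:
// top[c] = bottom[c] * (k + (alpha/size) * sum_{c' in window} bottom[c']^2)^(-beta)
// The window of `size` channels is centered on c and clipped at the borders,
// which is why only odd local_size values are supported.
std::vector<float> lrnAcrossChannels(const std::vector<float>& bottom,
                                     int size, float alpha, float beta, float k)
{
    const int channels = (int)bottom.size();
    std::vector<float> top(channels);
    for (int c = 0; c < channels; ++c)
    {
        const int c0 = std::max(0, c - size / 2);
        const int c1 = std::min(channels, c + size / 2 + 1);
        float sum = 0.f;
        for (int cc = c0; cc < c1; ++cc)
            sum += bottom[cc] * bottom[cc];
        top[c] = bottom[c] * std::pow(k + (alpha / size) * sum, -beta);
    }
    return top;
}
```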
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include <string>
#include <vector>
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "opencl_kernels_dnn.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNPool<Dtype>::OCL4DNNPool(OCL4DNNPoolConfig config)
{
int dims = config.in_shape.size();
int spatial_dims = 2;
batch_size_ = config.in_shape[0];
channels_ = config.channels;
pool_method_ = config.pool_method;
for (int i = 0; i < spatial_dims; ++i)
{
kernel_shape_.push_back(i == 0 ? config.kernel.height : config.kernel.width);
pad_.push_back(i == 0 ? config.pad.height : config.pad.width);
stride_.push_back(i == 0 ? config.stride.height : config.stride.width);
im_in_shape_.push_back(config.in_shape[dims - spatial_dims + i]);
im_out_shape_.push_back(config.out_shape[dims - spatial_dims + i]);
}
kernel_h_ = kernel_shape_[0];
kernel_w_ = kernel_shape_[1];
stride_h_ = stride_[0];
stride_w_ = stride_[1];
pad_h_ = pad_[0];
pad_w_ = pad_[1];
height_ = im_in_shape_[0];
width_ = im_in_shape_[1];
pooled_height_ = im_out_shape_[0];
pooled_width_ = im_out_shape_[1];
count_ = 1;
for (int i = 0; i < config.out_shape.size(); ++i)
{
count_ *= config.out_shape[i];
}
}
template<typename Dtype>
OCL4DNNPool<Dtype>::~OCL4DNNPool()
{
mask_idx_.release();
}
template<typename Dtype>
bool OCL4DNNPool<Dtype>::Forward(const UMat& bottom,
UMat& top,
UMat& top_mask)
{
bool ret = true;
ocl::Queue queue = ocl::Queue::getDefault();
size_t global[] = { 128 * 128 };
size_t local[] = { 128 };
cl_uint argIdx = 0;
// support 2D case
switch (pool_method_)
{
case LIBDNN_POOLING_METHOD_MAX:
{
if (top_mask.empty() && mask_idx_.empty())
{
mask_idx_.create(1, count_, CV_32FC1);
}
ocl::Kernel oclk_max_pool_forward(CL_KERNEL_SELECT("max_pool_forward"),
cv::ocl::dnn::ocl4dnn_pooling_oclsrc);
if (oclk_max_pool_forward.empty())
return false;
argIdx = 0;
oclk_max_pool_forward.set(argIdx++, count_);
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_max_pool_forward.set(argIdx++, batch_size_);
oclk_max_pool_forward.set(argIdx++, channels_);
oclk_max_pool_forward.set(argIdx++, height_);
oclk_max_pool_forward.set(argIdx++, width_);
oclk_max_pool_forward.set(argIdx++, pooled_height_);
oclk_max_pool_forward.set(argIdx++, pooled_width_);
oclk_max_pool_forward.set(argIdx++, kernel_h_);
oclk_max_pool_forward.set(argIdx++, kernel_w_);
oclk_max_pool_forward.set(argIdx++, stride_h_);
oclk_max_pool_forward.set(argIdx++, stride_w_);
oclk_max_pool_forward.set(argIdx++, pad_h_);
oclk_max_pool_forward.set(argIdx++, pad_w_);
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
oclk_max_pool_forward.set(argIdx++, mask_idx_.empty() ? 0 : 1);
if (mask_idx_.empty())
oclk_max_pool_forward.set(argIdx++, (void *)NULL);
else
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(mask_idx_));
oclk_max_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top_mask));
ret = oclk_max_pool_forward.run(1, global, local, false);
}
break;
case LIBDNN_POOLING_METHOD_AVE:
{
ocl::Kernel oclk_ave_pool_forward(CL_KERNEL_SELECT("ave_pool_forward"),
cv::ocl::dnn::ocl4dnn_pooling_oclsrc);
if (oclk_ave_pool_forward.empty())
return false;
argIdx = 0;
oclk_ave_pool_forward.set(argIdx++, count_);
oclk_ave_pool_forward.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_ave_pool_forward.set(argIdx++, batch_size_);
oclk_ave_pool_forward.set(argIdx++, channels_);
oclk_ave_pool_forward.set(argIdx++, height_);
oclk_ave_pool_forward.set(argIdx++, width_);
oclk_ave_pool_forward.set(argIdx++, pooled_height_);
oclk_ave_pool_forward.set(argIdx++, pooled_width_);
oclk_ave_pool_forward.set(argIdx++, kernel_h_);
oclk_ave_pool_forward.set(argIdx++, kernel_w_);
oclk_ave_pool_forward.set(argIdx++, stride_h_);
oclk_ave_pool_forward.set(argIdx++, stride_w_);
oclk_ave_pool_forward.set(argIdx++, pad_h_);
oclk_ave_pool_forward.set(argIdx++, pad_w_);
oclk_ave_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
ret = oclk_ave_pool_forward.run(1, global, local, false);
}
break;
case LIBDNN_POOLING_METHOD_STO:
{
ocl::Kernel oclk_sto_pool_forward(CL_KERNEL_SELECT("sto_pool_forward_test"),
cv::ocl::dnn::ocl4dnn_pooling_oclsrc);
if (oclk_sto_pool_forward.empty())
return false;
argIdx = 0;
oclk_sto_pool_forward.set(argIdx++, count_);
oclk_sto_pool_forward.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_sto_pool_forward.set(argIdx++, batch_size_);
oclk_sto_pool_forward.set(argIdx++, channels_);
oclk_sto_pool_forward.set(argIdx++, height_);
oclk_sto_pool_forward.set(argIdx++, width_);
oclk_sto_pool_forward.set(argIdx++, pooled_height_);
oclk_sto_pool_forward.set(argIdx++, pooled_width_);
oclk_sto_pool_forward.set(argIdx++, kernel_h_);
oclk_sto_pool_forward.set(argIdx++, kernel_w_);
oclk_sto_pool_forward.set(argIdx++, stride_h_);
oclk_sto_pool_forward.set(argIdx++, stride_w_);
oclk_sto_pool_forward.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
ret = oclk_sto_pool_forward.run(1, global, local, false);
}
break;
default:
{
ret = false;
LOG(FATAL) << "Unknown pooling method.";
}
}
return ret;
}
template class OCL4DNNPool<float>;
} // namespace ocl4dnn
}
}
#endif // HAVE_OPENCL
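All three pooling kernels share the same window arithmetic: the window start may fall before the image because of padding and is clipped to the valid range. A scalar sketch of one max-pooled output value, mirroring the index math above (`maxPoolAt` is an illustrative helper, not part of the patch):

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

// Reference for one output value of max pooling on a single (height x width)
// plane: (ph, pw) is the pooled output coordinate. The window
// [ph*stride - pad, ph*stride - pad + kernel) is clipped to the image.
float maxPoolAt(const std::vector<float>& plane, int height, int width,
                int ph, int pw, int kernel_h, int kernel_w,
                int stride_h, int stride_w, int pad_h, int pad_w)
{
    const int hstart = std::max(ph * stride_h - pad_h, 0);
    const int wstart = std::max(pw * stride_w - pad_w, 0);
    const int hend = std::min(ph * stride_h - pad_h + kernel_h, height);
    const int wend = std::min(pw * stride_w - pad_w + kernel_w, width);
    float maxval = -std::numeric_limits<float>::infinity();
    for (int h = hstart; h < hend; ++h)
        for (int w = wstart; w < wend; ++w)
            maxval = std::max(maxval, plane[h * width + w]);
    return maxval;
}
```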
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "../../precomp.hpp"
#include <vector>
#include "common.hpp"
#include "ocl4dnn.hpp"
#include "opencl_kernels_dnn.hpp"
#ifdef HAVE_OPENCL
namespace cv { namespace dnn { namespace ocl4dnn {
template<typename Dtype>
OCL4DNNSoftmax<Dtype>::OCL4DNNSoftmax(OCL4DNNSoftmaxConfig config)
{
softmax_axis_ = config.axis;
channels_ = config.channels;
inner_num_ = 1;
outer_num_ = 1;
count_ = 1;
int32_t scale_sz = 1;
for (int32_t i = softmax_axis_ + 1; i < config.in_shape.size(); i++)
inner_num_ *= config.in_shape[i];
use_slm_ = (config.in_shape[softmax_axis_] * inner_num_ + inner_num_ * 17) <= 8192;
for (int32_t i = 0; i < softmax_axis_; i++)
outer_num_ *= config.in_shape[i];
count_ = inner_num_ + outer_num_;
std::vector<int32_t> scale_dims = config.in_shape;
scale_dims[softmax_axis_] = use_slm_ ? 1 : 17;
for (int32_t i = 0; i < scale_dims.size(); i++)
scale_sz *= scale_dims[i];
scale_data_.create(1, scale_sz, CV_32FC1);
}
template<typename Dtype>
OCL4DNNSoftmax<Dtype>::~OCL4DNNSoftmax()
{
scale_data_.release();
}
template<typename Dtype>
bool OCL4DNNSoftmax<Dtype>::Forward(const UMat& bottom, UMat& top)
{
bool ret = false;
ocl::Queue queue = ocl::Queue::getDefault();
bool intel_subgroup = ocl::Device::getDefault().intelSubgroupsSupport();
if (intel_subgroup && inner_num_ < 128)
{
String opts = clOptionSupport("-cl-no-subgroup-ifp") ? " -cl-no-subgroup-ifp " : "";
String kname;
ocl::Kernel oclk_softmax_forward_kernel;
if (use_slm_)
kname = CL_KERNEL_SELECT("softmax_forward_slm");
else
kname = CL_KERNEL_SELECT("softmax_forward");
if (!oclk_softmax_forward_kernel.create(kname.c_str(), ocl::dnn::softmax_loss_oclsrc, opts))
return false;
size_t global_size[] = { 256, (size_t)outer_num_, 1 };
size_t local_size[] = { 256, 1, 1 };
cl_uint argIdx = 0;
if (use_slm_)
{
oclk_softmax_forward_kernel.set(argIdx++, outer_num_);
oclk_softmax_forward_kernel.set(argIdx++, channels_);
oclk_softmax_forward_kernel.set(argIdx++, inner_num_);
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(scale_data_));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
oclk_softmax_forward_kernel.set(argIdx++, NULL, channels_ * inner_num_* sizeof(Dtype));
oclk_softmax_forward_kernel.set(argIdx++, NULL, inner_num_* sizeof(Dtype));
oclk_softmax_forward_kernel.set(argIdx++, NULL, 16 * inner_num_* sizeof(Dtype));
}
else
{
oclk_softmax_forward_kernel.set(argIdx++, outer_num_);
oclk_softmax_forward_kernel.set(argIdx++, channels_);
oclk_softmax_forward_kernel.set(argIdx++, inner_num_);
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(scale_data_));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrReadOnly(bottom));
oclk_softmax_forward_kernel.set(argIdx++, ocl::KernelArg::PtrWriteOnly(top));
}
ret = oclk_softmax_forward_kernel.run(3, global_size, local_size, false);
}
return ret;
}
template class OCL4DNNSoftmax<float>;
} // namespace ocl4dnn
}
}
#endif // HAVE_OPENCL
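The softmax class flattens the blob into `outer_num_` x `channels_` x `inner_num_` around the softmax axis, so the element at (outer, c, inner) lives at flat index `(outer * channels + c) * inner_num + inner`. A scalar sketch under that layout (`softmaxChannels` is a hypothetical reference helper; the per-position max subtraction shown here is the standard stabilization trick, for which the GPU side keeps the `scale_data_` scratch buffer):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Reference softmax over the channel axis using the outer/inner
// decomposition described above.
std::vector<float> softmaxChannels(const std::vector<float>& bottom,
                                   int outer_num, int channels, int inner_num)
{
    std::vector<float> top(bottom.size());
    for (int o = 0; o < outer_num; ++o)
        for (int i = 0; i < inner_num; ++i)
        {
            // Subtract the channel max first for numerical stability.
            float maxv = bottom[(o * channels) * inner_num + i];
            for (int c = 1; c < channels; ++c)
                maxv = std::max(maxv, bottom[(o * channels + c) * inner_num + i]);
            float sum = 0.f;
            for (int c = 0; c < channels; ++c)
                sum += std::exp(bottom[(o * channels + c) * inner_num + i] - maxv);
            for (int c = 0; c < channels; ++c)
                top[(o * channels + c) * inner_num + i] =
                    std::exp(bottom[(o * channels + c) * inner_num + i] - maxv) / sum;
        }
    return top;
}
```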
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void ReLUForward(const int count, __global const T* in, __global T* out
#ifndef RELU_NO_SLOPE
, T negative_slope
......
__kernel void batchnorm(__global const T *src, int src_offset,
__global const float *meanMat,
float varMeanScale,
__global const float *invStdMat,
__global const float *weight,
__global const float *bias,
int hasWeight, int hasBias,
int width, int height, int channel,
__global T *dst, int dst_offset)
{
int x = get_global_id(0);
int y = get_global_id(1);
int c = get_global_id(2);
if (x >= width || y >= height || c >= channel)
return;
float mean = meanMat[c] * varMeanScale;
float invstd = invStdMat[c];
float w = hasWeight ? weight[c] : 1;
float b = hasBias ? bias[c] : 0;
int index = y * width + x + c * width * height;
T val = (src[index + src_offset] - mean) * w * invstd + b;
dst[index + dst_offset] = val;
}
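The batchnorm kernel above applies one affine transform per channel. Its per-element formula can be written out as a scalar sketch (`batchnormAt` is an illustrative helper, not part of the patch; `varMeanScale` rescales the stored mean, and weight/bias default to 1 and 0 when absent):

```cpp
#include <cassert>
#include <cmath>

// Scalar reference for one element of the batchnorm kernel:
// dst = (src - mean[c] * varMeanScale) * weight[c] * invStd[c] + bias[c]
float batchnormAt(float src, float mean, float varMeanScale,
                  float invStd, float weight, float bias)
{
    return (src - mean * varMeanScale) * weight * invStd + bias;
}
```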
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void null_kernel_float(float arg) {
float out = arg;
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (c) 2016-2017 Fabian David Tschopp, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void concat(const int nthreads,
__global const Dtype* in_data,
const int num_concats,
const int concat_size,
const int top_concat_axis,
const int bottom_concat_axis,
const int offset_concat_axis,
__global Dtype* out_data) {
for (int index = get_global_id(0); index < nthreads;
index += get_global_size(0)) {
const int total_concat_size = concat_size * bottom_concat_axis;
const int concat_num = index / total_concat_size;
const int concat_index = index % total_concat_size;
const int top_index = concat_index
+ (concat_num * top_concat_axis + offset_concat_axis) * concat_size;
out_data[top_index] = in_data[index];
}
}
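The concat kernel copies one input blob into its slice of the output along the concat axis; `offset_concat_axis` is the running offset of this input along that axis. A host-side mirror of the same index remapping (`concatCopy` is a hypothetical helper, not part of the patch):

```cpp
#include <cassert>
#include <vector>

// Copy one input blob into its slice of the concatenated output,
// using the same index math as the kernel above.
void concatCopy(const std::vector<float>& in, std::vector<float>& out,
                int concat_size, int top_concat_axis,
                int bottom_concat_axis, int offset_concat_axis)
{
    const int total_concat_size = concat_size * bottom_concat_axis;
    for (int index = 0; index < (int)in.size(); ++index)
    {
        const int concat_num = index / total_concat_size;
        const int concat_index = index % total_concat_size;
        const int top_index = concat_index
            + (concat_num * top_concat_axis + offset_concat_axis) * concat_size;
        out[top_index] = in[index];
    }
}
```

For example, concatenating two single-channel inputs of spatial size 2 along the channel axis places the first input at channel offset 0 and the second at offset 1.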
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#define CONCAT(A,B) A##_##B
#define TEMPLATE(name,type) CONCAT(name,type)
#define Dtype float
__kernel void TEMPLATE(copyWeightsSwizzled, Dtype)
(__global Dtype* weightIn,
__global Dtype* weightOut,
const int kernel_w,
const int kernel_h,
const int channels,
const int outputs,
const int swizzleFactor) {
unsigned int sX = get_global_id(0);
//Output location
int outputSublayer = channels / swizzleFactor;
int outputSublayerIndex = channels % swizzleFactor;
//Original location
int filter = sX / (kernel_w*kernel_h*channels);
int kernel_X = sX % kernel_w;
int kernel_Y = (sX / kernel_w) % kernel_h;
int kernel_C = (sX / (kernel_w * kernel_h)) % channels;
int FP = filter / swizzleFactor;
int F1 = filter % swizzleFactor;
weightOut[FP*(kernel_w*kernel_h*channels*swizzleFactor) + kernel_C*(kernel_w*kernel_h*swizzleFactor) + kernel_Y*(kernel_w*swizzleFactor) + kernel_X*swizzleFactor + F1]
= weightIn[filter*(kernel_w*kernel_h*channels) + kernel_C*(kernel_w*kernel_h) + kernel_Y*kernel_w + kernel_X];
}
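`copyWeightsSwizzled` interleaves consecutive output filters in groups of `swizzleFactor` so that a subgroup can read adjacent filters contiguously. The destination index it computes can be checked with a host-side sketch (`swizzledWeightIndex` is an illustrative helper, not part of the patch):

```cpp
#include <cassert>

// Destination index for weight element (filter, c, y, x) in the swizzled
// layout: filters are split into blocks of swizzleFactor, and the position
// within a block becomes the fastest-varying dimension.
int swizzledWeightIndex(int filter, int c, int y, int x,
                        int kernel_w, int kernel_h, int channels,
                        int swizzleFactor)
{
    const int FP = filter / swizzleFactor;
    const int F1 = filter % swizzleFactor;
    return FP * (kernel_w * kernel_h * channels * swizzleFactor)
         + c * (kernel_w * kernel_h * swizzleFactor)
         + y * (kernel_w * swizzleFactor)
         + x * swizzleFactor
         + F1;
}
```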
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
__kernel void dummy_kernel()
{
}
@@ -40,6 +40,8 @@
//M*/
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/core/opencl/ocl_defs.hpp>
#include <opencv2/core/utils/trace.hpp>
#include <opencv2/core/softfloat.hpp> // int32_t (MSVS 2010-2013)
#include "cvconfig.h"
......