The class \texttt{SURF\_GPU} implements the Speeded Up Robust Features descriptor. A fast multi-scale Hessian keypoint detector can be used to find the keypoints (which is the default option), but the descriptors can also be computed for user-specified keypoints. Only 8-bit grayscale images are supported.
The class \texttt{SURF\_GPU} can store results in GPU and CPU memory and provides static functions to convert results between the CPU and GPU representations (\texttt{uploadKeypoints}, \texttt{downloadKeypoints}, \texttt{downloadDescriptors}). CPU results have the same format as \hyperref[cv.class.SURF]{cv::SURF} results. GPU results are stored in \texttt{GpuMat}: the \texttt{keypoints} matrix is a one-row matrix of type \texttt{CV\_32FC6} containing 6 float values per feature (\texttt{x, y, size, response, angle, octave}); the \texttt{descriptors} matrix is an \texttt{nFeatures} x \texttt{descriptorSize} matrix of type \texttt{CV\_32FC1}.
The class \texttt{SURF\_GPU} uses some internal buffers and provides access to them. All buffers can be safely released between function calls.
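A minimal usage sketch (using \texttt{opencv2/gpu/gpu.hpp}); the \texttt{operator()} overload that computes both keypoints and descriptors and the download target types are assumed here for illustration:
\begin{lstlisting}
// Upload an 8-bit grayscale image and run SURF on the GPU.
cv::gpu::GpuMat img(cv::imread("image.png", 0)); // hypothetical input file

cv::gpu::SURF_GPU surf;
cv::gpu::GpuMat keypointsGPU, descriptorsGPU;
surf(img, cv::gpu::GpuMat(), keypointsGPU, descriptorsGPU);

// Convert the GPU results to CPU containers.
std::vector<cv::KeyPoint> keypoints;
std::vector<float> descriptors; // nFeatures * descriptorSize floats
surf.downloadKeypoints(keypointsGPU, keypoints);
surf.downloadDescriptors(descriptorsGPU, descriptors);
\end{lstlisting}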
\cvclass{gpu::BruteForceMatcher\_GPU}
Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches between descriptor sets.
\begin{lstlisting}
template<class Distance>
class BruteForceMatcher_GPU
{
public:
    // Add descriptors to the train descriptor collection.
    ...
};
\end{lstlisting}
The class \texttt{BruteForceMatcher\_GPU} has an interface similar to the class \hyperref[cv.class.DescriptorMatcher]{cv::DescriptorMatcher}. It has two groups of match methods: for matching descriptors of one image against another image, or against an image set. In addition, every function has two variants: one stores the results in GPU memory, the other in CPU memory. \texttt{BruteForceMatcher\_GPU} is templated on the distance metric, like \hyperref[cv.class.BruteForceMatcher]{cv::BruteForceMatcher}, but supports only the \texttt{L1} and \texttt{L2} distance types.
\cvarg{trainDescs}{Training set of descriptors. It is not added to the train descriptors collection stored in the class object.}
\cvarg{trainIdx}{One-row \texttt{CV\_32SC1} matrix that will contain the best train index for each query. If a query descriptor is masked out in \texttt{mask}, the corresponding element is set to -1.}
\cvarg{distance}{One-row \texttt{CV\_32FC1} matrix that will contain the best distance for each query. If a query descriptor is masked out in \texttt{mask}, the corresponding element is set to \texttt{FLT\_MAX}.}
\cvarg{mask}{Mask specifying permissible matches between the input query and train matrices of descriptors.}
\cvarg{trainCollection}{\texttt{GpuMat} containing the train collection. It can be obtained from the train descriptors collection set via the \texttt{add} method using \hyperref[cppfunc.gpu.BruteForceMatcher.makeGpuCollection]{makeGpuCollection}, or it can contain a user-defined collection. It must be a one-row matrix in which each element is a \texttt{DevMem2D} pointing to one train descriptors matrix (the matrix must have \texttt{CV\_32FC1} type).}
\cvarg{trainIdx}{One-row \texttt{CV\_32SC1} matrix that will contain the best train index for each query. If a query descriptor is masked out in \texttt{mask}, the corresponding element is set to -1.}
\cvarg{imgIdx}{One-row \texttt{CV\_32SC1} matrix that will contain the train image index for each query. If a query descriptor is masked out in \texttt{mask}, the corresponding element is set to -1.}
\cvarg{distance}{One-row \texttt{CV\_32FC1} matrix that will contain the best distance for each query. If a query descriptor is masked out in \texttt{mask}, the corresponding element is set to \texttt{FLT\_MAX}.}
\cvarg{maskCollection}{\texttt{GpuMat} containing the set of masks. It can be obtained from an \texttt{std::vector<GpuMat>} using \hyperref[cppfunc.gpu.BruteForceMatcher.makeGpuCollection]{makeGpuCollection}, or it can contain a user-defined mask set. It must be either an empty matrix or a one-row matrix in which each element is a \texttt{PtrStep} pointing to one mask (the mask must have \texttt{CV\_8UC1} type).}
Make a GPU collection of train descriptors and masks in a format suitable for the \hyperref[cppfunc.gpu.BruteForceMatcher.matchCollection]{matchCollection} function.
Download the \texttt{trainIdx}, \texttt{imgIdx} and \texttt{distance} matrices obtained by \hyperref[cppfunc.gpu.BruteForceMatcher.matchSingle]{matchSingle} or \hyperref[cppfunc.gpu.BruteForceMatcher.matchCollection]{matchCollection} to a CPU vector of \hyperref[cv.class.DMatch]{cv::DMatch}.
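For illustration, a minimal sketch of the CPU-result path for matching one query image against one train image; the \texttt{match} overload that fills an \texttt{std::vector<DMatch>} directly is assumed from the description above:
\begin{lstlisting}
// queryDescsGPU and trainDescsGPU are CV_32FC1 GpuMat descriptor
// matrices, e.g. produced by SURF_GPU.
cv::gpu::BruteForceMatcher_GPU< cv::L2<float> > matcher;

std::vector<cv::DMatch> matches;
matcher.match(queryDescsGPU, trainDescsGPU, matches); // results on the CPU
\end{lstlisting}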
Find the k best matches for each descriptor from a query set with train descriptors. The found k matches (or fewer, if it is not possible) are returned in increasing order of distance. This function is equivalent to \cvCppCross{DescriptorMatcher::knnMatch}.
Find the k best matches for each descriptor from a query set with train descriptors. The found k matches (or fewer, if it is not possible) are returned in increasing order of distance. The results are stored in GPU memory.
\cvarg{trainDescs}{Training set of descriptors. It is not added to the train descriptors collection stored in the class object.}
\cvarg{trainIdx}{Matrix of size \texttt{nQuery} x \texttt{k} and type \texttt{CV\_32SC1}. \texttt{trainIdx.at<int>(queryIdx, i)} will contain the index of the i-th best train. If a query descriptor is masked out in \texttt{mask}, the corresponding elements are set to -1.}
\cvarg{distance}{Matrix of size \texttt{nQuery} x \texttt{k} and type \texttt{CV\_32FC1}. It will contain the distance between each query and its i-th best train. If a query descriptor is masked out in \texttt{mask}, the corresponding elements are set to \texttt{FLT\_MAX}.}
\cvarg{allDist}{Buffer for storing all distances between query and train descriptors. It has size \texttt{nQuery} x \texttt{nTrain} and type \texttt{CV\_32F}. \texttt{allDist.at<float>(queryIdx, trainIdx)} will contain \texttt{FLT\_MAX} if \texttt{trainIdx} is one of the k best; otherwise it will contain the distance between the \texttt{queryIdx} and \texttt{trainIdx} descriptors.}
\cvarg{k}{Number of best matches to be found per query descriptor (or fewer, if it is not possible).}
\cvarg{mask}{Mask specifying permissible matches between the input query and train matrices of descriptors.}
Download the \texttt{trainIdx} and \texttt{distance} matrices obtained by \hyperref[cppfunc.gpu.BruteForceMatcher.knnMatchSingle]{knnMatch} to a CPU vector of \hyperref[cv.class.DMatch]{cv::DMatch}. If \texttt{compactResult} is true, the \texttt{matches} vector will not contain matches for fully masked-out query descriptors.
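A sketch of the CPU-result k-NN path combined with a simple ratio test; the vector-of-vectors overload, mirroring \cvCppCross{DescriptorMatcher::knnMatch}, is assumed:
\begin{lstlisting}
cv::gpu::BruteForceMatcher_GPU< cv::L2<float> > matcher;

std::vector< std::vector<cv::DMatch> > knnMatches;
matcher.knnMatch(queryDescsGPU, trainDescsGPU, knnMatches, 2);

// Keep a match only if its distance is clearly smaller than the second best.
std::vector<cv::DMatch> good;
for (size_t i = 0; i < knnMatches.size(); ++i)
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance)
        good.push_back(knnMatches[i][0]);
\end{lstlisting}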
Find the best matches for each query descriptor that have a distance less than the given threshold. Found matches are returned in increasing order of distance. This function is equivalent to \cvCppCross{DescriptorMatcher::radiusMatch}. Works only on devices with compute capability \texttt{>=} 1.1.
Find the best matches for each query descriptor that have a distance less than the given threshold. The results are stored in GPU memory and are not sorted by distance. Works only on devices with compute capability \texttt{>=} 1.1.
\cvarg{trainDescs}{Training set of descriptors. It is not added to the train descriptors collection stored in the class object.}
\cvarg{trainIdx}{\texttt{trainIdx.at<int>(queryIdx, i)} will contain the i-th train index \newline\texttt{(i < min(nMatches.at<unsigned int>(0, queryIdx), trainIdx.cols))}. If \texttt{trainIdx} is empty, it will be created with size \texttt{nQuery} x \texttt{nTrain}; alternatively it can be allocated by the user (it must have \texttt{nQuery} rows and \texttt{CV\_32SC1} type). The number of columns can be less than \texttt{nTrain}, but then the matcher may not find all matches because it does not have enough memory to store the results.}
\cvarg{nMatches}{\texttt{nMatches.at<unsigned int>(0, queryIdx)} will contain the number of matches for \texttt{queryIdx}. Be careful: \texttt{nMatches} can be greater than \texttt{trainIdx.cols}, which means that the matcher did not find all matches because it did not have enough memory to store them.}
\cvarg{distance}{\texttt{distance.at<float>(queryIdx, i)} will contain the i-th distance \newline\texttt{(i < min(nMatches.at<unsigned int>(0, queryIdx), trainIdx.cols))}. If \texttt{trainIdx} is empty, \texttt{distance} will be created with size \texttt{nQuery} x \texttt{nTrain}; otherwise it must also be allocated by the user (it must have the same size as \texttt{trainIdx} and \texttt{CV\_32FC1} type).}
\cvarg{maxDistance}{Distance threshold for the found matches.}
\cvarg{mask}{Mask specifying permissible matches between the input query and train matrices of descriptors.}
Download the \texttt{trainIdx}, \texttt{nMatches} and \texttt{distance} matrices obtained by \hyperref[cppfunc.gpu.BruteForceMatcher.radiusMatchSingle]{radiusMatch} to a CPU vector of \hyperref[cv.class.DMatch]{cv::DMatch}. If \texttt{compactResult} is true, the \texttt{matches} vector will not contain matches for fully masked-out query descriptors.
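A sketch of the CPU-result radius path; the vector-of-vectors overload is assumed, and the threshold value is purely illustrative:
\begin{lstlisting}
cv::gpu::BruteForceMatcher_GPU< cv::L2<float> > matcher;

std::vector< std::vector<cv::DMatch> > matches;
float maxDistance = 0.25f; // problem-specific threshold
matcher.radiusMatch(queryDescsGPU, trainDescsGPU, matches, maxDistance);
// Remember: requires a device with compute capability >= 1.1.
\end{lstlisting}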
The base class for linear or non-linear filters that process rows of 2D arrays. Such filters are used for the ``horizontal'' filtering parts in separable filters.
The base class for linear or non-linear filters that process columns of 2D arrays. Such filters are used for the ``vertical'' filtering parts in separable filters.
The class can be used to apply an arbitrary filtering operation to an image. It contains all the necessary intermediate buffers. Pointers to the initialized \texttt{FilterEngine\_GPU} instances are returned by various \texttt{create*Filter\_GPU} functions, see below, and they are used inside high-level functions such as \cvCppCross{gpu::filter2D}, \cvCppCross{gpu::erode}, \cvCppCross{gpu::Sobel} etc.
By using \texttt{FilterEngine\_GPU} instead of the standalone functions you can avoid unnecessary memory allocation for intermediate buffers and get much better performance:
\begin{lstlisting}
while (...)
{
    cv::gpu::GpuMat src = getImg();
    cv::gpu::GpuMat dst;
    // Buffers are allocated and released at each iteration
    cv::gpu::boxFilter(src, dst, -1, cv::Size(3, 3));
}
\end{lstlisting}
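The buffers can instead be allocated once by creating a filter engine up front. A sketch using \hyperref[cppfunc.gpu.createBoxFilter]{createBoxFilter\_GPU} (referenced below); the exact factory signature and the engine's \texttt{apply} arguments are assumed here:
\begin{lstlisting}
// Allocate buffers only once.
cv::Ptr<cv::gpu::FilterEngine_GPU> filter =
    cv::gpu::createBoxFilter_GPU(CV_8UC4, CV_8UC4, cv::Size(3, 3));
while (...)
{
    cv::gpu::GpuMat src = getImg();
    cv::gpu::GpuMat dst;
    filter->apply(src, dst, cv::Rect(0, 0, src.cols, src.rows));
}
// Release buffers only once.
filter.release();
\end{lstlisting}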
\texttt{FilterEngine\_GPU} can process a rectangular sub-region of an image. By default, if \texttt{roi == Rect(0,0,-1,-1)}, \texttt{FilterEngine\_GPU} processes the inner region of the image (\texttt{Rect(anchor.x, anchor.y, src\_size.width - ksize.width, src\_size.height - ksize.height)}), because some filters do not check indices outside the image, for better performance. See below which filters support processing the whole image and which do not, as well as the image type limitations.
The GPU filters do not support the in-place mode.
Create the non-separable filter engine with the specified filter.
\cvdefCpp{
Ptr<FilterEngine\_GPU> createFilter2D\_GPU(\par const Ptr<BaseFilter\_GPU>\& filter2D, \par int srcType, int dstType);
}
Usually this function is used inside high-level functions, like \hyperref[cppfunc.gpu.createLinearFilter]{createLinearFilter\_GPU}, \hyperref[cppfunc.gpu.createBoxFilter]{createBoxFilter\_GPU}.
Create a normalized 2D box filter. Supports \texttt{CV\_8UC1} and \texttt{CV\_8UC4} source types; the dst type must be the same as the source type. This filter does not check indices outside the image.
\cvdefCpp{
Ptr<BaseFilter\_GPU> getBoxFilter\_GPU(int srcType, int dstType, \par const Size\& ksize, \par Point anchor = Point(-1, -1));
}
\cvCppFunc{gpu::boxFilter}
Smooths the image using the normalized box filter. Supports \texttt{CV\_8UC1} and \texttt{CV\_8UC4} source types; the dst type must be the same as the source type.
\cvdefCpp{
void boxFilter(const GpuMat\& src, GpuMat\& dst, int ddepth, Size ksize, \par Point anchor = Point(-1,-1));
}
See \cvCppCross{boxFilter}, \hyperref[cppfunc.gpu.createBoxFilter]{createBoxFilter\_GPU}.
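A minimal usage sketch of the declaration above; \texttt{ddepth = -1} keeps the source depth:
\begin{lstlisting}
// src is a CV_8UC1 or CV_8UC4 GpuMat already on the device.
cv::gpu::GpuMat dst;
cv::gpu::boxFilter(src, dst, -1, cv::Size(5, 5));
\end{lstlisting}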
Create a non-separable linear filter. Supports \texttt{CV\_8UC1} and \texttt{CV\_8UC4} source types. This filter does not check indices outside the image.
\cvdefCpp{
Ptr<BaseFilter\_GPU> getLinearFilter\_GPU(int srcType, int dstType, \par const Mat\& kernel, const Size\& ksize, \par Point anchor = Point(-1, -1));
}
\cvCppFunc{gpu::filter2D}
Applies a non-separable 2D linear filter to the image. Supports \texttt{CV\_8UC1} and \texttt{CV\_8UC4} source types.
\cvdefCpp{
void filter2D(const GpuMat\& src, GpuMat\& dst, int ddepth, \par const Mat\& kernel, \par Point anchor=Point(-1,-1));
}
See \cvCppCross{filter2D}, \hyperref[cppfunc.gpu.createLinearFilter]{createLinearFilter\_GPU}.
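For example, a 3x3 sharpening kernel applied with the declaration above (the kernel values are illustrative):
\begin{lstlisting}
cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
     0, -1,  0,
    -1,  5, -1,
     0, -1,  0);

cv::gpu::GpuMat dst;
cv::gpu::filter2D(src, dst, -1, kernel); // src is CV_8UC1 or CV_8UC4
\end{lstlisting}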
\cvCppFunc{gpu::Laplacian}
Applies the Laplacian operator to the image. Supports \texttt{CV\_8UC1} and \texttt{CV\_8UC4} source types. Supports only \texttt{ksize} = 1 and \texttt{ksize} = 3.
\cvdefCpp{
void Laplacian(const GpuMat\& src, GpuMat\& dst, int ddepth, \par int ksize = 1, double scale = 1);
}
See \cvCppCross{Laplacian}, \cvCppCross{gpu::filter2D}.
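A minimal usage sketch of the declaration above; \texttt{ddepth = -1} keeps the source depth:
\begin{lstlisting}
cv::gpu::GpuMat dst;
cv::gpu::Laplacian(src, dst, -1, 3); // 3x3 aperture
\end{lstlisting}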
Create the primitive row filter with the specified kernel.
\cvdefCpp{
Ptr<BaseRowFilter\_GPU> getLinearRowFilter\_GPU(int srcType, \par int bufType, const Mat\& rowKernel, int anchor = -1, \par int borderType = BORDER\_CONSTANT);
}
Supports only \texttt{CV\_8UC1}, \texttt{CV\_8UC4}, \texttt{CV\_16SC1}, \texttt{CV\_16SC2}, \texttt{CV\_32SC1} and \texttt{CV\_32FC1} source types. There are two versions of the algorithm: NPP and OpenCV. The NPP version is called when \texttt{srcType == CV\_8UC1} or \texttt{srcType == CV\_8UC4} and \texttt{bufType == srcType}; otherwise the OpenCV version is called. NPP supports only the \texttt{BORDER\_CONSTANT} border type and does not check indices outside the image. The OpenCV version supports only \texttt{CV\_32F} buffer depth and the \texttt{BORDER\_REFLECT101}, \texttt{BORDER\_REPLICATE} and \texttt{BORDER\_CONSTANT} border types, and it checks indices outside the image.
See also: \hyperref[cppfunc.gpu.getLinearColumnFilter]{getLinearColumnFilter\_GPU}, \cvCppCross{createSeparableLinearFilter}.
Create the primitive column filter with the specified kernel.
\cvdefCpp{
Ptr<BaseColumnFilter\_GPU> getLinearColumnFilter\_GPU(int bufType, \par int dstType, const Mat\& columnKernel, int anchor = -1, \par int borderType = BORDER\_CONSTANT);
}
Supports only \texttt{CV\_8UC1}, \texttt{CV\_8UC4}, \texttt{CV\_16SC1}, \texttt{CV\_16SC2}, \texttt{CV\_32SC1} and \texttt{CV\_32FC1} dst types. There are two versions of the algorithm: NPP and OpenCV. The NPP version is called when \texttt{dstType == CV\_8UC1} or \texttt{dstType == CV\_8UC4} and \texttt{bufType == dstType}; otherwise the OpenCV version is called. NPP supports only the \texttt{BORDER\_CONSTANT} border type and does not check indices outside the image. The OpenCV version supports only \texttt{CV\_32F} buffer depth and the \texttt{BORDER\_REFLECT101}, \texttt{BORDER\_REPLICATE} and \texttt{BORDER\_CONSTANT} border types, and it checks indices outside the image.
See also: \hyperref[cppfunc.gpu.getLinearRowFilter]{getLinearRowFilter\_GPU}, \cvCppCross{createSeparableLinearFilter}.
Create the separable linear filter engine.
\cvdefCpp{
Ptr<FilterEngine\_GPU> createSeparableLinearFilter\_GPU(int srcType, \par int dstType, const Mat\& rowKernel, const Mat\& columnKernel, \par const Point\& anchor = Point(-1,-1), \par int rowBorderType = BORDER\_DEFAULT, \par int columnBorderType = -1);
}
See \hyperref[cppfunc.gpu.getLinearRowFilter]{getLinearRowFilter\_GPU}, \hyperref[cppfunc.gpu.getLinearColumnFilter]{getLinearColumnFilter\_GPU}, \cvCppCross{createSeparableLinearFilter}.
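A sketch that builds a separable Gaussian smoothing engine from two 1D kernels; the \texttt{CV\_32F} kernel type matches the buffer depth limitation mentioned above, and the engine's \texttt{apply} call is assumed:
\begin{lstlisting}
// 7x7 Gaussian, sigma = 1.5, built from two 1D CV_32F kernels.
cv::Mat krow = cv::getGaussianKernel(7, 1.5, CV_32F);
cv::Mat kcol = cv::getGaussianKernel(7, 1.5, CV_32F);

cv::Ptr<cv::gpu::FilterEngine_GPU> gauss =
    cv::gpu::createSeparableLinearFilter_GPU(CV_8UC1, CV_8UC1, krow, kcol);

cv::gpu::GpuMat dst;
gauss->apply(src, dst); // src is a CV_8UC1 GpuMat
\end{lstlisting}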
\cvCppFunc{gpu::sepFilter2D}
Applies a separable 2D linear filter to the image.
\cvdefCpp{
void sepFilter2D(const GpuMat\& src, GpuMat\& dst, int ddepth, \par const Mat\& kernelX, const Mat\& kernelY, \par Point anchor = Point(-1,-1), \par int rowBorderType = BORDER\_DEFAULT, \par int columnBorderType = -1);
}
See \hyperref[cppfunc.gpu.createSeparableLinearFilter]{createSeparableLinearFilter\_GPU}, \cvCppCross{sepFilter2D}.
Create filter engine for the generalized Sobel operator.
\cvdefCpp{
Ptr<FilterEngine\_GPU> createDerivFilter\_GPU(int srcType, int dstType, \par int dx, int dy, int ksize, \par int rowBorderType = BORDER\_DEFAULT, \par int columnBorderType = -1);
}
See \hyperref[cppfunc.gpu.createSeparableLinearFilter]{createSeparableLinearFilter\_GPU}, \cvCppCross{createDerivFilter}.
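A sketch creating a derivative filter engine equivalent to a 3x3 Sobel along x; the \texttt{apply} call and the depth combination are assumed:
\begin{lstlisting}
cv::Ptr<cv::gpu::FilterEngine_GPU> deriv =
    cv::gpu::createDerivFilter_GPU(CV_8UC1, CV_32FC1, 1, 0, 3);

cv::gpu::GpuMat dx;
deriv->apply(src, dx); // src is a CV_8UC1 GpuMat
\end{lstlisting}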
\cvCppFunc{gpu::Sobel}
Applies the generalized Sobel operator to the image.
\cvdefCpp{
void Sobel(const GpuMat\& src, GpuMat\& dst, int ddepth, int dx, int dy, \par int ksize = 3, double scale = 1, \par int rowBorderType = BORDER\_DEFAULT, \par int columnBorderType = -1);
}
See \hyperref[cppfunc.gpu.createSeparableLinearFilter]{createSeparableLinearFilter\_GPU}, \cvCppCross{Sobel}.
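For example, computing the horizontal and vertical derivatives of an 8-bit image into floating-point destinations (the supported depth combinations follow the row/column filter limitations above):
\begin{lstlisting}
cv::gpu::GpuMat dx, dy;
cv::gpu::Sobel(src, dx, CV_32F, 1, 0, 3); // d/dx, 3x3 aperture
cv::gpu::Sobel(src, dy, CV_32F, 0, 1, 3); // d/dy, 3x3 aperture
\end{lstlisting}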
\cvCppFunc{gpu::Scharr}
Calculates the first x- or y- image derivative using the Scharr operator.
\cvdefCpp{
void Scharr(const GpuMat\& src, GpuMat\& dst, int ddepth, \par int dx, int dy, double scale = 1, \par int rowBorderType = BORDER\_DEFAULT, \par int columnBorderType = -1);
}
See \hyperref[cppfunc.gpu.createSeparableLinearFilter]{createSeparableLinearFilter\_GPU}, \cvCppCross{Scharr}.
Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images (represented as \cvCppCross{Mat}'s): for each pixel location $(x,y)$ in the source image, some (normally rectangular) neighborhood of it is considered and used to compute the response. In the case of a linear filter it is a weighted sum of pixel values; in the case of morphological operations it is the minimum or maximum, etc. The computed response is stored in the destination image at the same location $(x,y)$, so the output image has the same size as the input image. Normally, the functions support multi-channel arrays, in which case every channel is processed independently, and therefore the output image also has the same number of channels as the input one.