Commit d888b810 authored by Vladislav Vinogradov

fixed gpu docs (broken links, missing description, etc)

parent e7579b90
@@ -343,6 +343,8 @@ The class ``RotatedRect`` replaces the old ``CvBox2D`` and fully compatible with
TermCriteria
------------
.. c:type:: TermCriteria
Termination criteria for iterative algorithms ::
class TermCriteria
@@ -634,6 +636,8 @@ However, if the object is deallocated in a different way, then the specialized m
Mat
---
.. c:type:: Mat
OpenCV C++ n-dimensional dense array class. ::
class CV_EXPORTS Mat
@@ -3,35 +3,41 @@ Initialization and Information
.. highlight:: cpp
.. index:: gpu::getCudaEnabledDeviceCount
gpu::getCudaEnabledDeviceCount
----------------------------------
.. cpp:function:: int gpu::getCudaEnabledDeviceCount()
Returns the number of CUDA-enabled devices installed. It should be called before any other GPU function. If OpenCV is compiled without GPU support, this function returns 0.
.. index:: gpu::setDevice
gpu::setDevice
------------------
.. cpp:function:: void gpu::setDevice(int device)
Sets a device and initializes it for the current thread. The call can be omitted; in that case a default device is initialized on the first GPU usage.
:param device: Index of the GPU device in the system, starting with 0.
.. index:: gpu::getDevice
gpu::getDevice
------------------
.. cpp:function:: int gpu::getDevice()
Returns the current device index, which was set by :cpp:func:`gpu::setDevice` or initialized by default.
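As an illustrative sketch only (not part of the reference itself), the three calls above can be combined to pick a device before any other GPU work; error handling is omitted for brevity ::

    #include <iostream>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        int count = cv::gpu::getCudaEnabledDeviceCount();
        if (count == 0)
        {
            std::cout << "No CUDA-enabled devices (or OpenCV was built without GPU support)" << std::endl;
            return 0;
        }
        cv::gpu::setDevice(0);   // make device 0 current for this thread
        std::cout << "Using device " << cv::gpu::getDevice() << " of " << count << std::endl;
        return 0;
    }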
.. index:: gpu::GpuFeature
gpu::GpuFeature
---------------
@@ -48,17 +54,16 @@ GPU compute features. ::
};
.. index:: gpu::DeviceInfo
gpu::DeviceInfo
---------------
.. cpp:class:: gpu::DeviceInfo
This class provides functionality for querying the specified GPU properties. ::
class DeviceInfo
{
public:
DeviceInfo();
@@ -79,87 +84,104 @@ This class provides functionality for querying the specified GPU properties. ::
};
.. index:: gpu::DeviceInfo::DeviceInfo
gpu::DeviceInfo::DeviceInfo
-------------------------------
.. cpp:function:: gpu::DeviceInfo::DeviceInfo()
.. cpp:function:: gpu::DeviceInfo::DeviceInfo(int device_id)
Constructs a :cpp:class:`gpu::DeviceInfo` object for the specified device. If the ``device_id`` parameter is omitted, the object is constructed for the current device.
:param device_id: Index of the GPU device in the system, starting with 0.
.. index:: gpu::DeviceInfo::name
gpu::DeviceInfo::name
-------------------------
.. cpp:function:: string gpu::DeviceInfo::name()
Returns the device name.
.. index:: gpu::DeviceInfo::majorVersion
gpu::DeviceInfo::majorVersion
---------------------------------
.. cpp:function:: int gpu::DeviceInfo::majorVersion()
Returns the major compute capability version.
.. index:: gpu::DeviceInfo::minorVersion
gpu::DeviceInfo::minorVersion
---------------------------------
.. cpp:function:: int gpu::DeviceInfo::minorVersion()
Returns the minor compute capability version.
.. index:: gpu::DeviceInfo::multiProcessorCount
gpu::DeviceInfo::multiProcessorCount
----------------------------------------
.. cpp:function:: int gpu::DeviceInfo::multiProcessorCount()
Returns the number of streaming multiprocessors.
.. index:: gpu::DeviceInfo::freeMemory
gpu::DeviceInfo::freeMemory
-------------------------------
.. cpp:function:: size_t gpu::DeviceInfo::freeMemory()
Returns the amount of free memory in bytes.
.. index:: gpu::DeviceInfo::totalMemory
gpu::DeviceInfo::totalMemory
--------------------------------
.. cpp:function:: size_t gpu::DeviceInfo::totalMemory()
Returns the amount of total memory in bytes.
.. index:: gpu::DeviceInfo::supports
gpu::DeviceInfo::supports
-----------------------------
.. cpp:function:: bool gpu::DeviceInfo::supports(GpuFeature feature)
Returns true if the device has the given GPU feature, otherwise false.
:param feature: Feature to be checked. See :c:type:`gpu::GpuFeature`.
.. index:: gpu::DeviceInfo::isCompatible
gpu::DeviceInfo::isCompatible
---------------------------------
.. cpp:function:: bool gpu::DeviceInfo::isCompatible()
Returns true if the GPU module can be run on the specified device, otherwise false.
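For illustration, a small sketch (using only the accessors documented above) that prints the properties of every installed device ::

    #include <iostream>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        int count = cv::gpu::getCudaEnabledDeviceCount();
        for (int id = 0; id < count; ++id)
        {
            cv::gpu::DeviceInfo info(id);
            std::cout << info.name() << ": compute capability "
                      << info.majorVersion() << "." << info.minorVersion() << ", "
                      << info.multiProcessorCount() << " multiprocessors, "
                      << info.freeMemory() / (1024 * 1024) << " of "
                      << info.totalMemory() / (1024 * 1024) << " MB free, "
                      << (info.isCompatible() ? "compatible" : "NOT compatible")
                      << " with this GPU module build" << std::endl;
        }
        return 0;
    }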
.. index:: gpu::TargetArchs
gpu::TargetArchs
----------------
@@ -167,32 +189,110 @@ gpu::TargetArchs
This class provides functionality (as a set of static methods) for checking which NVIDIA card architectures the GPU module was built for.
The following method checks whether the module was built with support for the given feature:
.. cpp:function:: static bool gpu::TargetArchs::builtWith(GpuFeature feature)
:param feature: Feature to be checked. See :c:type:`gpu::GpuFeature`.
There is a set of methods for checking whether the module contains intermediate (PTX) or binary GPU code for the given architecture(s):
.. cpp:function:: static bool gpu::TargetArchs::has(int major, int minor)
.. cpp:function:: static bool gpu::TargetArchs::hasPtx(int major, int minor)
.. cpp:function:: static bool gpu::TargetArchs::hasBin(int major, int minor)
.. cpp:function:: static bool gpu::TargetArchs::hasEqualOrLessPtx(int major, int minor)
.. cpp:function:: static bool gpu::TargetArchs::hasEqualOrGreater(int major, int minor)
.. cpp:function:: static bool gpu::TargetArchs::hasEqualOrGreaterPtx(int major, int minor)
.. cpp:function:: static bool gpu::TargetArchs::hasEqualOrGreaterBin(int major, int minor)
:param major: Major compute capability version.
:param minor: Minor compute capability version.
According to the CUDA C Programming Guide Version 3.2: "PTX code produced for some specific compute capability can always be compiled to binary code of greater or equal compute capability".
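A possible usage sketch (an assumption about how these checks are typically combined, not a prescribed recipe): a device can run the module's code if there is an exact binary for its architecture, or PTX for the same or a lower architecture that the driver can JIT-compile ::

    #include <iostream>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::gpu::DeviceInfo info;               // current device
        int major = info.majorVersion();
        int minor = info.minorVersion();

        bool usable = cv::gpu::TargetArchs::hasBin(major, minor) ||
                      cv::gpu::TargetArchs::hasEqualOrLessPtx(major, minor);

        std::cout << "Device CC " << major << "." << minor
                  << (usable ? " is" : " is NOT")
                  << " covered by this GPU module build" << std::endl;
        return 0;
    }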
.. index:: gpu::MultiGpuManager
gpu::MultiGpuManager
--------------------
.. cpp:class:: gpu::MultiGpuManager
Provides functionality for working with many GPUs. ::
class MultiGpuManager
{
public:
MultiGpuManager();
~MultiGpuManager();
// Must be called before any other GPU calls
void init();
// Makes the given GPU active
void gpuOn(int gpu_id);
// Finishes the piece of work on the current GPU
void gpuOff();
static const int BAD_GPU_ID;
};
.. index:: gpu::MultiGpuManager::MultiGpuManager
gpu::MultiGpuManager::MultiGpuManager
----------------------------------------
.. cpp:function:: gpu::MultiGpuManager::MultiGpuManager()
Creates a multi-GPU manager, but doesn't initialize it.
.. index:: gpu::MultiGpuManager::~MultiGpuManager
gpu::MultiGpuManager::~MultiGpuManager
----------------------------------------
.. cpp:function:: gpu::MultiGpuManager::~MultiGpuManager()
Releases the multi-GPU manager.
.. index:: gpu::MultiGpuManager::init
gpu::MultiGpuManager::init
----------------------------------------
.. cpp:function:: void gpu::MultiGpuManager::init()
Initializes the multi-GPU manager.
.. index:: gpu::MultiGpuManager::gpuOn
gpu::MultiGpuManager::gpuOn
----------------------------------------
.. cpp:function:: void gpu::MultiGpuManager::gpuOn(int gpu_id)
Makes the given GPU active.
:param gpu_id: Index of the GPU device in the system, starting with 0.
.. index:: gpu::MultiGpuManager::gpuOff
gpu::MultiGpuManager::gpuOff
----------------------------------------
.. cpp:function:: void gpu::MultiGpuManager::gpuOff()
Finishes the piece of work on the current GPU.
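A usage sketch based only on the interface listed above; the per-GPU workload here is a placeholder that simply uploads a matrix to the active device ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    // Placeholder workload executed on whichever GPU is currently active.
    static void doSomeWork()
    {
        cv::Mat h(256, 256, CV_8UC1, cv::Scalar(0));
        cv::gpu::GpuMat d(h);   // upload to the active GPU
    }

    int main()
    {
        cv::gpu::MultiGpuManager mgr;
        mgr.init();                     // must precede any other GPU call

        int count = cv::gpu::getCudaEnabledDeviceCount();
        for (int id = 0; id < count; ++id)
        {
            mgr.gpuOn(id);              // make GPU 'id' active
            doSomeWork();
            mgr.gpuOff();               // finish the piece of work on this GPU
        }
        return 0;
    }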
@@ -14,10 +14,7 @@ The GPU module depends on the Cuda Toolkit and NVidia Performance Primitives lib
The OpenCV GPU module is designed for ease of use and does not require any knowledge of CUDA. However, such knowledge will certainly be useful in non-trivial cases, or when you want to get the highest performance. It is helpful to understand the costs of various operations, what the GPU does, what the preferred data formats are, and so on. The GPU module is an effective instrument for quick implementation of GPU-accelerated computer vision algorithms. However, if your algorithm involves many simple operations, then for the best possible performance you may still need to write your own kernels to avoid extra write and read operations on the intermediate results.
To enable CUDA support, configure OpenCV using CMake with ``WITH_CUDA=ON``. When the flag is set and CUDA is installed, the full-featured OpenCV GPU module is built. Otherwise, the module is still built, but at runtime all functions from it throw :c:type:`Exception` with the ``CV_GpuNotSupported`` error code, except for :cpp:func:`gpu::getCudaEnabledDeviceCount`. The latter function returns zero GPU count in this case. Building OpenCV without CUDA support does not perform device code compilation, so it does not require the CUDA Toolkit to be installed. Therefore, using the :cpp:func:`gpu::getCudaEnabledDeviceCount` function, it is possible to implement a high-level algorithm that detects GPU presence at runtime and chooses the appropriate implementation (CPU or GPU) accordingly.
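For example, a minimal sketch of such runtime dispatch; the GPU branch reuses :cpp:func:`gpu::sum` as a stand-in for real work ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    // Computes the per-channel sum on the GPU when one is available,
    // otherwise falls back to the CPU implementation.
    cv::Scalar sumAnywhere(const cv::Mat& src)
    {
        // Safe even if OpenCV was built without CUDA: the call simply returns 0.
        if (cv::gpu::getCudaEnabledDeviceCount() > 0)
        {
            cv::gpu::GpuMat d_src(src);
            return cv::gpu::sum(d_src);
        }
        return cv::sum(src);
    }

    int main()
    {
        cv::Mat img(480, 640, CV_8UC1, cv::Scalar(1));
        cv::Scalar s = sumAnywhere(img);   // s[0] == 480 * 640
        (void)s;
        return 0;
    }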
Compilation for different NVidia platforms.
-------------------------------------------
@@ -28,19 +25,16 @@ On first call, the PTX code is compiled to binary code for the particular GPU us
By default, the OpenCV GPU module includes:
* Binaries for compute capabilities 1.1, 1.2, 1.3 and 2.0 (controlled by ``CUDA_ARCH_BIN`` in CMake)
* PTX code for compute capabilities 1.1 and 1.3 (controlled by ``CUDA_ARCH_PTX`` in CMake)
That means that for devices with CC 1.1, 1.2, 1.3 and 2.0, binary images are ready to run. For all newer platforms the PTX code for 1.3 is JIT'ed to a binary image. For devices with CC 1.0 no code is available and the functions throw :c:type:`Exception`. On platforms where JIT compilation is performed, the first run will be slow.
If you happen to have a GPU with CC 1.0, the GPU module can still be compiled on it and most of the functions will run just fine on such a card. Simply add "1.0" to the list of binaries, for example, ``CUDA_ARCH_BIN="1.0 1.3 2.0"``. The functions that cannot be run on CC 1.0 GPUs will throw an exception.
You can always determine at runtime whether the OpenCV GPU-built binaries (or PTX code) are compatible with your GPU. The function :cpp:func:`gpu::DeviceInfo::isCompatible` returns the compatibility status (true/false).
Threading and multi-threading.
------------------------------
@@ -56,25 +50,14 @@ Multi-GPU
In the current version each of the OpenCV GPU algorithms can use only a single GPU. So, to utilize multiple GPUs, the user has to distribute the work between them manually. Here are two ways of utilizing multiple GPUs:
* If you only use synchronous functions, first create several CPU threads (one per GPU) and from within each thread create a CUDA context for the corresponding GPU using :cpp:func:`gpu::setDevice` or the Driver API. That's it. Now each of the threads will use the associated GPU (see the sketch after this list).
* In case of asynchronous functions, it is possible to create several CUDA contexts associated with different GPUs but attached to one CPU thread. This can be done only with the Driver API. Within the thread you can switch from one GPU to another by making the corresponding context "current". With non-blocking GPU calls the managing algorithm remains clear.
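A sketch of the first (synchronous) approach; C++11 ``std::thread`` is used here purely for brevity, any threading facility works the same way ::

    #include <thread>
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    // Per-GPU job: bind this thread to one device, then do the work there.
    static void worker(int device_id, cv::Mat input)
    {
        cv::gpu::setDevice(device_id);      // CUDA context for this thread
        cv::gpu::GpuMat d_input(input);     // subsequent calls use this GPU
        cv::Scalar s = cv::gpu::sum(d_input);
        (void)s;
    }

    int main()
    {
        cv::Mat img(480, 640, CV_8UC1, cv::Scalar(1));

        std::vector<std::thread> threads;
        int count = cv::gpu::getCudaEnabledDeviceCount();
        for (int id = 0; id < count; ++id)
            threads.push_back(std::thread(worker, id, img));
        for (size_t i = 0; i < threads.size(); ++i)
            threads[i].join();
        return 0;
    }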
While developing algorithms for multiple GPUs, the data passing overhead has to be taken into consideration. For primitive functions and for small images it can be significant and eliminate all the advantages of having multiple GPUs. But for high-level algorithms multi-GPU acceleration may be suitable. For example, the Stereo Block Matching algorithm has been successfully parallelized using the following scheme:
* Each image of the stereo pair is split into two horizontal overlapping stripes.
* Each pair of stripes (from the left and the right images) is processed on a separate Fermi GPU.
* The results are merged into a single disparity map.
With this scheme a dual GPU gave a 180% performance increase compared to a single Fermi GPU. The source code of the example is available at https://code.ros.org/svn/opencv/trunk/opencv/examples/gpu/.
@@ -3,144 +3,153 @@ Matrix Reductions
.. highlight:: cpp
.. index:: gpu::meanStdDev
gpu::meanStdDev
-------------------
.. cpp:function:: void gpu::meanStdDev(const GpuMat& mtx, Scalar& mean, Scalar& stddev)
Computes mean value and standard deviation of matrix elements.
:param mtx: Source matrix. ``CV_8UC1`` matrices are supported for now.
:param mean: Mean value.
:param stddev: Standard deviation value.
See also: :c:func:`meanStdDev`.
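A minimal usage sketch (assuming an 8-bit single-channel input, as required above) ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_img(480, 640, CV_8UC1, cv::Scalar(128));
        cv::gpu::GpuMat d_img(h_img);               // upload to the GPU

        cv::Scalar mean, stddev;
        cv::gpu::meanStdDev(d_img, mean, stddev);   // mean[0] == 128, stddev[0] == 0
        return 0;
    }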
.. index:: gpu::norm
gpu::norm
-------------
.. cpp:function:: double gpu::norm(const GpuMat& src, int normType=NORM_L2)
Returns the norm of a matrix (or of the difference of two matrices).
:param src: Source matrix. Any matrices except 64F are supported.
:param normType: Norm type. ``NORM_L1``, ``NORM_L2`` and ``NORM_INF`` are supported for now.
.. cpp:function:: double gpu::norm(const GpuMat& src, int normType, GpuMat& buf)
:param src: Source matrix. Any matrices except 64F are supported.
:param normType: Norm type. ``NORM_L1``, ``NORM_L2`` and ``NORM_INF`` are supported for now.
:param buf: Optional buffer to avoid extra memory allocations. It's resized automatically.
.. cpp:function:: double gpu::norm(const GpuMat& src1, const GpuMat& src2, int normType=NORM_L2)
:param src1: First source matrix. ``CV_8UC1`` matrices are supported for now.
:param src2: Second source matrix. Must have the same size and type as ``src1``.
:param normType: Norm type. ``NORM_L1``, ``NORM_L2`` and ``NORM_INF`` are supported for now.
See also: :c:func:`norm`.
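A short sketch exercising all three overloads above (the values in the comments assume the constant inputs below) ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_a(480, 640, CV_8UC1, cv::Scalar(10));
        cv::Mat h_b(480, 640, CV_8UC1, cv::Scalar(12));
        cv::gpu::GpuMat d_a(h_a), d_b(h_b);

        double l2   = cv::gpu::norm(d_a, cv::NORM_L2);        // single-matrix norm
        cv::gpu::GpuMat buf;                                   // reusable work buffer
        double l1   = cv::gpu::norm(d_a, cv::NORM_L1, buf);    // avoids extra allocations
        double linf = cv::gpu::norm(d_a, d_b, cv::NORM_INF);   // difference norm, 2 here

        (void)l2; (void)l1; (void)linf;
        return 0;
    }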
.. index:: gpu::sum
gpu::sum
------------
.. cpp:function:: Scalar gpu::sum(const GpuMat& src)
.. cpp:function:: Scalar gpu::sum(const GpuMat& src, GpuMat& buf)
Returns sum of matrix elements.
:param src: Source image of any depth except ``CV_64F``.
:param buf: Optional buffer to avoid extra memory allocations. It's resized automatically.
See also: :c:func:`sum`.
.. index:: gpu::absSum
gpu::absSum
---------------
.. cpp:function:: Scalar gpu::absSum(const GpuMat& src)
.. cpp:function:: Scalar gpu::absSum(const GpuMat& src, GpuMat& buf)
Returns the sum of absolute values of matrix elements.
:param src: Source image of any depth except ``CV_64F``.
:param buf: Optional buffer to avoid extra memory allocations. It's resized automatically.
.. index:: gpu::sqrSum
gpu::sqrSum
---------------
.. cpp:function:: Scalar gpu::sqrSum(const GpuMat& src)
.. cpp:function:: Scalar gpu::sqrSum(const GpuMat& src, GpuMat& buf)
Returns the sum of squared matrix elements.
:param src: Source image of any depth except ``CV_64F``.
:param buf: Optional buffer to avoid extra memory allocations. It's resized automatically.
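A short sketch computing the three sum reductions above while reusing one buffer; a constant ``CV_32F`` matrix filled with -2 is used so the expected results are easy to verify ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_img(4, 4, CV_32FC1, cv::Scalar(-2));
        cv::gpu::GpuMat d_img(h_img);

        cv::gpu::GpuMat buf;                           // shared, resized automatically
        cv::Scalar s  = cv::gpu::sum(d_img, buf);      // -32
        cv::Scalar as = cv::gpu::absSum(d_img, buf);   //  32
        cv::Scalar ss = cv::gpu::sqrSum(d_img, buf);   //  64

        (void)s; (void)as; (void)ss;
        return 0;
    }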
.. index:: gpu::minMax
gpu::minMax
---------------
.. cpp:function:: void gpu::minMax(const GpuMat& src, double* minVal, double* maxVal=0, const GpuMat& mask=GpuMat())
.. cpp:function:: void gpu::minMax(const GpuMat& src, double* minVal, double* maxVal, const GpuMat& mask, GpuMat& buf)
Finds global minimum and maximum matrix elements and returns their values.
:param src: Single-channel source image.
:param minVal: Pointer to returned minimum value. ``NULL`` if not required.
:param maxVal: Pointer to returned maximum value. ``NULL`` if not required.
:param mask: Optional mask to select a sub-matrix.
:param buf: Optional buffer to avoid extra memory allocations. It's resized automatically.
The function does not work with ``CV_64F`` images on GPUs with compute capability :math:`<` 1.3.
See also: :c:func:`minMaxLoc`.
.. index:: gpu::minMaxLoc
gpu::minMaxLoc
------------------
.. cpp:function:: void gpu::minMaxLoc(const GpuMat& src, double* minVal, double* maxVal=0, Point* minLoc=0, Point* maxLoc=0, const GpuMat& mask=GpuMat())
.. cpp:function:: void gpu::minMaxLoc(const GpuMat& src, double* minVal, double* maxVal, Point* minLoc, Point* maxLoc, const GpuMat& mask, GpuMat& valbuf, GpuMat& locbuf)
Finds global minimum and maximum matrix elements and returns their values with locations.
:param src: Single-channel source image.
:param minVal: Pointer to returned minimum value. ``NULL`` if not required.
:param maxVal: Pointer to returned maximum value. ``NULL`` if not required.
:param minLoc: Pointer to returned minimum location. ``NULL`` if not required.
:param maxLoc: Pointer to returned maximum location. ``NULL`` if not required.
:param mask: Optional mask to select a sub-matrix.
@@ -148,18 +157,19 @@ gpu::minMaxLoc
:param locbuf: Optional locations buffer to avoid extra memory allocations. It's resized automatically.
The function does not work with ``CV_64F`` images on GPUs with compute capability :math:`<` 1.3.
See also: :c:func:`minMaxLoc`.
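A minimal usage sketch (illustrative only; sizes and values are arbitrary) ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_img(480, 640, CV_8UC1, cv::Scalar(50));
        h_img.at<unsigned char>(10, 20)   = 0;      // global minimum at row 10, col 20
        h_img.at<unsigned char>(100, 200) = 255;    // global maximum at row 100, col 200

        cv::gpu::GpuMat d_img(h_img);

        double minVal, maxVal;
        cv::Point minLoc, maxLoc;
        cv::gpu::minMaxLoc(d_img, &minVal, &maxVal, &minLoc, &maxLoc);
        // minVal == 0 at (20, 10), maxVal == 255 at (200, 100) in (x, y) order
        return 0;
    }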
.. index:: gpu::countNonZero
gpu::countNonZero
---------------------
.. cpp:function:: int gpu::countNonZero(const GpuMat& src)
.. cpp:function:: int gpu::countNonZero(const GpuMat& src, GpuMat& buf)
Counts non-zero matrix elements.
@@ -167,7 +177,6 @@ gpu::countNonZero
:param buf: Optional buffer to avoid extra memory allocations. It's resized automatically.
The function does not work with ``CV_64F`` images on GPUs with compute capability :math:`<` 1.3.
See also: :c:func:`countNonZero`.
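A minimal usage sketch (illustrative only; the image contents are arbitrary) ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    int main()
    {
        cv::Mat h_img = cv::Mat::zeros(480, 640, CV_8UC1);
        h_img(cv::Rect(0, 0, 10, 10)).setTo(cv::Scalar(255));   // 100 non-zero pixels

        cv::gpu::GpuMat d_img(h_img), buf;
        int n = cv::gpu::countNonZero(d_img, buf);              // n == 100
        return n == 100 ? 0 : 1;
    }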
.. _ImageFiltering:
Image Filtering
===============