Commit 24ccbccf authored by Vadim Pisarevsky

cleaned RST formatting a bit

parent d7b3e254
*************************************************
calib3d. Camera Calibration and 3D Reconstruction
*************************************************
.. toctree::
:maxdepth: 2
Clustering
==========
.. highlight:: cpp
.. index:: kmeans
cv::kmeans
----------
.. cfunction:: double kmeans( const Mat\& samples, int clusterCount, Mat\& labels, TermCriteria termcrit, int attempts, int flags, Mat* centers )
Finds the centers of clusters and groups the input samples around the clusters.
:param samples: Floating-point matrix of input samples, one row per sample
:param clusterCount: The number of clusters to split the set by
:param labels: The input/output integer array that will store the cluster indices for every sample
:param termcrit: Specifies maximum number of iterations and/or accuracy (distance the centers can move by between subsequent iterations)
:param attempts: How many times the algorithm is executed using different initial labelings. The algorithm returns the labels that yield the best compactness (see the last function parameter)
:param flags: It can take the following values:
* **KMEANS_RANDOM_CENTERS** Random initial centers are selected in each attempt
* **KMEANS_USE_INITIAL_LABELS** During the first (and possibly the only) attempt, the
function uses the user-supplied labels instead of computing them from the initial centers. For the second and further attempts, the function will use the random or semi-random centers (use one of the ``KMEANS_*_CENTERS`` flags to specify the exact method)
:param centers: The output matrix of the cluster centers, one row per each cluster center
The function ``kmeans`` implements a k-means algorithm that finds the
centers of ``clusterCount`` clusters and groups the input samples
around the clusters. On output,
:math:`\texttt{labels}_i` contains a 0-based cluster index for
the sample stored in the
:math:`i^{th}` row of the ``samples`` matrix.
The function returns the compactness measure, which is computed as
.. math::
\sum _i \| \texttt{samples} _i - \texttt{centers} _{ \texttt{labels} _i} \| ^2
after every attempt; the best (minimum) value is chosen and the
corresponding labels and the compactness value are returned by the function.
Basically, the user can use only the core of the function: set the number of attempts to 1, initialize labels each time using a custom algorithm, pass them with the ``KMEANS_USE_INITIAL_LABELS`` flag, and then choose the best (most-compact) clustering.
.. index:: partition
cv::partition
-------------
.. cfunction:: template<typename _Tp, class _EqPredicate> int partition( const vector<_Tp>\& vec, vector<int>\& labels, _EqPredicate predicate=_EqPredicate())
Splits an element set into equivalency classes.
:param vec: The set of elements stored as a vector
:param labels: The output vector of labels; it will contain as many elements as ``vec`` . Each label ``labels[i]`` is a 0-based cluster index of ``vec[i]``
:param predicate: The equivalence predicate (i.e. a pointer to a boolean function of two arguments, or an instance of a class that has the method ``bool operator()(const _Tp& a, const _Tp& b)`` ). The predicate returns true when the elements are certainly in the same class, and false if they may or may not be in the same class
The generic function ``partition`` implements an
:math:`O(N^2)` algorithm for
splitting a set of
:math:`N` elements into one or more equivalency classes, as described in
http://en.wikipedia.org/wiki/Disjoint-set_data_structure . The function
returns the number of equivalency classes.
****************************
core. The Core Functionality
****************************
.. toctree::
:maxdepth: 2
Dynamic Structures
==================
.. highlight:: cpp
The API Concepts
================
*"cv"* namespace
----------------
All the OpenCV classes and functions are placed into the *"cv"* namespace. Therefore, to access this functionality from your code, use the ``cv::`` specifier or the ``using namespace cv;`` directive:
.. code-block:: cpp
cv::Mat H = cv::findHomography(points1, points2, CV_RANSAC, 5);
...
or ::
#include "opencv2/core/core.hpp"
using namespace cv;
...
It is probable that some of the current or future OpenCV external names conflict with STL
or other libraries; in this case, use explicit namespace specifiers to resolve the name conflicts: ::
Mat a(100, 100, CV_32F);
randu(a, Scalar::all(1), Scalar::all(std::rand()));
cv::log(a, a);
a /= std::log(2.);
Automatic Memory Management
---------------------------
First of all, ``std::vector``, ``Mat`` and other data structures used by the functions and methods have destructors that deallocate the underlying memory buffers when needed.
Secondly, in the case of ``Mat``, this *when needed* means that the destructors do not always deallocate the buffers; they take into account possible data sharing. That is, the destructor decrements the reference counter associated with the matrix data buffer, and the buffer is deallocated if and only if the reference counter reaches zero, that is, when no other structures refer to the same buffer. Similarly, when a ``Mat`` instance is copied, no actual data is copied; instead, the reference counter associated with it is incremented to memorize that there is another owner of the same data. There is also the ``Mat::clone`` method that creates a full copy of the matrix data. Here is the example ::
// create a big 8Mb matrix
Mat A(1000, 1000, CV_64F);
That is, ``Ptr<T> ptr`` encapsulates a pointer to a ``T`` instance and a reference counter associated with the pointer. See the ``Ptr`` description for details.
.. todo::
Should we replace Ptr<> with the semi-standard shared_ptr<>?
Some notable exceptions from this scheme are ``cv::mixChannels``, ``cv::RNG::fill`` and a few other functions and methods. They are not able to allocate the output array, so the user has to do that in advance.
Saturation Arithmetics
----------------------
where ``cv::uchar`` is OpenCV's 8-bit unsigned integer type. In optimized SIMD code we use specialized instructions, like SSE2's ``paddusb``, ``packuswb``, etc., to achieve exactly the same behavior as in C++ code.
Fixed Pixel Types. Limited Use of Templates
-------------------------------------------
Multi-channel (``n``-channel) types can be specified using ``CV_8UC1`` ... ``CV_64FC4`` constants (for number of channels from 1 to 4), or using ``CV_8UC(n)`` ... ``CV_64FC(n)`` or ``CV_MAKETYPE(CV_8U, n)`` ... ``CV_MAKETYPE(CV_64F, n)`` macros when the number of channels is more than 4 or unknown at compile time.
.. note:: ``CV_32FC1 == CV_32F``, ``CV_32FC2 == CV_32FC(2) == CV_MAKETYPE(CV_32F, 2)`` and ``CV_MAKETYPE(depth, n) == ((n-1)<<3) + (depth&7)``, that is, the type constant is formed from the ``depth``, taking the lowest 3 bits, and the number of channels minus 1, taking the next ``log2(CV_CN_MAX)`` bits.
Here are some examples::
Should we include such a table into the standard?
Should we specify a minimum "must-have" set of supported formats for each function?
Error handling
--------------
OpenCV uses exceptions to signal critical errors.
The exceptions can be instances of the ``cv::Exception`` class or its derivatives. In turn, ``cv::Exception`` is a derivative of ``std::exception``, so it can be gracefully handled in the code using other standard C++ library components.
The exception is typically thrown using the ``CV_Error(errcode, description)`` macro, or its printf-like ``CV_Error_(errcode, printf-spec, (printf-args))`` variant, or using the ``CV_Assert(condition)`` macro that checks the condition and throws an exception when it is not satisfied. For performance-critical code there is ``CV_DbgAssert(condition)`` that is only retained in the Debug configuration. Thanks to the automatic memory management, all the intermediate buffers are automatically deallocated in case of a sudden error; the user only needs to put a try statement to catch the exceptions, if needed: ::
try
{
std::cout << "exception caught: " << err_msg << std::endl;
}
Multi-threading and re-entrancy
-------------------------------
XML/YAML Persistence
====================
.. highlight:: cpp
.. index:: FileStorage
.. _FileStorage:
FileStorage
-----------
.. ctype:: FileStorage
The XML/YAML file storage class ::
class FileStorage
{
vector<char> structs;
int state;
};
..
.. index:: FileNode
.. _FileNode:
FileNode
--------
.. ctype:: FileNode
The XML/YAML file node class ::
class CV_EXPORTS FileNode
{
const CvFileStorage* fs;
const CvFileNode* node;
};
..
.. index:: FileNodeIterator
.. _FileNodeIterator:
FileNodeIterator
----------------
.. ctype:: FileNodeIterator
The XML/YAML file node iterator class ::
class CV_EXPORTS FileNodeIterator
{
CvSeqReader reader;
size_t remaining;
};
..
Common Interfaces of Descriptor Extractors
==========================================
.. highlight:: cpp
Extractors of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to switch easily
between different algorithms solving the same problem. This section is devoted to computing descriptors
that are represented as vectors in a multidimensional space. All objects that implement *vector*
descriptor extractors inherit the
:func:`DescriptorExtractor` interface.
.. index:: DescriptorExtractor
DescriptorExtractor
-------------------
.. ctype:: DescriptorExtractor
Abstract base class for computing descriptors for image keypoints. ::
class CV_EXPORTS DescriptorExtractor
{
......@@ -55,8 +40,6 @@ Abstract base class for computing descriptors for image keypoints.
protected:
...
};
..
In this interface we assume a keypoint descriptor can be represented as a
dense, fixed-dimensional vector of some basic type. Most descriptors used
in practice follow this pattern, as it makes it very easy to compute
distances between descriptors. Therefore we represent a collection of
descriptors as a
:func:`Mat` , where each row is one keypoint descriptor.
.. index:: DescriptorExtractor::compute
cv::DescriptorExtractor::compute
--------------------------------
.. cfunction:: void DescriptorExtractor::compute( const Mat\& image, vector<KeyPoint>\& keypoints, Mat\& descriptors ) const
Compute the descriptors for a set of keypoints detected in an image (first variant)
or image set (second variant).
:param image: The image.
:param keypoints: The keypoints. Keypoints for which a descriptor cannot be computed are removed.
:param descriptors: The descriptors. Row i is the descriptor for keypoint i.
.. cfunction:: void DescriptorExtractor::compute( const vector<Mat>\& images, vector<vector<KeyPoint> >\& keypoints, vector<Mat>\& descriptors ) const
* **images** The image set.
* **keypoints** Input keypoints collection. ``keypoints[i]`` are the keypoints
detected in ``images[i]``. Keypoints for which a descriptor
cannot be computed are removed.
* **descriptors** Descriptor collection. ``descriptors[i]`` are the descriptors computed for
the set ``keypoints[i]``.
.. index:: DescriptorExtractor::read
cv::DescriptorExtractor::read
-----------------------------
.. cfunction:: void DescriptorExtractor::read( const FileNode\& fn )
Read descriptor extractor object from file node.
:param fn: File node from which detector will be read.
.. index:: DescriptorExtractor::write
cv::DescriptorExtractor::write
------------------------------
.. cfunction:: void DescriptorExtractor::write( FileStorage\& fs ) const
Write descriptor extractor object to file storage.
:param fs: File storage in which detector will be written.
.. index:: DescriptorExtractor::create
cv::DescriptorExtractor::create
-------------------------------
.. cfunction:: Ptr<DescriptorExtractor> DescriptorExtractor::create( const string\& descriptorExtractorType )
Descriptor extractor factory that creates a descriptor extractor of the given type with
default parameters (rather than using the default constructor).
:param descriptorExtractorType: Descriptor extractor type.
Now the following descriptor extractor types are supported:
``"SIFT"`` -- :func:`SiftFeatureDetector`, ``"SURF"`` -- :func:`SurfFeatureDetector`, ``"BRIEF"`` -- :func:`BriefFeatureDetector`.
Also a combined format is supported: a descriptor extractor adapter name ( ``"Opponent"`` --
:func:`OpponentColorDescriptorExtractor` ) + descriptor extractor name (see above),
e.g. ``"OpponentSIFT"`` , etc.
.. index:: SiftDescriptorExtractor
SiftDescriptorExtractor
-----------------------
.. ctype:: SiftDescriptorExtractor
Wrapping class for computing descriptors using the
:func:`SIFT` class. ::
class SiftDescriptorExtractor : public DescriptorExtractor
{
protected:
...
}
..
.. index:: SurfDescriptorExtractor
.. _SurfDescriptorExtractor:
SurfDescriptorExtractor
-----------------------
.. ctype:: SurfDescriptorExtractor
Wrapping class for computing descriptors using the
:func:`SURF` class. ::
class SurfDescriptorExtractor : public DescriptorExtractor
{
protected:
...
}
..
.. index:: CalonderDescriptorExtractor
.. _CalonderDescriptorExtractor:
CalonderDescriptorExtractor
---------------------------
.. ctype:: CalonderDescriptorExtractor
Wrapping class for computing descriptors using the
:func:`RTreeClassifier` class. ::
template<typename T>
class CalonderDescriptorExtractor : public DescriptorExtractor
protected:
...
}
..
.. index:: OpponentColorDescriptorExtractor
.. _OpponentColorDescriptorExtractor:
OpponentColorDescriptorExtractor
--------------------------------
.. ctype:: OpponentColorDescriptorExtractor
Adapts a descriptor extractor to compute descriptors in Opponent Color Space
(refer to van de Sande et al., CGIV 2008 "Color Descriptors for Object Category Recognition").
The input RGB image is transformed into Opponent Color Space. Then the unadapted descriptor extractor
(set in the constructor) computes descriptors on each of the three channels and concatenates
them into a single color descriptor. ::
class OpponentColorDescriptorExtractor : public DescriptorExtractor
{
protected:
...
};
..
.. index:: BriefDescriptorExtractor
.. _BriefDescriptorExtractor:
BriefDescriptorExtractor
------------------------
.. ctype:: BriefDescriptorExtractor
Class for computing BRIEF descriptors described in the paper by Calonder M., Lepetit V.,
Strecha C., Fua P.: *BRIEF: Binary Robust Independent Elementary Features*,
11th European Conference on Computer Vision (ECCV), Heraklion, Crete. LNCS Springer, September 2010. ::
class BriefDescriptorExtractor : public DescriptorExtractor
{
protected:
...
};
..
Drawing Function of Keypoints and Matches
=========================================
.. highlight:: cpp
.. index:: drawMatches
cv::drawMatches
---------------
.. cfunction:: void drawMatches( const Mat\& img1, const vector<KeyPoint>\& keypoints1, const Mat\& img2, const vector<KeyPoint>\& keypoints2, const vector<DMatch>\& matches1to2, Mat\& outImg, const Scalar\& matchColor=Scalar::all(-1), const Scalar\& singlePointColor=Scalar::all(-1), const vector<char>\& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
This function draws matches of keypoints from two images on the output image.
A match is a line connecting two keypoints (circles).
.. cfunction:: void drawMatches( const Mat\& img1, const vector<KeyPoint>\& keypoints1, const Mat\& img2, const vector<KeyPoint>\& keypoints2, const vector<vector<DMatch> >\& matches1to2, Mat\& outImg, const Scalar\& matchColor=Scalar::all(-1), const Scalar\& singlePointColor=Scalar::all(-1), const vector<vector<char> >\& matchesMask= vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )
:param img1: First source image.
:param keypoints1: Keypoints from first source image.
:param img2: Second source image.
:param keypoints2: Keypoints from second source image.
:param matches: Matches from the first image to the second one, i.e. ``keypoints1[i]`` has a corresponding point ``keypoints2[matches[i]]`` .
:param outImg: Output image. Its content depends on the ``flags`` value defining
what is drawn in the output image. See possible ``flags`` bit values below.
:param matchColor: Color of matches (lines and connected keypoints).
If ``matchColor==Scalar::all(-1)`` , the color is generated randomly.
:param singlePointColor: Color of single keypoints (circles), i.e. keypoints that do not have matches.
If ``singlePointColor==Scalar::all(-1)`` , the color is generated randomly.
:param matchesMask: Mask determining which matches are drawn. If the mask is empty, all matches are drawn.
:param flags: Each bit of ``flags`` sets some feature of drawing.
Possible ``flags`` bit values are defined by ``DrawMatchesFlags`` , see below. ::
struct DrawMatchesFlags
{
// be drawn.
};
};
..
.. index:: drawKeypoints
cv::drawKeypoints
-----------------
.. cfunction:: void drawKeypoints( const Mat\& image, const vector<KeyPoint>\& keypoints, Mat\& outImg, const Scalar\& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT )
Draw keypoints.
:param image: Source image.
:param keypoints: Keypoints from source image.
:param outImg: Output image. Its content depends on the ``flags`` value defining
what is drawn in the output image. See possible ``flags`` bit values.
:param color: Color of keypoints.
Possible ``flags`` bit values are defined by ``DrawMatchesFlags`` ,
see above in :func:`drawMatches` .
*********************************
features2d. 2D Features Framework
*********************************
.. toctree::
:maxdepth: 2
************************************
gpu. GPU-accelerated Computer Vision
************************************
.. toctree::
:maxdepth: 2
GPU module introduction
=======================
.. highlight:: cpp
General information
-------------------
The OpenCV GPU module is a set of classes and functions to utilize GPU computational capabilities. It is implemented using the NVidia CUDA Runtime API, so only NVidia GPUs are supported. It includes utility functions, low-level vision primitives, as well as high-level algorithms. The utility functions and low-level primitives provide a powerful infrastructure for developing fast vision algorithms taking advantage of the GPU, whereas the high-level functionality includes some state-of-the-art algorithms (such as stereo correspondence, face and people detectors, etc.) ready to be used by application developers.
The GPU module is designed as a host-level API, i.e. if a user has pre-compiled OpenCV GPU binaries, it is not necessary to have the Cuda Toolkit installed or to write any extra code to make use of the GPU.
The GPU module depends on the Cuda Toolkit and the NVidia Performance Primitives library.
OpenCV GPU module is designed for ease of use and does not require any knowledge of Cuda. However, such knowledge will certainly be useful in non-trivial cases, or when you want to get the highest performance. It is helpful to have an understanding of the costs of various operations, what the GPU does, what the preferred data formats are, etc. The GPU module is an effective instrument for quick implementation of GPU-accelerated computer vision algorithms. However, if your algorithm involves many simple operations, then for the best possible performance you may still need to write your own kernels, to avoid extra write and read operations on the intermediate results.
To enable CUDA support, configure OpenCV using CMake with ``WITH_CUDA=ON`` . When the flag is set and if CUDA is installed, the full-featured OpenCV GPU module will be built. Otherwise, the module will still be built, but at runtime all functions from the module will throw
:func:`Exception` with ``CV_GpuNotSupported`` error code, except for
:func:`gpu::getCudaEnabledDeviceCount()` . The latter function will return zero GPU count in this case. Building OpenCV without CUDA support does not perform device code compilation, so it does not require Cuda Toolkit installed. Therefore, using
:func:`gpu::getCudaEnabledDeviceCount()` function it is possible to implement a high-level algorithm that will detect GPU presence at runtime and choose the appropriate implementation (CPU or GPU) accordingly.
Compilation for different NVidia platforms.
-------------------------------------------
NVidia compiler allows generating binary code (cubin and fatbin) and intermediate code (PTX). Binary code often implies a specific GPU architecture and generation, so the compatibility with other GPUs is not guaranteed. PTX is targeted for a virtual platform, which is defined entirely by the set of capabilities, or features. Depending on the virtual platform chosen, some of the instructions will be emulated or disabled, even if the real hardware supports all the features.
On first call, the PTX code is compiled to binary code for the particular GPU using JIT compiler. When the target GPU has lower "compute capability" (CC) than the PTX code, JIT fails.
By default, the OpenCV GPU module includes:
*
Binaries for compute capabilities 1.3 and 2.0 (controlled by ``CUDA_ARCH_BIN`` in CMake)
*
PTX code for compute capabilities 1.1 and 1.3 (controlled by ``CUDA_ARCH_PTX`` in CMake)
That means for devices with CC 1.3 and 2.0 binary images are ready to run. For all newer platforms the PTX code for 1.3 is JIT'ed to a binary image. For devices with 1.1 and 1.2 the PTX for 1.1 is JIT'ed. For devices with CC 1.0 no code is available and the functions will throw
:func:`Exception` . For platforms where JIT compilation is performed, the first run will be slow.
If you happen to have a GPU with CC 1.0, the GPU module can still be compiled on it, and most of the functions will run just fine on such a card. Simply add "1.0" to the list of binaries, for example, ``CUDA_ARCH_BIN="1.0 1.3 2.0"`` . The functions that cannot be run on CC 1.0 GPUs will throw an exception.
You can always determine at runtime whether OpenCV GPU built binaries (or PTX code) are compatible with your GPU. The function
:func:`gpu::DeviceInfo::isCompatible` returns the compatibility status (true/false).
Threading and multi-threading.
------------------------------
OpenCV GPU module follows Cuda Runtime API conventions regarding multi-threaded programming. That is, on the first API call a Cuda context is created implicitly, attached to the current CPU thread, and then used as the thread's "current" context. All further operations, such as memory allocation and GPU code compilation, will be associated with the context and the thread. Since any other thread is not attached to the context, memory (and other resources) allocated in the first thread cannot be accessed by the other thread. Instead, for this other thread Cuda will create another context associated with it. In short, by default different threads do not share resources.
This limitation can be removed using the CUDA Driver API (version 3.1 or later). The user can retrieve the context reference for one thread, attach it to another thread, and make it "current" for that thread; the threads can then share memory and other resources. It is also possible to create a context explicitly with the Driver API before calling any GPU code and attach it to all the threads that should share the resources; the CUDA Runtime API (and therefore the OpenCV functions) will pick it up.
Multi-GPU
---------
In the current version, each of the OpenCV GPU algorithms can use only a single GPU, so to utilize multiple GPUs the user has to distribute the work between them manually. Here are two ways of utilizing multiple GPUs:
* If you only use synchronous functions, first create several CPU threads (one per GPU), and from within each thread create a CUDA context for the corresponding GPU using :func:`gpu::setDevice()` or the Driver API. That's it. Now each of the threads will use the associated GPU.
* In the case of asynchronous functions, it is possible to create several CUDA contexts associated with different GPUs but attached to one CPU thread. This can be done only via the Driver API. Within the thread you can switch from one GPU to another by making the corresponding context "current". With non-blocking GPU calls, the managing algorithm is clear.
While developing algorithms for multiple GPUs, the data passing overhead has to be taken into consideration. For primitive functions and small images it can be significant and may eliminate all the advantages of having multiple GPUs. But for high-level algorithms, multi-GPU acceleration may be worthwhile. For example, the Stereo Block Matching algorithm has been successfully parallelized using the following scheme:
* Each image of the stereo pair is split into two overlapping horizontal stripes.
* Each pair of stripes (from the left and right images) is processed on a separate Fermi GPU.
* The results are merged into a single disparity map.
With this scheme, the dual-GPU version gave a 180% performance increase compared to a single Fermi GPU. The source code of the example is available at ...
*************************************
highgui. High-level GUI and Media I/O
*************************************
While OpenCV was designed for use in full-scale
applications and can be used within functionally rich UI frameworks (such as Qt, WinForms or Cocoa) or without any UI at all, sometimes there is a need to try some functionality quickly and visualize the results. This is what the HighGUI module has been designed for.
It provides an easy interface to:
* create and manipulate windows that can display images and "remember" their content (no need to handle repaint events from the OS)
* add trackbars to the windows, handle simple mouse events as well as keyboard commands
* read and write images to/from disk or memory
* read video from a camera or file and write video to a file
.. toctree::
:maxdepth: 2
*************************
imgproc. Image Processing
*************************
.. toctree::
:maxdepth: 2
filtering
geometric_transformations
miscellaneous_transformations
histograms
structural_analysis_and_shape_descriptors
planar_subdivisions
motion_analysis_and_object_tracking
Planar Subdivisions
===================
.. highlight:: cpp
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
********************
ml. Machine Learning
********************
The Machine Learning Library (MLL) is a set of classes and functions for statistical classification, regression and clustering of data.
Most of the classification and regression algorithms are implemented as C++ classes. As the algorithms have different sets of features (such as the ability to handle missing measurements or categorical input variables), there is a little common ground between the classes. This common ground is defined by the class ``CvStatModel``, from which all the other ML classes are derived.
.. toctree::
:maxdepth: 2
***************************
objdetect. Object Detection
***************************
.. toctree::
:maxdepth: 2
*********************
video. Video Analysis
*********************
.. toctree::
:maxdepth: 2