Common Interfaces of Descriptor Extractors
==========================================
Extractors of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to computing descriptors
represented as vectors in a multidimensional space. All objects that implement the ``vector``
descriptor extractors inherit the
:ocv:class:`DescriptorExtractor` interface.
.. index:: DescriptorExtractor
...
...
DescriptorExtractor
-------------------
.. ocv:class:: DescriptorExtractor
Abstract base class for computing descriptors for image keypoints. ::
class CV_EXPORTS DescriptorExtractor
{
...
...
dense, fixed-dimension vector of a basic type. Most descriptors
follow this pattern as it simplifies computing
distances between descriptors. Therefore, a collection of
descriptors is represented as
:ocv:class:`Mat`, where each row is a keypoint descriptor.
.. index:: DescriptorExtractor::compute
...
...
DescriptorExtractor::compute
----------------------------
:param image: Image.
:param keypoints: Keypoints. Keypoints for which a descriptor cannot be computed are removed. Sometimes new keypoints can be added, for example: ``SIFT`` duplicates a keypoint with several dominant orientations (one keypoint for each orientation).
:param descriptors: Descriptors. Row ``i`` is the descriptor for keypoint ``i``.
Common Interfaces of Descriptor Matchers
========================================
Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch
between different algorithms solving the same problem. This section is devoted to matching descriptors
that are represented as vectors in a multidimensional space. All objects that implement ``vector``
descriptor matchers inherit the
:ocv:class:`DescriptorMatcher` interface.
.. index:: DMatch
...
...
.. ocv:class:: DMatch
Class for matching keypoint descriptors: query descriptor index,
train descriptor index, train image index, and distance between descriptors. ::
struct DMatch
{
...
...
DescriptorMatcher
-----------------
.. ocv:class:: DescriptorMatcher
Abstract base class for matching keypoint descriptors. It has two groups
of match methods: for matching descriptors of an image with another image or
...
...
:param k: Count of best matches found per each query descriptor, or fewer if a query descriptor has fewer than ``k`` possible matches in total.
:param compactResult: Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.
These extended variants of :ocv:func:`DescriptorMatcher::match` methods find several best matches for each query descriptor. The matches are returned in the distance increasing order. See :ocv:func:`DescriptorMatcher::match` for the details about query and train descriptors.
:param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between the input query descriptors and stored train descriptors from the i-th image ``trainDescCollection[i]``.
:param matches: Found matches.
:param compactResult: Parameter used when the mask (or masks) is not empty. If ``compactResult`` is false, the ``matches`` vector has the same size as ``queryDescriptors`` rows. If ``compactResult`` is true, the ``matches`` vector does not contain matches for fully masked-out query descriptors.
:param maxDistance: Threshold for the distance between matched descriptors.
...
...
BruteForceMatcher
-----------------
.. ocv:class:: BruteForceMatcher
Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets. ::
...
...
FlannBasedMatcher
-----------------
.. ocv:class:: FlannBasedMatcher
Flann-based descriptor matcher. This matcher trains :ocv:func:`flann::Index` on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. ``FlannBasedMatcher`` does not support masking permissible matches of descriptor sets because ``flann::Index`` does not support this. ::
class FlannBasedMatcher : public DescriptorMatcher
:ocv:class:`PyramidAdaptedFeatureDetector` ) + feature detector name (see above),
for example: ``"GridFAST"``, ``"PyramidSTAR"`` .
FastFeatureDetector
-------------------
...
...
.. ocv:class:: FastFeatureDetector
Wrapping class for feature detection using the
:ocv:func:`FAST` method. ::
class FastFeatureDetector : public FeatureDetector
{
...
...
.. ocv:class:: GoodFeaturesToTrackDetector
Wrapping class for feature detection using the
:ocv:func:`goodFeaturesToTrack` function. ::
class GoodFeaturesToTrackDetector : public FeatureDetector
{
...
...
.. ocv:class:: MserFeatureDetector
Wrapping class for feature detection using the
:ocv:class:`MSER` class. ::
class MserFeatureDetector : public FeatureDetector
{
...
...
.. ocv:class:: StarFeatureDetector
Wrapping class for feature detection using the
:ocv:class:`StarDetector` class. ::
class StarFeatureDetector : public FeatureDetector
{
...
...
.. ocv:class:: SiftFeatureDetector
Wrapping class for feature detection using the
:ocv:class:`SIFT` class. ::
class SiftFeatureDetector : public FeatureDetector
{
...
...
.. ocv:class:: SurfFeatureDetector
Wrapping class for feature detection using the
:ocv:class:`SURF` class. ::
class SurfFeatureDetector : public FeatureDetector
{
...
...
.. ocv:class:: OrbFeatureDetector
Wrapping class for feature detection using the
:ocv:class:`ORB` class. ::
class OrbFeatureDetector : public FeatureDetector
{
...
...
SimpleBlobDetector
-------------------
.. ocv:class:: SimpleBlobDetector
Class for extracting blobs from an image. ::
class SimpleBlobDetector : public FeatureDetector
{
...
...
...
};
The class implements a simple algorithm for extracting blobs from an image:
#. Convert the source image to binary images by applying thresholding with several thresholds from ``minThreshold`` (inclusive) to ``maxThreshold`` (exclusive) with distance ``thresholdStep`` between neighboring thresholds.
#. Extract connected components from every binary image by :ocv:func:`findContours` and calculate their centers.
#. Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the ``minDistBetweenBlobs`` parameter.
#. From the groups, estimate final centers of blobs and their radiuses and return as locations and sizes of keypoints.
This class performs several filtrations of returned blobs. You should set ``filterBy*`` to true/false to turn on/off corresponding filtration. Available filtrations:
* **By color**. This filter compares the intensity of a binary image at the center of a blob to ``blobColor``. If they differ, the blob is filtered out. Use ``blobColor = 0`` to extract dark blobs and ``blobColor = 255`` to extract light blobs.
* **By area**. Extracted blobs have an area between ``minArea`` (inclusive) and ``maxArea`` (exclusive).
* **By circularity**. Extracted blobs have circularity (:math:`\frac{4*\pi*Area}{perimeter * perimeter}`) between ``minCircularity`` (inclusive) and ``maxCircularity`` (exclusive).
* **By ratio of the minimum inertia to maximum inertia**. Extracted blobs have this ratio between ``minInertiaRatio`` (inclusive) and ``maxInertiaRatio`` (exclusive).
* **By convexity**. Extracted blobs have convexity (area / area of blob convex hull) between ``minConvexity`` (inclusive) and ``maxConvexity`` (exclusive).
Default values of parameters are tuned to extract dark circular blobs.
...
...
GridAdaptedFeatureDetector
--------------------------
.. ocv:class:: GridAdaptedFeatureDetector
Class adapting a detector to partition the source image into a grid and detect points in each cell. ::
class GridAdaptedFeatureDetector : public FeatureDetector
{
...
...
DynamicAdaptedFeatureDetector
-----------------------------
.. ocv:class:: DynamicAdaptedFeatureDetector
Adaptively adjusting detector that iteratively detects features until the desired number is found. ::
class DynamicAdaptedFeatureDetector: public FeatureDetector
{
...
...
of keypoints in a set of temporally related images, such as video streams or
panorama series.
``DynamicAdaptedFeatureDetector`` uses another detector, such as FAST or SURF, to do the dirty work,
with the help of ``AdjusterAdapter`` .
If the detected number of features is not large enough,
``AdjusterAdapter`` adjusts the detection parameters so that the next detection
...
...
.. ocv:function:: DynamicAdaptedFeatureDetector::DynamicAdaptedFeatureDetector( const Ptr<AdjusterAdapter>& adjuster, int min_features, int max_features, int max_iters )
Constructs the class.
:param adjuster: :ocv:class:`AdjusterAdapter` that detects features and adjusts parameters.
:param min_features: Minimum desired number of features.
:param max_features: Maximum desired number of features.
:param max_iters: Maximum number of times to try adjusting the feature detector parameters. For :ocv:class:`FastAdjuster`, this number can be high, but with ``Star`` or ``Surf`` many iterations can be time-consuming. At each iteration, the detector is rerun.
AdjusterAdapter
---------------
.. ocv:class:: AdjusterAdapter
Class providing an interface for adjusting parameters of a feature detector. This interface is used by :ocv:class:`DynamicAdaptedFeatureDetector` . It is a wrapper for :ocv:class:`FeatureDetector` that enables adjusting parameters after feature detection. ::
class AdjusterAdapter: public FeatureDetector
{
...
...
See
:ocv:class:`FastAdjuster`,
:ocv:class:`StarAdjuster`, and
:ocv:class:`SurfAdjuster` for concrete implementations.
Creates an adjuster adapter by name ``detectorType``. The detector name is the same as in :ocv:func:`FeatureDetector::create`, but now supports ``"FAST"``, ``"STAR"``, and ``"SURF"`` only.
FastAdjuster
------------
.. ocv:class:: FastAdjuster
:ocv:class:`AdjusterAdapter` for :ocv:class:`FastFeatureDetector`. This class decreases or increases the threshold value by 1. ::
class FastAdjuster: public AdjusterAdapter
{
...
...
StarAdjuster
------------
.. ocv:class:: StarAdjuster
:ocv:class:`AdjusterAdapter` for :ocv:class:`StarFeatureDetector`. This class adjusts the ``responseThreshhold`` of ``StarFeatureDetector``. ::
class StarAdjuster: public AdjusterAdapter
{
...
...
SurfAdjuster
------------
.. ocv:class:: SurfAdjuster
:ocv:class:`AdjusterAdapter` for :ocv:class:`SurfFeatureDetector`. This class adjusts the ``hessianThreshold`` of ``SurfFeatureDetector``. ::
class SurfAdjuster: public AdjusterAdapter
{
...
...
FeatureDetector
---------------
.. ocv:class:: FeatureDetector
Abstract base class for 2D image feature detectors. ::
Abstract interface for extracting and matching a keypoint descriptor. There are also :ocv:class:`DescriptorExtractor` and :ocv:class:`DescriptorMatcher` for these purposes but their interfaces are intended for descriptors represented as vectors in a multidimensional space. ``GenericDescriptorMatcher`` is a more generic interface for descriptors. ``DescriptorMatcher`` and ``GenericDescriptorMatcher`` have two groups of match methods: for matching keypoints of an image with another image or with an image set. ::
:param trainKeypoints: Keypoints from a train image.
The method classifies each keypoint from a query set. The first variant of the method takes a train image and its keypoints as an input argument. The second variant uses the internally stored training collection that can be built using the ``GenericDescriptorMatcher::add`` method.
The methods do the following:
#.
   Call the ``GenericDescriptorMatcher::match`` method to find correspondence between the query set and the training set.

#.
   Set the ``class_id`` field of each keypoint from the query set to ``class_id`` of the corresponding keypoint from the training set.
:param matches: Matches. If a query descriptor (keypoint) is masked out in ``mask``, no match is added for this descriptor. So, the ``matches`` size may be smaller than the query keypoints count.
:param mask: Mask specifying permissible matches between an input query and train keypoints.
:param masks: Set of masks. Each ``masks[i]`` specifies permissible matches between input query keypoints and stored train keypoints from the i-th image.
The methods find the best match for each query keypoint. In the first variant of the method, a train image and its keypoints are the input arguments. In the second variant, query keypoints are matched to the internally stored training collection that can be built using the ``GenericDescriptorMatcher::add`` method. Optional mask (or masks) can be passed to specify which query and training descriptors can be matched. Namely, ``queryKeypoints[i]`` can be matched with ``trainKeypoints[j]`` only if ``mask.at<uchar>(i,j)`` is non-zero.
Finds the ``k`` best matches for each query keypoint.
The methods are extended variants of ``GenericDescriptorMatch::match``. The parameters are similar, and the semantics are similar to ``DescriptorMatcher::knnMatch``. But this class does not require explicitly computed keypoint descriptors.
Draws the found matches of keypoints from two images.
:param img1: First source image.
:param keypoints1: Keypoints from the first source image.
:param img2: Second source image.
:param keypoints2: Keypoints from the second source image.
...
...
:param color: Color of keypoints.
:param flags: Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``. See details above in :ocv:func:`drawMatches`.
Detects corners using the FAST algorithm by E. Rosten (*Machine Learning for High-speed Corner Detection*, 2006).
:param image: Image where keypoints (corners) are detected.
...
...
MSER
----
.. ocv:class:: MSER
Maximally stable extremal region extractor. ::
class MSER : public CvMSERParams
{
...
...
StarDetector
------------
.. ocv:class:: StarDetector
Class implementing the ``Star`` keypoint detector. ::
class StarDetector : CvStarDetectorParams
{
...
...
.. index:: SIFT
SIFT
----
.. ocv:class:: SIFT
Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) approach. ::
class CV_EXPORTS SIFT
{
...
...
.. index:: SURF
SURF
----
.. ocv:class:: SURF
Class for extracting Speeded Up Robust Features from an image. ::
class SURF : public CvSURFParams
{
...
...
The class implements the Speeded Up Robust Features descriptor
[Bay06].
There is a fast multi-scale Hessian keypoint detector that can be used to find keypoints
(default option). But the descriptors can be also computed for the user-specified keypoints.
The algorithm can be used for object tracking and localization, image stitching, and so on. See the ``find_obj.cpp`` demo in the OpenCV samples directory.
.. index:: ORB
ORB
----
.. ocv:class:: ORB
Class for extracting ORB features and descriptors from an image. ::
class ORB
{
...
...
bool useProvidedKeypoints=false) const;
};
The class implements the ORB (Oriented FAST and Rotated BRIEF) keypoint detector and descriptor extractor.
.. index:: RandomizedTree
RandomizedTree
--------------
.. ocv:class:: RandomizedTree
Class containing a base structure for ``RTreeClassifier``. ::
class CV_EXPORTS RandomizedTree
{
...
...
RTreeNode
---------
.. ocv:class:: RTreeNode
Class containing a base structure for ``RandomizedTree``. ::
struct RTreeNode
{
...
...
RTreeClassifier
---------------
.. ocv:class:: RTreeClassifier
Class containing ``RTreeClassifier``. It represents the Calonder descriptor originally introduced by Michael Calonder. ::
.. ocv:function:: BOWKMeansTrainer::BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(), int attempts=3, int flags=KMEANS_PP_CENTERS );
See :ocv:func:`kmeans` function parameters.
BOWImgDescriptorExtractor
-------------------------
.. ocv:class:: BOWImgDescriptorExtractor
Class to compute an image descriptor using the *bag of visual words*. Such a computation consists of the following steps:
#. Compute descriptors for a given image and its keypoints set.
#. Find the nearest visual words from the vocabulary for each keypoint descriptor.
#. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The ``i``-th bin of the histogram is the frequency of the ``i``-th word of the vocabulary in the given image.
:param vocabulary: Vocabulary (can be trained using the inheritor of :ocv:class:`BOWTrainer` ). Each row of the vocabulary is a visual word (cluster center).