Commit 77357127 authored by Vadim Pisarevsky's avatar Vadim Pisarevsky

removed obsolete tex docs in order to avoid possible confusion

parent 8047d050
\cvclass{DynamicAdaptedFeatureDetector}
An adaptively adjusting detector that iteratively detects features until the desired number
is found.
If the detector is persisted, it will ``remember'' the parameters
used on the last detection. In this way, the detector may be used to obtain consistent numbers
of keypoints in sets of temporally related images, such as video streams or
panorama series.
The DynamicAdaptedFeatureDetector uses another detector, such as FAST or SURF, to do the actual detection,
with the help of an AdjusterAdapter.
If a detection yields an unsatisfactory number of features,
the AdjusterAdapter adjusts the detection parameters so that the next detection
results in more or fewer features. This is repeated until either the desired number of features is found
or the parameters are maxed out.
Adapters can easily be implemented for any detector via the
AdjusterAdapter interface.
Beware that this class is not thread safe, since the adjustment of parameters breaks the constness
of the detection routine.
Here is a sample of how to create a DynamicAdaptedFeatureDetector.
\begin{lstlisting}
//sample usage:
//will create a detector that attempts to find
//100 - 110 FAST keypoints, and will at most run
//FAST feature detection 10 times until that
//number of keypoints is found
Ptr<FeatureDetector> detector(new DynamicAdaptedFeatureDetector(
    new FastAdjuster(20, true), 100, 110, 10));
\end{lstlisting}
\begin{lstlisting}
class DynamicAdaptedFeatureDetector: public FeatureDetector
{
public:
    DynamicAdaptedFeatureDetector( const Ptr<AdjusterAdapter>& adjuster,
        int min_features=400, int max_features=500, int max_iters=5 );
...
};
\end{lstlisting}
\cvCppFunc{DynamicAdaptedFeatureDetector::DynamicAdaptedFeatureDetector}
The DynamicAdaptedFeatureDetector constructor.
\cvdefCpp{
DynamicAdaptedFeatureDetector::DynamicAdaptedFeatureDetector(
\par const Ptr<AdjusterAdapter>\& adjuster,
\par int min\_features, \par int max\_features, \par int max\_iters );
}
\begin{description}
\cvarg{adjuster}{An \cvCppCross{AdjusterAdapter} that will do the detection and parameter
adjustment.}
\cvarg{min\_features}{The minimum desired number of features.}
\cvarg{max\_features}{The maximum desired number of features.}
\cvarg{max\_iters}{The maximum number of times to try to adjust the feature detector parameters. For the \cvCppCross{FastAdjuster} this number can be high,
but with Star or Surf many iterations can get time consuming. The detector is rerun at each iteration, so keep this in mind when choosing this value.}
\end{description}
\cvclass{AdjusterAdapter}
A feature detector parameter adjuster interface. It is used by the \cvCppCross{DynamicAdaptedFeatureDetector}
and is a wrapper for \cvCppCross{FeatureDetector} that allows the detection parameters to be adjusted after a detection.
See \cvCppCross{FastAdjuster}, \cvCppCross{StarAdjuster}, \cvCppCross{SurfAdjuster} for concrete implementations.
\begin{lstlisting}
class AdjusterAdapter: public FeatureDetector
{
public:
virtual ~AdjusterAdapter() {}
virtual void tooFew(int min, int n_detected) = 0;
virtual void tooMany(int max, int n_detected) = 0;
virtual bool good() const = 0;
};
\end{lstlisting}
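Adapters for other detectors can be written by implementing this interface. The following hypothetical sketch (the \texttt{MyAdjuster} class and its \texttt{thresh\_} member are illustrative, not part of OpenCV) shows the general shape of such an implementation:
\begin{lstlisting}
// A hypothetical adjuster that tunes a single threshold parameter.
// (The detection method inherited from FeatureDetector must also
// be implemented; it is omitted here for brevity.)
class MyAdjuster: public AdjusterAdapter
{
public:
    MyAdjuster(int init_thresh = 30) : thresh_(init_thresh) {}
    // too few features found: loosen the threshold
    virtual void tooFew(int min, int n_detected) { thresh_ -= 2; }
    // too many features found: tighten the threshold
    virtual void tooMany(int max, int n_detected) { thresh_ += 2; }
    // report whether the threshold is still in a usable range
    virtual bool good() const { return thresh_ > 0 && thresh_ < 255; }
protected:
    int thresh_;
};
\end{lstlisting}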
\cvCppFunc{AdjusterAdapter::tooFew}
\cvdefCpp{
virtual void tooFew(int min, int n\_detected) = 0;
}
Too few features were detected, so adjust the detector parameters accordingly, so that the next
detection yields more features.
\begin{description}
\cvarg{min}{The minimum desired number of features.}
\cvarg{n\_detected}{The number of features actually detected in the last run.}
\end{description}
An example implementation of this is
\begin{lstlisting}
void FastAdjuster::tooFew(int min, int n_detected)
{
thresh_--;
}
\end{lstlisting}
\cvCppFunc{AdjusterAdapter::tooMany}
Too many features were detected, so adjust the detector parameters accordingly, so that the next
detection yields fewer features.
\cvdefCpp{
virtual void tooMany(int max, int n\_detected) = 0;
}
\begin{description}
\cvarg{max}{The maximum desired number of features.}
\cvarg{n\_detected}{The number of features actually detected in the last run.}
\end{description}
An example implementation of this is
\begin{lstlisting}
void FastAdjuster::tooMany(int max, int n_detected)
{
thresh_++;
}
\end{lstlisting}
\cvCppFunc{AdjusterAdapter::good}
Reports whether the parameters are still valid or maxed out. Returns false if the parameters cannot be adjusted any further.
\cvdefCpp{
virtual bool good() const = 0;
}
An example implementation of this is
\begin{lstlisting}
bool FastAdjuster::good() const
{
return (thresh_ > 1) && (thresh_ < 200);
}
\end{lstlisting}
\cvclass{FastAdjuster}
An \cvCppCross{AdjusterAdapter} for the \cvCppCross{FastFeatureDetector}. It simply decrements or increments the
FAST threshold by 1.
\begin{lstlisting}
class FastAdjuster: public AdjusterAdapter
{
public:
FastAdjuster(int init_thresh = 20, bool nonmax = true);
...
};
\end{lstlisting}
\cvclass{StarAdjuster}
An \cvCppCross{AdjusterAdapter} for the \cvCppCross{StarFeatureDetector}. It adjusts the responseThreshold of
StarFeatureDetector.
\begin{lstlisting}
class StarAdjuster: public AdjusterAdapter
{
StarAdjuster(double initial_thresh = 30.0);
...
};
\end{lstlisting}
\cvclass{SurfAdjuster}
An \cvCppCross{AdjusterAdapter} for the \cvCppCross{SurfFeatureDetector}. This adjusts the hessianThreshold of
SurfFeatureDetector.
\begin{lstlisting}
class SurfAdjuster: public AdjusterAdapter
{
SurfAdjuster();
...
};
\end{lstlisting}
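Each of these adjusters plugs into \cvCppCross{DynamicAdaptedFeatureDetector} in the same way. A brief sketch, assuming \texttt{image} is an already loaded \texttt{cv::Mat}:
\begin{lstlisting}
// target 400-500 STAR keypoints, adjusting the threshold at most 5 times
Ptr<FeatureDetector> detector(new DynamicAdaptedFeatureDetector(
    new StarAdjuster(), 400, 500, 5));
vector<KeyPoint> keypoints;
detector->detect(image, keypoints);
\end{lstlisting}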
\ifCpp
\section{Object Categorization}
This section describes some approaches based on local 2D features that can be used
for object categorization.
\cvclass{BOWTrainer}
Abstract base class for training a ``bag of visual words'' vocabulary from a set of descriptors.
See, e.g., ``Visual Categorization with Bags of Keypoints'' by Gabriella Csurka, Christopher R. Dance,
Lixin Fan, Jutta Willamowski, and Cedric Bray, 2004.
\begin{lstlisting}
class BOWTrainer
{
public:
BOWTrainer(){}
virtual ~BOWTrainer(){}
void add( const Mat& descriptors );
const vector<Mat>& getDescriptors() const;
int descripotorsCount() const;
virtual void clear();
virtual Mat cluster() const = 0;
virtual Mat cluster( const Mat& descriptors ) const = 0;
protected:
...
};
\end{lstlisting}
\cvCppFunc{BOWTrainer::add}
Adds descriptors to the training set. The training set is clustered using the \texttt{cluster}
method to construct the vocabulary.
\cvdefCpp{
void BOWTrainer::add( const Mat\& descriptors );
}
\begin{description}
\cvarg{descriptors}{Descriptors to add to the training set. Each row of the \texttt{descriptors}
matrix is one descriptor.}
\end{description}
\cvCppFunc{BOWTrainer::getDescriptors}
Returns the training set of descriptors.
\cvdefCpp{
const vector<Mat>\& BOWTrainer::getDescriptors() const;
}
\cvCppFunc{BOWTrainer::descripotorsCount}
Returns the total number of descriptors stored in the training set.
\cvdefCpp{
int BOWTrainer::descripotorsCount() const;
}
\cvCppFunc{BOWTrainer::cluster}
Clusters train descriptors. The vocabulary consists of the cluster centers, so this method
returns the vocabulary. In the first method variant, the train descriptors stored in the object are
clustered; in the second variant, the input descriptors are clustered.
\cvdefCpp{
Mat BOWTrainer::cluster() const;
}
\cvdefCpp{
Mat BOWTrainer::cluster( const Mat\& descriptors ) const;
}
\begin{description}
\cvarg{descriptors}{Descriptors to cluster. Each row of the \texttt{descriptors}
matrix is one descriptor. The descriptors are not added
to the inner train descriptor set.}
\end{description}
\cvclass{BOWKMeansTrainer}
A \cvCppCross{kmeans}-based class to train a visual vocabulary using the ``bag of visual words'' approach.
\begin{lstlisting}
class BOWKMeansTrainer : public BOWTrainer
{
public:
BOWKMeansTrainer( int clusterCount, const TermCriteria& termcrit=TermCriteria(),
int attempts=3, int flags=KMEANS_PP_CENTERS );
virtual ~BOWKMeansTrainer(){}
// Returns trained vocabulary (i.e. cluster centers).
virtual Mat cluster() const;
virtual Mat cluster( const Mat& descriptors ) const;
protected:
...
};
\end{lstlisting}
For an explanation of the constructor parameters, see the \cvCppCross{kmeans} function
arguments.
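A brief sketch of vocabulary training, assuming \texttt{trainDescriptors} is a \texttt{cv::Mat} of descriptors (one per row) already extracted from the training images:
\begin{lstlisting}
// cluster the training descriptors into a 1000-word vocabulary
int vocabularySize = 1000;
BOWKMeansTrainer bowTrainer(vocabularySize);
bowTrainer.add(trainDescriptors);
Mat vocabulary = bowTrainer.cluster();
\end{lstlisting}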
\cvclass{BOWImgDescriptorExtractor}
Class to compute an image descriptor using the ``bag of visual words'' approach. In brief,
the computation consists of the following steps: \\
1. Compute descriptors for the given image and its keypoints set, \\
2. Find the nearest visual word from the vocabulary for each keypoint descriptor, \\
3. Compute the image descriptor as a normalized histogram of vocabulary words encountered in the image, i.e.
the \texttt{i}-th bin of the histogram is the frequency of the \texttt{i}-th vocabulary word in the given image.
\begin{lstlisting}
class BOWImgDescriptorExtractor
{
public:
BOWImgDescriptorExtractor( const Ptr<DescriptorExtractor>& dextractor,
const Ptr<DescriptorMatcher>& dmatcher );
virtual ~BOWImgDescriptorExtractor(){}
void setVocabulary( const Mat& vocabulary );
const Mat& getVocabulary() const;
void compute( const Mat& image, vector<KeyPoint>& keypoints,
Mat& imgDescriptor,
vector<vector<int> >* pointIdxsOfClusters=0,
Mat* descriptors=0 );
int descriptorSize() const;
int descriptorType() const;
protected:
...
};
\end{lstlisting}
\cvCppFunc{BOWImgDescriptorExtractor::BOWImgDescriptorExtractor}
Constructor.
\cvdefCpp{
BOWImgDescriptorExtractor::BOWImgDescriptorExtractor(
\par const Ptr<DescriptorExtractor>\& dextractor,
\par const Ptr<DescriptorMatcher>\& dmatcher );
}
\begin{description}
\cvarg{dextractor}{Descriptor extractor that will be used to compute descriptors
for the input image and its keypoints.}
\cvarg{dmatcher}{Descriptor matcher that will be used to find the nearest word of the trained vocabulary for
each keypoint descriptor of the image.}
\end{description}
\cvCppFunc{BOWImgDescriptorExtractor::setVocabulary}
Method to set visual vocabulary.
\cvdefCpp{
void BOWImgDescriptorExtractor::setVocabulary( const Mat\& vocabulary );
}
\begin{description}
\cvarg{vocabulary}{Vocabulary (can be trained using an inheritor of \cvCppCross{BOWTrainer}).
Each row of the vocabulary is one visual word (cluster center).}
\end{description}
\cvCppFunc{BOWImgDescriptorExtractor::getVocabulary}
Returns the set vocabulary.
\cvdefCpp{
const Mat\& BOWImgDescriptorExtractor::getVocabulary() const;
}
\cvCppFunc{BOWImgDescriptorExtractor::compute}
Computes the image descriptor using the set visual vocabulary.
\cvdefCpp{
void BOWImgDescriptorExtractor::compute( const Mat\& image,
\par vector<KeyPoint>\& keypoints, Mat\& imgDescriptor,
\par vector<vector<int> >* pointIdxsOfClusters=0,
\par Mat* descriptors=0 );
}
\begin{description}
\cvarg{image}{The image for which the descriptor is computed.}
\cvarg{keypoints}{Keypoints detected in the input image.}
\cvarg{imgDescriptor}{Output computed image descriptor.}
\cvarg{pointIdxsOfClusters}{Indices of keypoints that belong to each cluster, i.e.
\texttt{pointIdxsOfClusters[i]} contains the indices of the keypoints that belong
to the \texttt{i}-th cluster (vocabulary word). Returned if it is not 0.}
\cvarg{descriptors}{Descriptors of the image keypoints. Returned if it is not 0.}
\end{description}
\cvCppFunc{BOWImgDescriptorExtractor::descriptorSize}
Returns the image descriptor size if the vocabulary was set, and 0 otherwise.
\cvdefCpp{
int BOWImgDescriptorExtractor::descriptorSize() const;
}
\cvCppFunc{BOWImgDescriptorExtractor::descriptorType}
Returns the image descriptor type.
\cvdefCpp{
int BOWImgDescriptorExtractor::descriptorType() const;
}
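Putting the pieces together, here is a brief sketch of computing a BOW image descriptor, assuming a \texttt{vocabulary} has already been trained and \texttt{image} is a loaded \texttt{cv::Mat} (the choice of SURF detector/extractor and brute-force matcher is illustrative):
\begin{lstlisting}
// describe the image by its histogram of visual words
Ptr<FeatureDetector> detector(new SurfFeatureDetector());
Ptr<DescriptorExtractor> extractor(new SurfDescriptorExtractor());
Ptr<DescriptorMatcher> matcher(new BruteForceMatcher<L2<float> >());
BOWImgDescriptorExtractor bowExtractor(extractor, matcher);
bowExtractor.setVocabulary(vocabulary);

vector<KeyPoint> keypoints;
detector->detect(image, keypoints);
Mat imgDescriptor;
bowExtractor.compute(image, keypoints, imgDescriptor);
// imgDescriptor now holds one normalized histogram row
\end{lstlisting}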
\fi
\documentclass[11pt]{book}
\newcommand{\targetlang}{}
\usepackage{myopencv}
\usepackage{amsmath}
\usepackage{ifthen}
\def\genc{true}
\def\genpy{false}
\def\targetlang{cpp}
\title{OpenCV Reference Manual} % used by \maketitle
\author{v2.0} % used by \maketitle
\date{Nov 25, 2009} % used by \maketitle
\begin{document}
\maketitle % automatic title!
\setcounter{tocdepth}{8}
\tableofcontents
%%% Chapters %%%
\input{opencvref_body}
%%%%%%%%%%%%%%%%
\end{document} % End of document.