Commit 43f3eb9f authored by biagio montesano

New warnings corrected

parent b433ac58
.. _LSDDetector:
Line Segments Detector
======================
Lines extraction methodology
----------------------------
The line extraction methodology described in the following is mainly based on [EDL]_.
The extraction starts with a Gaussian pyramid generated from the original image, which is downsampled N-1 times and blurred N times to obtain N layers (one for each octave), with layer 0 corresponding to the input image. Then, lines are extracted from each layer (octave) of the pyramid using the LSD algorithm.
Unlike the EDLine extractor used in the original article, LSD provides information only about a line's extremes; additional information regarding the line's slope and equation is therefore computed analytically. The number of pixels covered by a line is obtained using `LineIterator <http://docs.opencv.org/modules/core/doc/drawing_functions.html#lineiterator>`_. Extracted lines are returned in the form of KeyLine objects; since extraction is based on a method different from the one used in the `BinaryDescriptor <binary_descriptor.html>`_ class, the data associated with a line's extremes in the original image and in the octave it was extracted from coincide. KeyLine's *class_id* field is used as an index indicating the order of extraction of a line inside a single octave.
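A minimal sketch of the pyramid generation described above (blur, then repeated downsampling), assuming OpenCV's ``GaussianBlur`` and ``pyrDown`` primitives; the helper's name, kernel size and sigma are illustrative choices rather than part of the public API:

.. code-block:: cpp

    #include <opencv2/imgproc.hpp>
    #include <vector>

    /* illustrative only: build the N-layer pyramid described above */
    std::vector<cv::Mat> buildIllustrativePyramid( const cv::Mat& image, int numOctaves, int scale )
    {
      std::vector<cv::Mat> pyramid;
      cv::Mat current = image.clone();

      /* layer 0: blurred copy of the input image */
      cv::GaussianBlur( current, current, cv::Size( 5, 5 ), 1 );
      pyramid.push_back( current );

      /* every further layer is blurred and downsampled by the pyramid scale factor */
      for ( int octave = 1; octave < numOctaves; octave++ )
      {
        cv::pyrDown( current, current, cv::Size( current.cols / scale, current.rows / scale ) );
        pyramid.push_back( current );
      }
      return pyramid;
    }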
LSDDetector::createLSDDetector
------------------------------
Creates an LSDDetector object, using smart pointers.
.. ocv:function:: Ptr<LSDDetector> LSDDetector::createLSDDetector()
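A minimal creation sketch (the ``cv`` namespace is assumed):

.. code-block:: cpp

    cv::Ptr<cv::LSDDetector> lsd = cv::LSDDetector::createLSDDetector();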
LSDDetector::detect
-------------------
Detects lines inside an image (or a set of images); a short usage sketch follows the parameter list.
.. ocv:function:: void LSDDetector::detect( const Mat& image, std::vector<KeyLine>& keylines, int scale, int numOctaves, const Mat& mask=Mat())
.. ocv:function:: void LSDDetector::detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, int scale, int numOctaves, const std::vector<Mat>& masks=std::vector<Mat>() ) const
:param image: input image
:param images: input images
:param keylines: vector or set of vectors that will store extracted lines for one or more images
:param mask: mask matrix to detect only KeyLines of interest
:param masks: vector of mask matrices to detect only KeyLines of interest from each input image
:param scale: scale factor used in pyramids generation
:param numOctaves: number of octaves inside pyramid
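An illustrative end-to-end call, assuming an image has already been loaded into ``imageMat``; the scale factor, number of octaves and all-ones mask below are example choices:

.. code-block:: cpp

    #include <opencv2/line_descriptor.hpp>
    #include <vector>

    using namespace cv;

    void detectLinesExample( const Mat& imageMat )
    {
      /* create the detector through its smart-pointer factory */
      Ptr<LSDDetector> lsd = LSDDetector::createLSDDetector();

      /* optional mask: an all-ones mask keeps every detected KeyLine */
      Mat mask = Mat::ones( imageMat.size(), CV_8UC1 );

      /* detect lines on a pyramid built with scale factor 2 and 2 octaves */
      std::vector<KeyLine> keylines;
      lsd->detect( imageMat, keylines, 2, 2, mask );
    }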
References
----------
.. [EDL] Von Gioi, R. Grompone, et al. *LSD: A fast line segment detector with a false detection control*, IEEE Transactions on Pattern Analysis and Machine Intelligence 32.4 (2010): 722-732.
......@@ -125,6 +125,8 @@ References
.. [LBD] Zhang, Lilian, and Reinhard Koch. *An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency*, Journal of Visual Communication and Image Representation 24.7 (2013): 794-805.
.. [EDL] Von Gioi, R. Grompone, et al. *LSD: A fast line segment detector with a false detection control*, IEEE Transactions on Pattern Analysis and Machine Intelligence 32.4 (2010): 722-732.
Summary
-------
......
......@@ -43,7 +43,6 @@
#define __OPENCV_LINE_DESCRIPTOR_HPP__
#include "opencv2/line_descriptor/descriptor.hpp"
//#include "opencv2/core.hpp"
namespace cv
{
......
......@@ -134,15 +134,6 @@ class CV_EXPORTS_W BinaryDescriptor : public Algorithm
};
struct CV_EXPORTS LineDetectionMode
{
enum
{
LSD_DETECTOR = 0, // detect lines using LSD
EDL_DETECTOR = 1 // detect lines using EDLines
};
};
/* constructor */
CV_WRAP
BinaryDescriptor( const BinaryDescriptor::Params &parameters = BinaryDescriptor::Params() );
......@@ -162,53 +153,48 @@ class CV_EXPORTS_W BinaryDescriptor : public Algorithm
int getReductionRatio();
void setReductionRatio( int rRatio );
/* read parameters from a FileNode object and store them (class function ) */
/* reads parameters from a FileNode object and store them (class function ) */
virtual void read( const cv::FileNode& fn );
/* store parameters to a FileStorage object (class function) */
/* stores parameters to a FileStorage object (class function) */
virtual void write( cv::FileStorage& fs ) const;
/* requires line detection (only one image) */
CV_WRAP
void detect( const Mat& image, CV_OUT std::vector<KeyLine>& keypoints, const Mat& mask = Mat(), int flags = LineDetectionMode::LSD_DETECTOR );
void detect( const Mat& image, CV_OUT std::vector<KeyLine>& keypoints, const Mat& mask = Mat() );
/* requires line detection (more than one image) */
void detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, const std::vector<Mat>& masks = std::vector<Mat>(),
int flags = LineDetectionMode::LSD_DETECTOR ) const;
void detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, const std::vector<Mat>& masks =
std::vector<Mat>() ) const;
/* requires descriptors computation (only one image) */
CV_WRAP
void compute( const Mat& image, CV_OUT CV_IN_OUT std::vector<KeyLine>& keylines, CV_OUT Mat& descriptors, bool returnFloatDescr = false, int flags =
LineDetectionMode::LSD_DETECTOR ) const;
void compute( const Mat& image, CV_OUT CV_IN_OUT std::vector<KeyLine>& keylines, CV_OUT Mat& descriptors, bool returnFloatDescr = false ) const;
/* requires descriptors computation (more than one image) */
void compute( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, std::vector<Mat>& descriptors, bool returnFloatDescr =
false,
int flags = LineDetectionMode::LSD_DETECTOR ) const;
false ) const;
/*return descriptor size */
/* returns descriptor size */
int descriptorSize() const;
/* return data type */
/* returns data type */
int descriptorType() const;
/* return norm mode */
/* returns norm mode */
int defaultNorm() const;
/* check whether Gaussian pyramids were created */
bool empty() const;
/* definition of operator () */
CV_WRAP_AS(detectAndCompute)
virtual void operator()( InputArray image, InputArray mask, CV_OUT std::vector<KeyLine>& keylines, OutputArray descriptors,
bool useProvidedKeyLines = false, bool returnFloatDescr = false, int flags = LineDetectionMode::LSD_DETECTOR ) const;
bool useProvidedKeyLines = false, bool returnFloatDescr = false ) const;
protected:
/* implementation of line detection */
virtual void detectImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, int flags, const Mat& mask = Mat() ) const;
virtual void detectImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, const Mat& mask = Mat() ) const;
/* implementation of descriptors' computation */
virtual void computeImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, Mat& descriptors, bool returnFloatDescr, int flags ) const;
virtual void computeImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, Mat& descriptors, bool returnFloatDescr ) const;
/* function inherited from Algorithm */
AlgorithmInfo* info() const;
......@@ -217,49 +203,24 @@ class CV_EXPORTS_W BinaryDescriptor : public Algorithm
/* conversion of an LBD descriptor to its binary representation */
unsigned char binaryConversion( float* f1, float* f2 );
/* compute LBD descriptors */
int computeLBD( ScaleLines &keyLines, int flags );
/* compute LBD descriptors using EDLine extractor */
int computeLBD_EDL( ScaleLines &keyLines );
int computeLBD( ScaleLines &keyLines );
/* compute Gaussian pyramid of input image */
void computeGaussianPyramid( const Mat& image );
/* gather lines in groups.
Each group contains the same line, detected in different octaves */
int OctaveKeyLines( ScaleLines &keyLines );
/* gather lines in groups using EDLine extractor.
/* gathers lines in groups using EDLine extractor.
Each group contains the same line, detected in different octaves */
int OctaveKeyLines_EDL( cv::Mat& image, ScaleLines &keyLines );
/* get coefficients of line passing by two points (in line_extremes) */
void getLineParameters( cv::Vec4i &line_extremes, cv::Vec3i &lineParams );
/* compute the angle between line and X axis */
float getLineDirection( cv::Vec3i &lineParams );
int OctaveKeyLines( cv::Mat& image, ScaleLines &keyLines );
/* the local gaussian coefficient applied to the orthogonal line direction within each band */
std::vector<float> gaussCoefL_;
/* the global gaussian coefficient applied to each Row within line support region */
/* the global gaussian coefficient applied to each row within line support region */
std::vector<float> gaussCoefG_;
/* vector to store horizontal and vertical derivatives of octave images */
std::vector<cv::Mat> dxImg_vector, dyImg_vector;
/* vectot to store sizes of octave images */
std::vector<cv::Size> images_sizes;
/* structure to store lines extracted from each octave image */
std::vector<std::vector<cv::Vec4i> > extractedLines;
/* descriptor parameters */
Params params;
/* vector to store the Gaussian pyramid od an input image */
std::vector<cv::Mat> octaveImages;
/* vector of sizes of downsampled and blurred images */
std::vector<cv::Size> images_sizes;
/*For each octave of image, we define an EDLineDetector, because we can get gradient images (dxImg, dyImg, gImg)
*from the EDLineDetector class without extra computation cost. Another reason is that, if we use
......@@ -269,6 +230,42 @@ class CV_EXPORTS_W BinaryDescriptor : public Algorithm
};
class CV_EXPORTS_W LSDDetector : public Algorithm
{
public:
/* constructor */
LSDDetector()
{
}
/* constructor with smart pointer */
static Ptr<LSDDetector> createLSDDetector();
/* requires line detection (only one image) */
CV_WRAP
void detect( const Mat& image, CV_OUT std::vector<KeyLine>& keypoints, int scale, int numOctaves, const Mat& mask = Mat() );
/* requires line detection (more than one image) */
void detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, int scale, int numOctaves,
const std::vector<Mat>& masks = std::vector<Mat>() ) const;
private:
/* compute Gaussian pyramid of input image */
void computeGaussianPyramid( const Mat& image, int numOctaves, int scale );
/* implementation of line detection */
void detectImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, int numOctaves, int scale, const Mat& mask ) const;
/* matrices for Gaussian pyramids */
std::vector<cv::Mat> gaussianPyrs;
protected:
/* function inherited from Algorithm */
AlgorithmInfo* info() const;
};
class CV_EXPORTS_W BinaryDescriptorMatcher : public Algorithm
{
......
......@@ -61,20 +61,19 @@ static void help()
<< std::endl;
}
inline void writeMat(cv::Mat m, std::string name, int n)
inline void writeMat( cv::Mat m, std::string name, int n )
{
std::stringstream ss;
std::string s;
ss << n;
ss >> s;
std::string fileNameConf = name + s;
cv::FileStorage fsConf(fileNameConf, cv::FileStorage::WRITE);
fsConf << "m" << m;
fsConf.release();
std::stringstream ss;
std::string s;
ss << n;
ss >> s;
std::string fileNameConf = name + s;
cv::FileStorage fsConf( fileNameConf, cv::FileStorage::WRITE );
fsConf << "m" << m;
fsConf.release();
}
int main( int argc, char** argv )
{
/* get parameters from command line */
......@@ -102,7 +101,7 @@ int main( int argc, char** argv )
/* compute lines */
std::vector<KeyLine> keylines;
bd->detect( imageMat, keylines, mask, 1 );
bd->detect( imageMat, keylines, mask );
/* select only lines from first octave */
std::vector<KeyLine> octave0;
......@@ -115,7 +114,7 @@ int main( int argc, char** argv )
/* compute descriptors */
cv::Mat descriptors;
bd->compute( imageMat, octave0, descriptors, false, 1 );
writeMat(descriptors, "bd_descriptors", 0);
bd->compute( imageMat, octave0, descriptors, false );
writeMat( descriptors, "bd_descriptors", 0 );
}
......@@ -80,7 +80,7 @@ int main( int argc, char** argv )
std::cout << "Error, image could not be loaded. Please, check its path" << std::endl;
}
/* create a ramdom binary mask */
/* create a random binary mask */
cv::Mat mask = Mat::ones( imageMat.size(), CV_8UC1 );
/* create a pointer to a BinaryDescriptor object with default parameters */
......@@ -91,16 +91,15 @@ int main( int argc, char** argv )
/* extract lines */
cv::Mat output = imageMat.clone();
bd->detect( imageMat, lines, mask, 1 );
bd->detect( imageMat, lines, mask );
/* draw lines extracted from octave 0 */
if( output.channels() == 1 )
cvtColor( output, output, COLOR_GRAY2BGR );
for ( size_t i = 0; i < lines.size(); i++ )
{
KeyLine kl = lines[i];
if( kl.octave == 0 /*&& kl.response >0.08*/)
if( kl.octave == 0)
{
/* get a random color */
int R = ( rand() % (int) ( 255 + 1 ) );
......@@ -112,7 +111,7 @@ int main( int argc, char** argv )
Point pt2 = Point( kl.endPointX, kl.endPointY );
/* draw line */
line( output, pt1, pt2, Scalar( B, G, R ), 5 );
line( output, pt1, pt2, Scalar( B, G, R ), 3 );
}
}
......
......@@ -128,13 +128,4 @@ int main( int argc, char** argv )
std::vector<std::vector<DMatch> > matches;
bdm->radiusMatch( queries, matches, 30 );
/* print matches */
for ( size_t q = 0; q < matches.size(); q++ )
{
for ( size_t m = 0; m < matches[q].size(); m++ )
{
DMatch dm = matches[q][m];
std::cout << "Descriptor: " << q << " Image: " << dm.imgIdx << " Distance: " << dm.distance << std::endl;
}
}
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2014, Biagio Montesano, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
using namespace cv;
Ptr<LSDDetector> LSDDetector::createLSDDetector()
{
return Ptr<LSDDetector>( new LSDDetector() );
}
/* compute Gaussian pyramid of input image */
void LSDDetector::computeGaussianPyramid( const Mat& image, int numOctaves, int scale )
{
/* clear class fields */
gaussianPyrs.clear();
/* insert input image into pyramid */
cv::Mat currentMat = image.clone();
cv::GaussianBlur( currentMat, currentMat, cv::Size( 5, 5 ), 1 );
gaussianPyrs.push_back( currentMat );
/* fill Gaussian pyramid */
for ( int pyrCounter = 1; pyrCounter < numOctaves; pyrCounter++ )
{
/* compute and store next image in pyramid and its size */
pyrDown( currentMat, currentMat, Size( currentMat.cols / scale, currentMat.rows / scale ) );
gaussianPyrs.push_back( currentMat );
}
}
/* check lines' extremes */
inline void checkLineExtremes( cv::Vec4i& extremes, cv::Size imageSize )
{
if( extremes[0] < 0 )
extremes[0] = 0;
if( extremes[0] >= imageSize.width )
extremes[0] = imageSize.width - 1;
if( extremes[2] < 0 )
extremes[2] = 0;
if( extremes[2] >= imageSize.width )
extremes[2] = imageSize.width - 1;
if( extremes[1] < 0 )
extremes[1] = 0;
if( extremes[1] >= imageSize.height )
extremes[1] = imageSize.height - 1;
if( extremes[3] < 0 )
extremes[3] = 0;
if( extremes[3] >= imageSize.height )
extremes[3] = imageSize.height - 1;
}
/* requires line detection (only one image) */
void LSDDetector::detect( const Mat& image, CV_OUT std::vector<KeyLine>& keylines, int scale, int numOctaves, const Mat& mask )
{
if( mask.data != NULL && ( mask.size() != image.size() || mask.type() != CV_8UC1 ) )
{
std::cout << "Mask error while detecting lines: " << "please check its dimensions and that data type is CV_8UC1" << std::endl;
CV_Assert( false );
}
else
detectImpl( image, keylines, numOctaves, scale, mask );
}
/* requires line detection (more than one image) */
void LSDDetector::detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, int scale, int numOctaves,
const std::vector<Mat>& masks ) const
{
/* prepare output container and detect lines from each image */
keylines.resize( images.size() );
for ( size_t counter = 0; counter < images.size(); counter++ )
{
/* skip mask validation when the default (empty) mask vector is passed */
if( !masks.empty() && masks[counter].data != NULL && ( masks[counter].size() != images[counter].size() || masks[counter].type() != CV_8UC1 ) )
{
std::cout << "Masks error while detecting lines: " << "please check their dimensions and that data types are CV_8UC1" << std::endl;
CV_Assert( false );
}
detectImpl( images[counter], keylines[counter], numOctaves, scale, masks.empty() ? Mat() : masks[counter] );
}
}
/* implementation of line detection */
void LSDDetector::detectImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, int numOctaves, int scale, const Mat& mask ) const
{
cv::Mat image;
if( imageSrc.channels() != 1 )
cvtColor( imageSrc, image, COLOR_BGR2GRAY );
else
image = imageSrc.clone();
/* check whether image depth is different from 0 */
if( image.depth() != 0 )
{
std::cout << "Warning, image depth != 0" << std::endl;
CV_Assert( false );
}
/* create a pointer to self */
LSDDetector *lsd = const_cast<LSDDetector*>( this );
/* compute Gaussian pyramids */
lsd->computeGaussianPyramid( image, numOctaves, scale );
/* create an LSD extractor */
cv::Ptr<cv::LineSegmentDetector> ls = cv::createLineSegmentDetector( cv::LSD_REFINE_ADV );
/* prepare a vector to host extracted segments */
std::vector<std::vector<cv::Vec4i> > lines_lsd;
/* extract lines */
for ( int i = 0; i < numOctaves; i++ )
{
std::vector<Vec4i> octave_lines;
ls->detect( gaussianPyrs[i], octave_lines );
lines_lsd.push_back( octave_lines );
}
/* create keylines */
for ( int j = 0; j < (int) lines_lsd.size(); j++ )
{
for ( int k = 0; k < (int) lines_lsd[j].size(); k++ )
{
KeyLine kl;
cv::Vec4i extremes = lines_lsd[j][k];
/* check data validity */
checkLineExtremes( extremes, gaussianPyrs[j].size() );
/* fill KeyLine's fields */
kl.startPointX = extremes[0];
kl.startPointY = extremes[1];
kl.endPointX = extremes[2];
kl.endPointY = extremes[3];
kl.sPointInOctaveX = extremes[0];
kl.sPointInOctaveY = extremes[1];
kl.ePointInOctaveX = extremes[2];
kl.ePointInOctaveY = extremes[3];
kl.lineLength = sqrt( pow( extremes[0] - extremes[2], 2 ) + pow( extremes[1] - extremes[3], 2 ) );
/* compute number of pixels covered by line */
LineIterator li( gaussianPyrs[j], Point( extremes[0], extremes[1] ), Point( extremes[2], extremes[3] ) );
kl.numOfPixels = li.count;
kl.angle = atan2( ( kl.endPointY - kl.startPointY ), ( kl.endPointX - kl.startPointX ) );
kl.class_id = k;
kl.octave = j;
kl.size = ( kl.endPointX - kl.startPointX ) * ( kl.endPointY - kl.startPointY );
kl.response = kl.lineLength / max( gaussianPyrs[j].cols, gaussianPyrs[j].rows );
kl.pt = Point( ( kl.endPointX + kl.startPointX ) / 2, ( kl.endPointY + kl.startPointY ) / 2 );
keylines.push_back( kl );
}
}
/* delete undesired KeyLines, according to input mask */
if( !mask.empty() )
{
for ( size_t keyCounter = 0; keyCounter < keylines.size(); keyCounter++ )
{
KeyLine kl = keylines[keyCounter];
if( mask.at<uchar>( kl.startPointY, kl.startPointX ) == 0 && mask.at<uchar>( kl.endPointY, kl.endPointX ) == 0 )
{
/* erase and step back, so the element shifted into this position is not skipped */
keylines.erase( keylines.begin() + keyCounter );
keyCounter--;
}
}
}
}
......@@ -154,9 +154,10 @@ BinaryDescriptor::BinaryDescriptor( const BinaryDescriptor::Params &parameters )
params( parameters )
{
/* reserve enough space for EDLine objects */
/* reserve enough space for EDLine objects and images in Gaussian pyramid */
edLineVec_.resize( params.numOfOctave_ );
images_sizes.resize( params.numOfOctave_ );
for ( unsigned int i = 0; i < params.numOfOctave_; i++ )
edLineVec_[i] = new EDLineDetector;
......@@ -196,7 +197,7 @@ BinaryDescriptor::BinaryDescriptor( const BinaryDescriptor::Params &parameters )
/* definition of operator () */
void BinaryDescriptor::operator()( InputArray image, InputArray mask, CV_OUT std::vector<KeyLine>& keylines, OutputArray descriptors,
bool useProvidedKeyLines, bool returnFloatDescr, int flags ) const
bool useProvidedKeyLines, bool returnFloatDescr ) const
{
/* create some matrix objects */
......@@ -214,10 +215,10 @@ void BinaryDescriptor::operator()( InputArray image, InputArray mask, CV_OUT std
/* require drawing KeyLines detection if demanded */
if( !useProvidedKeyLines )
detectImpl( imageMat, keylines, flags, maskMat );
detectImpl( imageMat, keylines, maskMat );
/* compute descriptors */
computeImpl( imageMat, keylines, descrMat, returnFloatDescr, flags );
computeImpl( imageMat, keylines, descrMat, returnFloatDescr );
}
BinaryDescriptor::~BinaryDescriptor()
......@@ -254,12 +255,6 @@ int BinaryDescriptor::descriptorSize() const
return 32 * 8;
}
/* check whether Gaussian pyramids were created */
bool BinaryDescriptor::empty() const
{
return octaveImages.empty();
}
/* power function with error management */
static inline int get2Pow( int i )
{
......@@ -287,109 +282,8 @@ unsigned char BinaryDescriptor::binaryConversion( float* f1, float* f2 )
}
/* get coefficients of line passing by two points in (line_extremes) */
void BinaryDescriptor::getLineParameters( cv::Vec4i& line_extremes, cv::Vec3i& lineParams )
{
int x1 = line_extremes[0];
int x2 = line_extremes[2];
int y1 = line_extremes[1];
int y2 = line_extremes[3];
/* line is parallel to Y axis */
if( x1 == x2 )
{
lineParams[0] = 1;
lineParams[1] = 0;
lineParams[2] = x1 /* or x2 */;
}
/* line is parallel to X axis */
else if( y1 == y2 )
{
lineParams[0] = 0;
lineParams[1] = 1;
lineParams[2] = y1 /* or y2 */;
}
/* line is not parallel to any axis */
else
{
lineParams[0] = y1 - y2;
lineParams[1] = x2 - x1;
lineParams[2] = -y1 * ( x2 - x1 ) + x1 * ( y2 - y1 );
}
}
/* compute the angle between line and X axis */
float BinaryDescriptor::getLineDirection( cv::Vec3i &lineParams )
{
/* line is parallel to X axis */
if( lineParams[0] == 0 )
return 0;
/* line is parallel to Y axis */
else if( lineParams[1] == 0 )
return M_PI / 2;
/* line is not parallel to any axis */
else
return atan2( -lineParams[0], lineParams[1] );
}
/* compute Gaussian pyramid of input image */
void BinaryDescriptor::computeGaussianPyramid( const Mat& image )
{
/* clear class fields */
images_sizes.clear();
octaveImages.clear();
extractedLines.clear();
/* insert input image into pyramid */
cv::Mat currentMat = image.clone();
cv::GaussianBlur( currentMat, currentMat, cv::Size( 5, 5 ), 1 );
octaveImages.push_back( currentMat );
images_sizes.push_back( currentMat.size() );
/* fill Gaussian pyramid */
for ( int pyrCounter = 1; pyrCounter < params.numOfOctave_; pyrCounter++ )
{
/* compute and store next image in pyramid and its size */
pyrDown( currentMat, currentMat, Size( currentMat.cols / params.reductionRatio, currentMat.rows / params.reductionRatio ) );
octaveImages.push_back( currentMat );
images_sizes.push_back( currentMat.size() );
}
}
/* check lines' extremes */
inline void checkLineExtremes( cv::Vec4i& extremes, cv::Size imageSize )
{
if( extremes[0] < 0 )
extremes[0] = 0;
if( extremes[0] >= imageSize.width )
extremes[0] = imageSize.width - 1;
if( extremes[2] < 0 )
extremes[2] = 0;
if( extremes[2] >= imageSize.width )
extremes[2] = imageSize.width - 1;
if( extremes[1] < 0 )
extremes[1] = 0;
if( extremes[1] >= imageSize.height )
extremes[1] = imageSize.height - 1;
if( extremes[3] < 0 )
extremes[3] = 0;
if( extremes[3] >= imageSize.height )
extremes[3] = imageSize.height - 1;
}
/* requires line detection (only one image) */
void BinaryDescriptor::detect( const Mat& image, CV_OUT std::vector<KeyLine>& keylines, const Mat& mask, int flags )
void BinaryDescriptor::detect( const Mat& image, CV_OUT std::vector<KeyLine>& keylines, const Mat& mask )
{
if( mask.data != NULL && ( mask.size() != image.size() || mask.type() != CV_8UC1 ) )
{
......@@ -400,12 +294,11 @@ void BinaryDescriptor::detect( const Mat& image, CV_OUT std::vector<KeyLine>& ke
}
else
detectImpl( image, keylines, flags, mask );
detectImpl( image, keylines, mask );
}
/* requires line detection (more than one image) */
void BinaryDescriptor::detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, const std::vector<Mat>& masks,
int flags ) const
void BinaryDescriptor::detect( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, const std::vector<Mat>& masks ) const
{
/* detect lines from each image */
for ( size_t counter = 0; counter < images.size(); counter++ )
......@@ -418,11 +311,11 @@ void BinaryDescriptor::detect( const std::vector<Mat>& images, std::vector<std::
CV_Assert( false );
}
detectImpl( images[counter], keylines[counter], flags, masks[counter] );
detectImpl( images[counter], keylines[counter], masks[counter] );
}
}
void BinaryDescriptor::detectImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, int flags, const Mat& mask ) const
void BinaryDescriptor::detectImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, const Mat& mask ) const
{
cv::Mat image;
......@@ -441,33 +334,9 @@ void BinaryDescriptor::detectImpl( const Mat& imageSrc, std::vector<KeyLine>& ke
/* create a pointer to self */
BinaryDescriptor *bn = const_cast<BinaryDescriptor*>( this );
/* compute Gaussian pyramid */
if( flags == 0 )
bn->computeGaussianPyramid( image );
/* detect and arrange lines across octaves */
ScaleLines sl;
Mat m = image.clone();
cvtColor( m, m, COLOR_GRAY2BGR );
if( flags == 0 )
bn->OctaveKeyLines( sl );
else
bn->OctaveKeyLines_EDL( image, sl );
Mat temp = image.clone();
cvtColor( temp, temp, COLOR_GRAY2BGR );
for ( size_t i = 0; i < sl.size(); i++ )
{
for ( size_t j = 0; j < sl[i].size(); j++ )
{
OctaveSingleLine tempOSL = sl[i][j];
line( m, Point( tempOSL.startPointX, tempOSL.startPointY ), Point( tempOSL.endPointX, tempOSL.endPointY ), Scalar( 255, 0, 0 ), 5 );
}
}
imshow( "Immagine", m );
waitKey();
bn->OctaveKeyLines( image, sl );
/* fill KeyLines vector */
for ( int i = 0; i < (int) sl.size(); i++ )
......@@ -480,9 +349,6 @@ void BinaryDescriptor::detectImpl( const Mat& imageSrc, std::vector<KeyLine>& ke
/* create a KeyLine object */
KeyLine kl;
/* check data validity */
// cv::Vec4i extremes( osl.startPointX, osl.startPointY, osl.endPointX, osl.endPointY );
// checkLineExtremes( extremes, imageSrc.size() );
/* fill KeyLine's fields */
kl.startPointX = osl.startPointX; //extremes[0];
kl.startPointY = osl.startPointY; //extremes[1];
......@@ -522,22 +388,22 @@ void BinaryDescriptor::detectImpl( const Mat& imageSrc, std::vector<KeyLine>& ke
}
/* requires descriptors computation (only one image) */
void BinaryDescriptor::compute( const Mat& image, CV_OUT CV_IN_OUT std::vector<KeyLine>& keylines, CV_OUT Mat& descriptors, bool returnFloatDescr,
int flags ) const
void BinaryDescriptor::compute( const Mat& image, CV_OUT CV_IN_OUT std::vector<KeyLine>& keylines, CV_OUT Mat& descriptors,
bool returnFloatDescr ) const
{
computeImpl( image, keylines, descriptors, returnFloatDescr, flags );
computeImpl( image, keylines, descriptors, returnFloatDescr );
}
/* requires descriptors computation (more than one image) */
void BinaryDescriptor::compute( const std::vector<Mat>& images, std::vector<std::vector<KeyLine> >& keylines, std::vector<Mat>& descriptors,
bool returnFloatDescr, int flags ) const
bool returnFloatDescr ) const
{
for ( size_t i = 0; i < images.size(); i++ )
computeImpl( images[i], keylines[i], descriptors[i], returnFloatDescr, flags );
computeImpl( images[i], keylines[i], descriptors[i], returnFloatDescr );
}
/* implementation of descriptors computation */
void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, Mat& descriptors, bool returnFloatDescr, int flags ) const
void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& keylines, Mat& descriptors, bool returnFloatDescr ) const
{
/* convert input image to gray scale */
cv::Mat image;
......@@ -620,34 +486,9 @@ void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& k
}
}
/* compute Gaussian pyramid, if image is new or pyramid was not
computed before */
BinaryDescriptor *bn = const_cast<BinaryDescriptor*>( this );
/* all structures cleared in computeGaussianPyramid */
bn->computeGaussianPyramid( image );
/* compute Sobel's derivatives */
bn->dxImg_vector.clear();
bn->dyImg_vector.clear();
bn->dxImg_vector.resize( params.numOfOctave_ );
bn->dyImg_vector.resize( params.numOfOctave_ );
for ( size_t sobelCnt = 0; sobelCnt < octaveImages.size(); sobelCnt++ )
{
bn->dxImg_vector[sobelCnt].create( images_sizes[sobelCnt].height, images_sizes[sobelCnt].width, CV_16SC1 );
bn->dyImg_vector[sobelCnt].create( images_sizes[sobelCnt].height, images_sizes[sobelCnt].width, CV_16SC1 );
cv::Sobel( octaveImages[sobelCnt], bn->dxImg_vector[sobelCnt], CV_16SC1, 1, 0, 3 );
cv::Sobel( octaveImages[sobelCnt], bn->dyImg_vector[sobelCnt], CV_16SC1, 0, 1, 3 );
}
/* compute LBD descriptors */
if(flags == 0)
bn->computeLBD( sl, flags );
else
bn->computeLBD_EDL(sl);
BinaryDescriptor* bd = const_cast<BinaryDescriptor*>( this );
bd->computeLBD( sl );
/* resize output matrix */
if( !returnFloatDescr )
......@@ -657,9 +498,9 @@ void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& k
descriptors = cv::Mat( keylines.size(), NUM_OF_BANDS * 8, CV_32FC1 );
/* fill output matrix with descriptors */
for ( size_t k = 0; k < sl.size(); k++ )
for ( int k = 0; k < (int)sl.size(); k++ )
{
for ( size_t lineC = 0; lineC < sl[k].size(); lineC++ )
for ( int lineC = 0; lineC < (int)sl[k].size(); lineC++ )
{
/* get original index of keypoint */
int lineOctave = ( sl[k][lineC] ).octaveCount;
......@@ -667,7 +508,6 @@ void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& k
if( !returnFloatDescr )
{
/* get a pointer to correspondent row in output matrix */
uchar* pointerToRow = descriptors.ptr( originalIndex );
......@@ -677,22 +517,21 @@ void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& k
/* fill current row with binary descriptor */
for ( int comb = 0; comb < 32; comb++ )
{
*pointerToRow = bn->binaryConversion( &desVec[8 * combinations[comb][0]], &desVec[8 * combinations[comb][1]] );
*pointerToRow = bd->binaryConversion( &desVec[8 * combinations[comb][0]], &desVec[8 * combinations[comb][1]] );
pointerToRow++;
}
}
else
{
std::cout << "Descrittori float" <<std::endl;
/* get a pointer to correspondent row in output matrix */
uchar* pointerToRow = descriptors.ptr( originalIndex );
float* pointerToRow = descriptors.ptr<float>( originalIndex );
/* get LBD data */
std::vector<float> desVec = sl[k][lineC].descriptor;
for ( size_t count = 0; count < desVec.size(); count++ )
for ( int count = 0; count < (int)desVec.size(); count++ )
{
*pointerToRow = desVec[count];
pointerToRow++;
......@@ -704,337 +543,7 @@ void BinaryDescriptor::computeImpl( const Mat& imageSrc, std::vector<KeyLine>& k
}
/* gather lines in groups. Each group contains the same line, detected in different octaves */
int BinaryDescriptor::OctaveKeyLines( ScaleLines &keyLines )
{
/* final number of extracted lines */
unsigned int numOfFinalLine = 0;
std::vector<float> prec, w_idth, nfa;
for ( size_t scaleCounter = 0; scaleCounter < octaveImages.size(); scaleCounter++ )
{
/* get current scaled image */
cv::Mat currentScaledImage = octaveImages[scaleCounter];
/* create an LSD detector and store a pointer to it */
cv::Ptr<cv::LineSegmentDetector> ls = cv::createLineSegmentDetector( cv::LSD_REFINE_ADV, 0.8, 0.6, 2.0, 22.5 );
/* prepare a vector to host extracted segments */
std::vector<cv::Vec4i> lines_std;
/* use detector to extract segments */
ls->detect( currentScaledImage, lines_std, w_idth, prec/*, nfa*/);
/* store lines extracted from current image */
extractedLines.push_back( lines_std );
/* update lines counter */
numOfFinalLine += lines_std.size();
}
/* prepare a vector to store octave information associated to extracted lines */
std::vector<OctaveLine> octaveLines( numOfFinalLine );
/* set lines' counter to 0 for reuse */
numOfFinalLine = 0;
/* counter to give a unique ID to lines in LineVecs */
unsigned int lineIDInScaleLineVec = 0;
/* floats to compute lines' lengths */
float dx, dy;
/* loop over lines extracted from scale 0 (original image) */
for ( unsigned int lineCurId = 0; lineCurId < extractedLines[0].size(); lineCurId++ )
{
/* set octave from which it was extracted */
octaveLines[numOfFinalLine].octaveCount = 0;
/* set ID within its octave */
octaveLines[numOfFinalLine].lineIDInOctave = lineCurId;
/* set a unique ID among all lines extracted in all octaves */
octaveLines[numOfFinalLine].lineIDInScaleLineVec = lineIDInScaleLineVec;
/* compute absolute value of difference between X coordinates of line's extreme points */
dx = fabs( ( extractedLines[0][lineCurId] )[0] - ( extractedLines[0][lineCurId] )[2] );
/* compute absolute value of difference between Y coordinates of line's extreme points */
dy = fabs( ( extractedLines[0][lineCurId] )[1] - ( extractedLines[0][lineCurId] )[3] );
/* compute line's length */
octaveLines[numOfFinalLine].lineLength = sqrt( dx * dx + dy * dy );
/* update counters */
numOfFinalLine++;
lineIDInScaleLineVec++;
}
/* create and fill an array to store scale factors */
float *scale = new float[params.numOfOctave_];
scale[0] = 1;
for ( unsigned int octaveCount = 1; octaveCount < (unsigned int) params.numOfOctave_; octaveCount++ )
{
scale[octaveCount] = params.reductionRatio * scale[octaveCount - 1];
}
/* some variables' declarations */
float rho1, rho2, tempValue;
float direction, near, length;
unsigned int octaveID, lineIDInOctave;
/*more than one octave image, organize lines in scale space.
*lines corresponding to the same line in octave images should have the same index in the ScaleLineVec */
if( params.numOfOctave_ > 1 )
{
/* some other variables' declarations */
float twoPI = 2 * M_PI;
unsigned int closeLineID;
float endPointDis, minEndPointDis, minLocalDis, maxLocalDis;
float lp0, lp1, lp2, lp3, np0, np1, np2, np3;
/* loop over list of octaves */
for ( unsigned int octaveCount = 1; octaveCount < (unsigned int) params.numOfOctave_; octaveCount++ )
{
/*for each line in current octave image, find their corresponding lines
in the octaveLines,
give them the same value of lineIDInScaleLineVec*/
/* loop over list of lines extracted from current octave */
for ( unsigned int lineCurId = 0; lineCurId < extractedLines[octaveCount].size(); lineCurId++ )
{
/* get (scaled) known term from equation of current line */
cv::Vec3i line_equation;
getLineParameters( extractedLines[octaveCount][lineCurId], line_equation );
rho1 = scale[octaveCount] * fabs( line_equation[2] );
/*nearThreshold depends on the distance of the image coordinate origin to current line.
*so nearThreshold = rho1 * nearThresholdRatio, where nearThresholdRatio = 1-cos(10*pi/180) = 0.0152*/
tempValue = rho1 * 0.0152;
float nearThreshold = ( tempValue > 6 ) ? ( tempValue ) : 6;
nearThreshold = ( nearThreshold < 12 ) ? nearThreshold : 12;
/* compute scaled lenght of current line */
dx = fabs( ( extractedLines[octaveCount][lineCurId] )[0] - ( extractedLines[octaveCount][lineCurId][2] ) ); //x1-x2
dy = fabs( ( extractedLines[octaveCount][lineCurId] )[1] - ( extractedLines[octaveCount][lineCurId][3] ) ); //y1-y2
length = scale[octaveCount] * sqrt( dx * dx + dy * dy );
minEndPointDis = 12;
/* loop over the octave representations of all lines */
for ( unsigned int lineNextId = 0; lineNextId < numOfFinalLine; lineNextId++ )
{
/* if a line from same octave is encountered,
a comparison with it shouldn't be considered */
octaveID = octaveLines[lineNextId].octaveCount;
if( octaveID == octaveCount )
break;
/* take ID (in octave) of line to be compared */
lineIDInOctave = octaveLines[lineNextId].lineIDInOctave;
/* compute difference between lines' directions, to check
whether they are parallel */
cv::Vec3i line_equation_to_compare;
getLineParameters( extractedLines[octaveID][lineIDInOctave], line_equation_to_compare );
direction = fabs( getLineDirection( line_equation ) - getLineDirection( line_equation_to_compare ) );
/* the angle between two lines are larger than 10degrees
(i.e. 10*pi/180=0.1745), they are not close to parallel */
if( direction > 0.1745 && ( twoPI - direction > 0.1745 ) )
continue;
/*now check whether current line and next line are near to each other.
Get known term from equation to be compared */
rho2 = scale[octaveID] * fabs( line_equation_to_compare[2] );
/* compute difference between known terms */
near = fabs( rho1 - rho2 );
/* two lines are not close in the image */
if( near > nearThreshold )
continue;
/* get the extreme points of the two lines */
lp0 = scale[octaveCount] * ( extractedLines[octaveCount][lineCurId] )[0];
lp1 = scale[octaveCount] * ( extractedLines[octaveCount][lineCurId] )[1];
lp2 = scale[octaveCount] * ( extractedLines[octaveCount][lineCurId] )[2];
lp3 = scale[octaveCount] * ( extractedLines[octaveCount][lineCurId] )[3];
np0 = scale[octaveID] * ( extractedLines[octaveID][lineIDInOctave] )[0];
np1 = scale[octaveID] * ( extractedLines[octaveID][lineIDInOctave] )[1];
np2 = scale[octaveID] * ( extractedLines[octaveID][lineIDInOctave] )[2];
np3 = scale[octaveID] * ( extractedLines[octaveID][lineIDInOctave] )[3];
/* get the distance between the two leftmost extremes of lines
L1(0,1)<->L2(0,1) */
dx = lp0 - np0;
dy = lp1 - np1;
endPointDis = sqrt( dx * dx + dy * dy );
/* set momentaneously min and max distance between lines to
the one between left extremes */
minLocalDis = endPointDis;
maxLocalDis = endPointDis;
/* compute distance between right extremes
L1(2,3)<->L2(2,3) */
dx = lp2 - np2;
dy = lp3 - np3;
endPointDis = sqrt( dx * dx + dy * dy );
/* update (if necessary) min and max distance between lines */
minLocalDis = ( endPointDis < minLocalDis ) ? endPointDis : minLocalDis;
maxLocalDis = ( endPointDis > maxLocalDis ) ? endPointDis : maxLocalDis;
/* compute distance between left extreme of current line and
right extreme of line to be compared
L1(0,1)<->L2(2,3) */
dx = lp0 - np2;
dy = lp1 - np3;
endPointDis = sqrt( dx * dx + dy * dy );
/* update (if necessary) min and max distance between lines */
minLocalDis = ( endPointDis < minLocalDis ) ? endPointDis : minLocalDis;
maxLocalDis = ( endPointDis > maxLocalDis ) ? endPointDis : maxLocalDis;
/* compute distance between right extreme of current line and
left extreme of line to be compared
L1(2,3)<->L2(0,1) */
dx = lp2 - np0;
dy = lp3 - np1;
endPointDis = sqrt( dx * dx + dy * dy );
/* update (if necessary) min and max distance between lines */
minLocalDis = ( endPointDis < minLocalDis ) ? endPointDis : minLocalDis;
maxLocalDis = ( endPointDis > maxLocalDis ) ? endPointDis : maxLocalDis;
/* check whether conditions for considering line to be compared
worth to be inserted in the same LineVec are satisfied */
if( ( maxLocalDis < 0.8 * ( length + octaveLines[lineNextId].lineLength ) ) && ( minLocalDis < minEndPointDis ) )
{
/* keep the closest line */
minEndPointDis = minLocalDis;
closeLineID = lineNextId;
}
}
/* add current line into octaveLines */
if( minEndPointDis < 12 )
octaveLines[numOfFinalLine].lineIDInScaleLineVec = octaveLines[closeLineID].lineIDInScaleLineVec;
else
{
octaveLines[numOfFinalLine].lineIDInScaleLineVec = lineIDInScaleLineVec;
lineIDInScaleLineVec++;
}
octaveLines[numOfFinalLine].octaveCount = octaveCount;
octaveLines[numOfFinalLine].lineIDInOctave = lineCurId;
octaveLines[numOfFinalLine].lineLength = length;
numOfFinalLine++;
}
}
}
/* Reorganize the detected lines into keyLines */
keyLines.clear();
keyLines.resize( lineIDInScaleLineVec );
unsigned int tempID;
float s1, e1, s2, e2;
bool shouldChange;
OctaveSingleLine singleLine;
for ( unsigned int lineID = 0; lineID < numOfFinalLine; lineID++ )
{
lineIDInOctave = octaveLines[lineID].lineIDInOctave;
octaveID = octaveLines[lineID].octaveCount;
cv::Vec3i tempParams;
getLineParameters( extractedLines[octaveID][lineIDInOctave], tempParams );
singleLine.octaveCount = octaveID;
singleLine.lineLength = octaveLines[lineID].lineLength;
// decide the start point and end point
shouldChange = false;
s1 = ( extractedLines[octaveID][lineIDInOctave] )[0]; //sx
s2 = ( extractedLines[octaveID][lineIDInOctave] )[1]; //sy
e1 = ( extractedLines[octaveID][lineIDInOctave] )[2]; //ex
e2 = ( extractedLines[octaveID][lineIDInOctave] )[3]; //ey
dx = e1 - s1; //ex-sx
dy = e2 - s2; //ey-sy
if( direction >= -0.75 * M_PI && direction < -0.25 * M_PI )
{
if( dy > 0 )
shouldChange = true;
}
if( direction >= -0.25 * M_PI && direction < 0.25 * M_PI )
if( dx < 0 )
{
shouldChange = true;
}
if( direction >= 0.25 * M_PI && direction < 0.75 * M_PI )
if( dy < 0 )
{
shouldChange = true;
}
if( ( direction >= 0.75 * M_PI && direction < M_PI ) || ( direction >= -M_PI && direction < -0.75 * M_PI ) )
{
if( dx > 0 )
shouldChange = true;
}
tempValue = scale[octaveID];
if( shouldChange )
{
singleLine.sPointInOctaveX = e1;
singleLine.sPointInOctaveY = e2;
singleLine.ePointInOctaveX = s1;
singleLine.ePointInOctaveY = s2;
singleLine.startPointX = tempValue * e1;
singleLine.startPointY = tempValue * e2;
singleLine.endPointX = tempValue * s1;
singleLine.endPointY = tempValue * s2;
}
else
{
singleLine.sPointInOctaveX = s1;
singleLine.sPointInOctaveY = s2;
singleLine.ePointInOctaveX = e1;
singleLine.ePointInOctaveY = e2;
singleLine.startPointX = tempValue * s1;
singleLine.startPointY = tempValue * s2;
singleLine.endPointX = tempValue * e1;
singleLine.endPointY = tempValue * e2;
}
singleLine.direction = atan2( ( singleLine.endPointY - singleLine.startPointY ), ( singleLine.endPointX - singleLine.startPointX ) );
tempID = octaveLines[lineID].lineIDInScaleLineVec;
/* compute number of pixels covered by line */
LineIterator li( octaveImages[octaveID], Point( singleLine.startPointX, singleLine.startPointY ),
Point( singleLine.endPointX, singleLine.endPointY ) );
singleLine.numOfPixels = li.count;
/* store line */
keyLines[tempID].push_back( singleLine );
}
delete[] scale;
return 1;
}
int BinaryDescriptor::OctaveKeyLines_EDL( cv::Mat& image, ScaleLines &keyLines )
int BinaryDescriptor::OctaveKeyLines( cv::Mat& image, ScaleLines &keyLines )
{
/* final number of extracted lines */
......@@ -1056,6 +565,7 @@ int BinaryDescriptor::OctaveKeyLines_EDL( cv::Mat& image, ScaleLines &keyLines )
float increaseSigma = sqrt( curSigma2 - preSigma2 );
cv::GaussianBlur( image, blur, cv::Size( params.ksize_, params.ksize_ ), increaseSigma );
images_sizes[octaveCount] = blur.size();
/* for current octave, extract lines */
if( ( edLineVec_[octaveCount]->EDline( blur, true ) ) != true )
{
......@@ -1074,9 +584,6 @@ int BinaryDescriptor::OctaveKeyLines_EDL( cv::Mat& image, ScaleLines &keyLines )
} /* end of loop over number of octaves */
/*lines which correspond to the same line in the octave images will be stored
in the same element of ScaleLines.*/
/* prepare a vector to store octave information associated to extracted lines */
std::vector<OctaveLine> octaveLines( numOfFinalLine );
......@@ -1370,21 +877,18 @@ int BinaryDescriptor::OctaveKeyLines_EDL( cv::Mat& image, ScaleLines &keyLines )
keyLines[tempID].push_back( singleLine );
}
////////////////////////////////////
delete[] scale;
return 1;
}
/* compute LBD descriptors */
int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
int BinaryDescriptor::computeLBD( ScaleLines &keyLines )
{
//the default length of the band is the line length.
short numOfFinalLine = keyLines.size();
float *dL = new float[2]; //line direction cos(dir), sin(dir)
float *dO = new float[2]; //the clockwise orthogonal vector of line direction.
short heightOfLSP = params.widthOfBand_ * NUM_OF_BANDS; //the height of line support region;
short descriptor_size = NUM_OF_BANDS * 8; //each band, we compute the m( pgdL, ngdL, pgdO, ngdO) and std( pgdL, ngdL, pgdO, ngdO);
short descriptorSize = NUM_OF_BANDS * 8; //each band, we compute the m( pgdL, ngdL, pgdO, ngdO) and std( pgdL, ngdL, pgdO, ngdO);
float pgdLRowSum; //the summation of {g_dL |g_dL>0 } for each row of the region;
float ngdLRowSum; //the summation of {g_dL |g_dL<0 } for each row of the region;
float pgdL2RowSum; //the summation of {g_dL^2 |g_dL>0 } for each row of the region;
......@@ -1433,26 +937,14 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
pSingleLine = & ( keyLines[lineIDInScaleVec][lineIDInSameLine] );
octaveCount = pSingleLine->octaveCount;
/* retrieve associated dxImg and dyImg*/
if( flags == 1 )
{
pdxImg = edLineVec_[octaveCount]->dxImg_.ptr<short>();
pdyImg = edLineVec_[octaveCount]->dyImg_.ptr<short>();
}
else
{
pdxImg = dxImg_vector[octaveCount].ptr<short>();
pdyImg = dyImg_vector[octaveCount].ptr<short>();
}
/* retrieve associated dxImg and dyImg */
pdxImg = edLineVec_[octaveCount]->dxImg_.ptr<short>();
pdyImg = edLineVec_[octaveCount]->dyImg_.ptr<short>();
/* get image size to work on from real one
realWidth = edLineVec_[octaveCount]->imageWidth;
imageWidth = realWidth -1;
imageHeight = edLineVec_[octaveCount]->imageHeight-1; */
realWidth = images_sizes[octaveCount].width;
/* get image size to work on from real one */
realWidth = edLineVec_[octaveCount]->imageWidth;
imageWidth = realWidth - 1;
imageHeight = images_sizes[octaveCount].height - 1;
imageHeight = edLineVec_[octaveCount]->imageHeight - 1;
/* initialize memory areas */
memset( pgdLBandSum, 0, numOfBitsBand );
......@@ -1518,22 +1010,18 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
{
pgdLRowSum += gDL;
}
else
{
ngdLRowSum -= gDL;
}
if( gDO > 0 )
{
pgdORowSum += gDO;
}
else
{
ngdORowSum -= gDO;
}
sCorX += dL[0];
sCorY += dL[1];
/* gDLMat[hID][wID] = gDL; */
......@@ -1552,7 +1040,7 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
/* compute {g_dL |g_dL>0 }, {g_dL |g_dL<0 },
{g_dO |g_dO>0 }, {g_dO |g_dO<0 } of each band in the line support region
first, current row belongs to current band */
first, current row belong to current band */
bandID = hID / params.widthOfBand_;
coefInGaussion = gaussCoefL_[hID % params.widthOfBand_ + params.widthOfBand_];
pgdLBandSum[bandID] += coefInGaussion * pgdLRowSum;
......@@ -1580,7 +1068,6 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
pgdO2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdO2RowSum;
ngdO2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdO2RowSum;
}
bandID = bandID + 2;
if( bandID < NUM_OF_BANDS )
{/*the band below the current band */
......@@ -1599,7 +1086,7 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
return 0; */
/* construct line descriptor */
pSingleLine->descriptor.resize( descriptor_size );
pSingleLine->descriptor.resize( descriptorSize );
desVec = pSingleLine->descriptor.data();
short desID;
......@@ -1675,7 +1162,7 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
* vector no larger than this threshold. In Z.Wang's work, a value of 0.4 is found
* empirically to be a proper threshold.*/
desVec = pSingleLine->descriptor.data();
for ( short i = 0; i < descriptor_size; i++ )
for ( short i = 0; i < descriptorSize; i++ )
{
if( desVec[i] > 0.4 )
{
......@@ -1685,19 +1172,30 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
//re-normalize desVec;
temp = 0;
for ( short i = 0; i < descriptor_size; i++ )
for ( short i = 0; i < descriptorSize; i++ )
{
temp += desVec[i] * desVec[i];
}
temp = 1 / sqrt( temp );
for ( short i = 0; i < descriptor_size; i++ )
for ( short i = 0; i < descriptorSize; i++ )
{
desVec[i] = desVec[i] * temp;
}
}/* end for(short lineIDInSameLine = 0; lineIDInSameLine<sameLineSize;
lineIDInSameLine++) */
cv::Mat appoggio = cv::Mat( 1, 32, CV_32FC1 );
float* pointerToRow = appoggio.ptr<float>( 0 );
for ( int g = 0; g < 32; g++ )
{
/* get LBD data */
float* desVec = keyLines[lineIDInScaleVec][0].descriptor.data();
*pointerToRow = desVec[g];
pointerToRow++;
}
}/* end for(short lineIDInScaleVec = 0;
lineIDInScaleVec<numOfFinalLine; lineIDInScaleVec++) */
......@@ -1712,316 +1210,6 @@ int BinaryDescriptor::computeLBD( ScaleLines &keyLines, int flags )
delete[] pgdO2BandSum;
delete[] ngdO2BandSum;
return 1;
}
int BinaryDescriptor::computeLBD_EDL( ScaleLines &keyLines )
{
//the default length of the band is the line length.
short numOfFinalLine = keyLines.size();
float *dL = new float[2];//line direction cos(dir), sin(dir)
float *dO = new float[2];//the clockwise orthogonal vector of line direction.
short heightOfLSP = params.widthOfBand_*NUM_OF_BANDS;//the height of line support region;
short descriptorSize = NUM_OF_BANDS * 8;//each band, we compute the m( pgdL, ngdL, pgdO, ngdO) and std( pgdL, ngdL, pgdO, ngdO);
float pgdLRowSum;//the summation of {g_dL |g_dL>0 } for each row of the region;
float ngdLRowSum;//the summation of {g_dL |g_dL<0 } for each row of the region;
float pgdL2RowSum;//the summation of {g_dL^2 |g_dL>0 } for each row of the region;
float ngdL2RowSum;//the summation of {g_dL^2 |g_dL<0 } for each row of the region;
float pgdORowSum;//the summation of {g_dO |g_dO>0 } for each row of the region;
float ngdORowSum;//the summation of {g_dO |g_dO<0 } for each row of the region;
float pgdO2RowSum;//the summation of {g_dO^2 |g_dO>0 } for each row of the region;
float ngdO2RowSum;//the summation of {g_dO^2 |g_dO<0 } for each row of the region;
float *pgdLBandSum = new float[NUM_OF_BANDS];//the summation of {g_dL |g_dL>0 } for each band of the region;
float *ngdLBandSum = new float[NUM_OF_BANDS];//the summation of {g_dL |g_dL<0 } for each band of the region;
float *pgdL2BandSum = new float[NUM_OF_BANDS];//the summation of {g_dL^2 |g_dL>0 } for each band of the region;
float *ngdL2BandSum = new float[NUM_OF_BANDS];//the summation of {g_dL^2 |g_dL<0 } for each band of the region;
float *pgdOBandSum = new float[NUM_OF_BANDS];//the summation of {g_dO |g_dO>0 } for each band of the region;
float *ngdOBandSum = new float[NUM_OF_BANDS];//the summation of {g_dO |g_dO<0 } for each band of the region;
float *pgdO2BandSum = new float[NUM_OF_BANDS];//the summation of {g_dO^2 |g_dO>0 } for each band of the region;
float *ngdO2BandSum = new float[NUM_OF_BANDS];//the summation of {g_dO^2 |g_dO<0 } for each band of the region;
short numOfBitsBand = NUM_OF_BANDS*sizeof(float);
short lengthOfLSP; //the length of line support region, varies with lines
short halfHeight = (heightOfLSP-1)/2;
short halfWidth;
short bandID;
float coefInGaussion;
float lineMiddlePointX, lineMiddlePointY;
float sCorX, sCorY,sCorX0, sCorY0;
short tempCor, xCor, yCor;//pixel coordinates in image plane
short dx, dy;
float gDL;//store the gradient projection of pixels in support region along dL vector
float gDO;//store the gradient projection of pixels in support region along dO vector
short imageWidth, imageHeight, realWidth;
short *pdxImg, *pdyImg;
float *desVec;
short sameLineSize;
short octaveCount;
OctaveSingleLine *pSingleLine;
/* loop over list of LineVec */
for(short lineIDInScaleVec = 0; lineIDInScaleVec<numOfFinalLine; lineIDInScaleVec++){
sameLineSize = keyLines[lineIDInScaleVec].size();
/* loop over current LineVec's lines */
for(short lineIDInSameLine = 0; lineIDInSameLine<sameLineSize; lineIDInSameLine++){
/* get a line in current LineVec and its original ID in its octave */
pSingleLine = &(keyLines[lineIDInScaleVec][lineIDInSameLine]);
octaveCount = pSingleLine->octaveCount;
/* retrieve associated dxImg and dyImg */
pdxImg = edLineVec_[octaveCount]->dxImg_.ptr<short>();
pdyImg = edLineVec_[octaveCount]->dyImg_.ptr<short>();
/* get image size to work on from real one */
realWidth = edLineVec_[octaveCount]->imageWidth;
imageWidth = realWidth -1;
imageHeight = edLineVec_[octaveCount]->imageHeight-1;
/* initialize memory areas */
memset(pgdLBandSum, 0, numOfBitsBand);
memset(ngdLBandSum, 0, numOfBitsBand);
memset(pgdL2BandSum, 0, numOfBitsBand);
memset(ngdL2BandSum, 0, numOfBitsBand);
memset(pgdOBandSum, 0, numOfBitsBand);
memset(ngdOBandSum, 0, numOfBitsBand);
memset(pgdO2BandSum, 0, numOfBitsBand);
memset(ngdO2BandSum, 0, numOfBitsBand);
/* get length of line and its half */
lengthOfLSP = keyLines[lineIDInScaleVec][lineIDInSameLine].numOfPixels;
halfWidth = (lengthOfLSP-1)/2;
/* get middlepoint of line */
lineMiddlePointX = 0.5 * (pSingleLine->sPointInOctaveX + pSingleLine->ePointInOctaveX);
lineMiddlePointY = 0.5 * (pSingleLine->sPointInOctaveY + pSingleLine->ePointInOctaveY);
/*1.rotate the local coordinate system to the line direction (direction is the angle
between positive line direction and positive X axis)
*2.compute the gradient projection of pixels in line support region*/
/* get the vector representing original image reference system after rotation to aligh with
line's direction */
dL[0] = cos(pSingleLine->direction);
dL[1] = sin(pSingleLine->direction);
/* set the clockwise orthogonal vector of line direction */
dO[0] = -dL[1];
dO[1] = dL[0];
/* get rotated reference frame */
sCorX0= -dL[0]*halfWidth + dL[1]*halfHeight + lineMiddlePointX;//hID =0; wID = 0;
sCorY0= -dL[1]*halfWidth - dL[0]*halfHeight + lineMiddlePointY;
/* BIAS::Matrix<float> gDLMat(heightOfLSP,lengthOfLSP) */
for(short hID = 0; hID <heightOfLSP; hID++){
/*initialization */
sCorX = sCorX0;
sCorY = sCorY0;
pgdLRowSum = 0;
ngdLRowSum = 0;
pgdORowSum = 0;
ngdORowSum = 0;
for(short wID = 0; wID <lengthOfLSP; wID++){
tempCor = round(sCorX);
xCor = (tempCor<0)?0:(tempCor>imageWidth)?imageWidth:tempCor;
tempCor = round(sCorY);
yCor = (tempCor<0)?0:(tempCor>imageHeight)?imageHeight:tempCor;
/* To achieve rotation invariance, each simple gradient is rotated aligned with
* the line direction and clockwise orthogonal direction.*/
dx = pdxImg[yCor*realWidth+xCor];
dy = pdyImg[yCor*realWidth+xCor];
gDL = dx * dL[0] + dy * dL[1];
gDO = dx * dO[0] + dy * dO[1];
if(gDL>0){
pgdLRowSum += gDL;
}else{
ngdLRowSum -= gDL;
}
if(gDO>0){
pgdORowSum += gDO;
}else{
ngdORowSum -= gDO;
}
sCorX +=dL[0];
sCorY +=dL[1];
/* gDLMat[hID][wID] = gDL; */
}
sCorX0 -=dL[1];
sCorY0 +=dL[0];
coefInGaussion = gaussCoefG_[hID];
pgdLRowSum = coefInGaussion * pgdLRowSum;
ngdLRowSum = coefInGaussion * ngdLRowSum;
pgdL2RowSum = pgdLRowSum * pgdLRowSum;
ngdL2RowSum = ngdLRowSum * ngdLRowSum;
pgdORowSum = coefInGaussion * pgdORowSum;
ngdORowSum = coefInGaussion * ngdORowSum;
pgdO2RowSum = pgdORowSum * pgdORowSum;
ngdO2RowSum = ngdORowSum * ngdORowSum;
/* compute {g_dL |g_dL>0 }, {g_dL |g_dL<0 },
{g_dO |g_dO>0 }, {g_dO |g_dO<0 } of each band in the line support region
first, current row belong to current band */
bandID = hID/params.widthOfBand_;
coefInGaussion = gaussCoefL_[hID%params.widthOfBand_+params.widthOfBand_];
pgdLBandSum[bandID] += coefInGaussion * pgdLRowSum;
ngdLBandSum[bandID] += coefInGaussion * ngdLRowSum;
pgdL2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdL2RowSum;
ngdL2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdL2RowSum;
pgdOBandSum[bandID] += coefInGaussion * pgdORowSum;
ngdOBandSum[bandID] += coefInGaussion * ngdORowSum;
pgdO2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdO2RowSum;
ngdO2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdO2RowSum;
/* In order to reduce boundary effect along the line gradient direction,
* a row's gradient will contribute not only to its current band, but also
* to its nearest upper and down band with gaussCoefL_.*/
bandID--;
if(bandID>=0){/* the band above the current band */
coefInGaussion = gaussCoefL_[hID%params.widthOfBand_ + 2*params.widthOfBand_];
pgdLBandSum[bandID] += coefInGaussion * pgdLRowSum;
ngdLBandSum[bandID] += coefInGaussion * ngdLRowSum;
pgdL2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdL2RowSum;
ngdL2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdL2RowSum;
pgdOBandSum[bandID] += coefInGaussion * pgdORowSum;
ngdOBandSum[bandID] += coefInGaussion * ngdORowSum;
pgdO2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdO2RowSum;
ngdO2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdO2RowSum;
}
bandID = bandID+2;
if(bandID<NUM_OF_BANDS){/*the band below the current band */
coefInGaussion = gaussCoefL_[hID%params.widthOfBand_];
pgdLBandSum[bandID] += coefInGaussion * pgdLRowSum;
ngdLBandSum[bandID] += coefInGaussion * ngdLRowSum;
pgdL2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdL2RowSum;
ngdL2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdL2RowSum;
pgdOBandSum[bandID] += coefInGaussion * pgdORowSum;
ngdOBandSum[bandID] += coefInGaussion * ngdORowSum;
pgdO2BandSum[bandID] += coefInGaussion * coefInGaussion * pgdO2RowSum;
ngdO2BandSum[bandID] += coefInGaussion * coefInGaussion * ngdO2RowSum;
}
}
/* gDLMat.Save("gDLMat.txt");
return 0; */
/* construct line descriptor */
pSingleLine->descriptor.resize(descriptorSize);
desVec = pSingleLine->descriptor.data();
short desID;
/*Note that the first and last bands only have (lengthOfLSP * widthOfBand_ * 2.0) pixels
* which are counted. */
float invN2 = 1.0/(params.widthOfBand_ * 2.0);
float invN3 = 1.0/(params.widthOfBand_ * 3.0);
float invN, temp;
for(bandID = 0; bandID<NUM_OF_BANDS; bandID++){
if(bandID==0||bandID==NUM_OF_BANDS-1){ invN = invN2;
}else{ invN = invN3;}
desID = bandID * 8;
temp = pgdLBandSum[bandID] * invN;
desVec[desID] = temp;/* mean value of pgdL; */
desVec[desID+4] = sqrt(pgdL2BandSum[bandID] * invN - temp*temp);//std value of pgdL;
temp = ngdLBandSum[bandID] * invN;
desVec[desID+1] = temp;//mean value of ngdL;
desVec[desID+5] = sqrt(ngdL2BandSum[bandID] * invN - temp*temp);//std value of ngdL;
temp = pgdOBandSum[bandID] * invN;
desVec[desID+2] = temp;//mean value of pgdO;
desVec[desID+6] = sqrt(pgdO2BandSum[bandID] * invN - temp*temp);//std value of pgdO;
temp = ngdOBandSum[bandID] * invN;
desVec[desID+3] = temp;//mean value of ngdO;
desVec[desID+7] = sqrt(ngdO2BandSum[bandID] * invN - temp*temp);//std value of ngdO;
}
// normalize;
float tempM, tempS;
tempM = 0;
tempS = 0;
desVec = pSingleLine->descriptor.data();
int base = 0;
for(short i=0; i<NUM_OF_BANDS*8; ++base, i=base*8){
tempM += *(desVec+i) * *(desVec+i);//desVec[8*i+0] * desVec[8*i+0];
tempM += *(desVec+i+1) * *(desVec+i+1);//desVec[8*i+1] * desVec[8*i+1];
tempM += *(desVec+i+2) * *(desVec+i+2);//desVec[8*i+2] * desVec[8*i+2];
tempM += *(desVec+i+3) * *(desVec+i+3);//desVec[8*i+3] * desVec[8*i+3];
tempS += *(desVec+i+4) * *(desVec+i+4);//desVec[8*i+4] * desVec[8*i+4];
tempS += *(desVec+i+5) * *(desVec+i+5);//desVec[8*i+5] * desVec[8*i+5];
tempS += *(desVec+i+6) * *(desVec+i+6);//desVec[8*i+6] * desVec[8*i+6];
tempS += *(desVec+i+7) * *(desVec+i+7);//desVec[8*i+7] * desVec[8*i+7];
}
tempM = 1/sqrt(tempM);
tempS = 1/sqrt(tempS);
desVec = pSingleLine->descriptor.data();
base = 0;
for(short i=0; i<NUM_OF_BANDS*8; ++base, i=base*8){
*(desVec+i) = *(desVec+i) * tempM;//desVec[8*i] = desVec[8*i] * tempM;
*(desVec+1+i) = *(desVec+1+i) * tempM;//desVec[8*i+1] = desVec[8*i+1] * tempM;
*(desVec+2+i) = *(desVec+2+i) * tempM;//desVec[8*i+2] = desVec[8*i+2] * tempM;
*(desVec+3+i) = *(desVec+3+i) * tempM;//desVec[8*i+3] = desVec[8*i+3] * tempM;
*(desVec+4+i) = *(desVec+4+i) * tempS;//desVec[8*i+4] = desVec[8*i+4] * tempS;
*(desVec+5+i) = *(desVec+5+i) * tempS;//desVec[8*i+5] = desVec[8*i+5] * tempS;
*(desVec+6+i) = *(desVec+6+i) * tempS;//desVec[8*i+6] = desVec[8*i+6] * tempS;
*(desVec+7+i) = *(desVec+7+i) * tempS;//desVec[8*i+7] = desVec[8*i+7] * tempS;
}
/* In order to reduce the influence of non-linear illumination,
* a threshold is used to limit the value of element in the unit feature
* vector no larger than this threshold. In Z.Wang's work, a value of 0.4 is found
* empirically to be a proper threshold.*/
desVec = pSingleLine->descriptor.data();
for(short i=0; i<descriptorSize; i++ ){
if(desVec[i]>0.4){
desVec[i]=0.4;
}
}
//re-normalize desVec;
temp = 0;
for(short i=0; i<descriptorSize; i++){
temp += desVec[i] * desVec[i];
}
temp = 1/sqrt(temp);
for(short i=0; i<descriptorSize; i++){
desVec[i] = desVec[i] * temp;
}
}/* end for(short lineIDInSameLine = 0; lineIDInSameLine<sameLineSize;
lineIDInSameLine++) */
cv::Mat appoggio = cv::Mat(1, 32, CV_32FC1);
float* pointerToRow = appoggio.ptr<float>(0);
for(int g = 0; g<32; g++)
{
/* get LBD data */
float* desVec = keyLines[lineIDInScaleVec][0].descriptor.data();
*pointerToRow = desVec[g];
pointerToRow++;
}
}/* end for(short lineIDInScaleVec = 0;
lineIDInScaleVec<numOfFinalLine; lineIDInScaleVec++) */
delete [] dL;
delete [] dO;
delete [] pgdLBandSum;
delete [] ngdLBandSum;
delete [] pgdL2BandSum;
delete [] ngdL2BandSum;
delete [] pgdOBandSum;
delete [] ngdOBandSum;
delete [] pgdO2BandSum;
delete [] ngdO2BandSum;
return 1;
}
......@@ -159,7 +159,7 @@ void BinaryDescriptorMatcher::match( const Mat& queryDescriptors, std::vector<DM
CV_Assert( false );
}
/* create a DMatch object if required by mask of if there is
/* create a DMatch object if required by mask or if there is
no mask at all */
else if( masks.empty() || masks[itup->second].at<uchar>( counter ) != 0 )
{
......@@ -212,7 +212,7 @@ void BinaryDescriptorMatcher::match( const Mat& queryDescriptors, const Mat& tra
/* compose matches */
for ( int counter = 0; counter < queryDescriptors.rows; counter++ )
{
/* create a DMatch object if required by mask of if there is
/* create a DMatch object if required by mask or if there is
no mask at all */
if( mask.empty() || ( !mask.empty() && mask.at<uchar>( counter ) != 0 ) )
{
......
......@@ -46,12 +46,14 @@ namespace cv
CV_INIT_ALGORITHM( BinaryDescriptor, "BINARY.DESCRIPTOR", );
CV_INIT_ALGORITHM( BinaryDescriptorMatcher, "BINARY.DESCRIPTOR.MATCHER", );
CV_INIT_ALGORITHM( LSDDetector, "LSDDETECTOR", );
bool initModule_line_descriptor( void )
{
bool all = true;
all &= !BinaryDescriptor_info_auto.name().empty();
all &= !BinaryDescriptorMatcher_info_auto.name().empty();
all &= !LSDDetector_info_auto.name().empty();
return all;
}
......
......@@ -58,8 +58,6 @@
#include <algorithm>
#include <bitset>
#include "opencv2/line_descriptor.hpp"
#endif