Commit abe59c33 authored by Vadim Pisarevsky

Merge pull request #340 from RobertaRavanelli:FinalTermPullRequest

parents 9cb10bd6 fc98b871
set(the_description "Structured Light API")
ocv_define_module(structured_light opencv_core opencv_calib3d opencv_imgproc opencv_highgui opencv_features2d opencv_viz opencv_rgbd)
\ No newline at end of file
Structured Light module
============================================================
\ No newline at end of file
@article{UNDERWORLD,
title={{3DUNDERWORLD-SLS}: {A}n {O}pen-{S}ource {S}tructured-{L}ight {S}canning {S}ystem for {R}apid {G}eometry {A}cquisition},
author={Herakleous, Kyriakos and Poullis, Charalambos},
journal={arXiv preprint arXiv:1406.6595},
year={2014}
}
@article{pattern,
title={Pattern codification strategies in structured light systems},
author={Salvi, Joaquim and Pag\'es, Jordi and Batlle, Joan},
journal={Pattern Recognition},
volume={37},
number={4},
pages={827--849},
month={April},
year={2004}
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
/*#ifdef __OPENCV_BUILD
#error this is a compatibility header which should not be used inside the OpenCV library
#endif*/
#include "opencv2/structured_light/structured_light.hpp"
#include "opencv2/structured_light/graycodepattern.hpp"
/** @defgroup structured_light Structured Light API
Structured light is considered one of the most effective techniques to acquire 3D models.
This technique is based on projecting a light pattern and capturing the illuminated scene
from one or more points of view. Since the pattern is coded, correspondences between image
points and points of the projected pattern can be quickly found and 3D information easily
retrieved.
One of the most commonly exploited coding strategies is based on time-multiplexing. In this
case, a set of patterns is successively projected onto the measuring surface.
The codeword for a given pixel is usually formed by the sequence of illuminance values for that
pixel across the projected patterns. Thus, the codification is called temporal because the bits
of the codewords are multiplexed in time @cite pattern .
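To make the encoding concrete, here is a minimal sketch (not part of the module API) of the
binary-reflected Gray code used to build the codewords; the i-th column pattern image is then
the i-th bit plane of these codewords:
@code{.cpp}
// Binary-reflected Gray code of a column (or row) index n: adjacent indices
// differ by exactly one bit, which makes the decoding robust to small errors.
unsigned gray = n ^ ( n >> 1 );
@endcode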
In this module a time-multiplexing coding strategy based on Gray encoding is implemented, following the
(stereo) approach described in the 3DUNDERWORLD algorithm @cite UNDERWORLD .
For more details, see @ref tutorial_structured_light.
*/
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_GRAY_CODE_PATTERN_HPP__
#define __OPENCV_GRAY_CODE_PATTERN_HPP__
#include "opencv2/core.hpp"
namespace cv {
namespace structured_light {
//! @addtogroup structured_light
//! @{
/** @brief Class implementing the Gray-code pattern, based on @cite UNDERWORLD.
*
* The generation of the pattern images is performed with Gray encoding using the traditional white and black colors.
*
* The information about the two image axes x, y is encoded separately into two different pattern sequences.
 * A projector P with resolution (P_res_x, P_res_y) will result in Ncols = ceil(log_2(P_res_x)) encoded pattern images representing the columns, and
 * in Nrows = ceil(log_2(P_res_y)) encoded pattern images representing the rows.
 * For example, a projector with resolution 1024x768 will result in Ncols = 10 and Nrows = 10.
* However, the generated pattern sequence consists of both regular color and color-inverted images: inverted pattern images are images
* with the same structure as the original but with inverted colors.
* This provides an effective method for easily determining the intensity value of each pixel when it is lit (highest value) and
 * when it is not lit (lowest value). So for a projector with resolution 1024x768, the number of pattern images will be Ncols * 2 + Nrows * 2 = 40.
*
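 * As a quick sketch of this computation (using the formulas above; this is not library code):
 * @code{.cpp}
 * // Number of Gray-code pattern images for a projector of P_res_x x P_res_y pixels.
 * int nCols   = (int) ceil( log( (double) P_res_x ) / log( 2.0 ) );
 * int nRows   = (int) ceil( log( (double) P_res_y ) / log( 2.0 ) );
 * int nImages = 2 * nCols + 2 * nRows;  // regular + color-inverted images
 * @endcode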
*/
class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
{
public:
/** @brief Parameters of the GrayCodePattern constructor.
* @param width Projector's width. Default value is 1024.
* @param height Projector's height. Default value is 768.
*/
struct CV_EXPORTS_W_SIMPLE Params
{
CV_WRAP
Params();
CV_PROP_RW
int width;
CV_PROP_RW
int height;
};
/** @brief Constructor
@param parameters GrayCodePattern parameters (GrayCodePattern::Params): the width and the height of the projector.
*/
CV_WRAP
static Ptr<GrayCodePattern> create( const GrayCodePattern::Params &parameters = GrayCodePattern::Params() );
/** @brief Get the number of pattern images needed for the graycode pattern.
*
* @return The number of pattern images needed for the graycode pattern.
*
*/
CV_WRAP
virtual size_t getNumberOfPatternImages() const = 0;
/** @brief Sets the value for white threshold, needed for decoding.
*
 * White threshold is a number in the range 0-255 that represents the minimum brightness difference required for valid pixels between the graycode pattern and its inverse images; used in the getProjPixel method.
*
* @param value The desired white threshold value.
*
*/
CV_WRAP
virtual void setWhiteThreshold( size_t value ) = 0;
/** @brief Sets the value for black threshold, needed for decoding (shadow masks computation).
*
 * Black threshold is a number in the range 0-255 that represents the minimum brightness difference required for valid pixels between the fully illuminated (white) and the non-illuminated (black) images; used in the computeShadowMasks method.
*
* @param value The desired black threshold value.
*
*/
CV_WRAP
virtual void setBlackThreshold( size_t value ) = 0;
/** @brief Generates the all-black and all-white images needed for shadowMasks computation.
*
 * To identify shadow regions, i.e. the regions of the two images where the pixels are not lit by the projector's light and thus carry no coded information,
 * the 3DUNDERWORLD algorithm computes a shadow mask for the two cameras' views, starting from a white and a black image captured by each camera.
* This method generates these two additional images to project.
*
* @param blackImage The generated all-black CV_8U image, at projector's resolution.
* @param whiteImage The generated all-white CV_8U image, at projector's resolution.
*/
CV_WRAP
virtual void getImagesForShadowMasks( InputOutputArray blackImage, InputOutputArray whiteImage ) const = 0;
/** @brief For an (x,y) pixel of a camera, returns the corresponding projector pixel.
 *
 * The function decodes each pixel in the pattern images acquired by a camera into the corresponding decimal numbers representing the projector's column and row,
 * providing a mapping between the camera's and the projector's pixels.
*
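 * A minimal usage sketch; as in the module's test suite, a return value of true is treated as a decoding error:
 * @code{.cpp}
 * Point projPix;
 * bool error = graycode->getProjPixel( patternImages, x, y, projPix );
 * if( !error )
 * {
 *     // projPix.x, projPix.y: projector pixel matching camera pixel (x, y)
 * }
 * @endcode
 *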
 * @param patternImages The pattern images acquired by the camera, stored as grayscale images in a vector<Mat>.
* @param x x coordinate of the image pixel.
* @param y y coordinate of the image pixel.
 * @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projector's pixel corresponding to the pixel being decoded in a camera.
*/
CV_WRAP
virtual bool getProjPixel( InputArrayOfArrays patternImages, int x, int y, Point &projPix ) const = 0;
};
//! @}
}
}
#endif
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_STRUCTURED_LIGHT_HPP__
#define __OPENCV_STRUCTURED_LIGHT_HPP__
#include "opencv2/core.hpp"
namespace cv {
namespace structured_light {
//! @addtogroup structured_light
//! @{
//! Type of the decoding algorithm
// other algorithms can be implemented
enum
{
DECODE_3D_UNDERWORLD = 0 //!< Kyriakos Herakleous, Charalambos Poullis. “3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition”, arXiv preprint arXiv:1406.6595 (2014).
};
/** @brief Abstract base class for generating and decoding structured light patterns.
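 *
 * A typical usage cycle, shown only as a sketch (see the structured light tutorials for complete, working examples):
 * @code{.cpp}
 * Ptr<structured_light::GrayCodePattern> pat = structured_light::GrayCodePattern::create();
 * std::vector<Mat> pattern;
 * pat->generate( pattern );                  // images to project
 * // ... project each image and capture it with two cameras ...
 * std::vector<std::vector<Mat> > captured;   // captured[0]: camera 1, captured[1]: camera 2
 * std::vector<Mat> blackImages, whiteImages; // all-black / all-white captures, one per camera
 * Mat disparityMap;
 * pat->decode( captured, disparityMap, blackImages, whiteImages ); // default: DECODE_3D_UNDERWORLD
 * @endcode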
*/
class CV_EXPORTS_W StructuredLightPattern : public virtual Algorithm
{
public:
/** @brief Generates the structured light pattern to project.
@param patternImages The generated pattern: a vector<Mat>, in which each image is a CV_8U Mat at projector's resolution.
*/
CV_WRAP
virtual bool generate( OutputArrayOfArrays patternImages ) = 0;
/** @brief Decodes the structured light pattern, generating a disparity map
@param patternImages The acquired pattern images to decode (vector<vector<Mat>>), loaded as grayscale and previously rectified.
@param disparityMap The decoding result: a CV_64F Mat at image resolution, storing the computed disparity map.
@param blackImages The all-black images needed for shadowMasks computation.
@param whiteImages The all-white images needed for shadowMasks computation.
@param flags Flags setting decoding algorithms. Default: DECODE_3D_UNDERWORLD.
@note All the images must be at the same resolution.
*/
CV_WRAP
virtual bool decode( InputArrayOfArrays patternImages, OutputArray disparityMap, InputArrayOfArrays blackImages =
noArray(),
InputArrayOfArrays whiteImages = noArray(), int flags = DECODE_3D_UNDERWORLD ) const = 0;
};
//! @}
}
}
#endif
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/structured_light.hpp>
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
static const char* keys =
{ "{@path | | Path of the folder where the captured pattern images will be save }"
"{@proj_width | | Projector width }"
"{@proj_height | | Projector height }" };
static void help()
{
cout << "\nThis example shows how to use the \"Structured Light module\" to acquire a graycode pattern"
"\nCall (with the two cams connected):\n"
"./example_structured_light_cap_pattern <path> <proj_width> <proj_height> \n"
<< endl;
}
int main( int argc, char** argv )
{
structured_light::GrayCodePattern::Params params;
CommandLineParser parser( argc, argv, keys );
String path = parser.get<String>( 0 );
params.width = parser.get<int>( 1 );
params.height = parser.get<int>( 2 );
if( path.empty() || params.width < 1 || params.height < 1 )
{
help();
return -1;
}
// Set up GraycodePattern with params
Ptr<structured_light::GrayCodePattern> graycode = structured_light::GrayCodePattern::create( params );
// Storage for pattern
vector<Mat> pattern;
graycode->generate( pattern );
cout << pattern.size() << " pattern images + 2 images for shadows mask computation to acquire with both cameras"
<< endl;
// Generate the all-white and all-black images needed for shadows mask computation
Mat white;
Mat black;
graycode->getImagesForShadowMasks( black, white );
pattern.push_back( white );
pattern.push_back( black );
// Setting pattern window on second monitor (the projector's one)
namedWindow( "Pattern Window", WINDOW_NORMAL );
resizeWindow( "Pattern Window", params.width, params.height );
moveWindow( "Pattern Window", params.width + 316, -20 );
setWindowProperty( "Pattern Window", WND_PROP_FULLSCREEN, WINDOW_FULLSCREEN );
// Open camera number 1, using libgphoto2
VideoCapture cap1( CAP_GPHOTO2 );
if( !cap1.isOpened() )
{
// check if cam1 opened
cout << "cam1 not opened!" << endl;
help();
return -1;
}
// Open camera number 2
VideoCapture cap2( 1 );
if( !cap2.isOpened() )
{
// check if cam2 opened
cout << "cam2 not opened!" << endl;
help();
return -1;
}
// Turning off autofocus
cap1.set( CAP_PROP_SETTINGS, 1 );
cap2.set( CAP_PROP_SETTINGS, 1 );
int i = 0;
while( i < (int) pattern.size() )
{
cout << "Waiting to save image number " << i + 1 << endl << "Press any key to acquire the photo" << endl;
imshow( "Pattern Window", pattern[i] );
Mat frame1;
Mat frame2;
cap1 >> frame1; // get a new frame from camera 1
cap2 >> frame2; // get a new frame from camera 2
if( ( frame1.data ) && ( frame2.data ) )
{
Mat tmp;
cout << "cam 1 size: " << Size( ( int ) cap1.get( CAP_PROP_FRAME_WIDTH ), ( int ) cap1.get( CAP_PROP_FRAME_HEIGHT ) )
<< endl;
cout << "cam 2 size: " << Size( ( int ) cap2.get( CAP_PROP_FRAME_WIDTH ), ( int ) cap2.get( CAP_PROP_FRAME_HEIGHT ) )
<< endl;
cout << "zoom cam 1: " << cap1.get( CAP_PROP_ZOOM ) << endl << "zoom cam 2: " << cap2.get( CAP_PROP_ZOOM )
<< endl;
cout << "focus cam 1: " << cap1.get( CAP_PROP_FOCUS ) << endl << "focus cam 2: " << cap2.get( CAP_PROP_FOCUS )
<< endl;
cout << "Press enter to save the photo or an other key to re-acquire the photo" << endl;
namedWindow( "cam1", WINDOW_NORMAL );
resizeWindow( "cam1", 640, 480 );
namedWindow( "cam2", WINDOW_NORMAL );
resizeWindow( "cam2", 640, 480 );
// Moving window of cam2 to see the image at the same time with cam1
moveWindow( "cam2", 640 + 75, 0 );
// Resizing images to avoid issues for high resolution images, visualizing them as grayscale
resize( frame1, tmp, Size( 640, 480 ) );
cvtColor( tmp, tmp, COLOR_BGR2GRAY );  // VideoCapture frames are BGR
imshow( "cam1", tmp );
resize( frame2, tmp, Size( 640, 480 ) );
cvtColor( tmp, tmp, COLOR_BGR2GRAY );  // VideoCapture frames are BGR
imshow( "cam2", tmp );
bool save1 = false;
bool save2 = false;
int key = waitKey( 0 );
// Pressing enter saves the output
if( key == 13 )
{
ostringstream name;
name << i + 1;
save1 = imwrite( path + "pattern_cam1_im" + name.str() + ".png", frame1 );
save2 = imwrite( path + "pattern_cam2_im" + name.str() + ".png", frame2 );
if( ( save1 ) && ( save2 ) )
{
cout << "pattern cam1 and cam2 images number " << i + 1 << " saved" << endl << endl;
i++;
}
else
{
cout << "pattern cam1 and cam2 images number " << i + 1 << " NOT saved" << endl << endl << "Retry, check the path"<< endl << endl;
}
}
// Pressing escape closes the program
if( key == 27 )
{
cout << "Closing program" << endl;
return 0;  // exit without acquiring the remaining images
}
}
else
{
cout << "No frame data, waiting for new frame" << endl;
}
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_PRECOMP_H__
#define __OPENCV_PRECOMP_H__
#include "opencv2/structured_light.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/core/private.hpp"
#endif
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
using namespace std;
using namespace cv;
/****************************************************************************************\
* GetProjPixel test *
\****************************************************************************************/
class CV_GetProjPixelTest : public cvtest::BaseTest
{
public:
CV_GetProjPixelTest();
~CV_GetProjPixelTest();
protected:
void run(int);
};
CV_GetProjPixelTest::CV_GetProjPixelTest(){}
CV_GetProjPixelTest::~CV_GetProjPixelTest(){}
void CV_GetProjPixelTest::run( int )
{
// Using default projector resolution (1024 x 768)
Ptr<structured_light::GrayCodePattern> graycode = structured_light::GrayCodePattern::create();
// Storage for pattern
vector<Mat> pattern;
// Generate the pattern
graycode->generate( pattern );
Point projPixel;
int image_width = pattern[0].cols;
int image_height = pattern[0].rows;
for( int i = 0; i < image_width; i++ )
{
for( int j = 0; j < image_height; j++ )
{
//for a (x,y) pixel of the camera returns the corresponding projector pixel
bool error = graycode->getProjPixel( pattern, i, j, projPixel );
EXPECT_FALSE( error );
EXPECT_EQ( projPixel.y, j );
EXPECT_EQ( projPixel.x, i );
}
}
}
/****************************************************************************************\
* Test registration *
\****************************************************************************************/
TEST( GrayCodePattern, getProjPixel )
{
CV_GetProjPixelTest test;
test.safe_run();
}
\ No newline at end of file
#include "test_precomp.hpp"
CV_TEST_MAIN("cv")
\ No newline at end of file
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-declarations"
# if defined __clang__ || defined __APPLE__
# pragma GCC diagnostic ignored "-Wmissing-prototypes"
# pragma GCC diagnostic ignored "-Wextra"
# endif
#endif
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include "opencv2/ts.hpp"
#include "opencv2/structured_light.hpp"
#include <opencv2/rgbd.hpp>
#include <iostream>
#endif
\ No newline at end of file
Capture Gray code pattern tutorial {#tutorial_capture_graycode_pattern}
=============
Goal
----
In this tutorial you will learn how to use the *GrayCodePattern* class to:
- Generate a Gray code pattern.
- Project the Gray code pattern.
- Capture the projected Gray code pattern.
It is important to underline that the *GrayCodePattern* class actually implements the 3DUNDERWORLD algorithm described in @cite UNDERWORLD , which is based on a stereo approach: to reconstruct the 3D model of the scanned object, the projected pattern must be captured at the same time from two different views. Thus, an acquisition set consists of the images captured by each camera for each image of the pattern sequence.
Code
----
@include structured_light/samples/cap_pattern.cpp
Explanation
-----------
First of all, the pattern images to project must be generated. Since the number of images is a function of the projector's resolution, the *GrayCodePattern* class parameters must be set with our projector's width and height. The *generate* method can then be called: it fills a vector of Mat with the computed pattern images:
@code{.cpp}
structured_light::GrayCodePattern::Params params;
....
params.width = parser.get<int>( 1 );
params.height = parser.get<int>( 2 );
....
// Set up GraycodePattern with params
Ptr<structured_light::GrayCodePattern> graycode = structured_light::GrayCodePattern::create( params );
// Storage for pattern
vector<Mat> pattern;
graycode->generate( pattern );
@endcode
For example, using the default projector resolution (1024 x 768), 40 images have to be projected: 20 for the regular color pattern (10 images for the columns sequence and 10 for the rows one) and 20 for the color-inverted pattern, where the inverted pattern images have the same structure as the originals but with inverted colors. This provides an effective method for easily determining the intensity value of each pixel when it is lit (highest value) and when it is not lit (lowest value) during the decoding step.
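As a quick sanity check (a sketch, not part of the sample code), the expected count can be compared with the size of the generated vector; note that *getNumberOfPatternImages* does not include the two extra black and white images used for the shadow masks:
@code{.cpp}
// Sketch: verify the pattern count for the configured projector resolution.
size_t numOfImages = graycode->getNumberOfPatternImages(); // 40 for 1024x768
CV_Assert( pattern.size() == numOfImages );
@endcode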
Subsequently, to identify shadow regions, i.e. the regions of the two images where the pixels are not lit by the projector's light and thus carry no coded information, the 3DUNDERWORLD algorithm computes a shadow mask for the two cameras' views, starting from a white and a black image captured by each camera. So two additional images need to be projected and captured with both cameras:
@code{.cpp}
// Generate the all-white and all-black images needed for shadows mask computation
Mat white;
Mat black;
graycode->getImagesForShadowMasks( black, white );
pattern.push_back( white );
pattern.push_back( black );
@endcode
Thus, the final sequence is projected as follows: first the column images with their inverses, then the row images with their inverses, and finally the white and black images.
Once the pattern images have been generated, they must be projected using the full screen option: the images must fill the whole projection area, otherwise the projector's full resolution is not exploited, a condition on which the 3DUNDERWORLD implementation is based.
@code{.cpp}
// Setting pattern window on second monitor (the projector's one)
namedWindow( "Pattern Window", WINDOW_NORMAL );
resizeWindow( "Pattern Window", params.width, params.height );
moveWindow( "Pattern Window", params.width + 316, -20 );
setWindowProperty( "Pattern Window", WND_PROP_FULLSCREEN, WINDOW_FULLSCREEN );
@endcode
At this point the images can be captured with our digital cameras, using the libgphoto2 library, recently included in OpenCV: remember to turn on the gPhoto2 option in CMake when building OpenCV.
@code{.cpp}
// Open camera number 1, using libgphoto2
VideoCapture cap1( CAP_GPHOTO2 );
if( !cap1.isOpened() )
{
// check if cam1 opened
cout << "cam1 not opened!" << endl;
help();
return -1;
}
// Open camera number 2
VideoCapture cap2( 1 );
if( !cap2.isOpened() )
{
// check if cam2 opened
cout << "cam2 not opened!" << endl;
help();
return -1;
}
@endcode
The two cameras must work at the same resolution and must have autofocus disabled, maintaining the same focus during the whole acquisition. The projector can be positioned in the middle of the cameras.
However, before proceeding with the pattern acquisition, the cameras must be calibrated. Once the calibration is performed, there should be no movement of the cameras, otherwise a new calibration will be needed.
After having connected the cameras and the projector to the computer, the cap_pattern demo can be launched, giving as parameters the path where to save the images and the projector's width and height, taking care to use the same focus and camera settings as for the calibration.
At this point, to acquire the images with both cameras, the user can press any key.
@code{.cpp}
// Turning off autofocus
cap1.set( CAP_PROP_SETTINGS, 1 );
cap2.set( CAP_PROP_SETTINGS, 1 );
int i = 0;
while( i < (int) pattern.size() )
{
cout << "Waiting to save image number " << i + 1 << endl << "Press any key to acquire the photo" << endl;
imshow( "Pattern Window", pattern[i] );
Mat frame1;
Mat frame2;
cap1 >> frame1; // get a new frame from camera 1
cap2 >> frame2; // get a new frame from camera 2
...
}
@endcode
If the captured images are good (the user must take care that the projected pattern is viewed by both cameras), they can be saved by pressing the enter key; pressing any other key takes another shot.
@code{.cpp}
// Pressing enter saves the output
if( key == 13 )
{
ostringstream name;
name << i + 1;
save1 = imwrite( path + "pattern_cam1_im" + name.str() + ".png", frame1 );
save2 = imwrite( path + "pattern_cam2_im" + name.str() + ".png", frame2 );
if( ( save1 ) && ( save2 ) )
{
cout << "pattern cam1 and cam2 images number " << i + 1 << " saved" << endl << endl;
i++;
}
else
{
cout << "pattern cam1 and cam2 images number " << i + 1 << " NOT saved" << endl << endl << "Retry, check the path"<< endl << endl;
}
}
@endcode
The acquisition ends when all the pattern images have been saved for both cameras. Then the user can reconstruct the 3D model of the captured scene using the *decode* method of the *GrayCodePattern* class (see the next tutorial).
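As a convenience, the saved file names can be collected into the image list expected by the decoding step; the following is only a sketch using cv::FileStorage (the cam1/cam2 node names match the YAML shown in the next tutorial, while the list.yaml file name is just an example):
@code{.cpp}
// Sketch: write the captured image paths into a YAML list for the decoding step.
FileStorage fs( path + "list.yaml", FileStorage::WRITE );
fs << "cam1" << "[";
for( size_t j = 0; j < pattern.size(); j++ )
{
  ostringstream name;
  name << j + 1;
  fs << path + "pattern_cam1_im" + name.str() + ".png";
}
fs << "]";
// ... the "cam2" list is written in the same way ...
fs.release();
@endcode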
Decode Gray code pattern tutorial {#tutorial_decode_graycode_pattern}
=============
Goal
----
In this tutorial you will learn how to use the *GrayCodePattern* class to:
- Decode a previously acquired Gray code pattern.
- Generate a disparity map.
- Generate a pointcloud.
Code
----
@include structured_light/samples/pointcloud.cpp
Explanation
-----------
First of all, the needed parameters must be passed to the program.
The first one is the name list of the previously acquired pattern images, stored in a .yaml file organized as below:
@code{.cpp}
%YAML:1.0
cam1:
- "/data/pattern_cam1_im1.png"
- "/data/pattern_cam1_im2.png"
..............
- "/data/pattern_cam1_im42.png"
- "/data/pattern_cam1_im43.png"
- "/data/pattern_cam1_im44.png"
cam2:
- "/data/pattern_cam2_im1.png"
- "/data/pattern_cam2_im2.png"
..............
- "/data/pattern_cam2_im42.png"
- "/data/pattern_cam2_im43.png"
- "/data/pattern_cam2_im44.png"
@endcode
For example, the dataset used for this tutorial was acquired with a projector with a resolution of 1280x800: since ceil(log_2(1280)) = 11 and ceil(log_2(800)) = 10, 2 * (11 + 10) = 42 pattern images (from number 1 to 42) plus 1 white (number 43) and 1 black (number 44) image were captured with both cameras.
Then the camera calibration parameters, stored in another .yml file, together with the width and the height of the projector used to project the pattern, and, optionally, the values of the white and black thresholds, must be passed to the tutorial program.
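The calibration file can be loaded, for instance, with cv::FileStorage. The following is only a sketch: the calib_file variable and the node names are hypothetical and depend on how the calibration file was written:
@code{.cpp}
// Sketch: load the stereo calibration data (node names are hypothetical).
FileStorage fs( calib_file, FileStorage::READ );
Mat cam1intrinsics, cam1distCoeffs, cam2intrinsics, cam2distCoeffs, R, T;
fs["cam1_intrinsics"] >> cam1intrinsics;
fs["cam1_distortion"] >> cam1distCoeffs;
fs["cam2_intrinsics"] >> cam2intrinsics;
fs["cam2_distortion"] >> cam2distCoeffs;
fs["R"] >> R;
fs["T"] >> T;
@endcode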
In this way, *GrayCodePattern* class parameters can be set up with the width and the height of the projector used during the pattern acquisition and a pointer to a GrayCodePattern object can be created:
@code{.cpp}
structured_light::GrayCodePattern::Params params;
....
params.width = parser.get<int>( 2 );
params.height = parser.get<int>( 3 );
....
// Set up GraycodePattern with params
Ptr<structured_light::GrayCodePattern> graycode = structured_light::GrayCodePattern::create( params );
@endcode
If the white and black thresholds are passed as parameters (these thresholds influence the number of decoded pixels), their values can be set, otherwise the algorithm will use the default values.
@code{.cpp}
size_t white_thresh = 0;
size_t black_thresh = 0;
if( argc == 7 )
{
// If passed, setting the white and black threshold, otherwise using default values
white_thresh = parser.get<size_t>( 4 );
black_thresh = parser.get<size_t>( 5 );
graycode->setWhiteThreshold( white_thresh );
graycode->setBlackThreshold( black_thresh );
}
@endcode
At this point, to use the *decode* method of the *GrayCodePattern* class, the acquired pattern images must be stored in a vector of vectors of Mat.
The outer vector has a size of two because there are two cameras: the first inner vector stores the pattern images captured by the left camera, the second those acquired by the right one. The number of pattern images is obviously the same for both cameras and can be retrieved using the getNumberOfPatternImages() method:
@code{.cpp}
size_t numberOfPatternImages = graycode->getNumberOfPatternImages();
vector<vector<Mat> > captured_pattern;
captured_pattern.resize( 2 );
captured_pattern[0].resize( numberOfPatternImages );
captured_pattern[1].resize( numberOfPatternImages );
.....
for( size_t i = 0; i < numberOfPatternImages; i++ )
{
captured_pattern[0][i] = imread( imagelist[i], IMREAD_GRAYSCALE );
captured_pattern[1][i] = imread( imagelist[i + numberOfPatternImages + 2], IMREAD_GRAYSCALE );
......
}
@endcode
As regards the black and white images, they must be stored in two different vectors of Mat:
@code{.cpp}
vector<Mat> blackImages;
vector<Mat> whiteImages;
blackImages.resize( 2 );
whiteImages.resize( 2 );
// Loading images (all white + all black) needed for shadows computation
cvtColor( color, whiteImages[0], COLOR_BGR2GRAY );  // imread loads color images as BGR
whiteImages[1] = imread( imagelist[2 * numberOfPatternImages + 2], IMREAD_GRAYSCALE );
blackImages[0] = imread( imagelist[numberOfPatternImages + 1], IMREAD_GRAYSCALE );
blackImages[1] = imread( imagelist[2 * numberOfPatternImages + 2 + 1], IMREAD_GRAYSCALE );
@endcode
It is important to underline that all the images, i.e. the pattern images and the black and white ones, must be loaded as grayscale and rectified before being passed to the *decode* method:
@code{.cpp}
// Stereo rectify
cout << "Rectifying images..." << endl;
Mat R1, R2, P1, P2, Q;
Rect validRoi[2];
stereoRectify( cam1intrinsics, cam1distCoeffs, cam2intrinsics, cam2distCoeffs, imagesSize, R, T, R1, R2, P1, P2, Q, 0,
-1, imagesSize, &validRoi[0], &validRoi[1] );
Mat map1x, map1y, map2x, map2y;
initUndistortRectifyMap( cam1intrinsics, cam1distCoeffs, R1, P1, imagesSize, CV_32FC1, map1x, map1y );
initUndistortRectifyMap( cam2intrinsics, cam2distCoeffs, R2, P2, imagesSize, CV_32FC1, map2x, map2y );
........
for( size_t i = 0; i < numberOfPatternImages; i++ )
{
........
remap( captured_pattern[1][i], captured_pattern[1][i], map1x, map1y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
remap( captured_pattern[0][i], captured_pattern[0][i], map2x, map2y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
}
........
remap( color, color, map2x, map2y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
remap( whiteImages[0], whiteImages[0], map2x, map2y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
remap( whiteImages[1], whiteImages[1], map1x, map1y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
remap( blackImages[0], blackImages[0], map2x, map2y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
remap( blackImages[1], blackImages[1], map1x, map1y, INTER_NEAREST, BORDER_CONSTANT, Scalar() );
@endcode
In this way the *decode* method can be called to decode the pattern and to generate the corresponding disparity map, computed on the first camera (left):
@code{.cpp}
Mat disparityMap;
bool decoded = graycode->decode(captured_pattern, disparityMap, blackImages, whiteImages,
structured_light::DECODE_3D_UNDERWORLD);
@endcode
To better visualize the result, a colormap is applied to the computed disparity:
@code{.cpp}
double min;
double max;
minMaxIdx(disparityMap, &min, &max);
Mat cm_disp, scaledDisparityMap;
cout << "disp min " << min << endl << "disp max " << max << endl;
convertScaleAbs( disparityMap, scaledDisparityMap, 255 / ( max - min ) );
applyColorMap( scaledDisparityMap, cm_disp, COLORMAP_JET );
// Show the result
resize( cm_disp, cm_disp, Size( 640, 480 ) );
imshow( "cm disparity m", cm_disp )
@endcode
![](pics/cm_disparity.png)
At this point the point cloud can be generated using the reprojectImageTo3D method, taking care to convert the computed disparity to a CV_32FC1 Mat (the decode method computes a CV_64FC1 disparity map):
@code{.cpp}
Mat pointcloud;
disparityMap.convertTo( disparityMap, CV_32FC1 );
reprojectImageTo3D( disparityMap, pointcloud, Q, true, -1 );
@endcode
Then a mask to remove the unwanted background is computed:
@code{.cpp}
Mat dst, thresholded_disp;
threshold( scaledDisparityMap, thresholded_disp, 0, 255, THRESH_OTSU + THRESH_BINARY );
resize( thresholded_disp, dst, Size( 640, 480 ) );
imshow( "threshold disp otsu", dst );
@endcode
![](pics/threshold_disp.png)
The white image of cam1 was previously loaded also as a color image, in order to map the object's colors onto its reconstructed pointcloud:
@code{.cpp}
Mat color = imread( imagelist[numberOfPatternImages], IMREAD_COLOR );
@endcode
The background removal mask is then applied to the point cloud and to the color image:
@code{.cpp}
Mat pointcloud_tresh, color_tresh;
pointcloud.copyTo(pointcloud_tresh, thresholded_disp);
color.copyTo(color_tresh, thresholded_disp);
@endcode
Finally the computed point cloud of the scanned object can be visualized with the viz module:
@code{.cpp}
viz::Viz3d myWindow( "Point cloud with color");
myWindow.setBackgroundMeshLab();
myWindow.showWidget( "coosys", viz::WCoordinateSystem());
myWindow.showWidget( "pointcloud", viz::WCloud( pointcloud_tresh, color_tresh ) );
myWindow.showWidget( "text2d", viz::WText( "Point cloud", Point(20, 20), 20, viz::Color::green() ) );
myWindow.spin();
@endcode
![](pics/plane_viz.png)
Structured Light tutorials {#tutorial_structured_light}
=============================================================
- @subpage tutorial_capture_graycode_pattern
_Compatibility:_ \> OpenCV 3.0.0
_Author:_ Roberta Ravanelli
You will learn how to acquire a dataset using *GrayCodePattern* class.
- @subpage tutorial_decode_graycode_pattern
_Compatibility:_ \> OpenCV 3.0.0
_Author:_ Roberta Ravanelli
You will learn how to decode a previously acquired Gray code pattern, generating a pointcloud.
\ No newline at end of file