Commit fb272047 authored by baisheng lai

All files of the second half period

parent c6e54d17
@@ -2,3 +2,5 @@ Custom Calibration Pattern for 3D reconstruction
================================================
1. Custom calibration pattern for 3D reconstruction
2. Omnidirectional camera calibration
3. Random pattern calibration object
4. Multi-camera calibration
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, Baisheng Lai (laibaisheng@gmail.com), Zhejiang University,
// all rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_MULTICAMERACALIBRATION_HPP__
#define __OPENCV_MULTICAMERACALIBRATION_HPP__
#include "precomp.hpp"
#include "randomPatten.hpp"
#include "omnidir.hpp"
#include <string>
#include <iostream>
using namespace cv;
#define HEAD -1
#define INVALID -2
/** @brief Class for multiple camera calibration that supports a pinhole camera model and an omnidirectional camera model.
For the omnidirectional camera model, please refer to omnidir.hpp in the ccalib module.
It first calibrates each camera individually, then a bundle-adjustment-like optimization is applied to
refine the extrinsic parameters. So far, it only supports the "random" pattern for calibration,
see randomPattern.hpp in the ccalib module for details.
Images that are used should be named by "cameraIdx-timestamp.*"; several images with the same timestamp
are photos of the same pattern taken by different cameras. cameraIdx should start from 0.
For more details, please refer to the paper
B. Li, L. Heng, K. Köser and M. Pollefeys, "A Multiple-Camera System
Calibration Toolbox Using A Feature Descriptor-Based Calibration
Pattern", in IROS 2013.
*/
class CV_EXPORTS multiCameraCalibration
{
public:
enum {
PINHOLE,
OMNIDIRECTIONAL
//FISHEYE
};
// an edge connects a camera and pattern
struct edge
{
int cameraVertex; // vertex index for camera in this edge
int photoVertex; // vertex index for pattern in this edge
int photoIndex; // photo index among photos for this camera
Mat transform; // transform from pattern to camera
edge(int cv, int pv, int pi, Mat trans)
{
cameraVertex = cv;
photoVertex = pv;
photoIndex = pi;
transform = trans;
}
};
struct vertex
{
Mat pose; // relative pose to the first camera. For camera vertex, it is the
// transform from the first camera to this camera, for pattern vertex,
// it is the transform from pattern to the first camera
int timestamp; // timestamp of photo, only available for photo vertex
vertex(Mat po, int ts)
{
pose = po;
timestamp = ts;
}
vertex()
{
pose = Mat::eye(4, 4, CV_32F);
timestamp = -1;
}
};
/* @brief Constructor
@param cameraType camera type, PINHOLE or OMNIDIRECTIONAL
@param nCameras number of cameras
@param fileName filename of the string list used for calibration; the file is generated
by imagelist_creator from the OpenCV samples. The first name in the list is the pattern filename.
@param patternWidth the physical width of the pattern, in a user-defined unit.
@param patternHeight the physical height of the pattern, in a user-defined unit.
@param showExtraction whether to show extracted features and feature filtering.
@param nMiniMatches minimal number of matched features for a frame.
@param flags calibration flags.
@param criteria optimization stopping criteria.
@param detector feature detector that detects feature points in the pattern and images.
@param descriptor feature descriptor.
@param matcher feature matcher.
*/
multiCameraCalibration(int cameraType, int nCameras, const std::string& fileName, float patternWidth,
float patternHeight, int showExtraction = 0, int nMiniMatches = 20, int flags = 0,
TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 300, 1e-7),
Ptr<FeatureDetector> detector = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.002f),
Ptr<DescriptorExtractor> descriptor = AKAZE::create(AKAZE::DESCRIPTOR_MLDB,0, 3, 0.002f),
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-L1"));
/* @brief load images
*/
void loadImages();
/* @brief initialize multiple camera calibration. It calibrates each camera individually.
*/
void initialize();
/* @brief optimize extrinsic parameters
*/
double optimizeExtrinsics();
/* @brief run multi-camera calibration; it runs loadImages(), initialize() and optimizeExtrinsics()
*/
double run();
/* @brief write camera parameters to file.
*/
void writeParameters(const std::string& filename);
private:
std::vector<std::string> readStringList();
int getPhotoVertex(int timestamp);
void graphTraverse(const Mat& G, int begin, std::vector<int>& order, std::vector<int>& pre);
void findRowNonZero(const Mat& row, Mat& idx);
void computeJacobianExtrinsic(const Mat& extrinsicParams, Mat& JTJ_inv, Mat& JTE);
void computePhotoCameraJacobian(const Mat& rvecPhoto, const Mat& tvecPhoto, const Mat& rvecCamera,
const Mat& tvecCamera, Mat& rvecTran, Mat& tvecTran, const Mat& objectPoints, const Mat& imagePoints, const Mat& K,
const Mat& distort, const Mat& xi, Mat& jacobianPhoto, Mat& jacobianCamera, Mat& E);
void compose_motion(InputArray _om1, InputArray _T1, InputArray _om2, InputArray _T2, Mat& om3, Mat& T3, Mat& dom3dom1,
Mat& dom3dT1, Mat& dom3dom2, Mat& dom3dT2, Mat& dT3dom1, Mat& dT3dT1, Mat& dT3dom2, Mat& dT3dT2);
void JRodriguesMatlab(const Mat& src, Mat& dst);
void dAB(InputArray A, InputArray B, OutputArray dABdA, OutputArray dABdB);
double computeProjectError(Mat& parameters);
void vector2parameters(const Mat& parameters, std::vector<Vec3f>& rvecVertex, std::vector<Vec3f>& tvecVertexs);
void parameters2vector(const std::vector<Vec3f>& rvecVertex, const std::vector<Vec3f>& tvecVertex, Mat& parameters);
int _camType; //PINHOLE or OMNIDIRECTIONAL (FISHEYE is not supported yet)
int _nCamera;
int _nMiniMatches;
int _flags;
double _error;
float _patternWidth, _patternHeight;
TermCriteria _criteria;
std::string _filename;
int _showExtraction;
Ptr<FeatureDetector> _detector;
Ptr<DescriptorExtractor> _descriptor;
Ptr<DescriptorMatcher> _matcher;
std::vector<edge> _edgeList;
std::vector<vertex> _vertexList;
std::vector<std::vector<cv::Mat> > _objectPointsForEachCamera;
std::vector<std::vector<cv::Mat> > _imagePointsForEachCamera;
std::vector<cv::Mat> _cameraMatrix;
std::vector<cv::Mat> _distortCoeffs;
std::vector<cv::Mat> _xi;
std::vector<std::vector<Mat> > _omEachCamera, _tEachCamera;
};
#endif
\ No newline at end of file
@@ -40,62 +40,12 @@
//M*/
#ifndef __OPENCV_OMNIDIR_HPP__
#define __OPENCV_OMNIDIR_HPP__
#ifdef __cplusplus
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>
#include "precomp.hpp"
#include <iostream>
namespace cv
{
/* @defgroup calib3d_omnidir Omnidirectional camera model
Here is a brief description of the implemented omnidirectional camera model. This model can be
used for both catadioptric and fisheye cameras. In particular, catadioptric cameras have a very
large field of view (FOV), e.g., 360 degrees of horizontal FOV, which means the whole scene around
the camera can be captured in a single photo. Compared with perspective cameras, omnidirectional
cameras get more information in a single shot and avoid things like image stitching.
The large FOV of omnidirectional cameras also introduces large distortion, so the raw images do not
look natural to the human eye. Rectification that removes distortion is also included in this module.
For a 3D point \f$X_w\f$ in world coordinates, it is first transformed to camera coordinates:
\f[X_c = R X_w + T \f]
where \f$R\f$ and \f$T\f$ are the rotation matrix and translation vector. Then \f$ X_c \f$ is projected onto the unit sphere:
\f[ X_s = \frac{X_c}{\|X_c\|} \f]
Let \f$ X_s = (x, y, z) \f$, then \f$ X_s \f$ is projected to the normalized plane:
\f[ (x_u, y_u, 1) = (\frac{x}{z + \xi}, \frac{y}{z + \xi}, 1) \f]
where \f$ \xi \f$ is a parameter of the camera. So far the point contains no distortion; distortion is added by
\f[ x_d = (1 + k_1 r^2 + k_2 r^4 ) x_u + 2 p_1 x_u y_u + p_2 (r^2 + 2 x_u^2 ) \\
y_d = (1 + k_1 r^2 + k_2 r^4 ) y_u + p_1 (r^2 + 2 y_u^2) + 2 p_2 x_u y_u \f]
where \f$ r^2 = x_u^2 + y_u^2 \f$ and \f$(k_1, k_2, p_1, p_2)\f$ are distortion coefficients.
At last, convert to pixel coordinates:
\f[ u = f_x x_d + s y_d + c_x \\
v = f_y y_d + c_y \f]
where \f$ s \f$ is the skew coefficient and \f$ (c_x, c_y) \f$ is the principal point.
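
A minimal sketch of this forward projection written out step by step (illustrative only; the
helper name and all parameter values are assumptions, not part of the module API):
@code
// Project one point given in camera coordinates with CMei's model.
cv::Vec2d projectOne(const cv::Vec3d& Xc, double xi,
                     double k1, double k2, double p1, double p2,
                     double fx, double fy, double s, double cx, double cy)
{
    cv::Vec3d Xs = Xc / cv::norm(Xc);               // project onto the unit sphere
    double xu = Xs[0] / (Xs[2] + xi);               // normalized plane
    double yu = Xs[1] / (Xs[2] + xi);
    double r2 = xu * xu + yu * yu, r4 = r2 * r2;    // radial terms
    double xd = (1 + k1 * r2 + k2 * r4) * xu + 2 * p1 * xu * yu + p2 * (r2 + 2 * xu * xu);
    double yd = (1 + k1 * r2 + k2 * r4) * yu + p1 * (r2 + 2 * yu * yu) + 2 * p2 * xu * yu;
    return cv::Vec2d(fx * xd + s * yd + cx,         // pixel coordinates
                     fy * yd + cy);
}
@endcode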
*/
/** @brief The methods in this namespace are used to calibrate omnidirectional cameras.
This module was accepted as a GSoC 2015 project for OpenCV, authored by
Baisheng Lai, mentored by Bo Li.
@ingroup calib3d_omnidir
*/
namespace omnidir
{
//! @addtogroup calib3d_omnidir
//! @{
enum {
CALIB_USE_GUESS = 1,
CALIB_FIX_SKEW = 2,
@@ -108,101 +58,235 @@ namespace omnidir
CALIB_FIX_CENTER = 256
};
enum{
RECTIFY_PERSPECTIVE = 1,
RECTIFY_CYLINDRICAL = 2,
RECTIFY_LONGLATI = 3,
RECTIFY_STEREOGRAPHIC = 4
};
enum{
XYZRGB = 1,
XYZ = 2
};
/**
* This module was accepted as a GSoC 2015 project for OpenCV, authored by
* Baisheng Lai, mentored by Bo Li.
*/
/** @brief Projects points for omnidirectional camera using CMei's model
@param objectPoints Object points in world coordinates, vector of vector of Vec3f or Mat of
1xN/Nx1 3-channel of type CV_32F and N is the number of points. 64F is also acceptable.
@param imagePoints Output array of image points, vector of vector of Vec2f or
1xN/Nx1 2-channel of type CV_32F. 64F is also acceptable.
@param rvec vector of rotation between world coordinate and camera coordinate, i.e., om
@param tvec vector of translation between pattern coordinate and camera coordinate
@param K Camera matrix \f$K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
@param D Input vector of distortion coefficients \f$(k_1, k_2, p_1, p_2)\f$.
@param xi The parameter xi for CMei's model
@param jacobian Optional output 2Nx16 jacobian matrix of type CV_64F that contains the derivatives of
image pixel points wrt parameters including \f$om, T, f_x, f_y, s, c_x, c_y, xi, k_1, k_2, p_1, p_2\f$.
This matrix will be used in calibration by optimization.

The function projects object 3D points of world coordinates to image pixels, parameterized by intrinsic
and extrinsic parameters. Also, it optionally computes a by-product: the jacobian matrix containing
the derivatives of image pixel points wrt intrinsic and extrinsic parameters.
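
A minimal usage sketch (illustrative; all parameter values below are assumptions):
@code
std::vector<cv::Vec3d> objectPoints;
objectPoints.push_back(cv::Vec3d(0, 0, 0));
objectPoints.push_back(cv::Vec3d(1, 0, 0));
std::vector<cv::Vec2d> imagePoints;
cv::Vec3d rvec(0, 0, 0), tvec(0, 0, 5);
cv::Matx33d K(300, 0, 320,  0, 300, 240,  0, 0, 1);
cv::Vec4d D(0.01, -0.005, 0.001, 0.001);
cv::omnidir::projectPoints(objectPoints, imagePoints, rvec, tvec, K, 1.0, D);
@endcode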
*/
CV_EXPORTS_W void projectPoints(InputArray objectPoints, OutputArray imagePoints, InputArray rvec, InputArray tvec,
InputArray K, double xi, InputArray D, OutputArray jacobian = noArray());
/** @brief Undistort 2D image points for omnidirectional camera using CMei's model
@param distorted Array of distorted image points, vector of Vec2f
or 1xN/Nx1 2-channel Mat of type CV_32F; 64F depth is also acceptable
@param K Camera matrix \f$K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
@param D Distortion coefficients \f$(k_1, k_2, p_1, p_2)\f$.
@param xi The parameter xi for CMei's model
@param R Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3
1-channel or 1x1 3-channel
@param undistorted array of normalized object points, vector of Vec2f/Vec2d or 1xN/Nx1 2-channel Mat with the same
depth as the distorted points.
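
A minimal usage sketch (illustrative; values are assumptions):
@code
std::vector<cv::Vec2d> distorted(1, cv::Vec2d(340.0, 250.0)), undistorted;
cv::Matx33d K(300, 0, 320,  0, 300, 240,  0, 0, 1);
cv::Vec4d D(0.01, -0.005, 0.001, 0.001);
cv::Mat xi = (cv::Mat_<double>(1, 1) << 1.0);
cv::omnidir::undistortPoints(distorted, undistorted, K, D, xi, cv::Matx33d::eye());
@endcode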
*/
CV_EXPORTS_W void undistortPoints(InputArray distorted, OutputArray undistorted, InputArray K, InputArray D, InputArray xi, InputArray R);
/** @brief Computes undistortion and rectification maps for omnidirectional camera image transform by a rotation R.
It outputs two maps that are used for cv::remap(). If D is empty then zero distortion is used,
if R or P is empty then identity matrices are used.
@param K Camera matrix \f$K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$, with depth CV_32F or CV_64F
@param D Input vector of distortion coefficients \f$(k_1, k_2, p_1, p_2)\f$, with depth CV_32F or CV_64F
@param xi The parameter xi for CMei's model
@param R Rotation transform between the original and object space: 3x3 1-channel, or vector: 3x1/1x3, with depth CV_32F or CV_64F
@param P New camera matrix (3x3) or new projection matrix (3x4)
@param size Undistorted image size.
@param m1type Type of the first output map that can be CV_32FC1 or CV_16SC2. See convertMaps()
for details.
@param map1 The first output map.
@param map2 The second output map.
@param flags Flags indicating the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC
are supported.
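
A minimal usage sketch followed by cv::remap() (illustrative; the input image file and all
values are assumptions):
@code
cv::Mat distorted = cv::imread("omni.jpg");   // assumed input image
cv::Matx33d K(300, 0, 320,  0, 300, 240,  0, 0, 1), Knew = K;
cv::Vec4d D(0.01, -0.005, 0.001, 0.001);
cv::Mat xi = (cv::Mat_<double>(1, 1) << 1.0);
cv::Mat map1, map2, undistorted;
cv::omnidir::initUndistortRectifyMap(K, D, xi, cv::Matx33d::eye(), Knew, distorted.size(),
                                     CV_16SC2, map1, map2, cv::omnidir::RECTIFY_PERSPECTIVE);
cv::remap(distorted, undistorted, map1, map2, cv::INTER_LINEAR);
@endcode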
*/
CV_EXPORTS_W void initUndistortRectifyMap(InputArray K, InputArray D, InputArray xi, InputArray R, InputArray P, const cv::Size& size,
int m1type, OutputArray map1, OutputArray map2, int flags);
/** @brief Undistort omnidirectional images to perspective images
@param distorted The input omnidirectional image.
@param undistorted The output undistorted image.
@param K Camera matrix \f$K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$.
@param D Input vector of distortion coefficients \f$(k_1, k_2, p_1, p_2)\f$.
@param xi The parameter xi for CMei's model.
@param flags Flags indicating the rectification type (see initUndistortRectifyMap).
@param Knew Camera matrix of the undistorted image. If it is not assigned, it is just K.
@param new_size The new image size. By default, it is the size of distorted.
@param R Rotation matrix between the input and output images. By default, it is the identity matrix.
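
A minimal usage sketch (illustrative; the file name and values are assumptions):
@code
cv::Mat distorted = cv::imread("omni.jpg"), undistorted;
cv::Matx33d K(300, 0, 320,  0, 300, 240,  0, 0, 1);
cv::Vec4d D(0.01, -0.005, 0.001, 0.001);
cv::Mat xi = (cv::Mat_<double>(1, 1) << 1.0);
cv::omnidir::undistortImage(distorted, undistorted, K, D, xi,
                            cv::omnidir::RECTIFY_PERSPECTIVE);
@endcode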
*/
CV_EXPORTS_W void undistortImage(InputArray distorted, OutputArray undistorted, InputArray K, InputArray D, InputArray xi, int flags,
InputArray Knew = cv::noArray(), const Size& new_size = Size(), InputArray R = Matx33d::eye());
/** @brief Perform omnidirectional camera calibration, the default depth of outputs is CV_64F.
@param objectPoints Vector of vectors of Vec3f object points in world (pattern) coordinates.
It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth of CV_64F is also acceptable.
@param imagePoints Vector of vectors of Vec2f corresponding image points of objectPoints. It must be the same
size and the same type as objectPoints.
@param size Image size of calibration images.
@param K Output calibrated camera matrix. If you want to initialize K by yourself, input a non-empty K.
@param xi Output parameter xi for CMei's model
@param D Output distortion parameters \f$(k_1, k_2, p_1, p_2)\f$
@param rvecs Output rotations for each calibration image
@param tvecs Output translations for each calibration image
@param flags The flags that control calibrate
@param criteria Termination criteria for optimization
@param idx Indices of images that pass initialization, which are really used in calibration. So the size of rvecs is the
same as idx.total().
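
A minimal usage sketch (illustrative; objectPoints and imagePoints are assumed to be filled
with detected pattern points beforehand):
@code
std::vector<cv::Mat> objectPoints, imagePoints;   // one Mat per view, filled beforehand
cv::Mat K, D, xi, idx;
std::vector<cv::Vec3d> rvecs, tvecs;
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-4);
double rms = cv::omnidir::calibrate(objectPoints, imagePoints, cv::Size(640, 480),
                                    K, xi, D, rvecs, tvecs, 0, criteria, idx);
@endcode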
*/
CV_EXPORTS_W double calibrate(InputArray objectPoints, InputArray imagePoints, Size size,
InputOutputArray K, InputOutputArray xi, InputOutputArray D, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
int flags, TermCriteria criteria, OutputArray idx=noArray());
/** @brief Stereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two
cameras and the extrinsic parameters between the two cameras. The default depth of outputs is CV_64F.
@param objectPoints Object points in world (pattern) coordinates. Its type is vector<vector<Vec3f> >.
It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth of CV_64F is also acceptable.
@param imagePoints1 The corresponding image points of the first camera, with type vector<vector<Vec2f> >.
It must be the same size and the same type as objectPoints.
@param imagePoints2 The corresponding image points of the second camera, with type vector<vector<Vec2f> >.
It must be the same size and the same type as objectPoints.
@param imageSize1 Image size of calibration images of the first camera.
@param imageSize2 Image size of calibration images of the second camera.
@param K1 Output camera matrix for the first camera.
@param xi1 Output parameter xi of CMei's model for the first camera
@param D1 Output distortion parameters \f$(k_1, k_2, p_1, p_2)\f$ for the first camera
@param K2 Output camera matrix for the second camera.
@param xi2 Output parameter xi of CMei's model for the second camera
@param D2 Output distortion parameters \f$(k_1, k_2, p_1, p_2)\f$ for the second camera
@param rvec Output rotation between the first and second camera
@param tvec Output translation between the first and second camera
@param rvecsL Output rotation for each image of the first camera
@param tvecsL Output translation for each image of the first camera
@param flags The flags that control stereoCalibrate
@param criteria Termination criteria for optimization
@param idx Indices of image pairs that pass initialization, which are really used in calibration. So the size of rvecs is the
same as idx.total().
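
A minimal usage sketch (illustrative; the point lists are assumed to be filled beforehand):
@code
std::vector<cv::Mat> objectPoints, imagePoints1, imagePoints2;   // filled beforehand
cv::Mat K1, K2, D1, D2, xi1, xi2, idx;
cv::Vec3d rvec, tvec;
std::vector<cv::Vec3d> rvecsL, tvecsL;
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-4);
double rms = cv::omnidir::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
    cv::Size(640, 480), cv::Size(640, 480), K1, xi1, D1, K2, xi2, D2,
    rvec, tvec, rvecsL, tvecsL, 0, criteria, idx);
@endcode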
*/
CV_EXPORTS_W double stereoCalibrate(InputOutputArrayOfArrays objectPoints, InputOutputArrayOfArrays imagePoints1, InputOutputArrayOfArrays imagePoints2,
const Size& imageSize1, const Size& imageSize2, InputOutputArray K1, InputOutputArray xi1, InputOutputArray D1, InputOutputArray K2, InputOutputArray xi2,
InputOutputArray D2, OutputArray rvec, OutputArray tvec, OutputArrayOfArrays rvecsL, OutputArrayOfArrays tvecsL, int flags, TermCriteria criteria, OutputArray idx=noArray());
/** @brief Stereo rectification for omnidirectional camera model. It computes the rectification rotations for two cameras
@param R Rotation between the first and second camera
@param T Translation between the first and second camera
@param R1 Output 3x3 rotation matrix for the first camera
@param R2 Output 3x3 rotation matrix for the second camera
*/
CV_EXPORTS_W void stereoRectify(InputArray R, InputArray T, OutputArray R1, OutputArray R2);
/** @brief Stereo 3D reconstruction from a pair of images
@param image1 The first input image.
@param image2 The second input image.
@param K1 Input camera matrix of the first camera
@param D1 Input distortion parameters \f$(k_1, k_2, p_1, p_2)\f$ for the first camera
@param xi1 Input parameter xi for the first camera for CMei's model
@param K2 Input camera matrix of the second camera
@param D2 Input distortion parameters \f$(k_1, k_2, p_1, p_2)\f$ for the second camera
@param xi2 Input parameter xi for the second camera for CMei's model
@param R Rotation between the first and second camera
@param T Translation between the first and second camera
@param flag Flag of rectification type, RECTIFY_PERSPECTIVE or RECTIFY_LONGLATI
@param numDisparities The parameter 'numDisparities' in StereoSGBM, see StereoSGBM for details.
@param SADWindowSize The parameter 'SADWindowSize' in StereoSGBM, see StereoSGBM for details.
@param disparity Disparity map generated by stereo matching
@param image1Rec Rectified image of the first image
@param image2Rec Rectified image of the second image
@param newSize Image size of rectified image, see omnidir::undistortImage
@param Knew New camera matrix of rectified image, see omnidir::undistortImage
@param pointCloud Point cloud of 3D reconstruction, with type CV_64FC3
@param pointType Point cloud type, XYZRGB or XYZ
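
A minimal usage sketch (illustrative; the image files and calibration values below are
assumptions standing in for the results of a real stereo calibration):
@code
cv::Mat image1 = cv::imread("left.jpg"), image2 = cv::imread("right.jpg");   // assumed files
cv::Matx33d K1(300, 0, 320,  0, 300, 240,  0, 0, 1), K2 = K1;
cv::Vec4d D1(0.01, -0.005, 0.001, 0.001), D2 = D1;
cv::Mat xi1 = (cv::Mat_<double>(1, 1) << 1.0), xi2 = xi1.clone();
cv::Matx33d R = cv::Matx33d::eye();   // assumed extrinsics from stereoCalibrate
cv::Vec3d T(0.2, 0.0, 0.0);
cv::Mat disparity, image1Rec, image2Rec, pointCloud;
cv::omnidir::stereoReconstruct(image1, image2, K1, D1, xi1, K2, D2, xi2, R, T,
    cv::omnidir::RECTIFY_LONGLATI, 16 * 4, 7, disparity, image1Rec, image2Rec,
    cv::Size(640, 480), cv::noArray(), pointCloud, cv::omnidir::XYZRGB);
@endcode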
*/
CV_EXPORTS_W void stereoReconstruct(InputArray image1, InputArray image2, InputArray K1, InputArray D1, InputArray xi1,
InputArray K2, InputArray D2, InputArray xi2, InputArray R, InputArray T, int flag, int numDisparities, int SADWindowSize,
OutputArray disparity, OutputArray image1Rec, OutputArray image2Rec, const Size& newSize = Size(), InputArray Knew = cv::noArray(),
OutputArray pointCloud = cv::noArray(), int pointType = XYZRGB);
//! @} calib3d_omnidir
namespace internal
{
void initializeCalibration(InputOutputArrayOfArrays objectPoints, InputOutputArrayOfArrays imagePoints, Size size, OutputArrayOfArrays omAll,
OutputArrayOfArrays tAll, OutputArray K, double& xi, OutputArray idx = noArray());
void initializeStereoCalibration(InputOutputArrayOfArrays objectPoints, InputOutputArrayOfArrays imagePoints1, InputOutputArrayOfArrays imagePoints2,
const Size& size1, const Size& size2, OutputArray om, OutputArray T, OutputArrayOfArrays omL, OutputArrayOfArrays tL, OutputArray K1, OutputArray D1, OutputArray K2, OutputArray D2,
double &xi1, double &xi2, int flags, OutputArray idx);
void computeJacobian(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, InputArray parameters, Mat& JTJ_inv, Mat& JTE, int flags);
void computeJacobianStereo(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
InputArray parameters, Mat& JTJ_inv, Mat& JTE, int flags);
void encodeParameters(InputArray K, InputArrayOfArrays omAll, InputArrayOfArrays tAll, InputArray distortion, double xi, OutputArray parameters);
void encodeParametersStereo(InputArray K1, InputArray K2, InputArray om, InputArray T, InputArrayOfArrays omL, InputArrayOfArrays tL,
InputArray D1, InputArray D2, double xi1, double xi2, OutputArray parameters);
void decodeParameters(InputArray parameters, OutputArray K, OutputArrayOfArrays omAll, OutputArrayOfArrays tAll, OutputArray distortion, double& xi);
void decodeParametersStereo(InputArray parameters, OutputArray K1, OutputArray K2, OutputArray om, OutputArray T, OutputArrayOfArrays omL,
OutputArrayOfArrays tL, OutputArray D1, OutputArray D2, double& xi1, double& xi2);
void estimateUncertainties(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, InputArray parameters, Mat& errors, Vec2d& std_error, double& rms, int flags);
void estimateUncertaintiesStereo(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, InputArray parameters, Mat& errors,
Vec2d& std_error, double& rms, int flags);
double computeMeanReproErr(InputArrayOfArrays imagePoints, InputArrayOfArrays proImagePoints);
double computeMeanReproErr(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, InputArray K, InputArray D, double xi, InputArrayOfArrays omAll,
InputArrayOfArrays tAll);
double computeMeanReproErrStereo(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, InputArray K1, InputArray K2,
InputArray D1, InputArray D2, double xi1, double xi2, InputArray om, InputArray T, InputArrayOfArrays omL, InputArrayOfArrays TL);
void checkFixed(Mat &G, int flags, int n);
void subMatrix(const Mat& src, Mat& dst, const std::vector<int>& cols, const std::vector<int>& rows);
void flags2idx(int flags, std::vector<int>& idx, int n);
void flags2idxStereo(int flags, std::vector<int>& idx, int n);
void fillFixed(Mat& G, int flags, int n);
void fillFixedStereo(Mat& G, int flags, int n);
double findMedian(const Mat& row);
Vec3d findMedian3(InputArray mat);
void getInterset(InputArray idx1, InputArray idx2, OutputArray inter1, OutputArray inter2, OutputArray inter_ori);
void compose_motion(InputArray _om1, InputArray _T1, InputArray _om2, InputArray _T2, Mat& om3, Mat& T3, Mat& dom3dom1,
Mat& dom3dT1, Mat& dom3dom2, Mat& dom3dT2, Mat& dT3dom1, Mat& dT3dT1, Mat& dT3dom2, Mat& dT3dT2);
//void JRodriguesMatlab(const Mat& src, Mat& dst);
//void dAB(InputArray A, InputArray B, OutputArray dABdA, OutputArray dABdB);
} // internal
@@ -210,4 +294,3 @@ namespace internal
} //cv
#endif // __cplusplus
#endif // __OPENCV_OMNIDIR_HPP__
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, Baisheng Lai (laibaisheng@gmail.com), Zhejiang University,
// all rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_RANDOMPATTERN_HPP__
#define __OPENCV_RANDOMPATTERN_HPP__
#include "precomp.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;
/** @brief Class for finding feature points and the corresponding 3D points in world coordinates of
a "random" pattern, which can be used in calibration. It is useful when the pattern is
partly occluded or only a part of the pattern can be observed in multi-camera calibration.
The pattern can be generated by the randomPatternGenerator class described in this file.
Please refer to the paper
B. Li, L. Heng, K. Köser and M. Pollefeys, "A Multiple-Camera System
Calibration Toolbox Using A Feature Descriptor-Based Calibration
Pattern", in IROS 2013.
*/
class CV_EXPORTS randomPatternCornerFinder
{
public:
/* @brief Construct randomPatternCornerFinder object
@param patternWidth the physical width of the "random" pattern in a user-defined unit.
@param patternHeight the physical height of the "random" pattern in a user-defined unit.
@param nminiMatch minimal number of matches; an image with fewer matches is abandoned
@param depth depth of output objectPoints and imagePoints, set it to CV_32F or CV_64F.
@param showExtraction whether to show feature extraction, 0 for no and 1 for yes.
@param detector feature detector to detect feature points in the pattern and images.
@param descriptor feature descriptor.
@param matcher feature matcher.
*/
randomPatternCornerFinder(float patternWidth, float patternHeight,
int nminiMatch = 20, int depth = CV_32F, int showExtraction = 0,
Ptr<FeatureDetector> detector = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.002f),
Ptr<DescriptorExtractor> descriptor = AKAZE::create(AKAZE::DESCRIPTOR_MLDB,0, 3, 0.002f),
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-L1"));
/* @brief Load pattern image and compute features for pattern
@param patternImage image of the "random" pattern generated by randomPatternGenerator; generate the pattern first.
*/
void loadPattern(cv::Mat patternImage);
/* @brief Compute matched object points and image points which are used for calibration
The objectPoints (3D) and imagePoints (2D) are stored inside the class. Run getObjectPoints()
and getImagePoints() to get them.
@param inputImages vector of 8-bit grayscale images containing "random" pattern
that are used for calibration.
*/
void computeObjectImagePoints(std::vector<cv::Mat> inputImages);
//void computeObjectImagePoints2(std::vector<cv::Mat> inputImages);
/* @brief Compute object and image points for a single image. It returns a vector<Mat> whose
first element stores the imagePoints and whose second element stores the objectPoints.
@param inputImage single input image for calibration
*/
std::vector<cv::Mat> computeObjectImagePointsForSingle(cv::Mat inputImage);
/* @brief Get object(3D) points
*/
std::vector<cv::Mat> getObjectPoints();
/* @brief Get image(2D) points
*/
std::vector<cv::Mat> getImagePoints();
private:
std::vector<cv::Mat> _objectPoints, _imagePoints;
float _patternWidth, _patternHeight;
cv::Size _patternImageSize;
int _nminiMatch;
int _depth;
Ptr<FeatureDetector> _detector;
Ptr<DescriptorExtractor> _descriptor;
Ptr<DescriptorMatcher> _matcher;
Mat _descriptorPattern;
std::vector<cv::KeyPoint> _keypointsPattern;
Mat _patternImage;
int _showExtraction;
void keyPoints2MatchedLocation(const std::vector<cv::KeyPoint>& imageKeypoints,
const std::vector<cv::KeyPoint>& patternKeypoints, const std::vector<cv::DMatch> matches,
cv::Mat& matchedImagelocation, cv::Mat& matchedPatternLocation);
void getFilteredLocation(cv::Mat& imageKeypoints, cv::Mat& patternKeypoints, const cv::Mat mask);
void getObjectImagePoints(const cv::Mat& imageKeypoints, const cv::Mat& patternKeypoints);
void crossCheckMatching( cv::Ptr<DescriptorMatcher>& descriptorMatcher,
const Mat& descriptors1, const Mat& descriptors2,
std::vector<DMatch>& filteredMatches12, int knn=1 );
void drawCorrespondence(const Mat& image1, const std::vector<cv::KeyPoint> keypoint1,
const Mat& image2, const std::vector<cv::KeyPoint> keypoint2, const std::vector<cv::DMatch> matches,
const Mat& mask1, const Mat& mask2);
};
/* @brief Class to generate "random" pattern images that are used by randomPatternCornerFinder
Please refer to the paper
B. Li, L. Heng, K. Köser and M. Pollefeys, "A Multiple-Camera System
Calibration Toolbox Using A Feature Descriptor-Based Calibration
Pattern", in IROS 2013.
*/
class randomPatternGenerator
{
public:
/* @brief Construct randomPatternGenerator
@param imageWidth image width of the generated pattern image
@param imageHeight image height of the generated pattern image
*/
randomPatternGenerator(int imageWidth, int imageHeight);
/* @brief Generate pattern
*/
void generatePattern();
/* @brief Get pattern
*/
cv::Mat getPattern();
private:
cv::Mat _pattern;
int _imageWidth, _imageHeight;
};
#endif
\ No newline at end of file
#include "opencv2/omnidir.hpp"
#include "opencv2/multiCameraCalibration.hpp"
#include "opencv2/randomPatten.hpp"
using namespace std;
using namespace cv;
const char * usage =
"\n example command line for multi-camera calibration by using random pattern \n"
" multiCamCalib -nc 5 -pw 800 -ph 600 -ct 1 -fe 0 -nm 10 multi_camera.xml \n"
"\n"
" the file multi_camera.xml is generated by imagelist_creator as \n"
" imagelist_creator multi_camera.xml *.* \n"
" note the first filename in multi_camera.xml is the pattern, the rest are photo names,\n"
" photo names should be in form of cameraIdx-timestamp.*, and cameraIdx starts from 0";
static void help()
{
printf("\n This is a sample for multi-camera calibration, so far it only support random pattern,\n"
"see randomPattern.hpp for detail. Pinhole and omnidirectional cameras are both supported, \n"
"for omnidirectional camera, see omnidir.hpp for detail.\n"
"Usage: mutiCamCalib \n"
" -nc <num_camera> # number of cameras \n"
" -pw <pattern_width> # physical width of random pattern \n"
" -ph <pattern_height> # physical height of random pattern \n"
" -ct <camera_type> # camera type, 0 for pinhole and 1 for omnidirectional \n"
" -fe # whether show feature extraction\n"
" -nm # number of minimal matches of an image \n"
" input_data # text file with pattern file names and a list of photo names, the file is generated by imagelist_creator \n");
printf("\n %s", usage);
}
int main(int argc, char** argv)
{
float patternWidth, patternHeight;
int nCamera, nMiniMatches, cameraType;
const char* outputFilename = "multi-camera-results.xml";
const char* inputFilename = 0;
int showFeatureExtraction;
if (argc < 2)
{
help();
return 1;
}
for (int i = 1; i < argc; ++i)
{
const char* s = argv[i];
if (strcmp( s, "-nc") == 0)
{
if (sscanf( argv[++i], "%u", &nCamera) != 1 || nCamera <= 0)
{
return fprintf(stderr, "Invalid number of cameras \n"), -1;
}
}
else if ( strcmp( s, "-pw" ) == 0 )
{
if (sscanf( argv[++i], "%f", &patternWidth) != 1 || patternWidth <=0 )
{
return fprintf(stderr, "Invalid pattern width \n"), -1;
}
}
else if ( strcmp( s, "-ph" ) == 0 )
{
if (sscanf( argv[++i], "%f", &patternHeight) != 1 || patternHeight <=0 )
{
return fprintf(stderr, "Invalid pattern height \n"), -1;
}
}
else if ( strcmp( s, "-ct" ) == 0 )
{
if (sscanf( argv[++i], "%u", &cameraType) != 1 || (cameraType !=0 && cameraType !=1 && cameraType !=2) )
{
return fprintf(stderr, "Invalid camera type, 0 for pinhole and 1 for omnidirectional \n"), -1;
}
}
else if ( strcmp( s, "-fe" ) == 0 )
{
if (sscanf( argv[++i], "%u", &showFeatureExtraction) != 1 || (showFeatureExtraction !=1 && showFeatureExtraction !=0) )
{
return fprintf(stderr, "Not bool value, set to 0 or 1 \n"), -1;
}
}
else if ( strcmp( s, "-nm" ) == 0 )
{
if (sscanf( argv[++i], "%u", &nMiniMatches) != 1 || nMiniMatches <=0 )
{
return fprintf(stderr, "Invalid number of minimal matches \n"), -1;
}
}
else if( s[0] != '-')
{
inputFilename = s;
}
else
{
return fprintf( stderr, "Unknown option %s\n", s ), -1;
}
}
// do multi-camera calibration
if (inputFilename == 0)
return fprintf(stderr, "No input image list given \n"), -1;
multiCameraCalibration multiCalib(cameraType, nCamera, inputFilename, patternWidth, patternHeight, showFeatureExtraction, nMiniMatches);
multiCalib.loadImages();
multiCalib.initialize();
multiCalib.optimizeExtrinsics();
// the above three lines can be replaced by multiCalib.run();
multiCalib.writeParameters(outputFilename);
}
\ No newline at end of file
#include"opencv2/omnidir.hpp"
#include"opencv2/core/core.hpp"
#include"opencv2/imgproc/imgproc.hpp"
#include"opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>
#include <string>
#include <time.h>
using namespace cv;
using namespace std;
const char * usage =
"\n example command line for omnidirectional camera calibration.\n"
" omniCalibration -w 6 -h 7 -sw 80 -sh 80 image_list.xml \n"
" \n"
" the file image_list.xml is generated by imagelist_creator as\n"
"imagelist_creator image_list.xml *.*";
static void help()
{
printf("\n This is a sample for omnidirectional camera calibration.\n"
"Usage: omniCalibration\n"
" -w <board_width> # the number of inner corners per one of board dimension\n"
" -h <board_height> # the number of inner corners per another board dimension\n"
" [-sw <square_width>] # the width of square in some user-defined units (1 by default)\n"
" [-sh <square_height>] # the height of square in some user-defined units (1 by default)\n"
" [-o <out_camera_params>] # the output filename for intrinsic [and extrinsic] parameters\n"
" [-fs <fix_skew>] # fix skew\n"
" [-fp ] # fix the principal point at the center\n"
" input_data # input data - text file with a list of the images of the board, which is generated by imagelist_creator"
);
printf("\n %s", usage);
}
static void calcChessboardCorners(Size boardSize, double square_width, double square_height,
Mat& corners)
{
// corners has type of CV_64FC3
corners.release();
int n = boardSize.width * boardSize.height;
corners.create(n, 1, CV_64FC3);
Vec3d *ptr = corners.ptr<Vec3d>();
for (int i = 0; i < boardSize.height; ++i)
{
for (int j = 0; j < boardSize.width; ++j)
{
ptr[i*boardSize.width + j] = Vec3d(double(j * square_width), double(i * square_height), 0.0);
}
}
}
static bool detecChessboardCorners(const vector<string>& list, vector<string>& list_detected,
vector<Mat>& imagePoints, Size boardSize, Size& imageSize)
{
imagePoints.resize(0);
list_detected.resize(0);
int n_img = (int)list.size();
Mat img;
for(int i = 0; i < n_img; ++i)
{
Mat points;
img = imread(list[i], IMREAD_GRAYSCALE);
bool found = findChessboardCorners( img, boardSize, points);
if (found)
{
if (points.type() != CV_64FC2)
points.convertTo(points, CV_64FC2);
imagePoints.push_back(points);
list_detected.push_back(list[i]);
}
}
if (!img.empty())
imageSize = img.size();
if (imagePoints.size() < 3)
return false;
else
return true;
}
static bool readStringList( const string& filename, vector<string>& l )
{
l.resize(0);
FileStorage fs(filename, FileStorage::READ);
if( !fs.isOpened() )
return false;
FileNode n = fs.getFirstTopLevelNode();
if( n.type() != FileNode::SEQ )
return false;
FileNodeIterator it = n.begin(), it_end = n.end();
for( ; it != it_end; ++it )
l.push_back((string)*it);
return true;
}
static void saveCameraParams( const string & filename, Size imageSize, Size boardSize, double square_width,
double square_height, int flags, const Mat& cameraMatrix, const Mat& distCoeffs, const double xi,
const vector<Vec3d>& rvecs, const vector<Vec3d>& tvecs, vector<string> detec_list, const Mat& idx,
const double rms, const vector<Mat>& imagePoints)
{
FileStorage fs( filename, FileStorage::WRITE );
time_t tt;
time( &tt );
struct tm *t2 = localtime( &tt );
char buf[1024];
strftime( buf, sizeof(buf)-1, "%c", t2 );
fs << "calibration_time" << buf;
if ( !rvecs.empty())
fs << "nFrames" << (int)rvecs.size();
if ( flags != 0)
{
sprintf( buf, "flags: %s%s%s%s%s",
flags & omnidir::CALIB_USE_GUESS ? "+use_intrinsic_guess" : "",
flags & omnidir::CALIB_FIX_SKEW ? "+fix_skew" : "",
flags & omnidir::CALIB_FIX_K1 ? "+fix_k1" : "",
flags & omnidir::CALIB_FIX_K2 ? "+fix_k2" : "",
flags & omnidir::CALIB_FIX_P1 ? "+fix_p1" : "",
flags & omnidir::CALIB_FIX_P2 ? "+fix_p2" : "",
flags & omnidir::CALIB_FIX_XI ? "+fix_xi" : "",
flags & omnidir::CALIB_FIX_GAMMA ? "+fix_gamma" : "",
flags & omnidir::CALIB_FIX_CENTER ? "+fix_center" : "");
//cvWriteComment( *fs, buf, 0 );
}
fs << "flags" << flags;
fs << "camera_matrix" << cameraMatrix;
fs << "distortion_coefficients" << distCoeffs;
fs << "xi" << xi;
//cvWriteComment( *fs, "names of images that are acturally used in calibration", 0 );
fs << "used_imgs" << "[";
for (int i = 0; i < (int)idx.total(); ++i)
{
fs << detec_list[(int)idx.at<int>(i)];
}
fs << "]";
if ( !rvecs.empty() && !tvecs.empty() )
{
Mat rvec_tvec((int)rvecs.size(), 6, CV_64F);
for (int i = 0; i < (int)rvecs.size(); ++i)
{
Mat(rvecs[i]).reshape(1, 1).copyTo(rvec_tvec(Rect(0, i, 3, 1)));
Mat(tvecs[i]).reshape(1, 1).copyTo(rvec_tvec(Rect(3, i, 3, 1)));
}
//cvWriteComment( *fs, "a set of 6-tuples (rotation vector + translation vector) for each view", 0 );
fs << "extrinsic_parameters" << rvec_tvec;
}
if ( !imagePoints.empty() )
{
Mat imageMat((int)imagePoints.size(), (int)imagePoints[0].total(), CV_64FC2);
for (int i = 0; i < (int)imagePoints.size(); ++i)
{
Mat r = imageMat.row(i).reshape(2, imageMat.cols);
Mat imagei(imagePoints[i]);
imagei.copyTo(r);
}
fs << "image_points" << imageMat;
}
}
int main(int argc, char** argv)
{
Size boardSize, imageSize;
int flags = 0;
double square_width, square_height;
const char* outputFilename = "out_camera_params.xml";
const char* inputFilename = 0;
vector<Mat> objectPoints;
vector<Mat> imagePoints;
if(argc < 2)
{
help();
return 1;
}
for(int i = 1; i < argc; i++)
{
const char* s = argv[i];
if( strcmp( s, "-w") == 0)
{
if( sscanf( argv[++i], "%u", &boardSize.width ) != 1 || boardSize.width <= 0 )
return fprintf( stderr, "Invalid board width\n" ), -1;
}
else if( strcmp( s, "-h" ) == 0 )
{
if( sscanf( argv[++i], "%u", &boardSize.height ) != 1 || boardSize.height <= 0 )
return fprintf( stderr, "Invalid board height\n" ), -1;
}
else if( strcmp( s, "-sw" ) == 0 )
{
if( sscanf( argv[++i], "%lf", &square_width ) != 1 || square_width <= 0 )
return fprintf(stderr, "Invalid square width\n"), -1;
}
else if( strcmp( s, "-sh" ) == 0 )
{
if( sscanf( argv[++i], "%lf", &square_height) != 1 || square_height <= 0 )
return fprintf(stderr, "Invalid square height\n"), -1;
}
else if( strcmp( s, "-o" ) == 0 )
{
outputFilename = argv[++i];
}
else if( strcmp( s, "-fs" ) == 0 )
{
flags |= omnidir::CALIB_FIX_SKEW;
}
else if( strcmp( s, "-fp" ) == 0 )
{
flags |= omnidir::CALIB_FIX_CENTER;
}
else if( s[0] != '-')
{
inputFilename = s;
}
else
{
return fprintf( stderr, "Unknown option %s\n", s ), -1;
}
}
// get image name list
vector<string> image_list, detec_list;
if(inputFilename == 0 || !readStringList(inputFilename, image_list))
return fprintf( stderr, "Failed to read image list\n"), -1;
// find corners in images
// corner detection may fail in some images; the images that pass are stored in detec_list
if(!detecChessboardCorners(image_list, detec_list, imagePoints, boardSize, imageSize))
return fprintf(stderr, "Not enough corner detected images\n"), -1;
// calculate object coordinates
Mat object;
calcChessboardCorners(boardSize, square_width, square_height, object);
for(int i = 0; i < (int)detec_list.size(); ++i)
objectPoints.push_back(object);
// run calibration; some images are discarded during calibration because they fail
// at the initialization step. Indices of the retained images are stored in the idx variable.
Mat K, D, xi, idx;
vector<Vec3d> rvecs, tvecs;
double _xi, rms;
TermCriteria criteria(3, 100, 1e-4);
rms = omnidir::calibrate(objectPoints, imagePoints, imageSize, K, xi, D, rvecs, tvecs, flags, criteria, idx);
_xi = xi.at<double>(0);
saveCameraParams(outputFilename, imageSize, boardSize, square_width, square_height, flags, K, D, _xi,
rvecs, tvecs, detec_list, idx, rms, imagePoints);
}
#include "opencv2/omnidir.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>
#include <string>
#include <time.h>
using namespace cv;
using namespace std;
const char * usage =
"\n example command line for calibrate a pair of omnidirectional camera.\n"
" omniStereoCalibration -w 6 -h 7 -sw 80 -sh 80 image_list_1.xml image_list_2.xml\n"
" \n"
" the file image_list_1.xml and image_list_2.xml generated by imagelist_creator as\n"
"imagelist_creator image_list_1.xml *.*";
static void help()
{
printf("\n This is a sample for omnidirectional camera calibration.\n"
"Usage: omniCalibration\n"
" -w <board_width> # the number of inner corners per one of board dimension\n"
" -h <board_height> # the number of inner corners per another board dimension\n"
" [-sw <square_width>] # the width of square in some user-defined units (1 by default)\n"
" [-sh <square_height>] # the height of square in some user-defined units (1 by default)\n"
" [-o <out_camera_params>] # the output filename for intrinsic [and extrinsic] parameters\n"
" [-fs <fix_skew>] # fix skew\n"
" [-fp ] # fix the principal point at the center\n"
" input_data_1 # input data - text file with a list of the images of the first camera, which is generated by imagelist_creator"
" input_data_2 # input data - text file with a list of the images of the second camera, which is generated by imagelist_creator"
);
printf("\n %s", usage);
}
static void calcChessboardCorners(Size boardSize, double square_width, double square_height,
Mat& corners)
{
// corners has type of CV_64FC3
corners.release();
int n = boardSize.width * boardSize.height;
corners.create(n, 1, CV_64FC3);
Vec3d *ptr = corners.ptr<Vec3d>();
for (int i = 0; i < boardSize.height; ++i)
{
for (int j = 0; j < boardSize.width; ++j)
{
ptr[i*boardSize.width + j] = Vec3d(double(j * square_width), double(i * square_height), 0.0);
}
}
}
static bool detecChessboardCorners(const vector<string>& list1, vector<string>& list_detected_1,
const vector<string>& list2, vector<string>& list_detected_2,
vector<Mat>& image_points_1, vector<Mat>& image_points_2, Size boardSize, Size& imageSize)
{
image_points_1.resize(0);
image_points_2.resize(0);
list_detected_1.resize(0);
list_detected_2.resize(0);
int n_img = (int)list1.size();
Mat img_l, img_r;
for(int i = 0; i < n_img; ++i)
{
Mat points_1, points_2;
img_l = imread(list1[i], IMREAD_GRAYSCALE);
img_r = imread(list2[i], IMREAD_GRAYSCALE);
bool found_l = findChessboardCorners( img_l, boardSize, points_1);
bool found_r = findChessboardCorners( img_r, boardSize, points_2);
if (found_l && found_r)
{
if (points_1.type() != CV_64FC2)
points_1.convertTo(points_1, CV_64FC2);
if (points_2.type() != CV_64FC2)
points_2.convertTo(points_2, CV_64FC2);
image_points_1.push_back(points_1);
image_points_2.push_back(points_2);
list_detected_1.push_back(list1[i]);
list_detected_2.push_back(list2[i]);
}
}
if (!img_l.empty())
imageSize = img_l.size();
if (image_points_1.size() < 3)
return false;
else
return true;
}
static bool readStringList( const string& filename, vector<string>& l )
{
l.resize(0);
FileStorage fs(filename, FileStorage::READ);
if( !fs.isOpened() )
return false;
FileNode n = fs.getFirstTopLevelNode();
if( n.type() != FileNode::SEQ )
return false;
FileNodeIterator it = n.begin(), it_end = n.end();
for( ; it != it_end; ++it )
l.push_back((string)*it);
return true;
}
static void saveCameraParams( const string & filename, Size imageSize, Size boardSize, double square_width,
double square_height, int flags, const Mat& cameraMatrix1, const Mat& cameraMatrix2, const Mat& distCoeffs1,
const Mat& disCoeffs2, const double xi1, const double xi2, const Vec3d rvec, const Vec3d tvec,
const vector<Vec3d>& rvecs, const vector<Vec3d>& tvecs, vector<string> detec_list_1, vector<string> detec_list_2,
const Mat& idx, const double rms, const vector<Mat>& imagePoints1, const vector<Mat>& imagePoints2)
{
FileStorage fs( filename, FileStorage::WRITE );
time_t tt;
time( &tt );
struct tm *t2 = localtime( &tt );
char buf[1024];
strftime( buf, sizeof(buf)-1, "%c", t2 );
fs << "calibration_time" << buf;
if ( !rvecs.empty())
fs << "nFrames" << (int)rvecs.size();
if ( flags != 0)
{
sprintf( buf, "flags: %s%s%s%s%s",
flags & omnidir::CALIB_USE_GUESS ? "+use_intrinsic_guess" : "",
flags & omnidir::CALIB_FIX_SKEW ? "+fix_skew" : "",
flags & omnidir::CALIB_FIX_K1 ? "+fix_k1" : "",
flags & omnidir::CALIB_FIX_K2 ? "+fix_k2" : "",
flags & omnidir::CALIB_FIX_P1 ? "+fix_p1" : "",
flags & omnidir::CALIB_FIX_P2 ? "+fix_p2" : "",
flags & omnidir::CALIB_FIX_XI ? "+fix_xi" : "",
flags & omnidir::CALIB_FIX_GAMMA ? "+fix_gamma" : "",
flags & omnidir::CALIB_FIX_CENTER ? "+fix_center" : "");
//cvWriteComment( *fs, buf, 0 );
}
fs << "flags" << flags;
fs << "camera_matrix_1" << cameraMatrix1;
fs << "distortion_coefficients_1" << distCoeffs1;
fs << "xi_1" << xi1;
fs << "camera_matrix_2" << cameraMatrix2;
fs << "distortion_coefficients_2" << disCoeffs2;
fs << "xi_2" << xi2;
Mat om_t(1, 6, CV_64F);
Mat(rvec).reshape(1, 1).copyTo(om_t.colRange(0, 3));
Mat(tvec).reshape(1, 1).copyTo(om_t.colRange(3, 6));
//cvWriteComment( *fs, "6-tuples (rotation vector + translation vector) for each view", 0 );
fs << "extrinsic_parameters" << om_t;
if ( !rvecs.empty() && !tvecs.empty() )
{
Mat rvec_tvec((int)rvecs.size(), 6, CV_64F);
for (int i = 0; i < (int)rvecs.size(); ++i)
{
Mat(rvecs[i]).reshape(1, 1).copyTo(rvec_tvec(Rect(0, i, 3, 1)));
Mat(tvecs[i]).reshape(1, 1).copyTo(rvec_tvec(Rect(3, i, 3, 1)));
}
//cvWriteComment( *fs, "a set of 6-tuples (rotation vector + translation vector) for each view", 0 );
fs << "extrinsic_parameters_1" << rvec_tvec;
}
//cvWriteComment( *fs, "names of images that are acturally used in calibration", 0 );
fs << "used_imgs_1" << "[";
for (int i = 0; i < (int)idx.total(); ++i)
{
fs << detec_list_1[(int)idx.at<int>(i)];
}
fs << "]";
fs << "used_imgs_2" << "[";
for (int i = 0; i < (int)idx.total(); ++i)
{
fs << detec_list_2[(int)idx.at<int>(i)];
}
fs << "]";
if ( !imagePoints1.empty() )
{
Mat imageMat((int)imagePoints1.size(), (int)imagePoints1[0].total(), CV_64FC2);
for (int i = 0; i < (int)imagePoints1.size(); ++i)
{
Mat r = imageMat.row(i).reshape(2, imageMat.cols);
Mat imagei(imagePoints1[i]);
imagei.copyTo(r);
}
fs << "image_points_1" << imageMat;
}
if ( !imagePoints2.empty() )
{
Mat imageMat((int)imagePoints2.size(), (int)imagePoints2[0].total(), CV_64FC2);
for (int i = 0; i < (int)imagePoints2.size(); ++i)
{
Mat r = imageMat.row(i).reshape(2, imageMat.cols);
Mat imagei(imagePoints2[i]);
imagei.copyTo(r);
}
fs << "image_points_2" << imageMat;
}
}
int main(int argc, char** argv)
{
Size boardSize, imageSize;
int flags = 0;
double square_width, square_height;
const char* outputFilename = "out_camera_params_stereo.xml";
const char* inputFilename1 = 0;
const char* inputFilename2 = 0;
vector<Mat> objectPoints;
vector<Mat> imagePoints1;
vector<Mat> imagePoints2;
if(argc < 2)
{
help();
return 1;
}
bool first_flag = true;
for(int i = 1; i < argc; i++)
{
const char* s = argv[i];
if( strcmp( s, "-w") == 0)
{
if( sscanf( argv[++i], "%u", &boardSize.width ) != 1 || boardSize.width <= 0 )
return fprintf( stderr, "Invalid board width\n" ), -1;
}
else if( strcmp( s, "-h" ) == 0 )
{
if( sscanf( argv[++i], "%u", &boardSize.height ) != 1 || boardSize.height <= 0 )
return fprintf( stderr, "Invalid board height\n" ), -1;
}
else if( strcmp( s, "-sw" ) == 0 )
{
if( sscanf( argv[++i], "%lf", &square_width ) != 1 || square_width <= 0 )
return fprintf(stderr, "Invalid square width\n"), -1;
}
else if( strcmp( s, "-sh" ) == 0 )
{
if( sscanf( argv[++i], "%lf", &square_height) != 1 || square_height <= 0 )
return fprintf(stderr, "Invalid square height\n"), -1;
}
else if( strcmp( s, "-o" ) == 0 )
{
outputFilename = argv[++i];
}
else if( strcmp( s, "-fs" ) == 0 )
{
flags |= omnidir::CALIB_FIX_SKEW;
}
else if( strcmp( s, "-fp" ) == 0 )
{
flags |= omnidir::CALIB_FIX_CENTER;
}
else if( s[0] != '-' && fist_flag)
{
fist_flag = false;
inputFilename1 = s;
}
else if( s[0] != '-' && !fist_flag)
{
inputFilename2 = s;
}
else
{
return fprintf( stderr, "Unknown option %s\n", s ), -1;
}
}
// get image name list
vector<string> image_list1, detec_list_1, image_list2, detec_list_2;
if(inputFilename1 == 0 || inputFilename2 == 0 || !readStringList(inputFilename1, image_list1) || !readStringList(inputFilename2, image_list2))
return fprintf( stderr, "Failed to read image list\n"), -1;
// find corners in images
// corner detection may fail in some images; the image pairs that pass are stored in detec_list_1 and detec_list_2
if(!detecChessboardCorners(image_list1, detec_list_1, image_list2, detec_list_2,
imagePoints1, imagePoints2, boardSize, imageSize))
return fprintf(stderr, "Not enough corner detected images\n"), -1;
// calculate object coordinates
Mat object;
calcChessboardCorners(boardSize, square_width, square_height, object);
for(int i = 0; i < (int)detec_list_1.size(); ++i)
{
objectPoints.push_back(object);
}
// run calibration; some image pairs are discarded during calibration because they fail
// at the initialization step. Indices of the retained pairs are stored in the idx variable.
Mat K1, K2, D1, D2, xi1, xi2, idx;
vector<Vec3d> rvecs, tvecs;
Vec3d rvec, tvec;
double _xi1, _xi2, rms;
TermCriteria criteria(3, 100, 1e-4);
rms = omnidir::stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize, imageSize, K1, xi1, D1,
K2, xi2, D2, rvec, tvec, rvecs, tvecs, flags, criteria, idx);
_xi1 = xi1.at<double>(0);
_xi2 = xi2.at<double>(0);
saveCameraParams(outputFilename, imageSize, boardSize, square_width, square_height, flags,
K1, K2, D1, D2, _xi1, _xi2, rvec, tvec, rvecs, tvecs, detec_list_1, detec_list_2, idx, rms,
imagePoints1, imagePoints2);
}
\ No newline at end of file
#include "opencv2/randomPatten.hpp";
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/calib3d.hpp"
#include <vector>
#include <iostream>
#include <time.h>
using namespace std;
using namespace cv;
const char * usage =
"\n example command line for calibrate a camera by random pattern. \n"
" randomPatternCalibration -pw 600 -ph 850 -mm 20 image_list.xml \n"
"\n"
" the file image_list.xml is generated by imagelist_creator as\n"
"imagelist_creator image_list.xml *.*";
static void help()
{
printf("\n This is a sample for camera calibration by a random pattern.\n"
"Usage: randomPatternCalibration\n"
" -pw <pattern_width> # the physical width of random pattern\n"
" -ph <pattern_height> # the physical height of random pattern\n"
" -mm <minimal_match> # minimal number of matches\n"
" [-fp ] # fix the principal point at the center \n"
" input_data # input data - text file with a list of the images of the board, which is generated by imagelist_creator"
);
printf("\n %s", usage);
}
static bool readStringList( const string& filename, vector<string>& l )
{
l.resize(0);
FileStorage fs(filename, FileStorage::READ);
if( !fs.isOpened() )
return false;
FileNode n = fs.getFirstTopLevelNode();
if( n.type() != FileNode::SEQ )
return false;
FileNodeIterator it = n.begin(), it_end = n.end();
for( ; it != it_end; ++it )
l.push_back((string)*it);
return true;
}
static void saveCameraParams(const string& filename, Size imageSize, float patternWidth,
float patternHeight, int flags, const Mat& cameraMatrix, const Mat& distCoeffs,
const vector<Mat>& rvecs, const vector<Mat>& tvecs, double rms)
{
FileStorage fs (filename, FileStorage::WRITE );
time_t tt;
time( &tt );
struct tm *t2 = localtime( &tt );
char buf[1024];
strftime( buf, sizeof(buf)-1, "%c", t2 );
fs << "calibration_time" << buf;
if( !rvecs.empty())
fs << "nframes" << (int)rvecs.size();
fs << "image_width" << imageSize.width;
fs << "image_height" << imageSize.height;
fs << "pattern_width" << patternWidth;
fs << "pattern_height" << patternHeight;
fs << "flags" <<flags;
fs << "camera_matrix" << cameraMatrix;
fs << "distortion_coefficients" << distCoeffs;
fs << "rms" << rms;
if( !rvecs.empty() && !tvecs.empty() )
{
CV_Assert(rvecs[0].type() == tvecs[0].type());
Mat bigmat((int)rvecs.size(), 6, rvecs[0].type());
for( int i = 0; i < (int)rvecs.size(); i++ )
{
Mat r = bigmat(Range(i, i+1), Range(0,3));
Mat t = bigmat(Range(i, i+1), Range(3,6));
CV_Assert(rvecs[i].rows == 3 && rvecs[i].cols == 1);
CV_Assert(tvecs[i].rows == 3 && tvecs[i].cols == 1);
//*.t() is MatExpr (not Mat) so we can use assignment operator
r = rvecs[i].t();
t = tvecs[i].t();
}
// a set of 6-tuples (rotation vector + translation vector) for each view
fs << "extrinsic_parameters" << bigmat;
}
}
int main(int argc, char** argv)
{
const char* inputFilename = 0;
const char* outputFilename = "out_camera_params.xml";
vector<string> imglist;
vector<Mat> vecImg;
int flags = 0;
float patternWidth = 0, patternHeight = 0;
int nMiniMatches = 0;
if(argc < 2)
{
help();
return 1;
}
for (int i = 1; i < argc; ++i)
{
const char* s = argv[i];
if(strcmp(s, "-pw") == 0)
{
if(sscanf(argv[++i], "%f", &patternWidth) != 1 || patternWidth <= 0)
return fprintf( stderr, "Invalid pattern width\n"), -1;
}
else if(strcmp(s, "-ph") == 0)
{
if(sscanf(argv[++i], "%f", &patternHeight) != 1 || patternHeight <= 0)
return fprintf( stderr, "Invalid pattern height\n"), -1;
}
else if (strcmp(s, "-mm") == 0)
{
if (sscanf(argv[++i], "%d", &nMiniMatches) != 1 || nMiniMatches < 5)
return fprintf( stderr, "Invalid number of minimal matches or number is too small"), -1;
}
else if( strcmp( s, "-fp" ) == 0 )
{
flags |= CALIB_FIX_PRINCIPAL_POINT;
}
else if( s[0] != '-')
{
inputFilename = s;
}
else
{
return fprintf( stderr, "Unknown option %s\n", s ), -1;
}
}
if (!inputFilename || !readStringList(inputFilename, imglist))
return fprintf( stderr, "Failed to read image list\n"), -1;
// the first image is the pattern
Mat pattern = cv::imread(imglist[0], cv::IMREAD_GRAYSCALE);
for (int i = 1; i < (int)imglist.size(); ++i)
{
Mat img;
img = cv::imread(imglist[i], cv::IMREAD_GRAYSCALE);
vecImg.push_back(img);
}
randomPatternCornerFinder finder(patternWidth, patternHeight, nMiniMatches);
finder.loadPattern(pattern);
finder.computeObjectImagePoints(vecImg);
vector<Mat> objectPoints = finder.getObjectPoints();
vector<Mat> imagePoints = finder.getImagePoints();
Mat K;
Mat D;
vector<Mat> rvec, tvec;
double rms = calibrateCamera(objectPoints, imagePoints, vecImg[0].size(), K, D, rvec, tvec);
saveCameraParams("camera_params.xml", vecImg[0].size(), patternWidth, patternHeight, flags, K, D, rvec, tvec, rms);
}
#include "opencv2/randomPatten.hpp";
const char * usage =
"\n example command line for generating a random pattern. \n"
" randomPatternGenerator -iw 600 -ih 850 pattern.png\n"
"\n";
static void help()
{
printf("\n This is a sample for generating a random pattern that can be used for calibration.\n"
"Usage: randomPatternGenerator\n"
" -iw <image_width> # the width of pattern image\n"
" -ih <image_height> # the height of pattern image\n"
" filename # the filename for pattern image \n"
);
printf("\n %s", usage);
}
int main(int argc, char** argv)
{
const char* filename = 0;
Mat pattern;
int width = 0, height = 0;
if(argc < 2)
{
help();
return 1;
}
for (int i = 1; i < argc; ++i)
{
const char* s = argv[i];
if(strcmp(s, "-iw") == 0)
{
if(sscanf(argv[++i], "%d", &width) != 1 || width <= 0)
return fprintf( stderr, "Invalid pattern image width\n"), -1;
}
else if(strcmp(s, "-ih") == 0)
{
if(sscanf(argv[++i], "%d", &height) != 1 || height <= 0)
return fprintf( stderr, "Invalid pattern image height\n"), -1;
}
else if( s[0] != '-')
{
filename = s;
}
else
{
return fprintf( stderr, "Unknown option %s\n", s ), -1;
}
}
if (!filename)
return fprintf( stderr, "No filename given for the pattern image\n"), -1;
randomPatternGenerator generator(width, height);
generator.generatePattern();
pattern = generator.getPattern();
imwrite(filename, pattern);
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, Baisheng Lai (laibaisheng@gmail.com), Zhejiang University,
// all rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
/**
* This module was accepted as a GSoC 2015 project for OpenCV, authored by
* Baisheng Lai, mentored by Bo Li.
*
* The omnidirectional camera in this module is denoted by the catadioptric
* model. Please refer to Mei's paper for details of the camera model:
*
* C. Mei and P. Rives, "Single view point omnidirectional camera
* calibration from planar grids", in ICRA 2007.
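*
* In brief, the model projects a 3D point X onto the unit sphere,
* X_s = X / ||X||, then projects the sphere point to the normalized plane
* from the shifted center (0, 0, -xi):
*     x_u = X_s / (Z_s + xi),   y_u = Y_s / (Z_s + xi),
* applies radial-tangential distortion to (x_u, y_u), and maps the result to
* pixels with the generalized camera matrix (gamma1, gamma2, skew s, u0, v0).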
*
* The implementation of the calibration part is based on Li's calibration
* toolbox:
*
* B. Li, L. Heng, K. Köser and M. Pollefeys, "A Multiple-Camera System
* Calibration Toolbox Using A Feature Descriptor-Based Calibration
* Pattern", in IROS 2013.
*/
#include "precomp.hpp"
#include "opencv2/multiCameraCalibration.hpp"
#include <string>
#include <vector>
#include <queue>
#include <iostream>
using namespace cv;
multiCameraCalibration::multiCameraCalibration(int cameraType, int nCameras, const std::string& fileName,
float patternWidth, float patternHeight, int showExtraction, int nMiniMatches, int flags, TermCriteria criteria,
Ptr<FeatureDetector> detector, Ptr<DescriptorExtractor> descriptor,
Ptr<DescriptorMatcher> matcher)
{
_camType = cameraType;
_nCamera = nCameras;
_flags = flags;
_nMiniMatches = nMiniMatches;
_filename = fileName;
_patternWidth = patternWidth;
_patternHeight = patternHeight;
_criteria = criteria;
_showExtraction = showExtraction;
_objectPointsForEachCamera.resize(_nCamera);
_imagePointsForEachCamera.resize(_nCamera);
_cameraMatrix.resize(_nCamera);
_distortCoeffs.resize(_nCamera);
_xi.resize(_nCamera);
_omEachCamera.resize(_nCamera);
_tEachCamera.resize(_nCamera);
_detector = detector;
_descriptor = descriptor;
_matcher = matcher;
for (int i = 0; i < _nCamera; ++i)
{
_vertexList.push_back(vertex());
}
}
double multiCameraCalibration::run()
{
loadImages();
initialize();
double error = optimizeExtrinsics();
return error;
}
std::vector<std::string> multiCameraCalibration::readStringList()
{
std::vector<std::string> l;
l.resize(0);
FileStorage fs(_filename, FileStorage::READ);
if (!fs.isOpened())
return l;
FileNode n = fs.getFirstTopLevelNode();
FileNodeIterator it = n.begin(), it_end = n.end();
for( ; it != it_end; ++it )
l.push_back((std::string)*it);
return l;
}
void multiCameraCalibration::loadImages()
{
std::vector<std::string> file_list;
file_list = readStringList();
// e.g. AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.002f) as detector and
// descriptor, with DescriptorMatcher::create("BruteForce-L1") as matcher,
// are suitable choices for the random pattern
Ptr<FeatureDetector> detector = _detector;
Ptr<DescriptorExtractor> descriptor = _descriptor;
Ptr<DescriptorMatcher> matcher = _matcher;
randomPatternCornerFinder finder(_patternWidth, _patternHeight, 10, CV_32F, this->_showExtraction, detector, descriptor, matcher);
Mat pattern = cv::imread(file_list[0]);
finder.loadPattern(pattern);
std::vector<std::vector<std::string> > filesEachCameraFull(_nCamera);
std::vector<std::vector<int> > timestampFull(_nCamera);
for (int i = 1; i < (int)file_list.size(); ++i)
{
int cameraVertex, timestamp;
std::string filename = file_list[i].substr(0, file_list[i].find('.'));
size_t slashPosition1 = filename.rfind('/');
size_t slashPosition2 = filename.rfind('\\');
if (slashPosition1 != std::string::npos)
{
filename = filename.substr(slashPosition1 + 1);
}
else if (slashPosition2 != std::string::npos)
{
filename = filename.substr(slashPosition2 + 1);
}
sscanf(filename.c_str(), "%d-%d", &cameraVertex, &timestamp);
filesEachCameraFull[cameraVertex].push_back(file_list[i]);
timestampFull[cameraVertex].push_back(timestamp);
}
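// The calibration problem is modeled as a graph: vertices 0.._nCamera-1 are
// cameras, every distinct timestamp adds one photo (pattern pose) vertex, and
// an edge connects a camera vertex to a photo vertex whenever that camera
// observed the pattern at that timestamp; each edge stores the
// pattern-to-camera transform estimated by the single camera calibration.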
// calibrate each camera individually
for (int camera = 0; camera < _nCamera; ++camera)
{
Mat image, cameraMatrix, distortCoeffs;
// find image and object points
for (int imgIdx = 0; imgIdx < (int)filesEachCameraFull[camera].size(); ++imgIdx)
{
image = imread(filesEachCameraFull[camera][imgIdx], IMREAD_GRAYSCALE);
std::vector<Mat> imgObj = finder.computeObjectImagePointsForSingle(image);
_imagePointsForEachCamera[camera].push_back(imgObj[0]);
_objectPointsForEachCamera[camera].push_back(imgObj[1]);
}
// calibrate
Mat idx;
double rms;
if (_camType == PINHOLE)
{
rms = cv::calibrateCamera(_objectPointsForEachCamera[camera], _imagePointsForEachCamera[camera],
image.size(), _cameraMatrix[camera], _distortCoeffs[camera], _omEachCamera[camera],
_tEachCamera[camera],_flags);
idx = Mat(1, (int)_omEachCamera[camera].size(), CV_32S);
for (int i = 0; i < (int)idx.total(); ++i)
{
idx.at<int>(i) = i;
}
}
//else if (_camType == FISHEYE)
//{
// rms = cv::fisheye::calibrate(_objectPointsForEachCamera[camera], _imagePointsForEachCamera[camera],
// image.size(), _cameraMatrix[camera], _distortCoeffs[camera], _omEachCamera[camera],
// _tEachCamera[camera], _flags);
// idx = Mat(1, (int)_omEachCamera[camera].size(), CV_32S);
// for (int i = 0; i < (int)idx.total(); ++i)
// {
// idx.at<int>(i) = i;
// }
//}
else if (_camType == OMNIDIRECTIONAL)
{
rms = cv::omnidir::calibrate(_objectPointsForEachCamera[camera], _imagePointsForEachCamera[camera],
image.size(), _cameraMatrix[camera], _xi[camera], _distortCoeffs[camera], _omEachCamera[camera],
_tEachCamera[camera], _flags, TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 300, 1e-7),
idx);
}
_cameraMatrix[camera].convertTo(_cameraMatrix[camera], CV_32F);
_distortCoeffs[camera].convertTo(_distortCoeffs[camera], CV_32F);
_xi[camera].convertTo(_xi[camera], CV_32F);
//else
//{
// CV_Error_(CV_StsOutOfRange, "Unknown camera type, use PINHOLE or OMNIDIRECTIONAL");
//}
for (int i = 0; i < (int)_omEachCamera[camera].size(); ++i)
{
if ((int)_objectPointsForEachCamera[camera][idx.at<int>(i)].total() > _nMiniMatches)
{
int cameraVertex, timestamp, photoVertex;
cameraVertex = camera;
timestamp = timestampFull[camera][idx.at<int>(i)];
photoVertex = this->getPhotoVertex(timestamp);
if (_omEachCamera[camera][i].type()!=CV_32F)
{
_omEachCamera[camera][i].convertTo(_omEachCamera[camera][i], CV_32F);
}
if (_tEachCamera[camera][i].type()!=CV_32F)
{
_tEachCamera[camera][i].convertTo(_tEachCamera[camera][i], CV_32F);
}
Mat transform = Mat::eye(4, 4, CV_32F);
Mat R, T;
Rodrigues(_omEachCamera[camera][i], R);
T = (_tEachCamera[camera][i]).reshape(1, 3);
R.copyTo(transform.rowRange(0, 3).colRange(0, 3));
T.copyTo(transform.rowRange(0, 3).col(3));
this->_edgeList.push_back(edge(cameraVertex, photoVertex, idx.at<int>(i), transform));
}
}
}
}
int multiCameraCalibration::getPhotoVertex(int timestamp)
{
int photoVertex = INVALID;
// find in existing photo vertex
for (int i = 0; i < (int)_vertexList.size(); ++i)
{
if (_vertexList[i].timestamp == timestamp)
{
photoVertex = i;
break;
}
}
// add a new photo vertex
if (photoVertex == INVALID)
{
_vertexList.push_back(vertex(Mat::eye(4, 4, CV_32F), timestamp));
photoVertex = (int)_vertexList.size() - 1;
}
return photoVertex;
}
void multiCameraCalibration::initialize()
{
int nVertices = (int)_vertexList.size();
// build the adjacency matrix: G(i, j) holds (edge index + 1) if an edge
// connects vertex i and vertex j, and 0 otherwise
Mat G = Mat::zeros(nVertices, nVertices, CV_32S);
for (int edgeIdx = 0; edgeIdx < (int)this->_edgeList.size(); ++edgeIdx)
{
G.at<int>(this->_edgeList[edgeIdx].cameraVertex, this->_edgeList[edgeIdx].photoVertex) = edgeIdx + 1;
}
G = G + G.t();
// traverse the graph
std::vector<int> pre, order;
graphTraverse(G, 0, order, pre);
for (int i = 0; i < _nCamera; ++i)
{
if (pre[i] == INVALID)
{
std::cout << "camera" << i << "is not connected" << std::endl;
}
}
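// Propagate poses along the BFS tree: every vertex pose is chained from its
// predecessor through the connecting edge transform. Since edges store
// pattern-to-camera transforms, camera vertices hold reference-to-camera
// poses while photo vertices hold pattern-to-reference poses, which is why
// the two branches below compose in opposite order.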
for (int i = 1; i < (int)order.size(); ++i)
{
int vertexIdx = order[i];
Mat prePose = this->_vertexList[pre[vertexIdx]].pose;
int edgeIdx = G.at<int>(vertexIdx, pre[vertexIdx]) - 1;
Mat transform = this->_edgeList[edgeIdx].transform;
if (vertexIdx < _nCamera)
{
this->_vertexList[vertexIdx].pose = transform * prePose.inv();
this->_vertexList[vertexIdx].pose.convertTo(this->_vertexList[vertexIdx].pose, CV_32F);
}
else
{
this->_vertexList[vertexIdx].pose = prePose.inv() * transform;
this->_vertexList[vertexIdx].pose.convertTo(this->_vertexList[vertexIdx].pose, CV_32F);
}
}
}
double multiCameraCalibration::optimizeExtrinsics()
{
// get om, t vector
int nVertex = (int)this->_vertexList.size();
Mat extrinParam(1, (nVertex-1)*6, CV_32F);
int offset = 0;
// the pose of the vertex[0] is eye
for (int i = 1; i < nVertex; ++i)
{
Mat rvec, tvec;
cv::Rodrigues(this->_vertexList[i].pose.rowRange(0,3).colRange(0, 3), rvec);
this->_vertexList[i].pose.rowRange(0,3).col(3).copyTo(tvec);
rvec.reshape(1, 1).copyTo(extrinParam.colRange(offset, offset + 3));
tvec.reshape(1, 1).copyTo(extrinParam.colRange(offset+3, offset +6));
offset += 6;
}
double error_pre = computeProjectError(extrinParam);
// optimization
const double alpha_smooth = 0.01;
//const double thresh_cond = 1e6;
double change = 1;
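// Damped Gauss-Newton: the step JTJ_inv * JTError is scaled by
// alpha_smooth2 = 1 - (1 - alpha_smooth)^(iter+1), so early iterations take
// strongly damped steps and the step fraction approaches 1 over time.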
for(int iter = 0; ; ++iter)
{
if ((_criteria.type == TermCriteria::COUNT && iter >= _criteria.maxCount) ||
(_criteria.type == TermCriteria::EPS && change <= _criteria.epsilon) ||
(_criteria.type == (TermCriteria::COUNT + TermCriteria::EPS) && (change <= _criteria.epsilon || iter >= _criteria.maxCount)))
break;
double alpha_smooth2 = 1 - std::pow(1 - alpha_smooth, (double)iter + 1.0);
Mat JTJ_inv, JTError;
this->computeJacobianExtrinsic(extrinParam, JTJ_inv, JTError);
Mat G = alpha_smooth2*JTJ_inv * JTError;
if (G.depth() == CV_64F)
{
G.convertTo(G, CV_32F);
}
extrinParam = extrinParam + G.reshape(1, 1);
change = norm(G) / norm(extrinParam);
}
double error = computeProjectError(extrinParam);
std::vector<Vec3f> rvecVertex, tvecVertex;
vector2parameters(extrinParam, rvecVertex, tvecVertex);
for (int verIdx = 1; verIdx < (int)_vertexList.size(); ++verIdx)
{
Mat R;
Mat pose = Mat::eye(4, 4, CV_32F);
Rodrigues(rvecVertex[verIdx-1], R);
R.copyTo(pose.colRange(0, 3).rowRange(0, 3));
Mat(tvecVertex[verIdx-1]).reshape(1, 3).copyTo(pose.rowRange(0, 3).col(3));
_vertexList[verIdx].pose = pose;
}
return error;
}
void multiCameraCalibration::computeJacobianExtrinsic(const Mat& extrinsicParams, Mat& JTJ_inv, Mat& JTE)
{
int nParam = (int)extrinsicParams.total();
int nEdge = (int)_edgeList.size();
std::vector<int> pointsLocation(nEdge+1, 0);
for (int edgeIdx = 0; edgeIdx < nEdge; ++edgeIdx)
{
int nPoints = (int)_objectPointsForEachCamera[_edgeList[edgeIdx].cameraVertex][_edgeList[edgeIdx].photoIndex].total();
pointsLocation[edgeIdx+1] = pointsLocation[edgeIdx] + nPoints*2;
}
JTJ_inv = Mat(nParam, nParam, CV_64F);
JTE = Mat(nParam, 1, CV_64F);
Mat J = Mat::zeros(pointsLocation[nEdge], nParam, CV_64F);
Mat E = Mat::zeros(pointsLocation[nEdge], 1, CV_64F);
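// J stacks 2 rows per observed point for every edge; each vertex except
// vertex 0 (the reference camera, whose pose is fixed to the identity) owns
// one 6-column block holding its rotation and translation parameters.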
for (int edgeIdx = 0; edgeIdx < nEdge; ++edgeIdx)
{
int photoVertex = _edgeList[edgeIdx].photoVertex;
int photoIndex = _edgeList[edgeIdx].photoIndex;
int cameraVertex = _edgeList[edgeIdx].cameraVertex;
Mat objectPoints = _objectPointsForEachCamera[cameraVertex][photoIndex];
Mat imagePoints = _imagePointsForEachCamera[cameraVertex][photoIndex];
Mat rvecTran, tvecTran;
Mat R = _edgeList[edgeIdx].transform.rowRange(0, 3).colRange(0, 3);
tvecTran = _edgeList[edgeIdx].transform.rowRange(0, 3).col(3);
cv::Rodrigues(R, rvecTran);
Mat rvecPhoto = extrinsicParams.colRange((photoVertex-1)*6, (photoVertex-1)*6 + 3);
Mat tvecPhoto = extrinsicParams.colRange((photoVertex-1)*6 + 3, (photoVertex-1)*6 + 6);
Mat rvecCamera, tvecCamera;
if (cameraVertex > 0)
{
rvecCamera = extrinsicParams.colRange((cameraVertex-1)*6, (cameraVertex-1)*6 + 3);
tvecCamera = extrinsicParams.colRange((cameraVertex-1)*6 + 3, (cameraVertex-1)*6 + 6);
}
else
{
rvecCamera = Mat::zeros(3, 1, CV_32F);
tvecCamera = Mat::zeros(3, 1, CV_32F);
}
Mat jacobianPhoto, jacobianCamera, error;
computePhotoCameraJacobian(rvecPhoto, tvecPhoto, rvecCamera, tvecCamera, rvecTran, tvecTran,
objectPoints, imagePoints, this->_cameraMatrix[cameraVertex], this->_distortCoeffs[cameraVertex],
this->_xi[cameraVertex], jacobianPhoto, jacobianCamera, error);
if (cameraVertex > 0)
{
jacobianCamera.copyTo(J.rowRange(pointsLocation[edgeIdx], pointsLocation[edgeIdx+1]).
colRange((cameraVertex-1)*6, cameraVertex*6));
}
jacobianPhoto.copyTo(J.rowRange(pointsLocation[edgeIdx], pointsLocation[edgeIdx+1]).
colRange((photoVertex-1)*6, photoVertex*6));
error.copyTo(E.rowRange(pointsLocation[edgeIdx], pointsLocation[edgeIdx+1]));
}
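// add a small constant to the normal matrix before inversion so that it
// stays invertible when J^T * J is (close to) singular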
JTJ_inv = (J.t() * J + 1e-10).inv();
JTE = J.t() * E;
}
void multiCameraCalibration::computePhotoCameraJacobian(const Mat& rvecPhoto, const Mat& tvecPhoto, const Mat& rvecCamera,
const Mat& tvecCamera, Mat& rvecTran, Mat& tvecTran, const Mat& objectPoints, const Mat& imagePoints, const Mat& K,
const Mat& distort, const Mat& xi, Mat& jacobianPhoto, Mat& jacobianCamera, Mat& E)
{
Mat drvecTran_drvecPhoto, drvecTran_dtvecPhoto,
drvecTran_drvecCamera, drvecTran_dtvecCamera,
dtvecTran_drvecPhoto, dtvecTran_dtvecPhoto,
dtvecTran_drvecCamera, dtvecTran_dtvecCamera;
multiCameraCalibration::compose_motion(rvecPhoto, tvecPhoto, rvecCamera, tvecCamera, rvecTran, tvecTran,
drvecTran_drvecPhoto, drvecTran_dtvecPhoto, drvecTran_drvecCamera, drvecTran_dtvecCamera,
dtvecTran_drvecPhoto, dtvecTran_dtvecPhoto, dtvecTran_drvecCamera, dtvecTran_dtvecCamera);
if (rvecTran.depth() == CV_64F)
{
rvecTran.convertTo(rvecTran, CV_32F);
}
if (tvecTran.depth() == CV_64F)
{
tvecTran.convertTo(tvecTran, CV_32F);
}
float _xi = 0.0f;
if (_camType == OMNIDIRECTIONAL)
{
_xi= xi.at<float>(0);
}
Mat imagePoints2, jacobian, dx_drvecCamera, dx_dtvecCamera, dx_drvecPhoto, dx_dtvecPhoto;
if (_camType == PINHOLE)
{
cv::projectPoints(objectPoints, rvecTran, tvecTran, K, distort, imagePoints2, jacobian);
}
//else if (_camType == FISHEYE)
//{
// cv::fisheye::projectPoints(objectPoints, imagePoints2, rvecTran, tvecTran, K, distort, 0, jacobian);
//}
else if (_camType == OMNIDIRECTIONAL)
{
cv::omnidir::projectPoints(objectPoints, imagePoints2, rvecTran, tvecTran, K, _xi, distort, jacobian);
}
if (objectPoints.depth() == CV_32F)
{
Mat(imagePoints - imagePoints2).convertTo(E, CV_64FC2);
}
else
{
E = imagePoints - imagePoints2;
}
E = E.reshape(1, (int)imagePoints.total()*2);
dx_drvecCamera = jacobian.colRange(0, 3) * drvecTran_drvecCamera + jacobian.colRange(3, 6) * dtvecTran_drvecCamera;
dx_dtvecCamera = jacobian.colRange(0, 3) * drvecTran_dtvecCamera + jacobian.colRange(3, 6) * dtvecTran_dtvecCamera;
dx_drvecPhoto = jacobian.colRange(0, 3) * drvecTran_drvecPhoto + jacobian.colRange(3, 6) * dtvecTran_drvecPhoto;
dx_dtvecPhoto = jacobian.colRange(0, 3) * drvecTran_dtvecPhoto + jacobian.colRange(3, 6) * dtvecTran_dtvecPhoto;
jacobianCamera = cv::Mat(dx_drvecCamera.rows, 6, CV_64F);
jacobianPhoto = cv::Mat(dx_drvecPhoto.rows, 6, CV_64F);
dx_drvecCamera.copyTo(jacobianCamera.colRange(0, 3));
dx_dtvecCamera.copyTo(jacobianCamera.colRange(3, 6));
dx_drvecPhoto.copyTo(jacobianPhoto.colRange(0, 3));
dx_dtvecPhoto.copyTo(jacobianPhoto.colRange(3, 6));
}
void multiCameraCalibration::graphTraverse(const Mat& G, int begin, std::vector<int>& order, std::vector<int>& pre)
{
CV_Assert(!G.empty() && G.rows == G.cols);
int nVertex = G.rows;
order.resize(0);
pre.resize(nVertex, INVALID);
pre[begin] = -1;
std::vector<bool> visited(nVertex, false);
std::queue<int> q;
visited[begin] = true;
q.push(begin);
order.push_back(begin);
while(!q.empty())
{
int v = q.front();
q.pop();
Mat idx;
// the neighbors of v are the non-zero entries of its adjacency row
findRowNonZero(G.row(v), idx);
for(int i = 0; i < (int)idx.total(); ++i)
{
int neighbor = idx.at<int>(i);
if (!visited[neighbor])
{
visited[neighbor] = true;
q.push(neighbor);
order.push_back(neighbor);
pre[neighbor] = v;
}
}
}
}
void multiCameraCalibration::findRowNonZero(const Mat& row, Mat& idx)
{
CV_Assert(!row.empty() && row.rows == 1 && row.channels() == 1);
Mat _row;
std::vector<int> _idx;
row.convertTo(_row, CV_32F);
for (int i = 0; i < (int)row.total(); ++i)
{
if (_row.at<float>(i) != 0)
{
_idx.push_back(i);
}
}
idx.release();
idx.create(1, (int)_idx.size(), CV_32S);
for (int i = 0; i < (int)_idx.size(); ++i)
{
idx.at<int>(i) = _idx[i];
}
}
double multiCameraCalibration::computeProjectError(Mat& parameters)
{
int nVertex = (int)_vertexList.size();
CV_Assert(parameters.total() == (nVertex-1) * 6 && parameters.depth() == CV_32F);
int nEdge = (int)_edgeList.size();
// recompute the transform between photos and cameras
std::vector<edge> edgeList = this->_edgeList;
std::vector<Vec3f> rvecVertex, tvecVertex;
vector2parameters(parameters, rvecVertex, tvecVertex);
float totalError = 0;
int totalNPoints = 0;
for (int edgeIdx = 0; edgeIdx < nEdge; ++edgeIdx)
{
Mat RPhoto, RCamera, TPhoto, TCamera, transform;
int cameraVertex = edgeList[edgeIdx].cameraVertex;
int photoVertex = edgeList[edgeIdx].photoVertex;
int photoIndex = edgeList[edgeIdx].photoIndex;
TPhoto = Mat(tvecVertex[photoVertex - 1]).reshape(1, 3);
transform = Mat::eye(4, 4, CV_32F);
cv::Rodrigues(rvecVertex[photoVertex-1], RPhoto);
if (cameraVertex == 0)
{
RPhoto.copyTo(transform.rowRange(0, 3).colRange(0, 3));
TPhoto.copyTo(transform.rowRange(0, 3).col(3));
}
else
{
TCamera = Mat(tvecVertex[cameraVertex - 1]).reshape(1, 3);
cv::Rodrigues(rvecVertex[cameraVertex - 1], RCamera);
Mat(RCamera*RPhoto).copyTo(transform.rowRange(0, 3).colRange(0, 3));
Mat(RCamera * TPhoto + TCamera).copyTo(transform.rowRange(0, 3).col(3));
}
transform.copyTo(edgeList[edgeIdx].transform);
Mat rvec, tvec;
cv::Rodrigues(transform.rowRange(0, 3).colRange(0, 3), rvec);
transform.rowRange(0, 3).col(3).copyTo(tvec);
Mat objectPoints, imagePoints, proImagePoints;
objectPoints = this->_objectPointsForEachCamera[cameraVertex][photoIndex];
imagePoints = this->_imagePointsForEachCamera[cameraVertex][photoIndex];
if (this->_camType == PINHOLE)
{
cv::projectPoints(objectPoints, rvec, tvec, _cameraMatrix[cameraVertex], _distortCoeffs[cameraVertex],
proImagePoints);
}
//else if (this->_camType == FISHEYE)
//{
// cv::fisheye::projectPoints(objectPoints, proImagePoints, rvec, tvec, _cameraMatrix[cameraVertex],
// _distortCoeffs[cameraVertex]);
//}
else if (this->_camType == OMNIDIRECTIONAL)
{
float xi = _xi[cameraVertex].at<float>(0);
cv::omnidir::projectPoints(objectPoints, proImagePoints, rvec, tvec, _cameraMatrix[cameraVertex],
xi, _distortCoeffs[cameraVertex]);
}
Mat error = imagePoints - proImagePoints;
Vec2f* ptr_err = error.ptr<Vec2f>();
for (int i = 0; i < (int)error.total(); ++i)
{
totalError += sqrt(ptr_err[i][0]*ptr_err[i][0] + ptr_err[i][1]*ptr_err[i][1]);
}
totalNPoints += (int)error.total();
}
double meanReProjError = totalError / totalNPoints;
_error = meanReProjError;
return meanReProjError;
}
void multiCameraCalibration::compose_motion(InputArray _om1, InputArray _T1, InputArray _om2, InputArray _T2, Mat& om3, Mat& T3, Mat& dom3dom1,
Mat& dom3dT1, Mat& dom3dom2, Mat& dom3dT2, Mat& dT3dom1, Mat& dT3dT1, Mat& dT3dom2, Mat& dT3dT2)
{
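// Composes two rigid motions: applying (om1, T1) first and (om2, T2) second
// gives R3 = R2 * R1 and T3 = R2 * T1 + T2. The remaining outputs are the
// partial derivatives of (om3, T3) w.r.t. all inputs, obtained by chaining
// the Rodrigues Jacobians with matrix-product derivatives (matMulDeriv).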
Mat om1, om2, T1, T2;
_om1.getMat().convertTo(om1, CV_64F);
_om2.getMat().convertTo(om2, CV_64F);
_T1.getMat().reshape(1, 3).convertTo(T1, CV_64F);
_T2.getMat().reshape(1, 3).convertTo(T2, CV_64F);
//% Rotations:
Mat R1, R2, R3, dR1dom1(9, 3, CV_64FC1), dR2dom2;
cv::Rodrigues(om1, R1, dR1dom1);
cv::Rodrigues(om2, R2, dR2dom2);
// transpose to match the MATLAB-style Jacobian layout previously produced
// by the JRodriguesMatlab helper
dR1dom1 = dR1dom1.t();
dR2dom2 = dR2dom2.t();
R3 = R2 * R1;
Mat dR3dR2, dR3dR1;
matMulDeriv(R2, R1, dR3dR2, dR3dR1); // derivative of R2*R1 w.r.t. each factor
Mat dom3dR3;
cv::Rodrigues(R3, om3, dom3dR3);
dom3dR3 = dom3dR3.t();
dom3dom1 = dom3dR3 * dR3dR1 * dR1dom1;
dom3dom2 = dom3dR3 * dR3dR2 * dR2dom2;
dom3dT1 = Mat::zeros(3, 3, CV_64FC1);
dom3dT2 = Mat::zeros(3, 3, CV_64FC1);
//% Translations:
Mat T3t = R2 * T1;
Mat dT3tdR2, dT3tdT1;
matMulDeriv(R2, T1, dT3tdR2, dT3tdT1);
Mat dT3tdom2 = dT3tdR2 * dR2dom2;
T3 = T3t + T2;
dT3dT1 = dT3tdT1;
dT3dT2 = Mat::eye(3, 3, CV_64FC1);
dT3dom2 = dT3tdom2;
dT3dom1 = Mat::zeros(3, 3, CV_64FC1);
}
void multiCameraCalibration::JRodriguesMatlab(const Mat& src, Mat& dst)
{
Mat tmp(src.cols, src.rows, src.type());
if (src.rows == 9)
{
Mat(src.row(0).t()).copyTo(tmp.col(0));
Mat(src.row(1).t()).copyTo(tmp.col(3));
Mat(src.row(2).t()).copyTo(tmp.col(6));
Mat(src.row(3).t()).copyTo(tmp.col(1));
Mat(src.row(4).t()).copyTo(tmp.col(4));
Mat(src.row(5).t()).copyTo(tmp.col(7));
Mat(src.row(6).t()).copyTo(tmp.col(2));
Mat(src.row(7).t()).copyTo(tmp.col(5));
Mat(src.row(8).t()).copyTo(tmp.col(8));
}
else
{
Mat(src.col(0).t()).copyTo(tmp.row(0));
Mat(src.col(1).t()).copyTo(tmp.row(3));
Mat(src.col(2).t()).copyTo(tmp.row(6));
Mat(src.col(3).t()).copyTo(tmp.row(1));
Mat(src.col(4).t()).copyTo(tmp.row(4));
Mat(src.col(5).t()).copyTo(tmp.row(7));
Mat(src.col(6).t()).copyTo(tmp.row(2));
Mat(src.col(7).t()).copyTo(tmp.row(5));
Mat(src.col(8).t()).copyTo(tmp.row(8));
}
dst = tmp.clone();
}
void multiCameraCalibration::dAB(InputArray A, InputArray B, OutputArray dABdA, OutputArray dABdB)
{
CV_Assert(A.getMat().cols == B.getMat().rows);
CV_Assert(A.type() == CV_64FC1 && B.type() == CV_64FC1);
int p = A.getMat().rows;
int n = A.getMat().cols;
int q = B.getMat().cols;
dABdA.create(p * q, p * n, CV_64FC1);
dABdB.create(p * q, q * n, CV_64FC1);
dABdA.getMat() = Mat::zeros(p * q, p * n, CV_64FC1);
dABdB.getMat() = Mat::zeros(p * q, q * n, CV_64FC1);
for (int i = 0; i < q; ++i)
{
for (int j = 0; j < p; ++j)
{
int ij = j + i * p;
for (int k = 0; k < n; ++k)
{
int kj = j + k * p;
dABdA.getMat().at<double>(ij, kj) = B.getMat().at<double>(k, i);
}
}
}
for (int i = 0; i < q; ++i)
{
A.getMat().copyTo(dABdB.getMat().rowRange(i * p, i * p + p).colRange(i * n, i * n + n));
}
}
void multiCameraCalibration::vector2parameters(const Mat& parameters, std::vector<Vec3f>& rvecVertex, std::vector<Vec3f>& tvecVertexs)
{
int nVertex = (int)_vertexList.size();
CV_Assert(parameters.channels() == 1 && parameters.total() == 6*(nVertex - 1));
CV_Assert(parameters.depth() == CV_32F);
Mat param = parameters.reshape(1, 1);
rvecVertex.resize(0);
tvecVertexs.resize(0);
for (int i = 0; i < nVertex - 1; ++i)
{
rvecVertex.push_back(Vec3f(param.colRange(i*6, i*6 + 3)));
tvecVertexs.push_back(Vec3f(param.colRange(i*6 + 3, i*6 + 6)));
}
}
void multiCameraCalibration::parameters2vector(const std::vector<Vec3f>& rvecVertex, const std::vector<Vec3f>& tvecVertex, Mat& parameters)
{
CV_Assert(rvecVertex.size() == tvecVertex.size());
int nVertex = (int)rvecVertex.size();
// the pose of the first camera is known
parameters.create(1, 6*(nVertex-1), CV_32F);
for (int i = 0; i < nVertex-1; ++i)
{
Mat(rvecVertex[i]).reshape(1, 1).copyTo(parameters.colRange(i*6, i*6 + 3));
Mat(tvecVertex[i]).reshape(1, 1).copyTo(parameters.colRange(i*6 + 3, i*6 + 6));
}
}
void multiCameraCalibration::writeParameters(const std::string& filename)
{
FileStorage fs( filename, FileStorage::WRITE );
fs << "nCameras" << _nCamera;
for (int camIdx = 0; camIdx < _nCamera; ++camIdx)
{
char num[10];
sprintf(num, "%d", camIdx);
std::string cameraMatrix = "camera_matrix_" + std::string(num);
std::string cameraPose = "camera_pose_" + std::string(num);
std::string cameraDistortion = "camera_distortion_" + std::string(num);
std::string cameraXi = "xi_" + std::string(num);
fs << cameraMatrix << _cameraMatrix[camIdx];
fs << cameraDistortion << _distortCoeffs[camIdx];
if (_camType == OMNIDIRECTIONAL)
{
fs << cameraXi << _xi[camIdx].at<float>(0);
}
fs << cameraPose << _vertexList[camIdx].pose;
}
fs << "meanReprojectError" <<_error;
for (int photoIdx = _nCamera; photoIdx < (int)_vertexList.size(); ++photoIdx)
{
char timestamp[100];
sprintf(timestamp, "%d", _vertexList[photoIdx].timestamp);
std::string photoTimestamp = "pose_timestamp_" + std::string(timestamp);
fs << photoTimestamp << _vertexList[photoIdx].pose;
}
}
......@@ -46,22 +46,20 @@
* The omnidirectional camera in this module is denoted by the catadioptric
* model. Please refer to Mei's paper for details of the camera model:
*
* C. Mei and P. Rives, "Single view point omnidirectional camera
* calibration from planar grids", in ICRA 2007.
*
* The implementation of the calibration part is based on Li's calibration
* toolbox:
*
* B. Li, L. Heng, K. Köser and M. Pollefeys, "A Multiple-Camera System
* Calibration Toolbox Using A Feature Descriptor-Based Calibration
* Pattern", in IROS 2013.
*/
#ifndef __OPENCV_OMNIDIR_CPP__
#define __OPENCV_OMNIDIR_CPP__
#ifdef __cplusplus
#include "precomp.hpp"
#include "opencv2/omnidir.hpp"
#include <fstream>
#include <iostream>
namespace cv { namespace
{
struct JacobianRow
......@@ -81,26 +79,43 @@ void cv::omnidir::projectPoints(InputArray objectPoints, OutputArray imagePoints
InputArray rvec, InputArray tvec, InputArray K, double xi, InputArray D, OutputArray jacobian)
{
// each row is an image point
CV_Assert(objectPoints.type() == CV_64FC3 || objectPoints.type() == CV_32FC3);
CV_Assert((rvec.depth() == CV_64F || rvec.depth() == CV_32F) && rvec.total() == 3);
CV_Assert((tvec.depth() == CV_64F || tvec.depth() == CV_32F) && tvec.total() == 3);
CV_Assert((K.type() == CV_64F || K.type() == CV_32F) && K.size() == Size(3,3));
CV_Assert((D.type() == CV_64F || D.type() == CV_32F) && D.total() == 4);
imagePoints.create(objectPoints.size(), CV_MAKETYPE(objectPoints.depth(), 2));
int n = (int)objectPoints.total();
Vec3d om = rvec.depth() == CV_32F ? (Vec3d)*rvec.getMat().ptr<Vec3f>() : *rvec.getMat().ptr<Vec3d>();
Vec3d T = tvec.depth() == CV_32F ? (Vec3d)*tvec.getMat().ptr<Vec3f>() : *tvec.getMat().ptr<Vec3d>();
Vec2d f,c;
double s;
if (K.depth() == CV_32F)
{
Matx33f Kc = K.getMat();
f = Vec2f(Kc(0,0), Kc(1,1));
c = Vec2f(Kc(0,2),Kc(1,2));
s = (double)Kc(0,1);
}
else
{
Matx33d Kc = K.getMat();
f = Vec2d(Kc(0,0), Kc(1,1));
c = Vec2d(Kc(0,2),Kc(1,2));
s = Kc(0,1);
}
Vec4d kp = D.depth() == CV_32F ? (Vec4d)*D.getMat().ptr<Vec4f>() : *D.getMat().ptr<Vec4d>();
const Vec3d* Xw_alld = objectPoints.getMat().ptr<Vec3d>();
const Vec3f* Xw_allf = objectPoints.getMat().ptr<Vec3f>();
Vec2d* xpd = imagePoints.getMat().ptr<Vec2d>();
Vec2f* xpf = imagePoints.getMat().ptr<Vec2f>();
Matx33d R;
Matx<double, 3, 9> dRdom;
......@@ -120,7 +135,7 @@ void cv::omnidir::projectPoints(InputArray objectPoints, OutputArray imagePoints
for (int i = 0; i < n; i++)
{
// convert to camera coordinate
Vec3d Xw = objectPoints.depth() == CV_32F ? (Vec3d)Xw_allf[i] : Xw_alld[i];
Vec3d Xc = (Vec3d)(R*Xw + T);
......@@ -139,8 +154,20 @@ void cv::omnidir::projectPoints(InputArray objectPoints, OutputArray imagePoints
xd[1] = xu[1]*(1+k1*r2+k2*r4) + p1*(r2+2*xu[1]*xu[1]) + 2*p2*xu[0]*xu[1];
// convert to pixel coordinate
Vec2d final;
final[0] = f[0]*xd[0]+s*xd[1]+c[0];
final[1] = f[1]*xd[1]+c[1];
if (objectPoints.depth() == CV_32F)
{
xpf[i] = final;
}
else
{
xpd[i] = final;
}
if (jacobian.needed())
{
......@@ -214,23 +241,39 @@ void cv::omnidir::projectPoints(InputArray objectPoints, OutputArray imagePoints
/////////////////////////////////////////////////////////////////////////////
//////// undistortPoints
void cv::omnidir::undistortPoints( InputArray distorted, OutputArray undistorted,
InputArray K, InputArray D, InputArray xi, InputArray R)
{
CV_Assert(distorted.type() == CV_64FC2 || distorted.type() == CV_32FC2);
CV_Assert(R.empty() || (!R.empty() && (R.size() == Size(3, 3) || R.total() * R.channels() == 3)
&& (R.depth() == CV_64F || R.depth() == CV_32F)));
CV_Assert((D.depth() == CV_64F || D.depth() == CV_32F) && D.total() == 4);
CV_Assert(K.size() == Size(3, 3) && (K.depth() == CV_64F || K.depth() == CV_32F));
CV_Assert(xi.total() == 1 && (xi.depth() == CV_64F || xi.depth() == CV_32F));
undistorted.create(distorted.size(), distorted.type());
cv::Vec2d f, c;
double s;
if (K.depth() == CV_32F)
{
Matx33f camMat = K.getMat();
f = Vec2f(camMat(0,0), camMat(1,1));
c = Vec2f(camMat(0,2), camMat(1,2));
s = (double)camMat(0,1);
}
else if (K.depth() == CV_64F)
{
Matx33d camMat = K.getMat();
f = Vec2d(camMat(0,0), camMat(1,1));
c = Vec2d(camMat(0,2), camMat(1,2));
s = camMat(0,1);
}
Vec4d kp = D.depth()==CV_32F ? (Vec4d)*D.getMat().ptr<Vec4f>():(Vec4d)*D.getMat().ptr<Vec4d>();
Vec2d k = Vec2d(kp[0], kp[1]);
Vec2d p = Vec2d(kp[2], kp[3]);
double _xi = xi.depth() == CV_32F ? (double)*xi.getMat().ptr<float>() : *xi.getMat().ptr<double>();
cv::Matx33d RR = cv::Matx33d::eye();
// R is om
if(!R.empty() && R.total()*R.channels() == 3)
......@@ -245,12 +288,15 @@ void cv::omnidir::undistortPoints( InputArray distorted, OutputArray undistorted
}
const cv::Vec2d *srcd = distorted.getMat().ptr<cv::Vec2d>();
const cv::Vec2f *srcf = distorted.getMat().ptr<cv::Vec2f>();
cv::Vec2d *dstd = undistorted.getMat().ptr<cv::Vec2d>();
cv::Vec2f *dstf = undistorted.getMat().ptr<cv::Vec2f>();
int n = (int)distorted.total();
for (int i = 0; i < n; i++)
{
Vec2d pi = distorted.depth() == CV_32F ? (Vec2d)srcf[i]:(Vec2d)srcd[i]; // image point
Vec2d pp((pi[0]*f[1]-c[0]*f[1]-s*(pi[1]-c[1]))/(f[0]*f[1]), (pi[1]-c[1])/f[1]); //plane
Vec2d pu = pp; // points without distortion
......@@ -266,10 +312,10 @@ void cv::omnidir::undistortPoints( InputArray distorted, OutputArray undistorted
// project to unit sphere
double r2 = pu[0]*pu[0] + pu[1]*pu[1];
double a = (r2 + 1);
double b = 2*_xi*r2;
double cc = r2*_xi*_xi-1;
double Zs = (-b + sqrt(b*b - 4*a*cc))/(2*a);
Vec3d Xw = Vec3d(pu[0]*(Zs + _xi), pu[1]*(Zs +_xi), Zs);
// rotate
Xw = RR * Xw;
......@@ -278,17 +324,23 @@ void cv::omnidir::undistortPoints( InputArray distorted, OutputArray undistorted
Vec3d Xs = Xw / cv::norm(Xw);
// reproject to camera plane
Vec3d ppu = Vec3d(Xs[0]/(Xs[2]+_xi), Xs[1]/(Xs[2]+_xi), 1.0);
if (undistorted.depth() == CV_32F)
{
dstf[i] = Vec2f((float)ppu[0], (float)ppu[1]);
}
else if (undistorted.depth() == CV_64F)
{
dstd[i] = Vec2d(ppu[0], ppu[1]);
}
}
}
/////////////////////////////////////////////////////////////////////////////
//////// cv::omnidir::initUndistortRectifyMap
void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, InputArray xi, InputArray R, InputArray P,
const cv::Size& size, int m1type, OutputArray map1, OutputArray map2, int flags)
{
CV_Assert( m1type == CV_16SC2 || m1type == CV_32F || m1type <=0 );
map1.create( size, m1type <= 0 ? CV_16SC2 : m1type );
......@@ -300,6 +352,9 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
CV_Assert(P.empty() || P.size() == Size(3, 3) || P.size() == Size(4, 3));
CV_Assert(R.empty() || (R.depth() == CV_32F || R.depth() == CV_64F));
CV_Assert(R.empty() || R.size() == Size(3, 3) || R.total() * R.channels() == 3);
CV_Assert(flags == RECTIFY_PERSPECTIVE || flags == RECTIFY_CYLINDRICAL || flags == RECTIFY_LONGLATI
|| flags == RECTIFY_STEREOGRAPHIC);
CV_Assert(xi.total() == 1 && (xi.depth() == CV_32F || xi.depth() == CV_64F));
cv::Vec2d f, c;
double s;
......@@ -308,7 +363,7 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
Matx33f camMat = K.getMat();
f = Vec2f(camMat(0, 0), camMat(1, 1));
c = Vec2f(camMat(0, 2), camMat(1, 2));
s = (double)camMat(0,1);
}
else
{
......@@ -321,6 +376,7 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
Vec4d kp = Vec4d::all(0);
if (!D.empty())
kp = D.depth() == CV_32F ? (Vec4d)*D.getMat().ptr<Vec4f>(): *D.getMat().ptr<Vec4d>();
double _xi = xi.depth() == CV_32F ? (double)*xi.getMat().ptr<float>() : *xi.getMat().ptr<double>();
Vec2d k = Vec2d(kp[0], kp[1]);
Vec2d p = Vec2d(kp[2], kp[3]);
cv::Matx33d RR = cv::Matx33d::eye();
......@@ -339,8 +395,12 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
else
PP = K.getMat();
cv::Matx33d iKR = (PP*RR).inv(cv::DECOMP_SVD);
cv::Matx33d iK = PP.inv(cv::DECOMP_SVD);
cv::Matx33d iR = RR.inv(cv::DECOMP_SVD);
if (flags == omnidir::RECTIFY_PERSPECTIVE)
{
// undistort to a perspective image
for (int i = 0; i < size.height; ++i)
{
......@@ -349,10 +409,10 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
short* m1 = (short*)m1f;
ushort* m2 = (ushort*)m2f;
double _x = i*iKR(0, 1) + iKR(0, 2),
_y = i*iKR(1, 1) + iKR(1, 2),
_w = i*iKR(2, 1) + iKR(2, 2);
for(int j = 0; j < size.width; ++j, _x+=iKR(0,0), _y+=iKR(1,0), _w+=iKR(2,0))
{
// project back to unit sphere
double r = sqrt(_x*_x + _y*_y + _w*_w);
......@@ -360,8 +420,86 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
double Ys = _y / r;
double Zs = _w / r;
// project to image plane
double xu = Xs / (Zs + _xi),
yu = Ys / (Zs + _xi);
// add distortion
double r2 = xu*xu + yu*yu;
double r4 = r2*r2;
double xd = (1+k[0]*r2+k[1]*r4)*xu + 2*p[0]*xu*yu + p[1]*(r2+2*xu*xu);
double yd = (1+k[0]*r2+k[1]*r4)*yu + p[0]*(r2+2*yu*yu) + 2*p[1]*xu*yu;
// to image pixel
double u = f[0]*xd + s*yd + c[0];
double v = f[1]*yd + c[1];
if( m1type == CV_16SC2 )
{
int iu = cv::saturate_cast<int>(u*cv::INTER_TAB_SIZE);
int iv = cv::saturate_cast<int>(v*cv::INTER_TAB_SIZE);
m1[j*2+0] = (short)(iu >> cv::INTER_BITS);
m1[j*2+1] = (short)(iv >> cv::INTER_BITS);
m2[j] = (ushort)((iv & (cv::INTER_TAB_SIZE-1))*cv::INTER_TAB_SIZE + (iu & (cv::INTER_TAB_SIZE-1)));
}
else if( m1type == CV_32FC1 )
{
m1f[j] = (float)u;
m2f[j] = (float)v;
}
}
}
}
else if(flags == omnidir::RECTIFY_CYLINDRICAL || flags == omnidir::RECTIFY_LONGLATI ||
flags == omnidir::RECTIFY_STEREOGRAPHIC)
{
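// Here the inverse intrinsics iK map every destination pixel to a parameter
// pair (theta, h) on the chosen rectified surface; the pair is lifted to a
// 3D direction, rotated by iR, normalized onto the unit sphere and pushed
// through the omnidirectional projection to find the source pixel.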
for (int i = 0; i < size.height; ++i)
{
float* m1f = map1.getMat().ptr<float>(i);
float* m2f = map2.getMat().ptr<float>(i);
short* m1 = (short*)m1f;
ushort* m2 = (ushort*)m2f;
// for RECTIFY_LONGLATI, theta and h are the longitude and latitude
double theta = i*iK(0, 1) + iK(0, 2),
h = i*iK(1, 1) + iK(1, 2);
for (int j = 0; j < size.width; ++j, theta+=iK(0,0), h+=iK(1,0))
{
double _xt, _yt, _wt;
if (flags == omnidir::RECTIFY_CYLINDRICAL)
{
//_xt = std::sin(theta);
//_yt = h;
//_wt = std::cos(theta);
_xt = std::cos(theta);
_yt = std::sin(theta);
_wt = h;
}
else if (flags == omnidir::RECTIFY_LONGLATI)
{
_xt = -std::cos(theta);
_yt = -std::sin(theta) * std::cos(h);
_wt = std::sin(theta) * std::sin(h);
}
else if (flags == omnidir::RECTIFY_STEREOGRAPHIC)
{
double a = theta*theta + h*h + 4;
double b = -2*theta*theta - 2*h*h;
double c = theta*theta + h*h -4;
_yt = (-b-std::sqrt(b*b - 4*a*c))/(2*a);
_xt = theta*(1 - _yt) / 2;
_wt = h*(1 - _yt) / 2;
}
double _x = iR(0,0)*_xt + iR(0,1)*_yt + iR(0,2)*_wt;
double _y = iR(1,0)*_xt + iR(1,1)*_yt + iR(1,2)*_wt;
double _w = iR(2,0)*_xt + iR(2,1)*_yt + iR(2,2)*_wt;
double r = sqrt(_x*_x + _y*_y + _w*_w);
double Xs = _x / r;
double Ys = _y / r;
double Zs = _w / r;
// project to image plane
double xu = Xs / (Zs + _xi),
yu = Ys / (Zs + _xi);
// add distortion
double r2 = xu*xu + yu*yu;
double r4 = r2*r2;
......@@ -386,25 +524,27 @@ void cv::omnidir::initUndistortRectifyMap(InputArray K, InputArray D, double xi,
}
}
}
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// cv::omnidir::undistortImage
void cv::omnidir::undistortImage(InputArray distorted, OutputArray undistorted,
InputArray K, InputArray D, InputArray xi, int flags, InputArray Knew, const Size& new_size, InputArray R)
{
Size size = new_size.area() != 0 ? new_size : distorted.size();
cv::Mat map1, map2;
omnidir::initUndistortRectifyMap(K, D, xi, R, Knew, size, CV_16SC2, map1, map2, flags);
cv::remap(distorted, undistorted, map1, map2, INTER_LINEAR, BORDER_CONSTANT);
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// cv::omnidir::internal::initializeCalibration
void cv::omnidir::internal::initializeCalibration(InputOutputArrayOfArrays patternPoints, InputOutputArrayOfArrays imagePoints, Size size,
OutputArrayOfArrays omAll, OutputArrayOfArrays tAll, OutputArray K, double& xi, OutputArray idx)
{
// For details please refer to Section III from Li's IROS 2013 paper
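// Roughly: for every view a homogeneous linear system assembled from the
// planar pattern constraints is solved via SVD; since the signs of r1, r2,
// r3 are ambiguous, both candidates are tried and the one with the smaller
// reprojection error is kept. This yields per-view poses and an initial
// gamma, xi is initialized to 1, and views whose reprojection error exceeds
// the threshold below are dropped (their indices are reported through idx).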
......@@ -422,16 +562,24 @@ void cv::omnidir::internal::initializeCalibration(InputOutputArrayOfArrays patte
Mat _K;
for (int image_idx = 0; image_idx < n_img; ++image_idx)
{
cv::Mat objPoints, imgPoints;
patternPoints.getMat(image_idx).copyTo(objPoints);
imagePoints.getMat(image_idx).copyTo(imgPoints);
int n_point = imgPoints.rows * imgPoints.cols;
if (objPoints.rows != n_point)
objPoints = objPoints.reshape(3, n_point);
if (imgPoints.rows != n_point)
imgPoints = imgPoints.reshape(2, n_point);
// objectPoints should be 3-channel data, imagePoints should be 2-channel data
CV_Assert(objPoints.type() == CV_64FC3 && imgPoints.type() == CV_64FC2 );
std::vector<cv::Mat> xy, uv;
cv::split(objPoints, xy);
cv::split(imgPoints, uv);
cv::Mat x = xy[0].reshape(1, n_point), y = xy[1].reshape(1, n_point),
u = uv[0].reshape(1, n_point) - u0, v = uv[1].reshape(1, n_point) - v0;
......@@ -450,7 +598,7 @@ void cv::omnidir::internal::initializeCalibration(InputOutputArrayOfArrays patte
V = V.t();
double miniReprojectError = 1e5;
// the signs of r1, r2, r3 are unknown, so they can be flipped.
for (int coef = 1; coef >= -1; coef-=2)
{
double r11 = V.at<double>(0, 5) * coef;
......@@ -523,9 +671,9 @@ void cv::omnidir::internal::initializeCalibration(InputOutputArrayOfArrays patte
Mat projedImgPoints;
Matx33d Kc(gamma, 0, u0, 0, gamma, v0, 0, 0, 1);
// reproject error
cv::omnidir::projectPoints(objPoints, projedImgPoints, om, t, Kc, 1, Matx14d(0, 0, 0, 0), cv::noArray());
double reprojectError = omnidir::internal::computeMeanReproErr(imgPoints, projedImgPoints);
// if this reproject error is smaller
if (reprojectError < miniReprojectError)
......@@ -552,15 +700,16 @@ void cv::omnidir::internal::initializeCalibration(InputOutputArrayOfArrays patte
_K = Mat(Matx33d(gammaFinal, 0, u0, 0, gammaFinal, v0, 0, 0, 1));
_K.convertTo(K, CV_64F);
std::vector<int> _idx;
// recompute reproject error using the final gamma
for (int i = 0; i< n_img; i++)
{
Mat _projected;
cv::omnidir::projectPoints(patternPoints.getMat(i), _projected, v_omAll[i], v_tAll[i], _K, 1, Matx14d(0, 0, 0, 0), cv::noArray());
double _error = omnidir::internal::computeMeanReproErr(imagePoints.getMat(i), _projected);
if(_error < 100)
{
_idx.push_back(i);
reProjErrorFilter.push_back(_error);
omFilter.push_back(v_omAll[i]);
tFilter.push_back(v_tAll[i]);
......@@ -569,21 +718,174 @@ void cv::omnidir::internal::initializeCalibration(InputOutputArrayOfArrays patte
}
}
if (idx.needed())
{
idx.create(1, (int)_idx.size(), CV_32S);
Mat idx_m = idx.getMat();
for (int j = 0; j < (int)idx_m.total(); j++)
{
idx_m.at<int>(j) = _idx[j];
}
}
if(omAll.kind() == _InputArray::STD_VECTOR_MAT)
{
for (int i = 0; i < (int)omFilter.size(); ++i)
{
omAll.create(3, 1, Mat(omFilter[i]).type(), i, true);
tAll.create(3, 1, Mat(tFilter[i]).type(), i, true);
Mat(omFilter[i]).copyTo(omAll.getMat(i));
Mat(tFilter[i]).copyTo(tAll.getMat(i));
}
}
else
{
cv::Mat(omFilter).convertTo(omAll, CV_64FC3);
cv::Mat(tFilter).convertTo(tAll, CV_64FC3);
}
xi = 1;
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// cv::omnidir::internal::initializeStereoCalibration
void cv::omnidir::internal::initializeStereoCalibration(InputOutputArrayOfArrays objectPoints, InputOutputArrayOfArrays imagePoints1, InputOutputArrayOfArrays imagePoints2,
const Size& size1, const Size& size2, OutputArray om, OutputArray T, OutputArrayOfArrays omL, OutputArrayOfArrays tL, OutputArray K1, OutputArray D1, OutputArray K2, OutputArray D2,
double &xi1, double &xi2, int flags, OutputArray idx)
{
int n = (int)objectPoints.total();
Mat idx1, idx2;
Matx33d _K1, _K2;
Matx14d _D1, _D2;
Mat _xi1m, _xi2m;
std::vector<Vec3d> omAllTemp1, omAllTemp2, tAllTemp1, tAllTemp2;
std::vector<Mat> objectPointsTemp1(n), objectPointsTemp2(n), imagePointsTemp1(n), imagePointsTemp2(n);
for (int i = 0; i < n; ++i)
{
objectPointsTemp1[i] = objectPoints.getMat(i).clone();
objectPointsTemp2[i] = objectPoints.getMat(i).clone();
imagePointsTemp1[i] = imagePoints1.getMat(i).clone();
imagePointsTemp2[i] = imagePoints2.getMat(i).clone();
}
omnidir::calibrate(objectPointsTemp1, imagePointsTemp1, size1, _K1, _xi1m, _D1, omAllTemp1, tAllTemp1, flags, TermCriteria(3, 100, 1e-6), idx1);
omnidir::calibrate(objectPointsTemp2, imagePointsTemp2, size2, _K2, _xi2m, _D2, omAllTemp2, tAllTemp2, flags, TermCriteria(3, 100, 1e-6), idx2);
// find the intersection idx
Mat interIdx1, interIdx2, interOri;
getInterset(idx1, idx2, interIdx1, interIdx2, interOri);
if (idx.empty())
idx.create(1, (int)interOri.total(), CV_32S);
interOri.copyTo(idx.getMat());
int n_inter = (int)interIdx1.total();
int depth1 = objectPoints.depth(), depth2 = imagePoints1.depth();
objectPoints.create(Size(1, n_inter), CV_MAKETYPE(depth1,3));
imagePoints1.create(Size(1, n_inter), CV_MAKETYPE(depth2,2));
imagePoints2.create(Size(1, n_inter), CV_MAKETYPE(depth2,2));
std::vector<Vec3d> omAll1(n_inter), omAll2(n_inter), tAll1(n_inter), tAll2(n_inter);
for(int i = 0; i < (int)interIdx1.total(); ++i)
{
objectPointsTemp1[interOri.at<int>(i)].copyTo(objectPoints.getMat(i));
imagePointsTemp1[interOri.at<int>(i)].copyTo(imagePoints1.getMat(i));
imagePointsTemp2[interOri.at<int>(i)].copyTo(imagePoints2.getMat(i));
omAll1[i] = omAllTemp1[interIdx1.at<int>(i)];
tAll1[i] = tAllTemp1[interIdx1.at<int>(i)];
omAll2[i] = omAllTemp2[interIdx2.at<int>(i)];
tAll2[i] = tAllTemp2[interIdx2.at<int>(i)];
}
// initialize R,T
Mat omEstAll(1, n_inter, CV_64FC3), tEstAll(1, n_inter, CV_64FC3);
Mat R1, R2, T1, T2, omLR, TLR, RLR;
for (int i = 0; i < n_inter; ++i)
{
Rodrigues(omAll1[i], R1);
Rodrigues(omAll2[i], R2);
T1 = Mat(tAll1[i]).reshape(1, 3);
T2 = Mat(tAll2[i]).reshape(1, 3);
RLR = R2 * R1.t();
TLR = T2 - RLR*T1;
Rodrigues(RLR, omLR);
omLR.reshape(3, 1).copyTo(omEstAll.col(i));
TLR.reshape(3, 1).copyTo(tEstAll.col(i));
}
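// use the per-component median over all views as a robust initial estimate
// of the stereo extrinsics, so single bad views do not skew the result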
Vec3d omEst = internal::findMedian3(omEstAll);
Vec3d tEst = internal::findMedian3(tEstAll);
Mat(omEst).copyTo(om.getMat());
Mat(tEst).copyTo(T.getMat());
if (omL.empty())
{
omL.create((int)omAll1.size(), 1, CV_64FC3);
}
if (tL.empty())
{
tL.create((int)tAll1.size(), 1, CV_64FC3);
}
if(omL.kind() == _InputArray::STD_VECTOR_MAT)
{
for(int i = 0; i < n_inter; ++i)
{
omL.create(3, 1, CV_64F, i, true);
tL.create(3, 1, CV_64F, i, true);
Mat(omAll1[i]).copyTo(omL.getMat(i));
Mat(tAll1[i]).copyTo(tL.getMat(i));
}
}
else
{
cv::Mat(omAll1).convertTo(omL, CV_64FC3);
cv::Mat(tAll1).convertTo(tL, CV_64FC3);
}
if (K1.empty())
{
K1.create(3, 3, CV_64F);
K2.create(3, 3, CV_64F);
}
if (D1.empty())
{
D1.create(1, 4, CV_64F);
D2.create(1, 4, CV_64F);
}
Mat(_K1).copyTo(K1.getMat());
Mat(_K2).copyTo(K2.getMat());
Mat(_D1).copyTo(D1.getMat());
Mat(_D2).copyTo(D2.getMat());
xi1 = _xi1m.at<double>(0);
xi2 = _xi2m.at<double>(0);
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// cv::omnidir::internal::computeJacobian
......@@ -598,8 +900,17 @@ void cv::omnidir::internal::computeJacobian(InputArrayOfArrays objectPoints, Inp
Mat JTJ = Mat::zeros(10 + 6*n, 10 + 6*n, CV_64F);
JTJ_inv = Mat::zeros(10 + 6*n, 10 + 6*n, CV_64F);
JTE = Mat::zeros(10 + 6*n, 1, CV_64F);
int nPointsAll = 0;
for (int i = 0; i < n; ++i)
{
nPointsAll += (int)objectPoints.getMat(i).total();
}
Mat J = Mat::zeros(2*nPointsAll, 10+6*n, CV_64F);
Mat exAll = Mat::zeros(2*nPointsAll, 1, CV_64F);
double *para = parameters.getMat().ptr<double>();
Matx33d K(para[6*n], para[6*n+2], para[6*n+3],
0, para[6*n+1], para[6*n+4],
......@@ -609,8 +920,12 @@ void cv::omnidir::internal::computeJacobian(InputArrayOfArrays objectPoints, Inp
for (int i = 0; i < n; i++)
{
Mat objPoints, imgPoints, om, T;
objectPoints.getMat(i).copyTo(objPoints);
imagePoints.getMat(i).copyTo(imgPoints);
objPoints = objPoints.reshape(3, objPoints.rows*objPoints.cols);
imgPoints = imgPoints.reshape(2, imgPoints.rows*imgPoints.cols);
om = parameters.getMat().colRange(i*6, i*6+3);
T = parameters.getMat().colRange(i*6+3, (i+1)*6);
......@@ -635,41 +950,284 @@ void cv::omnidir::internal::computeJacobian(InputArrayOfArrays objectPoints, Inp
Mat(JIn.t()*JEx).copyTo(JTJ(Rect(i*6, 6*n, 6, 10)));
JTE(Rect(0, 6*n, 1, 10)) = JTE(Rect(0, 6*n,1, 10)) + JIn.t() * projError.reshape(1, 2*projError.total());
JTE(Rect(0, i*6, 1, 6)) = JEx.t() * projError.reshape(1, 2*projError.total());
}
//int nPoints = objectPoints.getMat(i).total();
//JIn.copyTo(J(Rect(6*n, i*nPoints*2, 10, nPoints*2)));
//JEx.copyTo(J(Rect(6*i, i*nPoints*2, 6, nPoints*2)));
//projError.reshape(1, 2*projError.rows).copyTo(exAll.rowRange(i*2*nPoints, (i+1)*2*nPoints));
}
//JTJ = J.t()*J;
//JTE = J.t()*exAll;
std::vector<int> _idx(6*n+10, 1);
flags2idx(flags, _idx, n);
subMatrix(JTJ, JTJ, _idx, _idx);
subMatrix(JTE, JTE, std::vector<int>(1, 1), _idx);
// in case JTJ is singular
double epsilon = 1e-10;
JTJ_inv = Mat(JTJ+epsilon).inv();
}
void cv::omnidir::internal::computeJacobianStereo(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
InputArray parameters, Mat& JTJ_inv, Mat& JTE, int flags)
{
CV_Assert(!objectPoints.empty() && objectPoints.type() == CV_64FC3);
CV_Assert(!imagePoints1.empty() && imagePoints1.type() == CV_64FC2);
CV_Assert(!imagePoints2.empty() && imagePoints2.type() == CV_64FC2);
CV_Assert((imagePoints1.total() == imagePoints2.total()) && (imagePoints1.total() == objectPoints.total()));
// compute the Jacobian matrix in a straightforward dense way
int n_img = (int)objectPoints.total();
int n_points = (int)objectPoints.getMat(0).total();
Mat J = Mat::zeros(4 * n_points * n_img, 20 + 6 * (n_img + 1), CV_64F);
Mat exAll = Mat::zeros(4 * n_points * n_img, 1, CV_64F);
double *para = parameters.getMat().ptr<double>();
int offset1 = (n_img + 1) * 6;
int offset2 = offset1 + 10;
Matx33d K1(para[offset1], para[offset1+2], para[offset1+3],
0, para[offset1+1], para[offset1+4],
0, 0, 1);
Matx14d D1(para[offset1+6], para[offset1+7], para[offset1+8], para[offset1+9]);
double xi1 = para[offset1+5];
Matx33d K2(para[offset2], para[offset2+2], para[offset2+3],
0, para[offset2+1], para[offset2+4],
0, 0, 1);
Matx14d D2(para[offset2+6], para[offset2+7], para[offset2+8], para[offset2+9]);
double xi2 = para[offset2+5];
Mat om = parameters.getMat().colRange(0, 3);
Mat T = parameters.getMat().colRange(3, 6);
for (int i = 0; i < n_img; i++)
{
Mat objPointsi, imgPoints1i, imgPoints2i, om1, T1, om, T;
//objPointsi = objectPoints.getMat(i);
//imgPoints1i = imagePoints1.getMat(i);
//imgPoints2i = imagePoints2.getMat(i);
objectPoints.getMat(i).copyTo(objPointsi);
imagePoints1.getMat(i).copyTo(imgPoints1i);
imagePoints2.getMat(i).copyTo(imgPoints2i);
objPointsi = objPointsi.reshape(3, objPointsi.rows*objPointsi.cols);
imgPoints1i = imgPoints1i.reshape(2, imgPoints1i.rows*imgPoints1i.cols);
imgPoints2i = imgPoints2i.reshape(2, imgPoints2i.rows*imgPoints2i.cols);
om1 = parameters.getMat().reshape(1, 1).colRange((1 + i) * 6, (1 + i) * 6 + 3);
T1 = parameters.getMat().reshape(1, 1).colRange((1 + i) * 6 + 3, (i + 1) * 6 + 6);
om = parameters.getMat().reshape(1, 1).colRange(0, 3);
T = parameters.getMat().reshape(1, 1).colRange(3, 6);
Mat imgProj1, imgProj2, jacobian1, jacobian2;
// jacobian for left image
cv::omnidir::projectPoints(objPointsi, imgProj1, om1, T1, K1, xi1, D1, jacobian1);
Mat projError1 = imgPoints1i - imgProj1;
//Mat JIn1(jacobian1.rows, 10, CV_64F);
//Mat JEx1(jacobian1.rows, 6, CV_64F);
jacobian1.colRange(6, 16).copyTo(J(Rect(6*(n_img+1), i*n_points*4, 10, n_points*2)));
jacobian1.colRange(0, 6).copyTo(J(Rect(6+i*6, i*n_points*4, 6, n_points*2)));
projError1.reshape(1, 2*n_points).copyTo(exAll.rowRange(i*4*n_points, (i*4+2)*n_points));
//jacobian for right image
Mat om2, T2, dom2dom1, dom2dT1, dom2dom, dom2dT, dT2dom1, dT2dT1, dT2dom, dT2dT;
cv::omnidir::internal::compose_motion(om1, T1, om, T, om2, T2, dom2dom1, dom2dT1, dom2dom, dom2dT, dT2dom1, dT2dT1, dT2dom, dT2dT);
cv::omnidir::projectPoints(objPointsi, imgProj2, om2, T2, K2, xi2, D2, jacobian2);
Mat projError2 = imgPoints2i - imgProj2;
projError2.reshape(1, 2*n_points).copyTo(exAll.rowRange((i*4+2)*n_points, (i*4+4)*n_points));
Mat dxrdom = jacobian2.colRange(0, 3) * dom2dom + jacobian2.colRange(3, 6) * dT2dom;
Mat dxrdT = jacobian2.colRange(0, 3) * dom2dT + jacobian2.colRange(3, 6) * dT2dT;
Mat dxrdom1 = jacobian2.colRange(0, 3) * dom2dom1 + jacobian2.colRange(3, 6) * dT2dom1;
Mat dxrdT1 = jacobian2.colRange(0, 3) * dom2dT1 + jacobian2.colRange(3, 6) * dT2dT1;
//Mat dxrdom = jacobian2.colRange(0, 3) * dom2dom;
//Mat dxrdT = jacobian2.colRange(3, 6) * dT2dT;
//Mat dxrdom1 = jacobian2.colRange(0, 3) * dom2dom1;
//Mat dxrdT1 = jacobian2.colRange(3, 6) * dT2dT1;
dxrdom.copyTo(J(Rect(0, (i*4+2)*n_points, 3, n_points*2)));
dxrdT.copyTo(J(Rect(3, (i*4+2)*n_points, 3, n_points*2)));
dxrdom1.copyTo(J(Rect(6+i*6, (i*4+2)*n_points, 3, n_points*2)));
dxrdT1.copyTo(J(Rect(6+i*6+3, (i*4+2)*n_points, 3, n_points*2)));
jacobian2.colRange(6, 16).copyTo(J(Rect(6*(n_img+1)+10, (4*i+2)*n_points, 10, n_points*2)));
}
std::vector<int> _idx(6*(n_img+1)+20, 1);
flags2idxStereo(flags, _idx, n_img);
Mat JTJ = J.t()*J;
JTE = J.t()*exAll;
subMatrix(JTJ, JTJ, _idx, _idx);
subMatrix(JTE, JTE, std::vector<int>(1, 1), _idx);
double epsilon = 1e-10;
JTJ_inv = Mat(JTJ+epsilon).inv();
}
// This function is from fisheye.cpp
void cv::omnidir::internal::compose_motion(InputArray _om1, InputArray _T1, InputArray _om2, InputArray _T2, Mat& om3, Mat& T3, Mat& dom3dom1,
Mat& dom3dT1, Mat& dom3dom2, Mat& dom3dT2, Mat& dT3dom1, Mat& dT3dT1, Mat& dT3dom2, Mat& dT3dT2)
{
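// Composes two rigid motions, as in fisheye.cpp:
//   R3 = R2 * R1,   T3 = R2 * T1 + T2,
// and propagates the derivatives of (om3, T3) w.r.t. om1, T1, om2 and T2.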
Mat om1 = _om1.getMat();
Mat om2 = _om2.getMat();
Mat T1 = _T1.getMat().reshape(1, 3);
Mat T2 = _T2.getMat().reshape(1, 3);
//% Rotations:
Mat R1, R2, R3, dR1dom1(9, 3, CV_64FC1), dR2dom2;
Rodrigues(om1, R1, dR1dom1);
Rodrigues(om2, R2, dR2dom2);
//JRodriguesMatlab(dR1dom1, dR1dom1);
//JRodriguesMatlab(dR2dom2, dR2dom2);
dR1dom1 = dR1dom1.t();
dR2dom2 = dR2dom2.t();
R3 = R2 * R1;
Mat dR3dR2, dR3dR1;
//dAB(R2, R1, dR3dR2, dR3dR1);
matMulDeriv(R2, R1, dR3dR2, dR3dR1);
Mat dom3dR3;
Rodrigues(R3, om3, dom3dR3);
//JRodriguesMatlab(dom3dR3, dom3dR3);
dom3dR3 = dom3dR3.t();
dom3dom1 = dom3dR3 * dR3dR1 * dR1dom1;
dom3dom2 = dom3dR3 * dR3dR2 * dR2dom2;
dom3dT1 = Mat::zeros(3, 3, CV_64FC1);
dom3dT2 = Mat::zeros(3, 3, CV_64FC1);
//% Translations:
Mat T3t = R2 * T1;
Mat dT3tdR2, dT3tdT1;
//dAB(R2, T1, dT3tdR2, dT3tdT1);
matMulDeriv(R2, T1, dT3tdR2, dT3tdT1);
Mat dT3tdom2 = dT3tdR2 * dR2dom2;
T3 = T3t + T2;
dT3dT1 = dT3tdT1;
dT3dT2 = Mat::eye(3, 3, CV_64FC1);
dT3dom2 = dT3tdom2;
dT3dom1 = Mat::zeros(3, 3, CV_64FC1);
}
// This function is from fisheye.cpp
//void cv::omnidir::internal::JRodriguesMatlab(const Mat& src, Mat& dst)
//{
// Mat tmp(src.cols, src.rows, src.type());
// if (src.rows == 9)
// {
// Mat(src.row(0).t()).copyTo(tmp.col(0));
// Mat(src.row(1).t()).copyTo(tmp.col(3));
// Mat(src.row(2).t()).copyTo(tmp.col(6));
// Mat(src.row(3).t()).copyTo(tmp.col(1));
// Mat(src.row(4).t()).copyTo(tmp.col(4));
// Mat(src.row(5).t()).copyTo(tmp.col(7));
// Mat(src.row(6).t()).copyTo(tmp.col(2));
// Mat(src.row(7).t()).copyTo(tmp.col(5));
// Mat(src.row(8).t()).copyTo(tmp.col(8));
// }
// else
// {
// Mat(src.col(0).t()).copyTo(tmp.row(0));
// Mat(src.col(1).t()).copyTo(tmp.row(3));
// Mat(src.col(2).t()).copyTo(tmp.row(6));
// Mat(src.col(3).t()).copyTo(tmp.row(1));
// Mat(src.col(4).t()).copyTo(tmp.row(4));
// Mat(src.col(5).t()).copyTo(tmp.row(7));
// Mat(src.col(6).t()).copyTo(tmp.row(2));
// Mat(src.col(7).t()).copyTo(tmp.row(5));
// Mat(src.col(8).t()).copyTo(tmp.row(8));
// }
// dst = tmp.clone();
//}
// This function is from fisheye.cpp
//void cv::omnidir::internal::dAB(InputArray A, InputArray B, OutputArray dABdA, OutputArray dABdB)
//{
// CV_Assert(A.getMat().cols == B.getMat().rows);
// CV_Assert(A.type() == CV_64FC1 && B.type() == CV_64FC1);
//
// int p = A.getMat().rows;
// int n = A.getMat().cols;
// int q = B.getMat().cols;
//
// dABdA.create(p * q, p * n, CV_64FC1);
// dABdB.create(p * q, q * n, CV_64FC1);
//
// dABdA.getMat() = Mat::zeros(p * q, p * n, CV_64FC1);
// dABdB.getMat() = Mat::zeros(p * q, q * n, CV_64FC1);
//
// for (int i = 0; i < q; ++i)
// {
// for (int j = 0; j < p; ++j)
// {
// int ij = j + i * p;
// for (int k = 0; k < n; ++k)
// {
// int kj = j + k * p;
// dABdA.getMat().at<double>(ij, kj) = B.getMat().at<double>(k, i);
// }
// }
// }
//
// for (int i = 0; i < q; ++i)
// {
// A.getMat().copyTo(dABdB.getMat().rowRange(i * p, i * p + p).colRange(i * n, i * n + n));
// }
//}
double cv::omnidir::calibrate(InputArray patternPoints, InputArray imagePoints, Size size,
InputOutputArray K, InputOutputArray xi, InputOutputArray D, OutputArrayOfArrays omAll, OutputArrayOfArrays tAll,
int flags, TermCriteria criteria, OutputArray idx)
{
CV_Assert(!patternPoints.empty() && !imagePoints.empty() && patternPoints.total() == imagePoints.total());
CV_Assert((patternPoints.type() == CV_64FC3 && imagePoints.type() == CV_64FC2) ||
(patternPoints.type() == CV_32FC3 && imagePoints.type() == CV_32FC2));
CV_Assert(patternPoints.getMat(0).channels() == 3 && imagePoints.getMat(0).channels() == 2);
CV_Assert((!K.empty() && K.size() == Size(3,3)) || K.empty());
CV_Assert((!D.empty() && D.total() == 4) || D.empty());
CV_Assert((!xi.empty() && xi.total() == 1) || xi.empty());
CV_Assert((!omAll.empty() && omAll.depth() == patternPoints.depth()) || omAll.empty());
CV_Assert((!tAll.empty() && tAll.depth() == patternPoints.depth()) || tAll.empty());
int depth = patternPoints.depth();
std::vector<Mat> _patternPoints, _imagePoints;
for (int i = 0; i < (int)patternPoints.total(); ++i)
{
_patternPoints.push_back(patternPoints.getMat(i));
_imagePoints.push_back(imagePoints.getMat(i));
if (depth == CV_32F)
{
_patternPoints[i].convertTo(_patternPoints[i], CV_64FC3);
_imagePoints[i].convertTo(_imagePoints[i], CV_64FC2);
}
}
double _xi;
// initialization
std::vector<Vec3d> _omAll, _tAll;
Matx33d _K;
Matx14d _D;
Mat _idx;
cv::omnidir::internal::initializeCalibration(_patternPoints, _imagePoints, size, _omAll, _tAll, _K, _xi, _idx);
std::vector<Mat> _patternPointsTmp = _patternPoints;
std::vector<Mat> _imagePointsTmp = _imagePoints;
_patternPoints.clear();
_imagePoints.clear();
// keep only the views that survived initialization (indices in _idx)
for (int i = 0; i < (int)_idx.total(); i++)
{
_patternPoints.push_back(_patternPointsTmp[_idx.at<int>(i)]);
_imagePoints.push_back(_imagePointsTmp[_idx.at<int>(i)]);
}
int n = (int)_patternPoints.size();
Mat finalParam(1, 10 + 6*n, CV_64F);
Mat currentParam(1, 10 + 6*n, CV_64F);
cv::omnidir::internal::encodeParameters(_K, _omAll, _tAll, Mat::zeros(1,4,CV_64F), _xi, currentParam);
// optimization
const double alpha_smooth = 0.001;
//const double thresh_cond = 1e6;
double change = 1;
for(int iter = 0; ; ++iter)
{
if ((criteria.type == 1 && iter >= criteria.maxCount) ||
(criteria.type == 2 && change <= criteria.epsilon) ||
(criteria.type == 3 && (change <= criteria.epsilon || iter >= criteria.maxCount)))
break;
double alpha_smooth2 = 1 - std::pow(1 - alpha_smooth, (double)iter + 1.0);
Mat JTJ_inv, JTError;
cv::omnidir::internal::computeJacobian(_patternPoints, _imagePoints, currentParam, JTJ_inv, JTError, flags);
// Gauss-Newton step
Mat G = alpha_smooth2*JTJ_inv * JTError;
omnidir::internal::fillFixed(G, flags, n);
finalParam = currentParam + G.t();
change = norm(G) / norm(currentParam);
currentParam = finalParam.clone();
cv::omnidir::internal::decodeParameters(currentParam, _K, _omAll, _tAll, _D, _xi);
double repr = internal::computeMeanReproErr(_patternPoints, _imagePoints, _K, _D, _xi, _omAll, _tAll);
}
cv::omnidir::internal::decodeParameters(currentParam, _K, _omAll, _tAll, _D, _xi);
double repr = internal::computeMeanReproErr(_patternPoints, _imagePoints, _K, _D, _xi, _omAll, _tAll);
if (omAll.needed())
{
omAll.create((int)_omAll.size(), 1, CV_64FC3);
}
if (tAll.needed())
{
tAll.create((int)_tAll.size(), 1, CV_64FC3);
}
if (omAll.kind() == _InputArray::STD_VECTOR_MAT)
{
for (int i = 0; i < n; ++i)
{
omAll.create(3, 1, CV_64F, i, true);
tAll.create(3, 1, CV_64F, i, true);
Mat tmpom = Mat(_omAll[i]);
Mat tmpt = Mat(_tAll[i]);
tmpom.convertTo(tmpom, CV_64F);
tmpt.convertTo(tmpt, CV_64F);
tmpom.copyTo(omAll.getMat(i));
tmpt.copyTo(tAll.getMat(i));
}
}
else
{
Mat(_omAll).convertTo(omAll, CV_64FC3);
Mat(_tAll).convertTo(tAll, CV_64FC3);
}
if(K.empty())
{
K.create(3, 3, CV_64F);
}
if (D.empty())
{
D.create(1, 4, CV_64F);
}
//double meanRepr = omnidir::internal::computeMeanReproerr(imagePoints, proImagePoints);
Mat(_K).convertTo(K.getMat(), K.empty()? CV_64F : K.type());
Mat(_D).convertTo(D.getMat(), D.empty() ? CV_64F: D.type());
if (xi.empty())
{
xi.create(1, 1, CV_64F);
}
Mat xi_m = Mat(1, 1, CV_64F);
xi_m.at<double>(0) = _xi;
xi_m.convertTo(xi.getMat(), xi.empty() ? CV_64F : xi.type());
if (idx.needed())
{
idx.create(1, _idx.total(), CV_32S);
_idx.copyTo(idx.getMat());
}
Vec2d std_error;
double rms;
Mat errors;
cv::omnidir::internal::estimateUncertainties(_patternPoints, _imagePoints, finalParam, errors, std_error, rms, flags);
return rms;
}
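// A minimal usage sketch (not part of the module) for omnidir::calibrate,
// assuming per-view correspondences are already available; all names below
// are hypothetical.
static double calibrateOmniExample(const std::vector<Mat>& objectPoints,
const std::vector<Mat>& imagePoints, Size imageSize)
{
Mat K, D, xi, idx; // estimated intrinsics, distortion, xi and inlier view indices
std::vector<Mat> omAll, tAll; // estimated per-view rotation vectors and translations
TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 200, 1e-8);
// flags = 0 refines all parameters; the return value is the RMS reprojection error
return cv::omnidir::calibrate(objectPoints, imagePoints, imageSize,
K, xi, D, omAll, tAll, 0, criteria, idx);
}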
double cv::omnidir::stereoCalibrate(InputOutputArrayOfArrays objectPoints, InputOutputArrayOfArrays imagePoints1, InputOutputArrayOfArrays imagePoints2,
const Size& imageSize1, const Size& imageSize2, InputOutputArray K1, InputOutputArray xi1, InputOutputArray D1, InputOutputArray K2, InputOutputArray xi2,
InputOutputArray D2, OutputArray om, OutputArray T, OutputArrayOfArrays omL, OutputArrayOfArrays tL, int flags, TermCriteria criteria, OutputArray idx)
{
CV_Assert(!objectPoints.empty() && (objectPoints.type() == CV_64FC3 || objectPoints.type() == CV_32FC3));
CV_Assert(!imagePoints1.empty() && (imagePoints1.type() == CV_64FC2 || imagePoints1.type() == CV_32FC2));
CV_Assert(!imagePoints2.empty() && (imagePoints2.type() == CV_64FC2 || imagePoints2.type() == CV_32FC2));
//CV_Assert((!K1.empty() && K1.size() == Size(3,3)));
//CV_Assert((!K2.empty() && K2.size() == Size(3,3)));
//CV_Assert((!D1.empty() && D1.total() == 4));
//CV_Assert((!D2.empty() && D2.total() == 4));
CV_Assert(((flags & CALIB_USE_GUESS) && !K1.empty() && !D1.empty() && !K2.empty() && !D2.empty()) || !(flags & CALIB_USE_GUESS));
int depth = objectPoints.depth();
std::vector<Mat> _objectPoints, _imagePoints1, _imagePoints2;
for (int i = 0; i < (int)objectPoints.total(); ++i)
{
_objectPoints.push_back(objectPoints.getMat(i));
_imagePoints1.push_back(imagePoints1.getMat(i));
_imagePoints2.push_back(imagePoints2.getMat(i));
if (depth == CV_32F)
{
_objectPoints[i].convertTo(_objectPoints[i], CV_64FC3);
_imagePoints1[i].convertTo(_imagePoints1[i], CV_64FC2);
_imagePoints2[i].convertTo(_imagePoints2[i], CV_64FC2);
}
}
Matx33d _K1, _K2;
Matx14d _D1, _D2;
int n = objectPoints.total();
int nPoints = objectPoints.getMat(0).total();
//if (!K1.empty()) K1.getMat().convertTo(_K1, CV_64FC1);
//if (!D1.empty()) D1.getMat().convertTo(_D1, CV_64FC1);
//if (!K2.empty()) K2.getMat().convertTo(_K2, CV_64FC1);
//if (!D2.empty()) D2.getMat().convertTo(_D2, CV_64FC1);
double _xi1, _xi2;
//if(!xi1.empty()) _xi1 = xi1.getMat().at<double>(0);
//if(!xi2.empty()) _xi2 = xi1.getMat().at<double>(0);
std::vector<Vec3d> _omL, _TL;
Vec3d _om, _T;
// initialization
Mat _idx;
internal::initializeStereoCalibration(_objectPoints, _imagePoints1, _imagePoints2, imageSize1, imageSize2, _om, _T, _omL, _TL, _K1, _D1, _K2, _D2, _xi1, _xi2, flags, _idx);
if(idx.needed())
{
idx.create(1, (int)_idx.total(), CV_32S);
_idx.copyTo(idx.getMat());
}
n = (int)_objectPoints.size();
Mat finalParam(1, 10 + 6*n, CV_64F);
Mat currentParam(1, 10 + 6*n, CV_64F);
double repr1 = internal::computeMeanReproErrStereo(_objectPoints, _imagePoints1, _imagePoints2, _K1, _K2, _D1, _D2, _xi1, _xi2, _om,
_T, _omL, _TL);
cv::omnidir::internal::encodeParametersStereo(_K1, _K2, _om, _T, _omL, _TL, _D1, _D2, _xi1, _xi2, currentParam);
// optimization
const double alpha_smooth = 0.01;
double change = 1;
for(int iter = 0; ; ++iter)
{
if ((criteria.type == 1 && iter >= criteria.maxCount) ||
(criteria.type == 2 && change <= criteria.epsilon) ||
(criteria.type == 3 && (change <= criteria.epsilon || iter >= criteria.maxCount)))
break;
double alpha_smooth2 = 1 - std::pow(1 - alpha_smooth, (double)iter + 1.0);
Mat JTJ_inv, JTError;
cv::omnidir::internal::computeJacobianStereo(_objectPoints, _imagePoints1, _imagePoints2, currentParam, JTJ_inv, JTError, flags);
// Gauss-Newton step
Mat G = alpha_smooth2*JTJ_inv * JTError;
omnidir::internal::fillFixedStereo(G, flags, n);
finalParam = currentParam + G.t();
change = norm(G) / norm(currentParam);
currentParam = finalParam.clone();
cv::omnidir::internal::decodeParametersStereo(currentParam, _K1, _K2, _om, _T, _omL, _TL, _D1, _D2, _xi1, _xi2);
double repr = internal::computeMeanReproErrStereo(_objectPoints, _imagePoints1, _imagePoints2, _K1, _K2, _D1, _D2, _xi1, _xi2, _om,
_T, _omL, _TL);
}
cv::omnidir::internal::decodeParametersStereo(finalParam, _K1, _K2, _om, _T, _omL, _TL, _D1, _D2, _xi1, _xi2);
double repr = internal::computeMeanReproErrStereo(_objectPoints, _imagePoints1, _imagePoints2, _K1, _K2, _D1, _D2, _xi1, _xi2, _om,
_T, _omL, _TL);
if (K1.empty())
{
K1.create(3, 3, CV_64F);
D1.create(1, 4, CV_64F);
K2.create(3, 3, CV_64F);
D2.create(1, 4, CV_64F);
}
if (om.empty())
{
om.create(1, 3, CV_64F);
T.create(1, 3, CV_64F);
}
if (omL.empty())
{
omL.create(1, n, CV_64FC3);
tL.create(1, n, CV_64FC3);
}
Mat(_K1).convertTo(K1.getMat(), K1.empty() ? CV_64F : K1.type());
Mat(_D1).convertTo(D1.getMat(), D1.empty() ? CV_64F : D1.type());
Mat(_K2).convertTo(K2.getMat(), K2.empty() ? CV_64F : K2.type());
Mat(_D2).convertTo(D2.getMat(), D2.empty() ? CV_64F : D2.type());
Mat(_om).convertTo(om.getMat(), om.empty() ? CV_64F: om.type());
Mat(_T).convertTo(T.getMat(), T.empty() ? CV_64F: T.type());
if (omL.needed())
{
omL.create((int)_omL.size(), 1, CV_64FC3);
}
if (tL.needed())
{
tL.create((int)_TL.size(), 1, CV_64FC3);
}
if (omL.kind() == _InputArray::STD_VECTOR_MAT)
{
for (int i = 0; i < n; ++i)
{
omL.create(3, 1, CV_64F, i, true);
tL.create(3, 1, CV_64F, i, true);
Mat(_omL[i]).copyTo(omL.getMat(i));
Mat(_TL[i]).copyTo(tL.getMat(i));
}
}
else
{
Mat(_omL).convertTo(omL, omL.empty() ? CV_64FC3 : omL.type());
Mat(_TL).convertTo(tL, tL.empty() ? CV_64FC3 : tL.type());
}
Mat xi1_m = Mat(1, 1, CV_64F),
xi2_m = Mat(1, 1, CV_64F);
xi1_m.at<double>(0) = _xi1;
xi2_m.at<double>(0) = _xi2;
if (xi1.empty())
{
xi1.create(1, 1, CV_64F);
}
if (xi2.empty())
{
xi2.create(1, 1, CV_64F);
}
xi1_m.convertTo(xi1, xi1.empty() ? CV_64F : xi1.type());
xi2_m.convertTo(xi2, xi2.empty() ? CV_64F : xi2.type());
// compute uncertainty
Vec2d std_error;
double rms;
Mat errors;
cv::omnidir::internal::estimateUncertaintiesStereo(_objectPoints, _imagePoints1, _imagePoints2, finalParam, errors, std_error, rms, flags);
return rms;
}
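// A minimal usage sketch (not part of the module) for omnidir::stereoCalibrate;
// all names below are hypothetical. Each view must be observed by both cameras.
static double stereoCalibrateOmniExample(std::vector<Mat>& objectPoints,
std::vector<Mat>& imagePoints1, std::vector<Mat>& imagePoints2, Size imageSize)
{
Mat K1, D1, xi1, K2, D2, xi2; // per-camera intrinsics, estimated
Mat om, T, idx; // rotation (Rodrigues) and translation from camera 1 to camera 2
std::vector<Mat> omL, tL; // per-view poses of the first camera
TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 200, 1e-8);
return cv::omnidir::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
imageSize, imageSize, K1, xi1, D1, K2, xi2, D2, om, T, omL, tL, 0, criteria, idx);
}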
void cv::omnidir::stereoReconstruct(InputArray image1, InputArray image2, InputArray K1, InputArray D1,
InputArray xi1, InputArray K2, InputArray D2, InputArray xi2, InputArray R, InputArray T, int flag,
int numDisparities, int SADWindowSize, OutputArray disparity, OutputArray image1Rec, OutputArray image2Rec,
const Size& newSize, InputArray Knew, OutputArray pointCloud, int pointType)
{
CV_Assert(!K1.empty() && K1.size() == Size(3,3) && (K1.type() == CV_64F || K1.type() == CV_32F));
CV_Assert(!K2.empty() && K2.size() == Size(3,3) && (K2.type() == CV_64F || K2.type() == CV_32F));
CV_Assert(!D1.empty() && D1.total() == 4 && (D1.type() == CV_64F || D1.type() == CV_32F));
CV_Assert(!D2.empty() && D2.total() == 4 && (D2.type() == CV_64F || D2.type() == CV_32F));
CV_Assert(!R.empty() && (R.size() == Size(3,3) || R.total() == 3) && (R.type() == CV_64F || R.type() == CV_32F));
CV_Assert(!T.empty() && T.total() == 3 && (T.type() == CV_64F || T.type() == CV_32F));
CV_Assert(!image1.empty() && (image1.type() == CV_8U || image1.type() == CV_8UC3));
CV_Assert(!image2.empty() && (image2.type() == CV_8U || image2.type() == CV_8UC3));
CV_Assert(flag == omnidir::RECTIFY_LONGLATI || flag == omnidir::RECTIFY_PERSPECTIVE);
Mat _K1, _D1, _K2, _D2, _R, _T;
K1.getMat().convertTo(_K1, CV_64F);
K2.getMat().convertTo(_K2, CV_64F);
D1.getMat().convertTo(_D1, CV_64F);
D2.getMat().convertTo(_D2, CV_64F);
T.getMat().reshape(1, 3).convertTo(_T, CV_64F);
if (R.size() == Size(3, 3))
{
R.getMat().convertTo(_R, CV_64F);
}
else if (R.total() == 3)
{
Rodrigues(R.getMat(), _R);
_R.convertTo(_R, CV_64F);
}
// rectify the pair so that stereo matching can be performed along a single scan line
Mat R1, R2;
stereoRectify(_R, _T, R1, R2);
Mat undis1, undis2;
Matx33d _Knew = Matx33d(_K1);
if (!Knew.empty())
{
Knew.getMat().convertTo(_Knew, CV_64F);
}
undistortImage(image1.getMat(), undis1, _K1, _D1, xi1, flag, _Knew, newSize, R1);
undistortImage(image2.getMat(), undis2, _K2, _D2, xi2, flag, _Knew, newSize, R2);
undis1.copyTo(image1Rec);
undis2.copyTo(image2Rec);
// stereo matching by semi-global block matching (SGBM)
Mat _disMap;
int channel = image1.channels();
//cv::StereoSGBM matching(0, numDisparities, SADWindowSize, 8*channel*SADWindowSize*SADWindowSize, 32*channel*SADWindowSize*SADWindowSize);
//matching(undis1, undis2, _depthMap);
Ptr<StereoSGBM> sgbm = StereoSGBM::create(0, numDisparities, SADWindowSize, 8 * channel*SADWindowSize*SADWindowSize, 32 * channel*SADWindowSize*SADWindowSize);
sgbm->compute(undis1, undis2, _disMap);
// some regions of image1 are black, so the corresponding regions of the disparity map are also invalid
Mat realDis;
_disMap.convertTo(_disMap, CV_32F);
Mat(_disMap/16.0f).convertTo(realDis, CV_32F);
Mat grayImg, binaryImg, idx;
if (undis1.channels() == 3)
{
cvtColor(undis1, grayImg, COLOR_RGB2GRAY);
}
else
{
grayImg = undis1;
}
binaryImg = (grayImg <= 0);
findNonZero(binaryImg, idx);
for (int i = 0; i < (int)idx.total(); ++i)
{
Vec2i _idx = idx.at<Vec2i>(i);
realDis.at<float>(_idx[1], _idx[0]) = 0.0f;
}
disparity.create(realDis.size(), realDis.type());
realDis.copyTo(disparity.getMat());
std::vector<Vec3f> _pointCloud;
std::vector<Vec6f> _pointCloudColor;
double baseline = cv::norm(T);
double f = _Knew(0, 0);
Matx33d K_inv = _Knew.inv();
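// For a rectified pair, depth = baseline * f / disparity; K_inv maps a pixel
// (i, j) back to the rectified coordinates (x, y) used below.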
std::vector<Mat> rgb;
if (undis1.channels() == 3)
{
split(undis1, rgb);
}
if (pointCloud.needed())
{
for (int i = 0; i < newSize.width; ++i)
{
for(int j = 0; j < newSize.height; ++j)
{
Vec3f point;
Vec6f pointColor;
if (realDis.at<float>(j, i) > 15)
{
float depth = float(baseline * f /realDis.at<float>(j, i));
// for RECTIFY_PERSPECTIVE, (x,y) are image plane points,
// for RECTIFY_LONGLATI, (x,y) are (theta, phi) angles
float x = float(K_inv(0,0) * i + K_inv(0,1) * j + K_inv(0,2));
float y = float(K_inv(1,0) * i + K_inv(1,1) * j + K_inv(1,2));
if (flag == omnidir::RECTIFY_LONGLATI)
{
point = Vec3f((float)-std::cos(x), (float)(-std::sin(x)*std::cos(y)), (float)(std::sin(x)*std::sin(y))) * depth;
}
else if(flag == omnidir::RECTIFY_PERSPECTIVE)
{
point = Vec3f(float(x), float(y), 1.0f) * depth;
}
if (pointType == XYZ)
{
_pointCloud.push_back(point);
}
else if (pointType == XYZRGB)
{
pointColor[0] = point[0];
pointColor[1] = point[1];
pointColor[2] = point[2];
if (undis1.channels() == 1)
{
pointColor[3] = float(undis1.at<uchar>(j, i));
pointColor[4] = pointColor[3];
pointColor[5] = pointColor[3];
}
else if (undis1.channels() == 3)
{
pointColor[3] = rgb[0].at<uchar>(j, i);
pointColor[4] = rgb[1].at<uchar>(j, i);
pointColor[5] = rgb[2].at<uchar>(j, i);
}
_pointCloudColor.push_back(pointColor);
}
}
}
}
int nPoints;
if (pointType == XYZ)
{
nPoints = (int)_pointCloud.size();
Mat(_pointCloud).convertTo(pointCloud, CV_MAKE_TYPE(CV_32F, 3));
}
else if (pointType == XYZRGB)
{
nPoints = (int)_pointCloudColor.size();
Mat(_pointCloudColor).convertTo(pointCloud, CV_MAKE_TYPE(CV_32F, 6));
}
}
}
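// A minimal usage sketch (not part of the module) for omnidir::stereoReconstruct,
// given a calibrated omnidirectional stereo pair; all inputs are hypothetical,
// and XYZRGB/RECTIFY_LONGLATI are assumed to live in the omnidir namespace.
static void stereoReconstructExample(const Mat& img1, const Mat& img2,
const Mat& K1, const Mat& D1, const Mat& xi1,
const Mat& K2, const Mat& D2, const Mat& xi2,
const Mat& R, const Mat& T)
{
Mat disparity, rect1, rect2, pointCloud;
Size newSize(img1.cols, img1.rows);
// RECTIFY_LONGLATI keeps a wide field of view; 80 disparities, 7x7 SAD window
cv::omnidir::stereoReconstruct(img1, img2, K1, D1, xi1, K2, D2, xi2, R, T,
cv::omnidir::RECTIFY_LONGLATI, 80, 7, disparity, rect1, rect2,
newSize, cv::noArray(), pointCloud, cv::omnidir::XYZRGB);
}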
void cv::omnidir::internal::encodeParameters(InputArray K, InputArrayOfArrays omAll, InputArrayOfArrays tAll, InputArray distoaration, double xi, OutputArray parameters)
{
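// Mono parameter vector layout (1 x (10 + 6*n)):
// [ om_1 T_1 ... om_n T_n | fx fy s cx cy xi k1 k2 p1 p2 ]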
CV_Assert(K.type() == CV_64F && K.size() == Size(3,3));
CV_Assert(distoaration.total() == 4 && distoaration.type() == CV_64F);
int n = omAll.total();
Mat _omAll = omAll.getMat(), _tAll = tAll.getMat();
Mat tmp = Mat(_omAll.at<Vec3d>(0)).reshape(1,3).clone();
Matx33d _K = K.getMat();
Vec4d _D = (Vec4d)distoaration.getMat();
parameters.create(1, 10+6*n,CV_64F);
Mat _params = parameters.getMat();
for (int i = 0; i < n; i++)
{
Mat(_omAll.at<Vec3d>(i)).reshape(1, 1).copyTo(_params.colRange(i*6, i*6+3));
Mat(_tAll.at<Vec3d>(i)).reshape(1, 1).copyTo(_params.colRange(i*6+3, (i+1)*6));
}
_params.at<double>(0, 6*n) = _K(0,0);
_params.at<double>(0, 6*n+1) = _K(1,1);
_params.at<double>(0, 6*n+2) = _K(0,1);
_params.at<double>(0, 6*n+3) = _K(0,2);
_params.at<double>(0, 6*n+4) = _K(1,2);
_params.at<double>(0, 6*n+5) = xi;
_params.at<double>(0, 6*n+6) = _D[0];
_params.at<double>(0, 6*n+7) = _D[1];
_params.at<double>(0, 6*n+8) = _D[2];
_params.at<double>(0, 6*n+9) = _D[3];
}
void cv::omnidir::internal::encodeParametersStereo(InputArray K1, InputArray K2, InputArray om, InputArray T, InputArrayOfArrays omL, InputArrayOfArrays tL,
InputArray D1, InputArray D2, double xi1, double xi2, OutputArray parameters)
{
CV_Assert(!K1.empty() && K1.type() == CV_64F && K1.size() == Size(3,3));
CV_Assert(!K2.empty() && K2.type() == CV_64F && K2.size() == Size(3,3));
CV_Assert(!om.empty() && om.type() == CV_64F && om.total() == 3);
CV_Assert(!T.empty() && T.type() == CV_64F && T.total() == 3);
CV_Assert(omL.total() == tL.total() && omL.type() == CV_64FC3 && tL.type() == CV_64FC3);
CV_Assert(D1.type() == CV_64F && D1.total() == 4 && D2.type() == CV_64F && D2.total() == 4);
int n = (int)omL.total();
// om, T, omL, tL, intrinsic left, intrinsic right
parameters.create(1, 20 + 6 * (n + 1), CV_64F);
Mat _params = parameters.getMat();
om.getMat().reshape(1, 1).copyTo(_params.colRange(0, 3));
T.getMat().reshape(1, 1).copyTo(_params.colRange(3, 6));
for(int i = 0; i < n; ++i)
{
Mat(omL.getMat().at<Vec3d>(i)).reshape(1, 1).copyTo(_params.colRange(6 + i*6, 6 + i*6 + 3));
Mat(tL.getMat().at<Vec3d>(i)).reshape(1, 1).copyTo(_params.colRange(6 + i*6 + 3, 6 + i*6 + 6));
}
Matx33d _K1 = K1.getMat();
Matx33d _K2 = K2.getMat();
Vec4d _D1 = D1.getMat();
Vec4d _D2 = D2.getMat();
_params.at<double>(0, 6*(n+1)) = _K1(0,0);
_params.at<double>(0, 6*(n+1)+1) = _K1(1,1);
_params.at<double>(0, 6*(n+1)+2) = _K1(0,1);
_params.at<double>(0, 6*(n+1)+3) = _K1(0,2);
_params.at<double>(0, 6*(n+1)+4) = _K1(1,2);
_params.at<double>(0, 6*(n+1)+5) = xi1;
_params.at<double>(0, 6*(n+1)+6) = _D1[0];
_params.at<double>(0, 6*(n+1)+7) = _D1[1];
_params.at<double>(0, 6*(n+1)+8) = _D1[2];
_params.at<double>(0, 6*(n+1)+9) = _D1[3];
_params.at<double>(0, 6*(n+1)+10) = _K2(0,0);
_params.at<double>(0, 6*(n+1)+11) = _K2(1,1);
_params.at<double>(0, 6*(n+1)+12) = _K2(0,1);
_params.at<double>(0, 6*(n+1)+13) = _K2(0,2);
_params.at<double>(0, 6*(n+1)+14) = _K2(1,2);
_params.at<double>(0, 6*(n+1)+15) = xi2;
_params.at<double>(0, 6*(n+1)+16) = _D2[0];
_params.at<double>(0, 6*(n+1)+17) = _D2[1];
_params.at<double>(0, 6*(n+1)+18) = _D2[2];
_params.at<double>(0, 6*(n+1)+19) = _D2[3];
}
void cv::omnidir::internal::decodeParameters(InputArray parameters, OutputArray K, OutputArrayOfArrays omAll, OutputArrayOfArrays tAll, OutputArray distoration, double& xi)
{
if(K.empty())
K.create(3,3,CV_64F);
Matx33d _K;
int n = (int)(parameters.total()-10)/6;
if(omAll.empty())
omAll.create(1, n, CV_64FC3);
if(tAll.empty())
tAll.create(1, n, CV_64FC3);
if(distoration.empty())
distoration.create(1, 4, CV_64F);
Matx14d _D = distoration.getMat();
Mat param = parameters.getMat();
double *para = param.ptr<double>();
_K = Matx33d(para[6*n], para[6*n+2], para[6*n+3],
0, para[6*n+1], para[6*n+4],
0, 0, 1);
_D = Matx14d(para[6*n+6], para[6*n+7], para[6*n+8], para[6*n+9]);
xi = para[6*n+5];
std::vector<Vec3d> _omAll(n), _tAll(n);
for (int i = 0; i < n; i++)
{
_omAll[i] = Vec3d(param.colRange(i*6, i*6+3));
_tAll[i] = Vec3d(param.colRange(i*6+3, i*6+6));
}
Mat(_D).convertTo(distoration, CV_64F);
Mat(_K).convertTo(K, CV_64F);
if (omAll.kind() == _InputArray::STD_VECTOR_MAT)
{
for (int i = 0; i < n; ++i)
{
Mat(_omAll[i]).copyTo(omAll.getMat(i));
Mat(_tAll[i]).copyTo(tAll.getMat(i));
}
}
else
{
Mat(_omAll).convertTo(omAll, CV_64FC3);
Mat(_tAll).convertTo(tAll, CV_64FC3);
}
}
void cv::omnidir::internal::decodeParametersStereo(InputArray parameters, OutputArray K1, OutputArray K2, OutputArray om, OutputArray T, OutputArrayOfArrays omL,
OutputArrayOfArrays tL, OutputArray D1, OutputArray D2, double& xi1, double& xi2)
{
if(K1.empty())
K1.create(3, 3, CV_64F);
if(K2.empty())
K2.create(3, 3, CV_64F);
if(om.empty())
om.create(3, 1, CV_64F);
if(T.empty())
T.create(3, 1, CV_64F);
int n = (parameters.total() - 20) / 6 - 1;
if(omL.empty())
omL.create(1, n, CV_64FC3);
if(tL.empty())
tL.create(1, n, CV_64FC3);
if(D1.empty())
D1.create(1, 4, CV_64F);
if(D2.empty())
D2.create(1, 4, CV_64F);
Mat param = parameters.getMat().reshape(1, 1);
param.colRange(0, 3).reshape(1, 3).copyTo(om.getMat());
param.colRange(3, 6).reshape(1, 3).copyTo(T.getMat());
std::vector<Vec3d> _omL, _tL;
for(int i = 0; i < n; i++)
{
_omL.push_back(Vec3d(param.colRange(6 + i*6, 6 + i*6 + 3)));
_tL.push_back(Vec3d(param.colRange(6 + i*6 + 3, 6 + i*6 + 6)));
}
double* para = param.ptr<double>();
int offset1 = (n + 1)*6;
Matx33d _K1(para[offset1], para[offset1+2], para[offset1+3],
0, para[offset1+1], para[offset1+4],
0, 0, 1);
xi1 = para[offset1+5];
Matx14d _D1(para[offset1+6], para[offset1+7], para[offset1+8], para[offset1+9]);
int offset2 = (n + 1)*6 + 10;
Matx33d _K2(para[offset2], para[offset2+2], para[offset2+3],
0, para[offset2+1], para[offset2+4],
0, 0, 1);
xi2 = para[offset2+5];
Matx14d _D2(para[offset2+6], para[offset2+7], para[offset2+8], para[offset2+9]);
Mat(_K1).convertTo(K1, CV_64F);
Mat(_D1).convertTo(D1, CV_64F);
Mat(_K2).convertTo(K2, CV_64F);
Mat(_D2).convertTo(D2, CV_64F);
if(omL.kind() == _InputArray::STD_VECTOR_MAT)
{
for(int i = 0; i < n; ++i)
{
Mat(_omL[i]).copyTo(omL.getMat(i));
Mat(_tL[i]).copyTo(tL.getMat(i));
}
}
else
{
Mat(_omL).convertTo(omL, CV_64FC3);
Mat(_tL).convertTo(tL, CV_64FC3);
}
}
void cv::omnidir::internal::estimateUncertainties(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, InputArray parameters,
Mat& errors, Vec2d& std_error, double& rms, int flags)
{
int n = (int) objectPoints.total();
int nPointsAll = 0;
for (int i = 0; i < n; ++i)
{
nPointsAll += objectPoints.getMat(i).total();
}
Mat reprojError = Mat(nPointsAll, 1, CV_64FC2);
double* para = parameters.getMat().ptr<double>();
Matx33d K(para[6*n], para[6*n+2], para[6*n+3],
0, para[6*n+1], para[6*n+4],
0, 0, 1);
Matx14d D(para[6*n+6], para[6*n+7], para[6*n+8], para[6*n+9]);
double xi = para[6*n+5];
int nPointsAccu = 0;
for(int i=0; i < n; ++i)
{
Mat imgPoints, objPoints;
imagePoints.getMat(i).copyTo(imgPoints);
objectPoints.getMat(i).copyTo(objPoints);
imgPoints = imgPoints.reshape(2, imgPoints.rows*imgPoints.cols);
objPoints = objPoints.reshape(3, objPoints.rows*objPoints.cols);
Mat om = parameters.getMat().colRange(i*6, i*6+3);
Mat T = parameters.getMat().colRange(i*6+3, (i+1)*6);
Mat x;
cv::omnidir::projectPoints(objPoints, x, om, T, K, xi, D, noArray());
Mat errorx = (imgPoints - x);
//reprojError.rowRange(errorx.rows*i, errorx.rows*(i+1)) = errorx.clone();
errorx.copyTo(reprojError.rowRange(nPointsAccu, nPointsAccu + (int)errorx.total()));
nPointsAccu += (int)errorx.total();
}
meanStdDev(reprojError, noArray(), std_error);
std_error *= sqrt((double)reprojError.total()/((double)reprojError.total() - 1.0));
Mat sigma_x;
meanStdDev(reprojError.reshape(1,1), noArray(), sigma_x);
sigma_x *= sqrt(2.0*(double)reprojError.total()/(2.0*(double)reprojError.total() - 1.0));
double s = sigma_x.at<double>(0);
Mat _JTJ_inv, _JTE;
computeJacobian(objectPoints, imagePoints, parameters, _JTJ_inv, _JTE, flags);
cv::sqrt(_JTJ_inv, _JTJ_inv);
errors = 3 * s * _JTJ_inv.diag();
rms = 0;
const Vec2d* ptr_ex = reprojError.ptr<Vec2d>();
for (int i = 0; i < (int)reprojError.total(); i++)
{
rms += ptr_ex[i][0] * ptr_ex[i][0] + ptr_ex[i][1] * ptr_ex[i][1];
}
rms /= (double)reprojError.total();
rms = sqrt(rms);
}
// estimateUncertaintiesStereo
void cv::omnidir::internal::estimateUncertaintiesStereo(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
InputArray parameters, Mat& errors, Vec2d& std_error, double& rms, int flags)
{
CV_Assert(!objectPoints.empty() && objectPoints.type() == CV_64FC3);
CV_Assert(!imagePoints1.empty() && imagePoints1.type() == CV_64FC2 && imagePoints1.total() == objectPoints.total());
CV_Assert(!imagePoints2.empty() && imagePoints2.type() == CV_64FC2 && imagePoints1.total() == imagePoints2.total());
int n_img = objectPoints.total();
CV_Assert((int)parameters.total() == (6*(n_img+1)+20));
Mat _K1, _K2, _D1, _D2;
Vec3d _om, _T;
std::vector<Vec3d> _omL(n_img), _tL(n_img);
Mat _parameters = parameters.getMat().reshape(1, 1);
double _xi1, _xi2;
internal::decodeParametersStereo(_parameters, _K1, _K2, _om, _T, _omL, _tL, _D1, _D2, _xi1, _xi2);
int n_points = objectPoints.getMat(0).total();
Mat reprojErrorAll = Mat::zeros(2*n_points*n_img, 1, CV_64FC2);
// error for left image
for (int i = 0; i < n_img; ++i)
{
//Mat objPointsi = objectPoints.getMat(i);
//Mat imgPointsi = imagePoints1.getMat(i);
Mat objPointsi, imgPointsi;
objectPoints.getMat(i).copyTo(objPointsi);
imagePoints1.getMat(i).copyTo(imgPointsi);
objPointsi = objPointsi.reshape(3, objPointsi.rows*objPointsi.cols);
imgPointsi = imgPointsi.reshape(2, imgPointsi.rows*imgPointsi.cols);
Mat x;
omnidir::projectPoints(objPointsi, x, _omL[i], _tL[i], _K1, _xi1, _D1, cv::noArray());
Mat errorx = imgPointsi - x;
errorx.copyTo(reprojErrorAll.rowRange(i*2*n_points, (i*2+1)*n_points));
}
// error for right image
for (int i = 0; i < n_img; ++i)
{
//Mat objPointsi = objectPoints.getMat(i);
//Mat imgPointsi = imagePoints2.getMat(i);
Mat objPointsi, imgPointsi;
objectPoints.getMat(i).copyTo(objPointsi);
imagePoints2.getMat(i).copyTo(imgPointsi);
objPointsi = objPointsi.reshape(3, objPointsi.rows*objPointsi.cols);
imgPointsi = imgPointsi.reshape(2, imgPointsi.rows*imgPointsi.cols);
Mat x;
Mat _R, _R2, _R1, _T2, _T1, _om2;
Rodrigues(_om, _R);
Rodrigues(_omL[i], _R1);
_T1 = Mat(_tL[i]);
_R2 = _R * _R1;
_T2 = _R * _T1 + Mat(_T);
Rodrigues(_R2, _om2);
omnidir::projectPoints(objPointsi, x, _om2, _T2, _K2, _xi2, _D2, cv::noArray());
Mat errorx = imgPointsi - x;
errorx.copyTo(reprojErrorAll.rowRange((i*2+1)*n_points, (i+1)*2*n_points));
}
meanStdDev(reprojErrorAll, cv::noArray(), std_error);
std_error *= sqrt((double)reprojErrorAll.total()/((double)reprojErrorAll.total() - 1.0));
Mat sigma_x;
meanStdDev(reprojErrorAll.reshape(1,1), noArray(), sigma_x);
sigma_x *= sqrt(2.0*(double)reprojErrorAll.total()/(2.0*(double)reprojErrorAll.total() - 1.0));
double s = sigma_x.at<double>(0);
Mat _JTJ_inv, _JTE;
computeJacobianStereo(objectPoints, imagePoints1, imagePoints2, _parameters, _JTJ_inv, _JTE, flags);
cv::sqrt(_JTJ_inv, _JTJ_inv);
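// errors gives roughly a 3-sigma uncertainty for each refined parameter,
// scaled by the estimated measurement noise s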
errors = 3 * s * _JTJ_inv.diag();
rms = 0;
const Vec2d* ptr_ex = reprojErrorAll.ptr<Vec2d>();
for (int i = 0; i < (int)reprojErrorAll.total(); i++)
{
rms += ptr_ex[i][0] * ptr_ex[i][0] + ptr_ex[i][1] * ptr_ex[i][1];
}
rms /= (double)reprojErrorAll.total();
rms = sqrt(rms);
}
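// Mean reprojection error: the average Euclidean distance between detected and
// projected image points, over all points in all views.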
double cv::omnidir::internal::computeMeanReproErr(InputArrayOfArrays imagePoints, InputArrayOfArrays proImagePoints)
{
CV_Assert(!imagePoints.empty() && imagePoints.type()==CV_64FC2);
CV_Assert(!proImagePoints.empty() && proImagePoints.type() == CV_64FC2);
CV_Assert(imagePoints.total() == proImagePoints.total());
int n = (int)imagePoints.total();
double reprojError = 0;
int totalPoints = 0;
if (imagePoints.kind() == _InputArray::STD_VECTOR_MAT)
{
for (int i = 0; i < n; i++)
{
Mat x, proj_x;
imagePoints.getMat(i).copyTo(x);
proImagePoints.getMat(i).copyTo(proj_x);
Mat errorI = x.reshape(2, x.rows*x.cols) - proj_x.reshape(2, proj_x.rows*proj_x.cols);
//Mat errorI = imagePoints.getMat(i) - proImagePoints.getMat(i);
totalPoints += (int)errorI.total();
Vec2d* ptr_err = errorI.ptr<Vec2d>();
for (int j = 0; j < (int)errorI.total(); j++)
{
reprojError += sqrt(ptr_err[j][0]*ptr_err[j][0] + ptr_err[j][1]*ptr_err[j][1]);
}
}
}
else
{
Mat x, proj_x;
imagePoints.getMat().copyTo(x);
proImagePoints.getMat().copyTo(proj_x);
Mat errorI = x.reshape(2, x.rows*x.cols) - proj_x.reshape(2, proj_x.rows*proj_x.cols);
//Mat errorI = imagePoints.getMat() - proImagePoints.getMat();
totalPoints += (int)errorI.total();
Vec2d* ptr_err = errorI.ptr<Vec2d>();
for (int j = 0; j < (int)errorI.total(); j++)
{
reprojError += sqrt(ptr_err[j][0]*ptr_err[j][0] + ptr_err[j][1]*ptr_err[j][1]);
}
}
double meanReprojError = reprojError / (double)totalPoints;
return meanReprojError;
}
double cv::omnidir::internal::computeMeanReproErr(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, InputArray K, InputArray D, double xi, InputArrayOfArrays omAll,
InputArrayOfArrays tAll)
{
CV_Assert(objectPoints.total() == imagePoints.total());
CV_Assert(!objectPoints.empty() && objectPoints.type() == CV_64FC3);
CV_Assert(!imagePoints.empty() && imagePoints.type() == CV_64FC2);
std::vector<Mat> proImagePoints;
int n = (int)objectPoints.total();
Mat _omAll = omAll.getMat();
Mat _tAll = tAll.getMat();
for(int i = 0; i < n; ++i)
{
Mat imgPoint;
//cv::omnidir::projectPoints(objetPoints.getMat(i), imgPoint, omAll.getMat(i), tAll.getMat(i), K.getMat(), xi, D.getMat(), noArray());
cv::omnidir::projectPoints(objectPoints.getMat(i), imgPoint, _omAll.at<Vec3d>(i), _tAll.at<Vec3d>(i), K.getMat(), xi, D.getMat(), noArray());
proImagePoints.push_back(imgPoint);
}
return internal::computeMeanReproErr(imagePoints, proImagePoints);
}
double cv::omnidir::internal::computeMeanReproErrStereo(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, InputArray K1, InputArray K2,
InputArray D1, InputArray D2, double xi1, double xi2, InputArray om, InputArray T, InputArrayOfArrays omL, InputArrayOfArrays TL)
{
CV_Assert(objectPoints.total() == imagePoints1.total() && imagePoints1.total() == imagePoints2.total());
CV_Assert(!objectPoints.empty() && objectPoints.type() == CV_64FC3);
CV_Assert(!imagePoints1.empty() && imagePoints1.type() == CV_64FC2);
CV_Assert(!imagePoints2.empty() && imagePoints2.type() == CV_64FC2);
std::vector<Mat> proImagePoints1, proImagePoints2;
int n = (int)objectPoints.total();
Mat _omL = omL.getMat(), _TL = TL.getMat();
Mat _om = om.getMat(), _R, _T = T.getMat();
Rodrigues(_om, _R);
Mat _K1 = K1.getMat(), _K2 = K2.getMat();
Mat _D1 = D1.getMat(), _D2 = D2.getMat();
// reprojection error for left image
for (int i = 0; i < n; ++i)
{
Mat imgPoints;
cv::omnidir::projectPoints(objectPoints.getMat(i), imgPoints, _omL.at<Vec3d>(i), _TL.at<Vec3d>(i), _K1, xi1, _D1, cv::noArray());
proImagePoints1.push_back(imgPoints);
}
// reprojection error for right image
for (int i = 0; i < n; ++i)
{
Mat imgPoints;
Mat _omRi,_RRi,_TRi,_RLi, _TLi;
Rodrigues(_omL.at<Vec3d>(i), _RLi);
_TLi = Mat(_TL.at<Vec3d>(i)).reshape(1, 3);
_RRi = _R * _RLi;
_TRi = _R * _TLi + _T;
Rodrigues(_RRi, _omRi);
cv::omnidir::projectPoints(objectPoints.getMat(i), imgPoints, _omRi, _TRi, _K2, xi2, _D2, cv::noArray());
proImagePoints2.push_back(imgPoints);
}
double reProErr1 = internal::computeMeanReproErr(imagePoints1, proImagePoints1);
double reProErr2 = internal::computeMeanReproErr(imagePoints2, proImagePoints2);
double reProErr = (reProErr1 + reProErr2) / 2.0;
//double reProErr = reProErr1*reProErr1 + reProErr2* reProErr2;
return reProErr;
}
void cv::omnidir::internal::checkFixed(Mat& G, int flags, int n)
{
int _flags = flags;
@@ -942,6 +2198,7 @@ void cv::omnidir::internal::subMatrix(const Mat& src, Mat& dst, const std::vecto
void cv::omnidir::internal::flags2idx(int flags, std::vector<int>& idx, int n)
{
idx = std::vector<int>(6*n+10,1);
int _flags = flags;
if(_flags >= omnidir::CALIB_FIX_CENTER)
{
idx[6*n+3] = 0;
idx[6*n+4] = 0;
_flags -= omnidir::CALIB_FIX_CENTER;
}
if(_flags >= omnidir::CALIB_FIX_GAMMA)
{
idx[6*n] = 0;
idx[6*n+1] = 0;
_flags -= omnidir::CALIB_FIX_GAMMA;
}
if(_flags >= omnidir::CALIB_FIX_XI)
{
idx[6*n + 5] = 0;
_flags -= omnidir::CALIB_FIX_XI;
}
if(_flags >= omnidir::CALIB_FIX_P2)
{
idx[6*n + 9] = 0;
_flags -= omnidir::CALIB_FIX_P2;
}
if(_flags >= omnidir::CALIB_FIX_P1)
{
idx[6*n + 8] = 0;
_flags -= omnidir::CALIB_FIX_P1;
}
if(_flags >= omnidir::CALIB_FIX_K2)
{
idx[6*n + 7] = 0;
_flags -= omnidir::CALIB_FIX_K2;
}
if(_flags >= omnidir::CALIB_FIX_K1)
{
idx[6*n + 6] = 0;
_flags -= omnidir::CALIB_FIX_K1;
}
if(_flags >= omnidir::CALIB_FIX_SKEW)
{
idx[6*n + 2] = 0;
}
}
void cv::omnidir::internal::flags2idxStereo(int flags, std::vector<int>& idx, int n)
{
idx = std::vector<int>(6*(n+1)+20, 1);
int _flags = flags;
int offset1 = 6*(n+1);
int offset2 = offset1 + 10;
if(_flags >= omnidir::CALIB_FIX_CENTER)
{
idx[offset1+3] = 0;
idx[offset1+4] = 0;
idx[offset2+3] = 0;
idx[offset2+4] = 0;
_flags -= omnidir::CALIB_FIX_CENTER;
}
if(_flags >= omnidir::CALIB_FIX_GAMMA)
{
idx[offset1] = 0;
idx[offset1+1] = 0;
idx[offset2] = 0;
idx[offset2+1] = 0;
_flags -= omnidir::CALIB_FIX_GAMMA;
}
if(_flags >= omnidir::CALIB_FIX_XI)
{
idx[offset1 + 5] = 0;
idx[offset2 + 5] = 0;
_flags -= omnidir::CALIB_FIX_XI;
}
if(_flags >= omnidir::CALIB_FIX_P2)
{
idx[offset1 + 9] = 0;
idx[offset2 + 9] = 0;
_flags -= omnidir::CALIB_FIX_P2;
}
if(_flags >= omnidir::CALIB_FIX_P1)
{
idx[offset1 + 8] = 0;
idx[offset2 + 8] = 0;
_flags -= omnidir::CALIB_FIX_P1;
}
if(_flags >= omnidir::CALIB_FIX_K2)
{
idx[offset1 + 7] = 0;
idx[offset2 + 7] = 0;
_flags -= omnidir::CALIB_FIX_K2;
}
if(_flags >= omnidir::CALIB_FIX_K1)
{
idx[offset1 + 6] = 0;
idx[offset2 + 6] = 0;
_flags -= omnidir::CALIB_FIX_K1;
}
if(_flags >= omnidir::CALIB_FIX_SKEW)
{
idx[offset1 + 2] = 0;
idx[offset2 + 2] = 0;
}
}
// fill in zeros for fixed parameters
void cv::omnidir::internal::fillFixed(Mat&G, int flags, int n)
{
Mat tmp = G.clone();
std::vector<int> idx(6*n+10, 1);
flags2idx(flags, idx, n);
G.release();
G.create(6*n + 10, 1, CV_64F);
G = cv::Mat::zeros(6*n + 10, 1, CV_64F);
for (int i = 0, j = 0; i < (int)idx.size(); i++)
{
if (idx[i])
{
G.at<double>(i) = tmp.at<double>(j++);
}
}
}
void cv::omnidir::internal::fillFixedStereo(Mat& G, int flags, int n)
{
Mat tmp = G.clone();
std::vector<int> idx(6*(n+1)+20, 1);
flags2idxStereo(flags, idx, n);
G.release();
G.create(6 * (n+1) + 20, 1, CV_64F);
G = cv::Mat::zeros(6* (n + 1) + 20, 1, CV_64F);
for (int i = 0,j=0; i < (int)idx.size(); i++)
{
if (idx[i])
{
G.at<double>(i) = tmp.at<double>(j++);
}
}
}
double cv::omnidir::internal::findMedian(const Mat& row)
{
CV_Assert(!row.empty() && row.rows == 1 && row.type() == CV_64F);
Mat tmp = row.clone();
cv::sort(tmp, tmp, 0);
// take the middle element for an odd count, the mean of the two middle elements for an even count
if((int)tmp.total() % 2 != 0)
return tmp.at<double>((int)tmp.total() / 2);
else
return 0.5 * (tmp.at<double>((int)tmp.total() / 2) + tmp.at<double>((int)tmp.total()/2 - 1));
}
cv::Vec3d cv::omnidir::internal::findMedian3(InputArray mat)
{
CV_Assert(mat.depth() == CV_64F && mat.getMat().rows == 1);
Mat M = Mat(mat.getMat().t()).reshape(1).t();
return Vec3d(findMedian(M.row(0)), findMedian(M.row(1)), findMedian(M.row(2)));
}
void cv::omnidir::stereoRectify(InputArray R, InputArray T, OutputArray R1, OutputArray R2)
{
CV_Assert((R.size() == Size(3,3) || R.total() == 3) && (R.depth() == CV_32F || R.depth() == CV_64F));
CV_Assert(T.total() == 3 && (T.depth() == CV_32F || T.depth() == CV_64F));
Mat _R, _T;
if (R.size() == Size(3, 3))
{
R.getMat().convertTo(_R, CV_64F);
}
else if (R.total() == 3)
{
Rodrigues(R.getMat(), _R);
_R.convertTo(_R, CV_64F);
}
T.getMat().reshape(1, 3).convertTo(_T, CV_64F);
R1.create(3, 3, CV_64F);
Mat _R1 = R1.getMat();
R2.create(3, 3, CV_64F);
Mat _R2 = R2.getMat();
Mat R21 = _R.t();
Mat T21 = -_R.t() * _T;
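// Build the rectifying rotation from the baseline direction: e1 is the unit
// baseline, e2 is orthogonal to e1 with zero z component, e3 = e1 x e2.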
Mat e1, e2, e3;
e1 = T21.t() / norm(T21);
e2 = Mat(Matx13d(T21.at<double>(1)*-1, T21.at<double>(0), 0.0));
e2 = e2 / norm(e2);
e3 = e1.cross(e2);
e3 = e3 / norm(e3);
e1.copyTo(_R1.row(0));
e2.copyTo(_R1.row(1));
e3.copyTo(_R1.row(2));
_R2 = R21 * _R1;
}
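// Intersection of two index sets: inter1/inter2 receive the positions of the
// common ids inside idx1/idx2, inter_ori the common ids themselves.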
void cv::omnidir::internal::getInterset(InputArray idx1, InputArray idx2, OutputArray inter1, OutputArray inter2,
OutputArray inter_ori)
{
Mat _idx1 = idx1.getMat();
Mat _idx2 = idx2.getMat();
int n1 = (int)idx1.total();
int n2 = (int)idx2.total();
std::vector<int> _inter1, _inter2, _inter_ori;
for (int i = 0; i < n1; ++i)
{
for (int j = 0; j < n2; ++j)
{
if(_idx1.at<int>(i) == _idx2.at<int>(j))
{
_inter1.push_back(i);
_inter2.push_back(j);
_inter_ori.push_back(_idx1.at<int>(i));
}
}
}
inter1.create(1, (int)_inter1.size(), CV_32S);
inter2.create(1, (int)_inter2.size(), CV_32S);
inter_ori.create(1, (int)_inter_ori.size(), CV_32S);
for (int i = 0; i < (int)_inter1.size(); ++i)
{
inter1.getMat().at<int>(i) = _inter1[i];
}
for (int i = 0; i < (int)_inter2.size(); ++i)
{
inter2.getMat().at<int>(i) = _inter2[i];
}
for (int i = 0; i < (int)_inter_ori.size(); ++i)
{
inter_ori.getMat().at<int>(i) = _inter_ori[i];
}
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2014, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_CCALIB_PRECOMP__
#define __OPENCV_CCALIB_PRECOMP__
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <vector>
#endif
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2015, Baisheng Lai (laibaisheng@gmail.com), Zhejiang University,
// all rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
/**
* This module was accepted as a GSoC 2015 project for OpenCV, authored by
* Baisheng Lai, mentored by Bo Li.
* This module implements feature-point extraction from a "random" pattern image.
* It returns 3D object points and 2D image pixels that can be used for calibration.
* Compared with a traditional calibration object such as a chessboard, it is useful
* when the pattern is partly occluded or only a part of the pattern can be observed
* in multiple-camera calibration.
* For details, refer to the paper:
* B. Li, L. Heng, K. Kevin and M. Pollefeys, "A Multiple-Camera System
* Calibration Toolbox Using A Feature Descriptor-Based Calibration
* Pattern", in IROS 2013.
*/
#include "precomp.hpp"
#include "opencv2/randomPatten.hpp"
#include <iostream>
using namespace cv;
using namespace std;
randomPatternCornerFinder::randomPatternCornerFinder(float patternWidth, float patternHeight,
int nminiMatch, int depth, int showExtraction, Ptr<FeatureDetector> detector, Ptr<DescriptorExtractor> descriptor,
Ptr<DescriptorMatcher> matcher)
{
_patternHeight = patternHeight;
_patternWidth = patternWidth;
_nminiMatch = nminiMatch;
_objectPonits.resize(0);
_imagePoints.resize(0);
_depth = depth;
_detector = detector;
_descriptor = descriptor;
_matcher = matcher;
_showExtraction = showExtraction;
}
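// A minimal usage sketch (not part of the module); the file names are
// hypothetical, loadPattern() is assumed to be the member that stores the
// pattern image and its features, and getObjectPoints()/getImagePoints()
// are assumed accessors for the extracted correspondences.
/*
Mat pattern = imread("random_pattern.png", IMREAD_GRAYSCALE); // hypothetical file
randomPatternCornerFinder finder(29.7f, 21.0f, 20, CV_32F, 0); // pattern size in physical units (assumed)
finder.loadPattern(pattern);
std::vector<Mat> views;
views.push_back(imread("view0.png", IMREAD_GRAYSCALE)); // hypothetical file
finder.computeObjectImagePoints(views);
std::vector<Mat> objectPoints = finder.getObjectPoints();
std::vector<Mat> imagePoints = finder.getImagePoints();
*/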
//void randomPatternCornerFinder::computeObjectImagePoints2(std::vector<cv::Mat> inputImages)
//{
// int nImag = (int)inputImages.size();
//
// Mat descriptorPattern = _descriptorPattern;
// std::vector<cv::KeyPoint> keypointsPattern = _keypointsPattern;
// Mat keypointsPatternLocation;
//
// //Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-L1");
// for (int i = 0; i < nImag; ++i)
// {
// CV_Assert(inputImages[i].type() == CV_8UC1);
//
// Mat image = inputImages[i], imageEquHist;
// equalizeHist(image, imageEquHist);
//
// // key points for image
// std::vector<cv::KeyPoint> keypointsImage, keypointsImage1, keypointsImage2;
// Mat keypointsImageLocation;
// Mat descriptorImage, descriptorImage1, descriptorImage2;
//
// _detector->detect(image, keypointsImage1);
// _detector->detect(imageEquHist, keypointsImage2);
// _descriptor->compute(image, keypointsImage1, descriptorImage1);
// _descriptor->compute(imageEquHist, keypointsImage2, descriptorImage2);
//
// // only CV_32F type is support for match
// descriptorImage1.convertTo(descriptorImage1, CV_32F);
// descriptorImage2.convertTo(descriptorImage2, CV_32F);
//
// // match with pattern
// std::vector<DMatch> matchesImgtoPat, matchesImgtoPat1, matchesImgtoPat2;
//
// crossCheckMatching(this->_matcher, descriptorImage1, descriptorPattern, matchesImgtoPat1, 1);
// crossCheckMatching(this->_matcher, descriptorImage2, descriptorPattern, matchesImgtoPat2, 1);
//
// if ((int)matchesImgtoPat1.size() > (int)matchesImgtoPat2.size())
// {
// matchesImgtoPat = matchesImgtoPat1;
// keypointsImage = keypointsImage1;
// }
// else
// {
// matchesImgtoPat = matchesImgtoPat2;
// keypointsImage = keypointsImage2;
// }
//
// keyPoints2MatchedLocation(keypointsImage, keypointsPattern, matchesImgtoPat,
// keypointsImageLocation, keypointsPatternLocation);
//
// Mat img_corr;
//
// // innerMask is CV_8U type
// Mat innerMask1, innerMask2;
//
// // draw raw correspondence
// if (this->_showExtraction)
// {
// drawCorrespondence(inputImages[i], keypointsImage, this->_patternImage, keypointsPattern, matchesImgtoPat,
// innerMask1, innerMask2);
// }
//
// // outlier remove
// findFundamentalMat(keypointsImageLocation, keypointsPatternLocation,
// FM_7POINT, 1, 0.99, innerMask1);
// getFilteredLocation(keypointsImageLocation, keypointsPatternLocation, innerMask1);
//
// findHomography(keypointsImageLocation, keypointsPatternLocation, RANSAC, 3, innerMask2);
// getFilteredLocation(keypointsImageLocation, keypointsPatternLocation, innerMask2);
//
// // draw filtered correspondence
// if (this->_showExtraction)
// {
// drawCorrespondence(inputImages[i], keypointsImage, this->_patternImage, keypointsPattern, matchesImgtoPat,
// innerMask1, innerMask2);
// }
//
// if((int)keypointsImageLocation.total() > _nminiMatch)
// {
// getObjectImagePoints(keypointsImageLocation, keypointsPatternLocation);
// }
// }
//}
void randomPatternCornerFinder::computeObjectImagePoints(std::vector<cv::Mat> inputImages)
{
CV_Assert(!_patternImage.empty());
CV_Assert(inputImages.size() > 0);
int nImages = (int)inputImages.size();
std::vector<Mat> imageObjectPoints;
for (int i = 0; i < nImages; ++i)
{
imageObjectPoints = computeObjectImagePointsForSingle(inputImages[i]);
if ((int)imageObjectPoints[0].total() > _nminiMatch)
{
_imagePoints.push_back(imageObjectPoints[0]);
_objectPonits.push_back(imageObjectPoints[1]);
}
}
}
void randomPatternCornerFinder::keyPoints2MatchedLocation(const std::vector<cv::KeyPoint>& imageKeypoints,
const std::vector<cv::KeyPoint>& patternKeypoints, const std::vector<cv::DMatch> matchces,
cv::Mat& matchedImagelocation, cv::Mat& matchedPatternLocation)
{
matchedImagelocation.release();
matchedPatternLocation.release();
std::vector<Vec2d> image, pattern;
for (int i = 0; i < (int)matchces.size(); ++i)
{
Point2f imgPt = imageKeypoints[matchces[i].queryIdx].pt;
Point2f patPt = patternKeypoints[matchces[i].trainIdx].pt;
image.push_back(Vec2d(imgPt.x, imgPt.y));
pattern.push_back(Vec2d(patPt.x, patPt.y));
}
Mat(image).convertTo(matchedImagelocation, CV_64FC2);
Mat(pattern).convertTo(matchedPatternLocation, CV_64FC2);
}
void randomPatternCornerFinder::getFilteredLocation(cv::Mat& imageKeypoints, cv::Mat& patternKeypoints, const cv::Mat mask)
{
Mat tmpKeypoint, tmpPattern;
imageKeypoints.copyTo(tmpKeypoint);
patternKeypoints.copyTo(tmpPattern);
imageKeypoints.release();
patternKeypoints.release();
std::vector<cv::Vec2d> vecKeypoint, vecPattern;
for(int i = 0; i < (int)mask.total(); ++i)
{
if (mask.at<uchar>(i) == 1)
{
vecKeypoint.push_back(tmpKeypoint.at<Vec2d>(i));
vecPattern.push_back(tmpPattern.at<Vec2d>(i));
}
}
Mat(vecKeypoint).convertTo(imageKeypoints, CV_64FC2);
Mat(vecPattern).convertTo(patternKeypoints, CV_64FC2);
}
void randomPatternCornerFinder::getObjectImagePoints(const cv::Mat& imageKeypoints, const cv::Mat& patternKeypoints)
{
Mat imagePoints_i, objectPoints_i;
int imagePointsType = CV_MAKETYPE(_depth, 2);
int objectPointsType = CV_MAKETYPE(_depth, 3);
imageKeypoints.convertTo(imagePoints_i, imagePointsType);
_imagePoints.push_back(imagePoints_i);
    // compare the matrix type (not the element count) and convert into a local
    // copy, since patternKeypoints is passed as const
    Mat patternPoints = patternKeypoints;
    if (patternPoints.type() != CV_64FC2)
        patternPoints.convertTo(patternPoints, CV_64FC2);
    std::vector<Vec3d> objectPoints;
    for (int i = 0; i < (int)patternPoints.total(); ++i)
    {
        double x = patternPoints.at<Vec2d>(i)[0];
        double y = patternPoints.at<Vec2d>(i)[1];
        x = x / _patternImageSize.width * _patternWidth;
        y = y / _patternImageSize.height * _patternHeight;
        objectPoints.push_back(Vec3d(x, y, 0));
    }
Mat(objectPoints).convertTo(objectPoints_i, objectPointsType);
_objectPonits.push_back(objectPoints_i);
}
void randomPatternCornerFinder::crossCheckMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
const Mat& descriptors1, const Mat& descriptors2,
std::vector<DMatch>& filteredMatches12, int knn )
{
filteredMatches12.clear();
std::vector<std::vector<DMatch> > matches12, matches21;
descriptorMatcher->knnMatch( descriptors1, descriptors2, matches12, knn );
descriptorMatcher->knnMatch( descriptors2, descriptors1, matches21, knn );
for( size_t m = 0; m < matches12.size(); m++ )
{
bool findCrossCheck = false;
for( size_t fk = 0; fk < matches12[m].size(); fk++ )
{
DMatch forward = matches12[m][fk];
for( size_t bk = 0; bk < matches21[forward.trainIdx].size(); bk++ )
{
DMatch backward = matches21[forward.trainIdx][bk];
if( backward.trainIdx == forward.queryIdx )
{
filteredMatches12.push_back(forward);
findCrossCheck = true;
break;
}
}
if( findCrossCheck ) break;
}
}
}
void randomPatternCornerFinder::drawCorrespondence(const Mat& image1, const std::vector<cv::KeyPoint> keypoint1,
const Mat& image2, const std::vector<cv::KeyPoint> keypoint2, const std::vector<cv::DMatch> matchces,
const Mat& mask1, const Mat& mask2)
{
Mat img_corr;
if(mask1.empty())
{
drawMatches(image1, keypoint1, image2, keypoint2, matchces, img_corr);
}
else if(!mask1.empty() && mask2.empty())
{
std::vector<cv::DMatch> matchesFilter;
for (int i = 0; i < (int)mask1.total(); ++i)
{
if (mask1.at<uchar>(i) == 1)
{
matchesFilter.push_back(matchces[i]);
}
}
drawMatches(image1, keypoint1, image2, keypoint2, matchesFilter, img_corr);
}
else if(!mask1.empty() && !mask2.empty())
{
std::vector<cv::DMatch> matchesFilter;
int j = 0;
for (int i = 0; i < (int)mask1.total(); ++i)
{
if (mask1.at<uchar>(i) == 1)
{
if (mask2.at<uchar>(j) == 1)
{
matchesFilter.push_back(matchces[i]);
}
j++;
}
}
drawMatches(image1, keypoint1, image2, keypoint2, matchesFilter, img_corr);
}
imshow("correspondence", img_corr);
waitKey(0);
}
std::vector<cv::Mat> randomPatternCornerFinder::getObjectPoints()
{
return _objectPonits;
}
std::vector<cv::Mat> randomPatternCornerFinder::getImagePoints()
{
return _imagePoints;
}
void randomPatternCornerFinder::loadPattern(cv::Mat patternImage)
{
_patternImage = patternImage.clone();
if (_patternImage.type()!= CV_8U)
_patternImage.convertTo(_patternImage, CV_8U);
_patternImageSize = _patternImage.size();
_detector->detect(patternImage, _keypointsPattern);
_descriptor->compute(patternImage, _keypointsPattern, _descriptorPattern);
_descriptorPattern.convertTo(_descriptorPattern, CV_32F);
}
std::vector<cv::Mat> randomPatternCornerFinder::computeObjectImagePointsForSingle(cv::Mat inputImage)
{
CV_Assert(!_patternImage.empty());
std::vector<cv::Mat> r(2);
Mat descriptorImage1, descriptorImage2,descriptorImage;
std::vector<cv::KeyPoint> keypointsImage1, keypointsImage2, keypointsImage;
if (inputImage.type()!=CV_8U)
{
inputImage.convertTo(inputImage, CV_8U);
}
Mat imageEquHist;
equalizeHist(inputImage, imageEquHist);
_detector->detect(inputImage, keypointsImage1);
_descriptor->compute(inputImage, keypointsImage1, descriptorImage1);
_detector->detect(imageEquHist, keypointsImage2);
_descriptor->compute(imageEquHist, keypointsImage2, descriptorImage2);
descriptorImage1.convertTo(descriptorImage1, CV_32F);
descriptorImage2.convertTo(descriptorImage2, CV_32F);
// match with pattern
std::vector<DMatch> matchesImgtoPat, matchesImgtoPat1, matchesImgtoPat2;
cv::Mat keypointsImageLocation, keypointsPatternLocation;
crossCheckMatching(this->_matcher, descriptorImage1, this->_descriptorPattern, matchesImgtoPat1, 5);
crossCheckMatching(this->_matcher, descriptorImage2, this->_descriptorPattern, matchesImgtoPat2, 5);
if ((int)matchesImgtoPat1.size() > (int)matchesImgtoPat2.size())
{
matchesImgtoPat = matchesImgtoPat1;
keypointsImage = keypointsImage1;
}
else
{
matchesImgtoPat = matchesImgtoPat2;
keypointsImage = keypointsImage2;
}
keyPoints2MatchedLocation(keypointsImage, this->_keypointsPattern, matchesImgtoPat,
keypointsImageLocation, keypointsPatternLocation);
Mat img_corr;
// innerMask is CV_8U type
Mat innerMask1, innerMask2;
// draw raw correspondence
if(this->_showExtraction)
{
drawCorrespondence(inputImage, keypointsImage, _patternImage, _keypointsPattern, matchesImgtoPat,
innerMask1, innerMask2);
}
// outlier remove
findFundamentalMat(keypointsImageLocation, keypointsPatternLocation,
FM_RANSAC, 1, 0.995, innerMask1);
getFilteredLocation(keypointsImageLocation, keypointsPatternLocation, innerMask1);
findHomography(keypointsImageLocation, keypointsPatternLocation, RANSAC, 10*inputImage.cols/1000, innerMask2);
getFilteredLocation(keypointsImageLocation, keypointsPatternLocation, innerMask2);
// draw filtered correspondence
if (this->_showExtraction)
{
drawCorrespondence(inputImage, keypointsImage, _patternImage, _keypointsPattern, matchesImgtoPat,
innerMask1, innerMask2);
}
std::vector<Vec3d> objectPoints;
int imagePointsType = CV_MAKETYPE(_depth, 2);
int objectPointsType = CV_MAKETYPE(_depth, 3);
keypointsImageLocation.convertTo(r[0], imagePointsType);
for (int i = 0; i < (int)keypointsPatternLocation.total(); ++i)
{
double x = keypointsPatternLocation.at<Vec2d>(i)[0];
double y = keypointsPatternLocation.at<Vec2d>(i)[1];
x = x / _patternImageSize.width * _patternWidth;
y = y / _patternImageSize.height * _patternHeight;
objectPoints.push_back(Vec3d(x, y, 0));
}
Mat(objectPoints).convertTo(r[1], objectPointsType);
return r;
}
randomPatternGenerator::randomPatternGenerator(int imageWidth, int imageHeight)
{
_imageWidth = imageWidth;
_imageHeight = imageHeight;
}
void randomPatternGenerator::generatePattern()
{
    Mat pattern = Mat(_imageHeight, _imageWidth, CV_32F, Scalar(0.));
    int m = 5;
    int count = 0;
    // accumulate normalized Gaussian noise at doubling resolutions so that the
    // pattern contains features at multiple scales
    while(m < _imageWidth)
    {
        int n = (int)std::floor(double(_imageHeight) / double(_imageWidth) * double(m)) + 1;
        Mat r = Mat(n, m, CV_32F);
        cv::randn(r, Scalar::all(0), Scalar::all(1));
        cv::resize(r, r, Size(_imageWidth, _imageHeight));
        double min_r, max_r;
        minMaxLoc(r, &min_r, &max_r);
        // normalize this octave to [0, 1] before accumulating
        r = (r - (float)min_r) / float(max_r - min_r);
        pattern += r;
        count += 1;
        m *= 2;
    }
    // rescale the accumulated noise to 8-bit and boost contrast
    pattern = pattern / count * 255;
    pattern.convertTo(pattern, CV_8U);
    equalizeHist(pattern, pattern);
    pattern.copyTo(_pattern);
}
cv::Mat randomPatternGenerator::getPattern()
{
return _pattern;
}
\ No newline at end of file
Omnidirectional Camera Calibration {#tutorial_omnidir_calib_main}
======================
This module includes calibration, rectification and stereo reconstruction of omnidirectional cameras. The camera model is described in this paper:
*C. Mei and P. Rives, Single view point omnidirectional camera calibration from planar grids, in ICRA 2007.*
The model is capable of modeling catadioptric cameras and fisheye cameras, both of which may have a very large field of view.
The implementation of the calibration part is based on Li's calibration toolbox:
......@@ -14,7 +14,7 @@ The implementation of the calibration part is based on Li's calibration toolbox:
This tutorial will introduce the following parts of the omnidirectional camera calibration module:
- calibrate a single camera.
- calibrate a system with multiple cameras.
- calibrate a stereo pair of cameras.
- rectify images so that large distortion is removed.
- reconstruct 3D from two stereo images, with large field of view.
- comparison with fisheye model in opencv/calib3d/
......@@ -22,61 +22,164 @@ This tutorial will introduce the following parts of omnidirectional camera calib
Single Camera Calibration
---------------------
The first step to calibrate a camera is to get a calibration pattern and take some photos. Several kinds of patterns are supported by OpenCV, like checkerboard and circle grid. A new pattern named random pattern can also be used, see opencv_contrib/modules/ccalib for more details.
The next step is to extract corners from the calibration pattern. For a checkerboard, use the OpenCV function ```cv::findChessboardCorners```; for a circle grid, use ```cv::findCirclesGrid```; for a random pattern, use the ```randomPatternCornerFinder``` class in opencv_contrib/modules/ccalib/src/randomPattern.hpp. Save the positions of corners in images in a variable like ```imagePoints```. The type of ```imagePoints``` may be ```std::vector<std::vector<cv::Vec2f>>```, where the outer vector stores the corners of each frame and the inner vector stores the corners of an individual frame. The type can also be ```std::vector<cv::Mat>```, where each ```cv::Mat``` is of type ```CV_32FC2```. A sketch of the random-pattern case follows.
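Below is a minimal sketch of the random-pattern case. The class name and its argument order follow the implementation in this module; the AKAZE detector/descriptor, the BruteForce-L1 matcher, the file names and the pattern's physical size are assumptions for illustration, not values mandated by the module.

```
#include <opencv2/opencv.hpp>
#include "randomPattern.hpp"  // from opencv_contrib/modules/ccalib/src

int main()
{
    // detector/descriptor/matcher choices are assumptions, not requirements
    cv::Ptr<cv::FeatureDetector> detector = cv::AKAZE::create();
    cv::Ptr<cv::DescriptorExtractor> descriptor = cv::AKAZE::create();
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-L1");

    // assumed pattern physical size (width 600, height 850, in your unit),
    // at least 20 matches per image, CV_32F point depth, no debug windows
    randomPatternCornerFinder finder(600, 850, 20, CV_32F, 0, detector, descriptor, matcher);
    finder.loadPattern(cv::imread("pattern.png", cv::IMREAD_GRAYSCALE));

    std::vector<cv::Mat> images;  // photos of the printed pattern
    images.push_back(cv::imread("frame0.jpg", cv::IMREAD_GRAYSCALE));
    finder.computeObjectImagePoints(images);

    std::vector<cv::Mat> imagePoints  = finder.getImagePoints();   // 2-channel, one Mat per image
    std::vector<cv::Mat> objectPoints = finder.getObjectPoints();  // 3-channel, z = 0 plane
    return 0;
}
```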
Also, the corresponding 3D points in the world (pattern) coordinate frame are required. You can compute them yourself if you know the physical size of your pattern. Save the 3D points in ```objectPoints```; similar to ```imagePoints```, it can be ```std::vector<std::vector<cv::Vec3f>>``` or ```std::vector<cv::Mat>```, where each ```cv::Mat``` is of type ```CV_32FC3```. Note that ```objectPoints``` and ```imagePoints``` must have the same size because they correspond to each other, as in the sketch below.
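Where the physical size is known, the 3D points can be generated directly. A minimal sketch for a checkerboard (the board dimensions and square size below are assumed example values):

```
// inner-corner grid of the board and the square edge length are assumptions
cv::Size boardSize(9, 6);
float squareSize = 0.025f;  // 25 mm squares
std::vector<cv::Vec3f> corners3d;
for (int r = 0; r < boardSize.height; ++r)
    for (int c = 0; c < boardSize.width; ++c)
        corners3d.push_back(cv::Vec3f(c * squareSize, r * squareSize, 0.f));

// the board is identical in every frame, so push one copy per image
std::vector<std::vector<cv::Vec3f> > objectPoints(imagePoints.size(), corners3d);
```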
Another thing you should input is the size of the images. The file opencv_contrib/modules/ccalib/tutorial/data/omni_calib_data.xml stores an example of objectPoints, imagePoints and imageSize. Use the following code to load them:
```
cv::FileStorage fs("omni_calib_data.xml", cv::FileStorage::READ);
std::vector<cv::Mat> objectPoints, imagePoints;
cv::Size imgSize;
fs["objectPoints"] >> objectPoints;
fs["imagePoints"] >> imagePoints;
fs["imageSize"] >> imgSize;
```
Then define some variables to store the output parameters and run the calibration function like:
```
cv::Mat K, xi, D, idx;
int flags = 0;
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 0.0001);
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::omnidir::calibrate(objectPoints, imagePoints, imgSize, K, xi, D, rvecs, tvecs, flags, criteria, idx);
```
```K```, ```xi```, ```D``` are internal parameters and ```rvecs```, ```tvecs``` are external parameters that store the poses of the patterns. All of them have a depth of ```CV_64F```. ```xi``` is the single scalar parameter of Mei's model. ```idx``` is a ```CV_32S``` Mat that stores the indices of the images that are actually used in calibration; some images fail in the initialization step, so they are not used in the final optimization. The returned value *rms* is the root mean square of the reprojection errors.
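As a hedged sketch of how ```idx``` relates the outputs to the input images (assuming a 1xN layout of the ```CV_32S``` Mat described above):

```
// rvecs[i] and tvecs[i] describe the pattern pose in image idx.at<int>(i)
for (int i = 0; i < (int)idx.total(); ++i)
{
    int imageIdx = idx.at<int>(i);
    std::cout << "image " << imageIdx << " was kept for calibration" << std::endl;
}
```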
The variable ```imgSize``` of type ```cv::Size``` is the size of the images. *flags* is an enumeration of optional calibration features, including:
- cv::omnidir::CALIB_FIX_SKEW
- cv::omnidir::CALIB_FIX_K1
- cv::omnidir::CALIB_FIX_K2
- cv::omnidir::CALIB_FIX_P1
- cv::omnidir::CALIB_FIX_P2
- cv::omnidir::CALIB_FIX_XI
- cv::omnidir::CALIB_FIX_GAMMA
- cv::omnidir::CALIB_FIX_CENTER
- cv::omnidir::CALIB_USE_GUESS: initialize camera parameters with the input K, xi, D, rvecs, tvecs.

You can specify ```flags``` to fix parameters during calibration. Use the 'plus' operator to set multiple features. For example, ```CALIB_FIX_SKEW+CALIB_FIX_K1``` means fixing skew and K1.
```criteria``` is the stopping criteria of the optimization; set it to, for example, ```cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 0.0001)```, which means stopping after 200 iterations or when the relative change is smaller than 0.0001.
Stereo Calibration
--------------------------
Stereo calibration calibrates two cameras together. The output parameters include the camera parameters of both cameras and their relative pose. To recover the relative pose, the two cameras must observe the same pattern at the same time, so the ```objectPoints``` of the two cameras are the same.
Now detect image corners for both cameras as discussed above to get ```imagePoints1``` and ```imagePoints2```. Then compute the shared ```objectPoints```.
An example of stereo calibration data is stored in opencv_contrib/modules/ccalib/tutorial/data/omni_stereocalib_data.xml. Load the data by
```
cv::FileStorage fs("omni_stereocalib_data.xml", cv::FileStorage::READ);
std::vector<cv::Mat> objectPoints, imagePoints1, imagePoints2;
cv::Size imgSize1, imgSize2;
fs["objectPoints"] >> objectPoints;
fs["imagePoints1"] >> imagePoints1;
fs["imagePoints2"] >> imagePoints2;
fs["imageSize1"] >> imgSize1;
fs["imageSize2"] >> imgSize2;
```
Then do stereo calibration by
```
cv::Mat K1, K2, xi1, xi2, D1, D2, idx;
int flags = 0;
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 0.0001);
std::vector<cv::Mat> rvecsL, tvecsL;
cv::Mat rvec, tvec;
double rms = cv::omnidir::stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imgSize1, imgSize2, K1, xi1, D1, K2, xi2, D2, rvec, tvec, rvecsL, tvecsL, flags, criteria, idx);
```
Here ```rvec``` and ```tvec``` are the rotation and translation between the first and the second camera, while ```rvecsL``` and ```tvecsL``` are the transforms between the patterns and the first camera. The sketch below converts them to the matrix form used later.
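For later use (the stereo reconstruction below expects matrices ```R``` and ```T```), the rotation vector can be converted with ```cv::Rodrigues```; a small sketch:

```
cv::Mat R;
cv::Rodrigues(rvec, R);  // 3x3 rotation matrix from the 1x3 rotation vector
cv::Mat T = tvec;        // translation from the first to the second camera
```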
Image Rectification
---------------------------
Omnidirectional images have very large distortion, which makes them hard to interpret directly. For a better view, rectification can be applied if the camera parameters are known. Here is an example of an omnidirectional image with 360 degrees of horizontal field of view.
![image](img/sample.jpg)
After rectification, a perspective-like view is generated. Here is one example of running image rectification in this module:
```
cv::omnidir::undistortImage(distorted, undistorted, K, D, xi, flags, Knew, new_size);
```
The variables *distorted* and *undistorted* are the original image and the rectified image respectively. *K*, *D*, *xi* are camera parameters. *Knew* and *new_size* are the camera matrix and image size of the rectified image. *flags* is the rectification type; it can be:
- RECTIFY_PERSPECTIVE: rectify to perspective images, which will lose some field of view.
- RECTIFY_CYLINDRICAL: rectify to cylindrical images, which preserve the whole view.
- RECTIFY_STEREOGRAPHIC: rectify to stereographic images, which may lose a little of the view.
- RECTIFY_LONGLATI: rectify to a longitude-latitude map like a world map of the earth. This rectification can be used for stereo reconstruction but may not be friendly for viewing. This map is described in the paper:
*Li S. Binocular spherical stereo[J]. Intelligent Transportation Systems, IEEE Transactions on, 2008, 9(4): 589-600.*
The following four images are the four types of rectified images described above:
![image](img/sample_rec_per.jpg)
![image](img/sample_rec_cyl.jpg)
![image](img/sample_rec_ste.jpg)
![image](img/sample_rec_log.jpg)
It can be observed that the perspective rectified image preserves only a small field of view and is not visually pleasing. Cylindrical rectification preserves the whole field of view and the scene looks unnatural only near the bottom center. The stereographic rectification has smaller distortion at the bottom center than the cylindrical one, but larger distortion elsewhere, and it cannot preserve the whole field of view. For images with very large distortion, longitude-latitude rectification does not give a visually good result, but it aligns the epipolar constraint along image rows so that stereo matching can be applied to omnidirectional images.
**Note**: To get a better result, you should carefully choose ```Knew```, and it is related to your camera. In general, a smaller focal length leads to a larger field of view and vice versa. Here are recommended settings.
For RECTIFY_PERSPECTIVE
```
Knew = Matx33f(new_size.width/4, 0, new_size.width/2,
0, new_size.height/4, new_size.height/2,
0, 0, 1);
```
For RECTIFY_CYLINDRICAL, RECTIFY_STEREOGRAPHIC, RECTIFY_LONGLATI
```
Knew = Matx33f(new_size.width/3.1415, 0, 0,
0, new_size.height/3.1415, 0,
0, 0, 1);
```
You may need to change the principal point ```(u0, v0)``` to get a better view. A hedged end-to-end sketch follows.
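Putting the pieces together, here is a sketch of rectifying to a longitude-latitude map with the recommended ```Knew```. The file name and output size are assumptions, and ```K```, ```D```, ```xi``` are taken from a previous calibration:

```
cv::Mat distorted = cv::imread("sample.jpg");  // assumed input file
cv::Mat undistorted;
cv::Size new_size(1024, 512);                  // example output size
cv::Matx33f Knew(new_size.width / 3.1415f, 0, 0,
                 0, new_size.height / 3.1415f, 0,
                 0, 0, 1);
cv::omnidir::undistortImage(distorted, undistorted, K, D, xi,
                            cv::omnidir::RECTIFY_LONGLATI, Knew, new_size);
```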
Stereo Reconstruction
------------------------------
Stereo reconstruction reconstructs 3D points from a calibrated stereo camera pair; it is a basic problem of computer vision. For omnidirectional cameras, however, it is less common because the large distortion makes it somewhat difficult. Conventional methods rectify the images to perspective ones and do stereo reconstruction on the perspective images, but the last section shows that rectifying to perspective images loses too much field of view, which wastes the advantage of an omnidirectional camera, i.e. its large field of view.
The first step of stereo reconstruction is stereo rectification, so that the epipolar lines become horizontal lines. Here we use longitude-latitude rectification to preserve the whole field of view; perspective rectification also works but is not recommended. The second step is stereo matching to get a disparity map. Finally, 3D points are generated from the disparity map.
The API of stereo reconstruction for omnidirectional cameras is ```omnidir::stereoReconstruct```. Here we use an example to show how it works.
First, calibrate a stereo pair of cameras as described above and get parameters like ```K1```, ```D1```, ```xi1```, ```K2```, ```D2```, ```xi2```, ```rvec```, ```tvec``` (used as ```R``` and ```T``` below, see the sketch in the stereo calibration section). Then read two images from the first and the second camera respectively, for instance ```img1``` and ```img2```, which are shown below.
![image](img/imgs.jpg)
Second, run ```omnidir::stereoReconstruct``` like:
```
cv::Size imgSize = img1.size();
int numDisparities = 16*5;
int SADWindowSize = 5;
cv::Mat disMap;
int flag = cv::omnidir::RECTIFY_LONGLATI;
int pointType = omnidir::XYZRGB;
// the range of theta is (0, pi) and the range of phi is (0, pi)
cv::Matx33d KNew(imgSize.width / 3.1415, 0, 0, 0, imgSize.height / 3.1415, 0, 0, 0, 1);
Mat imageRec1, imageRec2, pointCloud;
cv::omnidir::stereoReconstruct(img1, img2, K1, D1, xi1, K2, D2, xi2, R, T, flag, numDisparities, SADWindowSize, imageRec1, imageRec2, disMap, imgSize, KNew, pointCloud);
```
Here the variable ```flag``` indicates the rectification type; only ```RECTIFY_LONGLATI``` (recommended) and ```RECTIFY_PERSPECTIVE``` make sense. ```numDisparities``` is the maximum disparity value and ```SADWindowSize``` is the window size of ```cv::StereoSGBM```. ```pointType``` is a flag that defines the type of point cloud: with ```omnidir::XYZRGB``` each point is a 6-dimensional vector whose first three elements are the xyz coordinates and last three elements are the rgb color; with ```omnidir::XYZ``` each point is 3-dimensional and has only the xyz coordinates.
Moreover, ```imageRec1``` and ```imageRec2``` are rectified versions of the first and second images. Their epipolar lines have the same y-coordinate so that stereo matching becomes easy. Here is an example of them:
![image](img/lines.jpg)
It can be observed that they are well aligned. The variable ```disMap``` is the disparity map computed by ```cv::StereoSGBM``` from ```imageRec1``` and ```imageRec2```. The disparity map of the above two images is:
![image](img/disparity.jpg)
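If you want to inspect ```disMap``` yourself, note that ```cv::StereoSGBM``` produces fixed-point disparities scaled by 16. Assuming ```disMap``` is returned in that raw form (an assumption about this function's output, not something the tutorial states), it can be scaled for display like:

```
cv::Mat disp8;
// undo the x16 fixed-point scale of StereoSGBM (assumed raw output)
disMap.convertTo(disp8, CV_8U, 255.0 / (numDisparities * 16.0));
cv::imshow("disparity", disp8);
cv::waitKey(0);
```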
After we have the disparity, we can compute the 3D location of each pixel. The point cloud is stored in ```pointCloud```, which is a 3-channel or 6-channel ```cv::Mat```. We show the point cloud in the following image; a small export sketch follows it.
![image](img/pointCloud.jpg)
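A hedged sketch of exporting the cloud to a text file, assuming only what is stated above (6 values per point for ```omnidir::XYZRGB```; the reshape handles either a row or column layout of a continuous Mat, which is an assumption about the output):

```
#include <fstream>

// one row per point, 6 single-channel columns: x y z r g b
cv::Mat pts = pointCloud.reshape(1, (int)pointCloud.total());
pts.convertTo(pts, CV_32F);
std::ofstream out("pointCloud.txt");
for (int i = 0; i < pts.rows; ++i)
{
    for (int j = 0; j < pts.cols; ++j)
        out << pts.at<float>(i, j) << (j + 1 < pts.cols ? ' ' : '\n');
}
```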