opencv, commit c5d4ecae, authored Dec 15, 2015 by Vadim Pisarevsky
Merge pull request #5588 from T-Dunker:ScheimpflugModel
parents aee03cd5 6882c10b
Showing 9 changed files with 1144 additions and 95 deletions
- doc/opencv.bib (+10, -0)
- modules/calib3d/include/opencv2/calib3d.hpp (+71, -15)
- modules/calib3d/include/opencv2/calib3d/calib3d_c.h (+2, -0)
- modules/calib3d/src/calibration.cpp (+150, -60)
- modules/calib3d/test/test_cameracalibration_tilt.cpp (+700, -0)
- modules/core/include/opencv2/core/matx.hpp (+20, -0)
- modules/imgproc/include/opencv2/imgproc.hpp (+28, -8)
- ...gproc/include/opencv2/imgproc/detail/distortion_model.hpp (+123, -0)
- modules/imgproc/src/undistort.cpp (+40, -12)
doc/opencv.bib
...
...
@@ -415,6 +415,16 @@
pages = {2548--2555},
organization = {IEEE}
}
@ARTICLE{Louhichi07,
author = {Louhichi, H. and Fournel, T. and Lavest, J. M. and Ben Aissia, H.},
title = {Self-calibration of Scheimpflug cameras: an easy protocol},
year = {2007},
pages = {2616--2622},
journal = {Meas. Sci. Technol.},
volume = {18},
number = {8},
publisher = {IOP Publishing Ltd}
}
@ARTICLE{LibSVM,
author = {Chang, Chih-Chung and Lin, Chih-Jen},
title = {LIBSVM: a library for support vector machines},
...
...
modules/calib3d/include/opencv2/calib3d.hpp
...
...
@@ -99,14 +99,50 @@ v = f_y*y' + c_y
Real lenses usually have some distortion, mostly radial distortion and slight tangential distortion.
So, the above model is extended as:
\f[\begin{array}{l}
\vecthree{x}{y}{z} = R \vecthree{X}{Y}{Z} + t \\
x' = x/z \\
y' = y/z \\
x'' = x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4 \\
y'' = y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\
\text{where} \quad r^2 = x'^2 + y'^2 \\
u = f_x*x'' + c_x \\
v = f_y*y'' + c_y
\end{array}\f]
\f$k_1\f$, \f$k_2\f$, \f$k_3\f$, \f$k_4\f$, \f$k_5\f$, and \f$k_6\f$ are radial distortion coefficients. \f$p_1\f$ and \f$p_2\f$ are
tangential distortion coefficients. \f$s_1\f$, \f$s_2\f$, \f$s_3\f$, and \f$s_4\f$, are the thin prism distortion
coefficients. Higher-order coefficients are not considered in OpenCV.
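As a worked illustration of the model above, the following sketch (illustrative, not the OpenCV implementation) evaluates the distorted normalized coordinates for one point; the 12 coefficients are laid out in the vector order \f$(k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6, s_1, s_2, s_3, s_4)\f$:

```cpp
#include <cassert>
#include <cmath>

struct Distorted { double x, y; };

// Apply radial (rational), tangential and thin prism distortion to a
// normalized point (x', y'). k[0..11] follows the vector order above,
// so k_3 lives at index 4 and k_4..k_6 at indices 5..7.
Distorted distortPoint(double xp, double yp, const double k[12])
{
    double r2 = xp*xp + yp*yp, r4 = r2*r2, r6 = r4*r2;
    double cdist  = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;        // radial numerator
    double icdist = 1.0/(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);  // rational denominator
    double xpp = xp*cdist*icdist + 2*k[2]*xp*yp + k[3]*(r2 + 2*xp*xp)
               + k[8]*r2 + k[9]*r4;
    double ypp = yp*cdist*icdist + k[2]*(r2 + 2*yp*yp) + 2*k[3]*xp*yp
               + k[10]*r2 + k[11]*r4;
    return { xpp, ypp };
}
```

With all coefficients zero the mapping reduces to the identity, which is a quick sanity check on the formula.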
In some cases the image sensor may be tilted in order to focus an oblique plane in front of the
camera (Scheimpflug condition). This can be useful for particle image velocimetry (PIV) or
triangulation with a laser fan. The tilt causes a perspective distortion of \f$x''\f$ and
\f$y''\f$. This distortion can be modelled in the following way, see e.g. @cite Louhichi07.
\f[\begin{array}{l}
s\vecthree{x'''}{y'''}{1} =
\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
{0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1}\\
u = f_x*x''' + c_x \\
v = f_y*y''' + c_y
\end{array}\f]
where the matrix \f$R(\tau_x, \tau_y)\f$ is defined by two rotations with angular parameters
\f$\tau_x\f$ and \f$\tau_y\f$, respectively,
\f[
R(\tau_x, \tau_y) =
\vecthreethree{\cos(\tau_y)}{0}{-\sin(\tau_y)}{0}{1}{0}{\sin(\tau_y)}{0}{\cos(\tau_y)}
\vecthreethree{1}{0}{0}{0}{\cos(\tau_x)}{\sin(\tau_x)}{0}{-\sin(\tau_x)}{\cos(\tau_x)} =
\vecthreethree{\cos(\tau_y)}{\sin(\tau_y)\sin(\tau_x)}{-\sin(\tau_y)\cos(\tau_x)}
{0}{\cos(\tau_x)}{\sin(\tau_x)}
{\sin(\tau_y)}{-\cos(\tau_y)\sin(\tau_x)}{\cos(\tau_y)\cos(\tau_x)}.
\f]
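The tilt model above can be checked numerically; the following sketch (illustrative, not the OpenCV implementation) expands \f$R(\tau_x, \tau_y) = R_y(\tau_y) R_x(\tau_x)\f$ and the full projection matrix, which reduces to the identity when both angles are zero:

```cpp
#include <cassert>
#include <cmath>

struct Mat3 { double m[3][3]; };

// Build the tilt projection matrix applied to (x'', y'', 1): the left factor
// uses entries R13, R23, R33 of R(tau_x, tau_y), exactly as in the formula.
Mat3 tiltProjection(double tauX, double tauY)
{
    double cx = std::cos(tauX), sx = std::sin(tauX);
    double cy = std::cos(tauY), sy = std::sin(tauY);
    // R(tau_x, tau_y), expanded product of the two rotations
    double R[3][3] = {
        { cy,  sy*sx, -sy*cx },
        { 0,   cx,     sx    },
        { sy, -cy*sx,  cy*cx }
    };
    double L[3][3] = {
        { R[2][2], 0,       -R[0][2] },
        { 0,       R[2][2], -R[1][2] },
        { 0,       0,        1       }
    };
    Mat3 P = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int l = 0; l < 3; ++l)
                P.m[i][j] += L[i][l] * R[l][j];
    return P;
}
```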
In the functions below the coefficients are passed or returned as
\f[(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f]
vector. That is, if the vector contains four elements, it means that \f$k_3=0\f$ . The distortion
coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera
...
...
@@ -221,6 +257,8 @@ enum { CALIB_USE_INTRINSIC_GUESS = 0x00001,
       CALIB_RATIONAL_MODEL      = 0x04000,
       CALIB_THIN_PRISM_MODEL    = 0x08000,
       CALIB_FIX_S1_S2_S3_S4     = 0x10000,
+      CALIB_TILTED_MODEL        = 0x40000,
+      CALIB_FIX_TAUX_TAUY       = 0x80000,
       // only for stereo
       CALIB_FIX_INTRINSIC       = 0x00100,
       CALIB_SAME_FOCAL_LENGTH   = 0x00200,
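These flag values are bit masks, so a calibration model is requested by OR-ing its flag into the flags argument. A minimal sketch (values copied from the enum above; `makeFlags` is a hypothetical helper for illustration):

```cpp
#include <cassert>

// Values copied from the CALIB_* enum above.
enum { CALIB_RATIONAL_MODEL   = 0x04000,
       CALIB_THIN_PRISM_MODEL = 0x08000,
       CALIB_FIX_S1_S2_S3_S4  = 0x10000,
       CALIB_TILTED_MODEL     = 0x40000,
       CALIB_FIX_TAUX_TAUY    = 0x80000 };

// Hypothetical helper: request the rational and tilted sensor models together.
int makeFlags()
{
    return CALIB_RATIONAL_MODEL | CALIB_TILTED_MODEL;
}
```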
...
...
@@ -444,8 +482,8 @@ vector\<Point3f\> ), where N is the number of points in the view.
@param tvec Translation vector.
@param cameraMatrix Camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is empty, the zero distortion coefficients are assumed.
@param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or
vector\<Point2f\> .
@param jacobian Optional output 2Nx(10+\<numDistCoeffs\>) jacobian matrix of derivatives of image
...
...
@@ -483,8 +521,9 @@ CV_EXPORTS_W void projectPoints( InputArray objectPoints,
where N is the number of points. vector\<Point2f\> can also be passed here.
@param cameraMatrix Input camera matrix \f$A = \vecthreethree{fx}{0}{cx}{0}{fy}{cy}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are
assumed.
@param rvec Output rotation vector (see Rodrigues ) that, together with tvec , brings points from
the model coordinate system to the camera coordinate system.
@param tvec Output translation vector.
...
...
@@ -539,8 +578,9 @@ CV_EXPORTS_W bool solvePnP( InputArray objectPoints, InputArray imagePoints,
where N is the number of points. vector\<Point2f\> can also be passed here.
@param cameraMatrix Input camera matrix \f$A = \vecthreethree{fx}{0}{cx}{0}{fy}{cy}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are
assumed.
@param rvec Output rotation vector (see Rodrigues ) that, together with tvec , brings points from
the model coordinate system to the camera coordinate system.
@param tvec Output translation vector.
...
...
@@ -719,7 +759,8 @@ together.
and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be
initialized before calling the function.
@param distCoeffs Output vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements.
@param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view
(e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding
k-th translation vector (see the next output parameter description) brings the calibration pattern
...
...
@@ -755,6 +796,13 @@ set, the function computes and returns only 5 distortion coefficients.
- **CALIB_FIX_S1_S2_S3_S4** The thin prism distortion coefficients are not changed during
the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
- **CALIB_TILTED_MODEL** Coefficients tauX and tauY are enabled. For backward
compatibility, this extra flag should be explicitly specified to make the
calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
set, the function computes and returns only 5 distortion coefficients.
- **CALIB_FIX_TAUX_TAUY** The coefficients of the tilted sensor model are not changed during
the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
@param criteria Termination criteria for the iterative optimization algorithm.
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
...
...
@@ -839,8 +887,8 @@ any of CV_CALIB_USE_INTRINSIC_GUESS , CV_CALIB_FIX_ASPECT_RATIO ,
CV_CALIB_FIX_INTRINSIC , or CV_CALIB_FIX_FOCAL_LENGTH are specified, some or all of the
matrix components must be initialized. See the flags description for details.
@param distCoeffs1 Input/output vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. The output vector length depends on the flags.
@param cameraMatrix2 Input/output second camera matrix. The parameter is similar to cameraMatrix1
@param distCoeffs2 Input/output lens distortion coefficients for the second camera. The parameter
is similar to distCoeffs1 .
...
...
@@ -875,6 +923,13 @@ set, the function computes and returns only 5 distortion coefficients.
- **CALIB_FIX_S1_S2_S3_S4** The thin prism distortion coefficients are not changed during
the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
- **CALIB_TILTED_MODEL** Coefficients tauX and tauY are enabled. For backward
compatibility, this extra flag should be explicitly specified to make the
calibration function use the tilted sensor model and return 14 coefficients. If the flag is not
set, the function computes and returns only 5 distortion coefficients.
- **CALIB_FIX_TAUX_TAUY** The coefficients of the tilted sensor model are not changed during
the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
@param criteria Termination criteria for the iterative optimization algorithm.
The function estimates transformation between two cameras making a stereo pair. If you have a stereo
...
...
@@ -1058,8 +1113,9 @@ CV_EXPORTS_W float rectify3Collinear( InputArray cameraMatrix1, InputArray distC
@param cameraMatrix Input camera matrix.
@param distCoeffs Input vector of distortion coefficients
\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of
4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are
assumed.
@param imageSize Original image size.
@param alpha Free scaling parameter between 0 (when all the pixels in the undistorted image are
valid) and 1 (when all the source image pixels are retained in the undistorted image). See
...
...
modules/calib3d/include/opencv2/calib3d/calib3d_c.h
...
...
@@ -243,6 +243,8 @@ CVAPI(void) cvDrawChessboardCorners( CvArr* image, CvSize pattern_size,
#define CV_CALIB_RATIONAL_MODEL 16384
#define CV_CALIB_THIN_PRISM_MODEL 32768
#define CV_CALIB_FIX_S1_S2_S3_S4 65536
#define CV_CALIB_TILTED_MODEL 262144
#define CV_CALIB_FIX_TAUX_TAUY 524288
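The decimal C API defines are meant to mirror the hexadecimal C++ enum values; a quick consistency sketch (values copied from the lines above):

```cpp
#include <cassert>

// Decimal values copied from the CV_CALIB_* defines above.
#define CV_CALIB_THIN_PRISM_MODEL 32768
#define CV_CALIB_FIX_S1_S2_S3_S4  65536
#define CV_CALIB_TILTED_MODEL    262144
#define CV_CALIB_FIX_TAUX_TAUY   524288
```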
/* Finds intrinsic and extrinsic camera parameters
...
...
modules/calib3d/src/calibration.cpp
...
...
@@ -42,6 +42,7 @@
#include "precomp.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/imgproc/detail/distortion_model.hpp"
#include "opencv2/calib3d/calib3d_c.h"
#include <stdio.h>
#include <iterator>
...
...
@@ -523,7 +524,7 @@ CV_IMPL int cvRodrigues2( const CvMat* src, CvMat* dst, CvMat* jacobian )
}
-static const char* cvDistCoeffErr = "Distortion coefficients must be 1x4, 4x1, 1x5, 5x1, 1x8, 8x1, 1x12 or 12x1 floating-point vector";
+static const char* cvDistCoeffErr = "Distortion coefficients must be 1x4, 4x1, 1x5, 5x1, 1x8, 8x1, 1x12, 12x1, 1x14 or 14x1 floating-point vector";
 CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
                                const CvMat* r_vec,
...
...
@@ -542,7 +543,10 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
     int calc_derivatives;
     const CvPoint3D64f* M;
     CvPoint2D64f* m;
-    double r[3], R[9], dRdr[27], t[3], a[9], k[12] = {0,0,0,0,0,0,0,0,0,0,0,0}, fx, fy, cx, cy;
+    double r[3], R[9], dRdr[27], t[3], a[9], k[14] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0}, fx, fy, cx, cy;
+    Matx33d matTilt = Matx33d::eye();
+    Matx33d dMatTiltdTauX(0,0,0,0,0,0,0,-1,0);
+    Matx33d dMatTiltdTauY(0,0,0,0,0,0,1,0,0);
     CvMat _r, _t, _a = cvMat( 3, 3, CV_64F, a ), _k;
     CvMat matR = cvMat( 3, 3, CV_64F, R ), _dRdr = cvMat( 3, 9, CV_64F, dRdr );
     double *dpdr_p = 0, *dpdt_p = 0, *dpdk_p = 0, *dpdf_p = 0, *dpdc_p = 0;
...
...
@@ -646,12 +650,18 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
         (distCoeffs->rows*distCoeffs->cols*CV_MAT_CN(distCoeffs->type) != 4 &&
          distCoeffs->rows*distCoeffs->cols*CV_MAT_CN(distCoeffs->type) != 5 &&
          distCoeffs->rows*distCoeffs->cols*CV_MAT_CN(distCoeffs->type) != 8 &&
-         distCoeffs->rows*distCoeffs->cols*CV_MAT_CN(distCoeffs->type) != 12) )
+         distCoeffs->rows*distCoeffs->cols*CV_MAT_CN(distCoeffs->type) != 12 &&
+         distCoeffs->rows*distCoeffs->cols*CV_MAT_CN(distCoeffs->type) != 14) )
             CV_Error( CV_StsBadArg, cvDistCoeffErr );
         _k = cvMat( distCoeffs->rows, distCoeffs->cols,
                     CV_MAKETYPE(CV_64F, CV_MAT_CN(distCoeffs->type)), k );
         cvConvert( distCoeffs, &_k );
+        if (k[12] != 0 || k[13] != 0)
+        {
+            detail::computeTiltProjectionMatrix(k[12], k[13],
+                &matTilt, &dMatTiltdTauX, &dMatTiltdTauY);
+        }
     }
     if( dpdr )
...
...
@@ -728,8 +738,8 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
     {
         if( !CV_IS_MAT(dpdk) ||
             (CV_MAT_TYPE(dpdk->type) != CV_32FC1 && CV_MAT_TYPE(dpdk->type) != CV_64FC1) ||
-            dpdk->rows != count*2 || (dpdk->cols != 12 && dpdk->cols != 8 && dpdk->cols != 5 && dpdk->cols != 4 && dpdk->cols != 2) )
-            CV_Error( CV_StsBadArg, "dp/df must be 2Nx12, 2Nx8, 2Nx5, 2Nx4 or 2Nx2 floating-point matrix" );
+            dpdk->rows != count*2 || (dpdk->cols != 14 && dpdk->cols != 12 && dpdk->cols != 8 && dpdk->cols != 5 && dpdk->cols != 4 && dpdk->cols != 2) )
+            CV_Error( CV_StsBadArg, "dp/df must be 2Nx14, 2Nx12, 2Nx8, 2Nx5, 2Nx4 or 2Nx2 floating-point matrix" );
         if( !distCoeffs )
             CV_Error( CV_StsNullPtr, "distCoeffs is NULL while dpdk is not" );
...
...
@@ -753,7 +763,11 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
             double y = R[3]*X + R[4]*Y + R[5]*Z + t[1];
             double z = R[6]*X + R[7]*Y + R[8]*Z + t[2];
             double r2, r4, r6, a1, a2, a3, cdist, icdist2;
-            double xd, yd;
+            double xd, yd, xd0, yd0, invProj;
+            Vec3d vecTilt;
+            Vec3d dVecTilt;
+            Matx22d dMatTilt;
+            Vec2d dXdYd;
             z = z ? 1./z : 1;
             x *= z; y *= z;
...
...
@@ -766,8 +780,14 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
             a3 = r2 + 2*y*y;
             cdist = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
             icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
-            xd = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2 + k[9]*r4;
-            yd = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2 + k[11]*r4;
+            xd0 = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2 + k[9]*r4;
+            yd0 = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2 + k[11]*r4;
+            // additional distortion by projecting onto a tilt plane
+            vecTilt = matTilt*Vec3d(xd0, yd0, 1);
+            invProj = vecTilt(2) ? 1./vecTilt(2) : 1;
+            xd = invProj * vecTilt(0);
+            yd = invProj * vecTilt(1);
             m[i].x = xd*fx + cx;
             m[i].y = yd*fy + cy;
...
...
@@ -798,42 +818,75 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
                 }
                 dpdf_p += dpdf_step*2;
             }
+            for (int row = 0; row < 2; ++row)
+                for (int col = 0; col < 2; ++col)
+                    dMatTilt(row,col) = matTilt(row,col)*vecTilt(2) - matTilt(2,col)*vecTilt(row);
+            double invProjSquare = (invProj*invProj);
+            dMatTilt *= invProjSquare;
             if( dpdk_p )
             {
-                dpdk_p[0] = fx*x*icdist2*r2;
-                dpdk_p[1] = fx*x*icdist2*r4;
-                dpdk_p[dpdk_step] = fy*y*icdist2*r2;
-                dpdk_p[dpdk_step+1] = fy*y*icdist2*r4;
+                dXdYd = dMatTilt*Vec2d(x*icdist2*r2, y*icdist2*r2);
+                dpdk_p[0] = fx*dXdYd(0);
+                dpdk_p[dpdk_step] = fy*dXdYd(1);
+                dXdYd = dMatTilt*Vec2d(x*icdist2*r4, y*icdist2*r4);
+                dpdk_p[1] = fx*dXdYd(0);
+                dpdk_p[dpdk_step+1] = fy*dXdYd(1);
                 if( _dpdk->cols > 2 )
                 {
-                    dpdk_p[2] = fx*a1;
-                    dpdk_p[3] = fx*a2;
-                    dpdk_p[dpdk_step+2] = fy*a3;
-                    dpdk_p[dpdk_step+3] = fy*a1;
+                    dXdYd = dMatTilt*Vec2d(a1, a3);
+                    dpdk_p[2] = fx*dXdYd(0);
+                    dpdk_p[dpdk_step+2] = fy*dXdYd(1);
+                    dXdYd = dMatTilt*Vec2d(a2, a1);
+                    dpdk_p[3] = fx*dXdYd(0);
+                    dpdk_p[dpdk_step+3] = fy*dXdYd(1);
                     if( _dpdk->cols > 4 )
                     {
-                        dpdk_p[4] = fx*x*icdist2*r6;
-                        dpdk_p[dpdk_step+4] = fy*y*icdist2*r6;
+                        dXdYd = dMatTilt*Vec2d(x*icdist2*r6, y*icdist2*r6);
+                        dpdk_p[4] = fx*dXdYd(0);
+                        dpdk_p[dpdk_step+4] = fy*dXdYd(1);
                         if( _dpdk->cols > 5 )
                         {
-                            dpdk_p[5] = fx*x*cdist*(-icdist2)*icdist2*r2;
-                            dpdk_p[dpdk_step+5] = fy*y*cdist*(-icdist2)*icdist2*r2;
-                            dpdk_p[6] = fx*x*cdist*(-icdist2)*icdist2*r4;
-                            dpdk_p[dpdk_step+6] = fy*y*cdist*(-icdist2)*icdist2*r4;
-                            dpdk_p[7] = fx*x*cdist*(-icdist2)*icdist2*r6;
-                            dpdk_p[dpdk_step+7] = fy*y*cdist*(-icdist2)*icdist2*r6;
+                            dXdYd = dMatTilt*Vec2d(x*cdist*(-icdist2)*icdist2*r2, y*cdist*(-icdist2)*icdist2*r2);
+                            dpdk_p[5] = fx*dXdYd(0);
+                            dpdk_p[dpdk_step+5] = fy*dXdYd(1);
+                            dXdYd = dMatTilt*Vec2d(x*cdist*(-icdist2)*icdist2*r4, y*cdist*(-icdist2)*icdist2*r4);
+                            dpdk_p[6] = fx*dXdYd(0);
+                            dpdk_p[dpdk_step+6] = fy*dXdYd(1);
+                            dXdYd = dMatTilt*Vec2d(x*cdist*(-icdist2)*icdist2*r6, y*cdist*(-icdist2)*icdist2*r6);
+                            dpdk_p[7] = fx*dXdYd(0);
+                            dpdk_p[dpdk_step+7] = fy*dXdYd(1);
                             if( _dpdk->cols > 8 )
                             {
-                                dpdk_p[8] = fx*r2; //s1
-                                dpdk_p[9] = fx*r4; //s2
-                                dpdk_p[10] = 0; //s3
-                                dpdk_p[11] = 0; //s4
-                                dpdk_p[dpdk_step+8] = 0; //s1
-                                dpdk_p[dpdk_step+9] = 0; //s2
-                                dpdk_p[dpdk_step+10] = fy*r2; //s3
-                                dpdk_p[dpdk_step+11] = fy*r4; //s4
+                                dXdYd = dMatTilt*Vec2d(r2, 0);
+                                dpdk_p[8] = fx*dXdYd(0); //s1
+                                dpdk_p[dpdk_step+8] = fy*dXdYd(1); //s1
+                                dXdYd = dMatTilt*Vec2d(r4, 0);
+                                dpdk_p[9] = fx*dXdYd(0); //s2
+                                dpdk_p[dpdk_step+9] = fy*dXdYd(1); //s2
+                                dXdYd = dMatTilt*Vec2d(0, r2);
+                                dpdk_p[10] = fx*dXdYd(0); //s3
+                                dpdk_p[dpdk_step+10] = fy*dXdYd(1); //s3
+                                dXdYd = dMatTilt*Vec2d(0, r4);
+                                dpdk_p[11] = fx*dXdYd(0); //s4
+                                dpdk_p[dpdk_step+11] = fy*dXdYd(1); //s4
+                                if( _dpdk->cols > 12 )
+                                {
+                                    dVecTilt = dMatTiltdTauX * Vec3d(xd0, yd0, 1);
+                                    dpdk_p[12] = fx * invProjSquare * (dVecTilt(0) * vecTilt(2) - dVecTilt(2) * vecTilt(0));
+                                    dpdk_p[dpdk_step+12] = fy * invProjSquare * (dVecTilt(1) * vecTilt(2) - dVecTilt(2) * vecTilt(1));
+                                    dVecTilt = dMatTiltdTauY * Vec3d(xd0, yd0, 1);
+                                    dpdk_p[13] = fx * invProjSquare * (dVecTilt(0) * vecTilt(2) - dVecTilt(2) * vecTilt(0));
+                                    dpdk_p[dpdk_step+13] = fy * invProjSquare * (dVecTilt(1) * vecTilt(2) - dVecTilt(2) * vecTilt(1));
+                                }
                             }
                         }
                     }
                 }
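The dMatTilt assembled at the top of this hunk is the 2x2 Jacobian of the perspective division through the tilt matrix; a short derivation under the names used in the code (an explanatory sketch, not part of the patch). With \(v = \mathrm{matTilt}\,(x_d^0,\ y_d^0,\ 1)^T\) (vecTilt in the code) and \(u = (x_d^0,\ y_d^0)\), the distorted coordinates are \(x_d = v_0/v_2\) and \(y_d = v_1/v_2\), so for \(row, col \in \{0, 1\}\) the quotient rule gives

$$
\frac{\partial (v_{row}/v_2)}{\partial u_{col}}
= \frac{\mathrm{matTilt}_{row,col}\, v_2 - \mathrm{matTilt}_{2,col}\, v_{row}}{v_2^{\,2}},
$$

which is exactly dMatTilt(row,col) = matTilt(row,col)*vecTilt(2) - matTilt(2,col)*vecTilt(row), scaled by invProjSquare = \(1/v_2^2\).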
...
...
@@ -850,12 +903,13 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
                 double dcdist_dt = k[0]*dr2dt + 2*k[1]*r2*dr2dt + 3*k[4]*r4*dr2dt;
                 double dicdist2_dt = -icdist2*icdist2*(k[5]*dr2dt + 2*k[6]*r2*dr2dt + 3*k[7]*r4*dr2dt);
                 double da1dt = 2*(x*dydt[j] + y*dxdt[j]);
-                double dmxdt = fx*(dxdt[j]*cdist*icdist2 + x*dcdist_dt*icdist2 + x*cdist*dicdist2_dt +
-                                   k[2]*da1dt + k[3]*(dr2dt + 2*x*dxdt[j]) + k[8]*dr2dt + 2*r2*k[9]*dr2dt);
-                double dmydt = fy*(dydt[j]*cdist*icdist2 + y*dcdist_dt*icdist2 + y*cdist*dicdist2_dt +
-                                   k[2]*(dr2dt + 2*y*dydt[j]) + k[3]*da1dt + k[10]*dr2dt + 2*r2*k[11]*dr2dt);
-                dpdt_p[j] = dmxdt;
-                dpdt_p[dpdt_step+j] = dmydt;
+                double dmxdt = (dxdt[j]*cdist*icdist2 + x*dcdist_dt*icdist2 + x*cdist*dicdist2_dt +
+                                k[2]*da1dt + k[3]*(dr2dt + 2*x*dxdt[j]) + k[8]*dr2dt + 2*r2*k[9]*dr2dt);
+                double dmydt = (dydt[j]*cdist*icdist2 + y*dcdist_dt*icdist2 + y*cdist*dicdist2_dt +
+                                k[2]*(dr2dt + 2*y*dydt[j]) + k[3]*da1dt + k[10]*dr2dt + 2*r2*k[11]*dr2dt);
+                dXdYd = dMatTilt*Vec2d(dmxdt, dmydt);
+                dpdt_p[j] = fx*dXdYd(0);
+                dpdt_p[dpdt_step+j] = fy*dXdYd(1);
             }
             dpdt_p += dpdt_step*2;
         }
...
...
@@ -885,15 +939,16 @@ CV_IMPL void cvProjectPoints2( const CvMat* objectPoints,
                 double dxdr = z*(dx0dr[j] - x*dz0dr[j]);
                 double dydr = z*(dy0dr[j] - y*dz0dr[j]);
                 double dr2dr = 2*x*dxdr + 2*y*dydr;
-                double dcdist_dr = k[0]*dr2dr + 2*k[1]*r2*dr2dr + 3*k[4]*r4*dr2dr;
-                double dicdist2_dr = -icdist2*icdist2*(k[5]*dr2dr + 2*k[6]*r2*dr2dr + 3*k[7]*r4*dr2dr);
+                double dcdist_dr = (k[0] + 2*k[1]*r2 + 3*k[4]*r4)*dr2dr;
+                double dicdist2_dr = -icdist2*icdist2*(k[5] + 2*k[6]*r2 + 3*k[7]*r4)*dr2dr;
                 double da1dr = 2*(x*dydr + y*dxdr);
-                double dmxdr = fx*(dxdr*cdist*icdist2 + x*dcdist_dr*icdist2 + x*cdist*dicdist2_dr +
-                                   k[2]*da1dr + k[3]*(dr2dr + 2*x*dxdr) + k[8]*dr2dr + 2*r2*k[9]*dr2dr);
-                double dmydr = fy*(dydr*cdist*icdist2 + y*dcdist_dr*icdist2 + y*cdist*dicdist2_dr +
-                                   k[2]*(dr2dr + 2*y*dydr) + k[3]*da1dr + k[10]*dr2dr + 2*r2*k[11]*dr2dr);
-                dpdr_p[j] = dmxdr;
-                dpdr_p[dpdr_step+j] = dmydr;
+                double dmxdr = (dxdr*cdist*icdist2 + x*dcdist_dr*icdist2 + x*cdist*dicdist2_dr +
+                                k[2]*da1dr + k[3]*(dr2dr + 2*x*dxdr) + (k[8] + 2*r2*k[9])*dr2dr);
+                double dmydr = (dydr*cdist*icdist2 + y*dcdist_dr*icdist2 + y*cdist*dicdist2_dr +
+                                k[2]*(dr2dr + 2*y*dydr) + k[3]*da1dr + (k[10] + 2*r2*k[11])*dr2dr);
+                dXdYd = dMatTilt*Vec2d(dmxdr, dmydr);
+                dpdr_p[j] = fx*dXdYd(0);
+                dpdr_p[dpdr_step+j] = fy*dXdYd(1);
             }
             dpdr_p += dpdr_step*2;
         }
...
...
@@ -1231,11 +1286,11 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
                                  CvSize imageSize, CvMat* cameraMatrix, CvMat* distCoeffs,
                                  CvMat* rvecs, CvMat* tvecs, int flags, CvTermCriteria termCrit )
 {
-    const int NINTRINSIC = 16;
+    const int NINTRINSIC = 18;
     double reprojErr = 0;
     Matx33d A;
-    double k[12] = {0};
+    double k[14] = {0};
     CvMat matA = cvMat(3, 3, CV_64F, A.val), _k;
     int i, nimages, maxPoints = 0, ni = 0, pos, total = 0, nparams, npstep, cn;
     double aspectRatio = 0.;
...
...
@@ -1252,9 +1307,19 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
         (npoints->rows != 1 && npoints->cols != 1) )
         CV_Error( CV_StsUnsupportedFormat,
                   "the array of point counters must be 1-dimensional integer vector" );
-    //when the thin prism model is used the distortion coefficients matrix must have 12 parameters
-    if((flags & CALIB_THIN_PRISM_MODEL) && (distCoeffs->cols*distCoeffs->rows != 12))
-        CV_Error( CV_StsBadArg, "Thin prism model must have 12 parameters in the distortion matrix" );
+    if(flags & CV_CALIB_TILTED_MODEL)
+    {
+        //when the tilted sensor model is used the distortion coefficients matrix must have 14 parameters
+        if (distCoeffs->cols*distCoeffs->rows != 14)
+            CV_Error( CV_StsBadArg, "The tilted sensor model must have 14 parameters in the distortion matrix" );
+    }
+    else
+    {
+        //when the thin prism model is used the distortion coefficients matrix must have 12 parameters
+        if(flags & CV_CALIB_THIN_PRISM_MODEL)
+            if (distCoeffs->cols*distCoeffs->rows != 12)
+                CV_Error( CV_StsBadArg, "Thin prism model must have 12 parameters in the distortion matrix" );
+    }
     nimages = npoints->rows*npoints->cols;
     npstep = npoints->rows == 1 ? 1 : npoints->step/CV_ELEM_SIZE(npoints->type);
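The validation rule this hunk introduces can be summarized as: the tilted model requires all 14 coefficients, otherwise the thin prism model requires 12. A standalone sketch of that rule (hypothetical names, not the OpenCV API):

```cpp
#include <cassert>

// Flag values copied from the calibration enum.
enum { THIN_PRISM = 0x08000, TILTED = 0x40000 };

// Mirror of the size check above: returns true when nCoeffs is acceptable
// for the requested model flags; other sizes are checked elsewhere.
bool coeffCountValid(int flags, int nCoeffs)
{
    if (flags & TILTED)
        return nCoeffs == 14;
    if (flags & THIN_PRISM)
        return nCoeffs == 12;
    return true;
}
```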
...
...
@@ -1293,7 +1358,8 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
         (distCoeffs->cols*distCoeffs->rows != 4 &&
          distCoeffs->cols*distCoeffs->rows != 5 &&
          distCoeffs->cols*distCoeffs->rows != 8 &&
-         distCoeffs->cols*distCoeffs->rows != 12) )
+         distCoeffs->cols*distCoeffs->rows != 12 &&
+         distCoeffs->cols*distCoeffs->rows != 14) )
         CV_Error( CV_StsBadArg, cvDistCoeffErr );
     for( i = 0; i < nimages; i++ )
...
...
@@ -1398,7 +1464,7 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
     uchar* mask = solver.mask->data.ptr;
     param[0] = A(0, 0); param[1] = A(1, 1); param[2] = A(0, 2); param[3] = A(1, 2);
-    std::copy(k, k + 12, param + 4);
+    std::copy(k, k + 14, param + 4);
     if( flags & CV_CALIB_FIX_FOCAL_LENGTH )
         mask[0] = mask[1] = 0;
...
...
@@ -1413,6 +1479,8 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
         flags |= CALIB_FIX_K4 + CALIB_FIX_K5 + CALIB_FIX_K6;
     if( !(flags & CV_CALIB_THIN_PRISM_MODEL) )
         flags |= CALIB_FIX_S1_S2_S3_S4;
+    if( !(flags & CV_CALIB_TILTED_MODEL) )
+        flags |= CALIB_FIX_TAUX_TAUY;
     mask[ 4] = !(flags & CALIB_FIX_K1);
     mask[ 5] = !(flags & CALIB_FIX_K2);
...
...
@@ -1428,6 +1496,11 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
         mask[14] = 0;
         mask[15] = 0;
     }
+    if( flags & CALIB_FIX_TAUX_TAUY )
+    {
+        mask[16] = 0;
+        mask[17] = 0;
+    }
 }
 // 2. initialize extrinsic parameters
...
...
@@ -1461,7 +1534,7 @@ CV_IMPL double cvCalibrateCamera2( const CvMat* objectPoints,
         }
         A(0, 0) = param[0]; A(1, 1) = param[1]; A(0, 2) = param[2]; A(1, 2) = param[3];
-        std::copy(param + 4, param + 4 + 12, k);
+        std::copy(param + 4, param + 4 + 14, k);
         if( !proceed )
             break;
...
...
@@ -1636,11 +1709,11 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
                           int flags, CvTermCriteria termCrit )
 {
-    const int NINTRINSIC = 16;
+    const int NINTRINSIC = 18;
     Ptr<CvMat> npoints, err, J_LR, Je, Ji, imagePoints[2], objectPoints, RT0;
     double reprojErr = 0;
-    double A[2][9], dk[2][12]={{0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,0,0,0,0}}, rlr[9];
+    double A[2][9], dk[2][14]={{0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,0,0,0,0,0,0}}, rlr[9];
     CvMat K[2], Dist[2], om_LR, T_LR;
     CvMat R_LR = cvMat(3, 3, CV_64F, rlr);
     int i, k, p, ni = 0, ofs, nimages, pointsTotal, maxPoints = 0;
...
...
@@ -1686,7 +1759,7 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
                    (_imagePoints1->rows == 1 && _imagePoints1->cols == pointsTotal && cn == 2)) );
         K[k] = cvMat(3, 3, CV_64F, A[k]);
-        Dist[k] = cvMat(1, 12, CV_64F, dk[k]);
+        Dist[k] = cvMat(1, 14, CV_64F, dk[k]);
         imagePoints[k].reset(cvCreateMat( points->rows, points->cols, CV_64FC(CV_MAT_CN(points->type))));
         cvConvert( points, imagePoints[k] );
...
...
@@ -1752,6 +1825,8 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
         flags |= CV_CALIB_FIX_K4 | CV_CALIB_FIX_K5 | CV_CALIB_FIX_K6;
     if( !(flags & CV_CALIB_THIN_PRISM_MODEL) )
         flags |= CV_CALIB_FIX_S1_S2_S3_S4;
+    if( !(flags & CV_CALIB_TILTED_MODEL) )
+        flags |= CV_CALIB_FIX_TAUX_TAUY;
     if( flags & CV_CALIB_FIX_ASPECT_RATIO )
         imask[0] = imask[NINTRINSIC] = 0;
     if( flags & CV_CALIB_FIX_FOCAL_LENGTH )
...
...
@@ -1779,6 +1854,11 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
         imask[14] = imask[NINTRINSIC+14] = 0;
         imask[15] = imask[NINTRINSIC+15] = 0;
     }
+    if( flags & CV_CALIB_FIX_TAUX_TAUY )
+    {
+        imask[16] = imask[NINTRINSIC+16] = 0;
+        imask[17] = imask[NINTRINSIC+17] = 0;
+    }
 }
 /*
...
...
@@ -1857,6 +1937,8 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
             iparam[13] = dk[k][9];
             iparam[14] = dk[k][10];
             iparam[15] = dk[k][11];
+            iparam[16] = dk[k][12];
+            iparam[17] = dk[k][13];
         }
     om_LR = cvMat(3, 1, CV_64F, solver.param->data.db);
...
...
@@ -1927,6 +2009,8 @@ double cvStereoCalibrate( const CvMat* _objectPoints, const CvMat* _imagePoints1
             dk[k][9] = iparam[k*NINTRINSIC+13];
             dk[k][10] = iparam[k*NINTRINSIC+14];
             dk[k][11] = iparam[k*NINTRINSIC+15];
+            dk[k][12] = iparam[k*NINTRINSIC+16];
+            dk[k][13] = iparam[k*NINTRINSIC+17];
         }
     }
...
...
@@ -3026,15 +3110,17 @@ static Mat prepareCameraMatrix(Mat& cameraMatrix0, int rtype)
 static Mat prepareDistCoeffs(Mat& distCoeffs0, int rtype)
 {
-    Mat distCoeffs = Mat::zeros(distCoeffs0.cols == 1 ? Size(1, 12) : Size(12, 1), rtype);
+    Mat distCoeffs = Mat::zeros(distCoeffs0.cols == 1 ? Size(1, 14) : Size(14, 1), rtype);
     if( distCoeffs0.size() == Size(1, 4) ||
         distCoeffs0.size() == Size(1, 5) ||
         distCoeffs0.size() == Size(1, 8) ||
         distCoeffs0.size() == Size(1, 12) ||
+        distCoeffs0.size() == Size(1, 14) ||
         distCoeffs0.size() == Size(4, 1) ||
         distCoeffs0.size() == Size(5, 1) ||
         distCoeffs0.size() == Size(8, 1) ||
-        distCoeffs0.size() == Size(12, 1) )
+        distCoeffs0.size() == Size(12, 1) ||
+        distCoeffs0.size() == Size(14, 1) )
     {
         Mat dstCoeffs(distCoeffs, Rect(0, 0, distCoeffs0.cols, distCoeffs0.rows));
         distCoeffs0.convertTo(dstCoeffs, rtype);
...
...
@@ -3219,7 +3305,9 @@ double cv::calibrateCamera( InputArrayOfArrays _objectPoints,
     cameraMatrix = prepareCameraMatrix(cameraMatrix, rtype);
     Mat distCoeffs = _distCoeffs.getMat();
     distCoeffs = prepareDistCoeffs(distCoeffs, rtype);
-    if( !(flags & CALIB_RATIONAL_MODEL) && (!(flags & CALIB_THIN_PRISM_MODEL)))
+    if( !(flags & CALIB_RATIONAL_MODEL) && (!(flags & CALIB_THIN_PRISM_MODEL)) && (!(flags & CALIB_TILTED_MODEL)))
         distCoeffs = distCoeffs.rows == 1 ? distCoeffs.colRange(0, 5) : distCoeffs.rowRange(0, 5);

     int nimages = int(_objectPoints.total());
...
...
@@ -3314,7 +3402,9 @@ double cv::stereoCalibrate( InputArrayOfArrays _objectPoints,
     distCoeffs1 = prepareDistCoeffs(distCoeffs1, rtype);
     distCoeffs2 = prepareDistCoeffs(distCoeffs2, rtype);
-    if( !(flags & CALIB_RATIONAL_MODEL) && (!(flags & CALIB_THIN_PRISM_MODEL)))
+    if( !(flags & CALIB_RATIONAL_MODEL) && (!(flags & CALIB_THIN_PRISM_MODEL)) && (!(flags & CALIB_TILTED_MODEL)))
     {
         distCoeffs1 = distCoeffs1.rows == 1 ? distCoeffs1.colRange(0, 5) : distCoeffs1.rowRange(0, 5);
         distCoeffs2 = distCoeffs2.rows == 1 ? distCoeffs2.colRange(0, 5) : distCoeffs2.rowRange(0, 5);
...
...
modules/calib3d/test/test_cameracalibration_tilt.cpp
0 → 100644
View file @
c5d4ecae
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
#include <opencv2/ts/cuda_test.hpp>
#include "opencv2/calib3d.hpp"
#define NUM_DIST_COEFF_TILT 14
/**
Some conventions:
- the first camera determines the world coordinate system
- y points down, hence top means minimal y value (negative) and
bottom means maximal y value (positive)
- the field of view plane is tilted around x such that it
intersects the xy-plane in a line with a large (positive)
y-value
- image sensor and object are both modelled in the halfspace
z > 0
**/
class cameraCalibrationTiltTest : public ::testing::Test
{
protected:
    cameraCalibrationTiltTest()
        : m_toRadian(acos(-1.0)/180.0)
        , m_toDegree(180.0/acos(-1.0))
    {}
    virtual void SetUp();

protected:
    static const cv::Size m_imageSize;
    static const double m_pixelSize;
    static const double m_circleConfusionPixel;
    static const double m_lensFocalLength;
    static const double m_lensFNumber;
    static const double m_objectDistance;
    static const double m_planeTiltDegree;
    static const double m_pointTargetDist;
    static const int m_pointTargetNum;

    /** image distance corresponding to the working distance */
    double m_imageDistance;
    /** image tilt angle corresponding to the tilt of the object plane */
    double m_imageTiltDegree;
    /** center of the field of view, near and far plane */
    std::vector<cv::Vec3d> m_fovCenter;
    /** normal of the field of view, near and far plane */
    std::vector<cv::Vec3d> m_fovNormal;
    /** points on a planar calibration target */
    std::vector<cv::Point3d> m_pointTarget;
    /** rotations for the calibration target */
    std::vector<cv::Vec3d> m_pointTargetRvec;
    /** translations for the calibration target */
    std::vector<cv::Vec3d> m_pointTargetTvec;
    /** camera matrix */
    cv::Matx33d m_cameraMatrix;
    /** distortion coefficients */
    cv::Vec<double, NUM_DIST_COEFF_TILT> m_distortionCoeff;
    /** random generator */
    cv::RNG m_rng;
    /** degree to radian conversion factor */
    const double m_toRadian;
    /** radian to degree conversion factor */
    const double m_toDegree;

    /**
    computes for a given distance of an image or object point
    the distance of the corresponding object or image point
    */
    double opticalMap(double dist)
    {
        return m_lensFocalLength*dist/(dist - m_lensFocalLength);
    }
    /** magnification of the optical map */
    double magnification(double dist)
    {
        return m_lensFocalLength/(dist - m_lensFocalLength);
    }
    /**
    Changes the given distortion coefficients randomly by adding
    a uniformly distributed random variable in [-max, max]
    \param coeff input
    \param max limits for the random variables
    */
    void randomDistortionCoeff(
        cv::Vec<double, NUM_DIST_COEFF_TILT>& coeff,
        const cv::Vec<double, NUM_DIST_COEFF_TILT>& max)
    {
        for (int i = 0; i < coeff.rows; ++i)
            coeff(i) += m_rng.uniform(-max(i), max(i));
    }
    /** numerical jacobian */
    void numericalDerivative(
        cv::Mat& jac,
        double eps,
        const std::vector<cv::Point3d>& obj,
        const cv::Vec3d& rvec,
        const cv::Vec3d& tvec,
        const cv::Matx33d& camera,
        const cv::Vec<double, NUM_DIST_COEFF_TILT>& distor);
    /** remove points with projection outside the sensor array */
    void removeInvalidPoints(
        std::vector<cv::Point2d>& imagePoints,
        std::vector<cv::Point3d>& objectPoints);
    /** add uniformly distributed noise in [-halfWidthNoise, halfWidthNoise]
    to the image points and remove out of range points */
    void addNoiseRemoveInvalidPoints(
        std::vector<cv::Point2f>& imagePoints,
        std::vector<cv::Point3f>& objectPoints,
        std::vector<cv::Point2f>& noisyImagePoints,
        double halfWidthNoise);
};
/** number of pixels of the sensor */
const cv::Size cameraCalibrationTiltTest::m_imageSize(1600, 1200);
/** size of a pixel in mm */
const double cameraCalibrationTiltTest::m_pixelSize(.005);
/** diameter of the circle of confusion in pixels */
const double cameraCalibrationTiltTest::m_circleConfusionPixel(3);
/** focal length of the lens */
const double cameraCalibrationTiltTest::m_lensFocalLength(16.4);
/** f-number of the lens */
const double cameraCalibrationTiltTest::m_lensFNumber(8);
/** working distance */
const double cameraCalibrationTiltTest::m_objectDistance(200);
/** angle between the optical axis and the object plane normal */
const double cameraCalibrationTiltTest::m_planeTiltDegree(55);
/** the calibration target points lie on a square grid with this spacing */
const double cameraCalibrationTiltTest::m_pointTargetDist(5);
/** the calibration target has (2*n + 1) x (2*n + 1) points */
const int cameraCalibrationTiltTest::m_pointTargetNum(15);
void cameraCalibrationTiltTest::SetUp()
{
    m_imageDistance = opticalMap(m_objectDistance);
    m_imageTiltDegree = m_toDegree * atan2(
        m_imageDistance * tan(m_toRadian * m_planeTiltDegree),
        m_objectDistance);
    // half sensor height
    double tmp = .5*(m_imageSize.height - 1)*m_pixelSize
        *cos(m_toRadian * m_imageTiltDegree);
    // y-values of the tilted sensor
    double yImage[2] = {tmp, -tmp};
    // change in z because of the tilt
    tmp *= sin(m_toRadian * m_imageTiltDegree);
    // z-values of the sensor lower and upper corner
    double zImage[2] = {
        m_imageDistance + tmp,
        m_imageDistance - tmp};
    // circle of confusion
    double circleConfusion = m_circleConfusionPixel*m_pixelSize;
    // aperture of the lens
    double aperture = m_lensFocalLength/m_lensFNumber;
    // near and far factor on the image side
    double nearFarFactorImage[2] = {
        aperture/(aperture - circleConfusion),
        aperture/(aperture + circleConfusion)};
    // on the object side - points that determine the field of view
    std::vector<cv::Vec3d> fovBottomTop(6);
    std::vector<cv::Vec3d>::iterator itFov = fovBottomTop.begin();
    for (size_t iBottomTop = 0; iBottomTop < 2; ++iBottomTop)
    {
        // mapping sensor to field of view
        *itFov = cv::Vec3d(0, yImage[iBottomTop], zImage[iBottomTop]);
        *itFov *= magnification((*itFov)(2));
        ++itFov;
        for (size_t iNearFar = 0; iNearFar < 2; ++iNearFar, ++itFov)
        {
            // scaling to the near and far distance on the image side
            *itFov = cv::Vec3d(0, yImage[iBottomTop], zImage[iBottomTop])
                *nearFarFactorImage[iNearFar];
            // scaling to the object side
            *itFov *= magnification((*itFov)(2));
        }
    }
    m_fovCenter.resize(3);
    m_fovNormal.resize(3);
    for (size_t i = 0; i < 3; ++i)
    {
        m_fovCenter[i] = .5*(fovBottomTop[i] + fovBottomTop[i+3]);
        m_fovNormal[i] = fovBottomTop[i+3] - fovBottomTop[i];
        m_fovNormal[i] = cv::normalize(m_fovNormal[i]);
        m_fovNormal[i] = cv::Vec3d(
            m_fovNormal[i](0),
            -m_fovNormal[i](2),
            m_fovNormal[i](1));
        // one target position in each plane
        m_pointTargetTvec.push_back(m_fovCenter[i]);
        cv::Vec3d rvec = cv::Vec3d(0, 0, 1).cross(m_fovNormal[i]);
        rvec = cv::normalize(rvec);
        rvec *= acos(m_fovNormal[i](2));
        m_pointTargetRvec.push_back(rvec);
    }
    // calibration target
    size_t num = 2*m_pointTargetNum + 1;
    m_pointTarget.resize(num*num);
    std::vector<cv::Point3d>::iterator itTarget = m_pointTarget.begin();
    for (int iY = -m_pointTargetNum; iY <= m_pointTargetNum; ++iY)
    {
        for (int iX = -m_pointTargetNum; iX <= m_pointTargetNum; ++iX, ++itTarget)
        {
            *itTarget = cv::Point3d(iX, iY, 0)*m_pointTargetDist;
        }
    }
    // oblique target positions
    // approximate distance to the near and far plane
    double dist = std::max(
        std::abs(m_fovNormal[0].dot(m_fovCenter[0] - m_fovCenter[1])),
        std::abs(m_fovNormal[0].dot(m_fovCenter[0] - m_fovCenter[2])));
    // maximal angle such that the target border "reaches" the near and far plane
    double maxAngle = atan2(dist, m_pointTargetNum*m_pointTargetDist);
    std::vector<double> angle;
    angle.push_back(-maxAngle);
    angle.push_back(maxAngle);
    cv::Matx33d baseMatrix;
    cv::Rodrigues(m_pointTargetRvec.front(), baseMatrix);
    for (std::vector<double>::const_iterator itAngle = angle.begin();
        itAngle != angle.end(); ++itAngle)
    {
        cv::Matx33d rmat;
        for (int i = 0; i < 2; ++i)
        {
            cv::Vec3d rvec(0, 0, 0);
            rvec(i) = *itAngle;
            cv::Rodrigues(rvec, rmat);
            rmat = baseMatrix*rmat;
            cv::Rodrigues(rmat, rvec);
            m_pointTargetTvec.push_back(m_fovCenter.front());
            m_pointTargetRvec.push_back(rvec);
        }
    }
    // camera matrix
    double cx = .5*(m_imageSize.width - 1);
    double cy = .5*(m_imageSize.height - 1);
    double f = m_imageDistance/m_pixelSize;
    m_cameraMatrix = cv::Matx33d(
        f, 0, cx,
        0, f, cy,
        0, 0, 1);
    // distortion coefficients
    m_distortionCoeff = cv::Vec<double, NUM_DIST_COEFF_TILT>::all(0);
    // tauX
    m_distortionCoeff(12) = -m_toRadian*m_imageTiltDegree;
}
void cameraCalibrationTiltTest::numericalDerivative(
    cv::Mat& jac,
    double eps,
    const std::vector<cv::Point3d>& obj,
    const cv::Vec3d& rvec,
    const cv::Vec3d& tvec,
    const cv::Matx33d& camera,
    const cv::Vec<double, NUM_DIST_COEFF_TILT>& distor)
{
    cv::Vec3d r(rvec);
    cv::Vec3d t(tvec);
    cv::Matx33d cm(camera);
    cv::Vec<double, NUM_DIST_COEFF_TILT> dc(distor);
    double* param[10 + NUM_DIST_COEFF_TILT] = {
        &r(0), &r(1), &r(2),
        &t(0), &t(1), &t(2),
        &cm(0,0), &cm(1,1), &cm(0,2), &cm(1,2),
        &dc(0), &dc(1), &dc(2), &dc(3), &dc(4), &dc(5), &dc(6),
        &dc(7), &dc(8), &dc(9), &dc(10), &dc(11), &dc(12), &dc(13)};
    std::vector<cv::Point2d> pix0, pix1;
    double invEps = .5/eps;
    for (int col = 0; col < 10 + NUM_DIST_COEFF_TILT; ++col)
    {
        double save = *(param[col]);
        *(param[col]) = save + eps;
        cv::projectPoints(obj, r, t, cm, dc, pix0);
        *(param[col]) = save - eps;
        cv::projectPoints(obj, r, t, cm, dc, pix1);
        *(param[col]) = save;
        std::vector<cv::Point2d>::const_iterator it0 = pix0.begin();
        std::vector<cv::Point2d>::const_iterator it1 = pix1.begin();
        int row = 0;
        for (; it0 != pix0.end(); ++it0, ++it1)
        {
            cv::Point2d d = invEps*(*it0 - *it1);
            jac.at<double>(row, col) = d.x;
            ++row;
            jac.at<double>(row, col) = d.y;
            ++row;
        }
    }
}
void cameraCalibrationTiltTest::removeInvalidPoints(
    std::vector<cv::Point2d>& imagePoints,
    std::vector<cv::Point3d>& objectPoints)
{
    // remove object and image points out of range
    std::vector<cv::Point2d>::iterator itImg = imagePoints.begin();
    std::vector<cv::Point3d>::iterator itObj = objectPoints.begin();
    while (itImg != imagePoints.end())
    {
        bool ok =
            itImg->x >= 0 &&
            itImg->x <= m_imageSize.width - 1.0 &&
            itImg->y >= 0 &&
            itImg->y <= m_imageSize.height - 1.0;
        if (ok)
        {
            ++itImg;
            ++itObj;
        }
        else
        {
            itImg = imagePoints.erase(itImg);
            itObj = objectPoints.erase(itObj);
        }
    }
}
void cameraCalibrationTiltTest::addNoiseRemoveInvalidPoints(
    std::vector<cv::Point2f>& imagePoints,
    std::vector<cv::Point3f>& objectPoints,
    std::vector<cv::Point2f>& noisyImagePoints,
    double halfWidthNoise)
{
    std::vector<cv::Point2f>::iterator itImg = imagePoints.begin();
    std::vector<cv::Point3f>::iterator itObj = objectPoints.begin();
    noisyImagePoints.clear();
    noisyImagePoints.reserve(imagePoints.size());
    while (itImg != imagePoints.end())
    {
        cv::Point2f pix = *itImg + cv::Point2f(
            (float)m_rng.uniform(-halfWidthNoise, halfWidthNoise),
            (float)m_rng.uniform(-halfWidthNoise, halfWidthNoise));
        bool ok =
            pix.x >= 0 &&
            pix.x <= m_imageSize.width - 1.0 &&
            pix.y >= 0 &&
            pix.y <= m_imageSize.height - 1.0;
        if (ok)
        {
            noisyImagePoints.push_back(pix);
            ++itImg;
            ++itObj;
        }
        else
        {
            itImg = imagePoints.erase(itImg);
            itObj = objectPoints.erase(itObj);
        }
    }
}
TEST_F(cameraCalibrationTiltTest, projectPoints)
{
    std::vector<cv::Point2d> imagePoints;
    std::vector<cv::Point3d> objectPoints = m_pointTarget;
    cv::Vec3d rvec = m_pointTargetRvec.front();
    cv::Vec3d tvec = m_pointTargetTvec.front();
    cv::Vec<double, NUM_DIST_COEFF_TILT> coeffNoiseHalfWidth(
        .1, .1, // k1 k2
        .01, .01, // p1 p2
        .001, .001, .001, .001, // k3 k4 k5 k6
        .001, .001, .001, .001, // s1 s2 s3 s4
        .01, .01); // tauX tauY
    for (size_t numTest = 0; numTest < 10; ++numTest)
    {
        // create random distortion coefficients
        cv::Vec<double, NUM_DIST_COEFF_TILT> distortionCoeff = m_distortionCoeff;
        randomDistortionCoeff(distortionCoeff, coeffNoiseHalfWidth);
        // projection
        cv::projectPoints(objectPoints, rvec, tvec,
            m_cameraMatrix, distortionCoeff, imagePoints);
        // remove object and image points out of range
        removeInvalidPoints(imagePoints, objectPoints);

        int numPoints = (int)imagePoints.size();
        int numParams = 10 + distortionCoeff.rows;
        cv::Mat jacobian(2*numPoints, numParams, CV_64FC1);
        // projection and jacobian
        cv::projectPoints(objectPoints, rvec, tvec,
            m_cameraMatrix, distortionCoeff, imagePoints, jacobian);
        // numerical derivatives
        cv::Mat numericJacobian(2*numPoints, numParams, CV_64FC1);
        double eps = 1e-7;
        numericalDerivative(numericJacobian, eps,
            objectPoints, rvec, tvec, m_cameraMatrix, distortionCoeff);
#if 0
for (size_t row = 0; row < 2; ++row)
{
std::cout << "------ Row = " << row << " ------\n";
for (size_t i = 0; i < 10+NUM_DIST_COEFF_TILT; ++i)
{
std::cout << i
<< " jac = " << jacobian.at<double>(row,i)
<< " num = " << numericJacobian.at<double>(row,i)
<< " rel. diff = " << abs(numericJacobian.at<double>(row,i) - jacobian.at<double>(row,i))/abs(numericJacobian.at<double>(row,i))
<< "\n";
}
}
#endif
        // relative difference for large values (rvec and tvec)
        cv::Mat check = abs(jacobian(cv::Range::all(), cv::Range(0, 6)) -
            numericJacobian(cv::Range::all(), cv::Range(0, 6)))/
            (1 + abs(jacobian(cv::Range::all(), cv::Range(0, 6))));
        double minVal, maxVal;
        cv::minMaxIdx(check, &minVal, &maxVal);
        EXPECT_LE(maxVal, .01);
        // absolute difference for distortion and camera matrix
        EXPECT_MAT_NEAR(
            jacobian(cv::Range::all(), cv::Range(6, numParams)),
            numericJacobian(cv::Range::all(), cv::Range(6, numParams)),
            1e-5);
    }
}
TEST_F(cameraCalibrationTiltTest, undistortPoints)
{
    cv::Vec<double, NUM_DIST_COEFF_TILT> coeffNoiseHalfWidth(
        .2, .1, // k1 k2
        .01, .01, // p1 p2
        .01, .01, .01, .01, // k3 k4 k5 k6
        .001, .001, .001, .001, // s1 s2 s3 s4
        .001, .001); // tauX tauY
    double step = 99;
    double toleranceBackProjection = 1e-5;

    for (size_t numTest = 0; numTest < 10; ++numTest)
    {
        cv::Vec<double, NUM_DIST_COEFF_TILT> distortionCoeff = m_distortionCoeff;
        randomDistortionCoeff(distortionCoeff, coeffNoiseHalfWidth);

        // distorted points
        std::vector<cv::Point2d> distorted;
        for (double x = 0; x <= m_imageSize.width - 1; x += step)
            for (double y = 0; y <= m_imageSize.height - 1; y += step)
                distorted.push_back(cv::Point2d(x, y));
        std::vector<cv::Point2d> normalizedUndistorted;
        // undistort
        cv::undistortPoints(distorted, normalizedUndistorted,
            m_cameraMatrix, distortionCoeff);
        // copy normalized points to 3D
        std::vector<cv::Point3d> objectPoints;
        for (std::vector<cv::Point2d>::const_iterator itPnt = normalizedUndistorted.begin();
            itPnt != normalizedUndistorted.end(); ++itPnt)
            objectPoints.push_back(cv::Point3d(itPnt->x, itPnt->y, 1));
        // project
        std::vector<cv::Point2d> imagePoints(objectPoints.size());
        cv::projectPoints(objectPoints, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0),
            m_cameraMatrix, distortionCoeff, imagePoints);
        EXPECT_MAT_NEAR(distorted, imagePoints, toleranceBackProjection);
    }
}
template <typename INPUT, typename ESTIMATE>
void show(const std::string& name, const INPUT in, const ESTIMATE est)
{
    std::cout
        << name << " = " << est
        << " (init = " << in
        << ", diff = " << est - in << ")\n";
}

template <typename INPUT>
void showVec(const std::string& name, const INPUT& in, const cv::Mat& est)
{
    for (size_t i = 0; i < in.channels; ++i)
    {
        std::stringstream ss;
        ss << name << "[" << i << "]";
        show(ss.str(), in(i), est.at<double>(i));
    }
}
/**
For a given camera matrix and distortion coefficients
- project the point target in different positions onto the sensor
- add pixel noise
- estimate the camera model from the noisy measurements
- compare the result with the initial model parameters
The parameters are affected differently by the noise.
*/
TEST_F(cameraCalibrationTiltTest, calibrateCamera)
{
    cv::Vec<double, NUM_DIST_COEFF_TILT> coeffNoiseHalfWidth(
        .2, .1, // k1 k2
        .01, .01, // p1 p2
        0, 0, 0, 0, // k3 k4 k5 k6
        .001, .001, .001, .001, // s1 s2 s3 s4
        .001, .001); // tauX tauY
    double pixelNoiseHalfWidth = .5;
    std::vector<cv::Point3f> pointTarget;
    pointTarget.reserve(m_pointTarget.size());
    for (std::vector<cv::Point3d>::const_iterator it = m_pointTarget.begin();
        it != m_pointTarget.end(); ++it)
        pointTarget.push_back(cv::Point3f(
            (float)(it->x),
            (float)(it->y),
            (float)(it->z)));

    for (size_t numTest = 0; numTest < 5; ++numTest)
    {
        // create random distortion coefficients
        cv::Vec<double, NUM_DIST_COEFF_TILT> distortionCoeff = m_distortionCoeff;
        randomDistortionCoeff(distortionCoeff, coeffNoiseHalfWidth);

        // containers for calibration data
        std::vector<std::vector<cv::Point3f> > viewsObjectPoints;
        std::vector<std::vector<cv::Point2f> > viewsImagePoints;
        std::vector<std::vector<cv::Point2f> > viewsNoisyImagePoints;
        // simulate calibration data with projectPoints
        std::vector<cv::Vec3d>::const_iterator itRvec = m_pointTargetRvec.begin();
        std::vector<cv::Vec3d>::const_iterator itTvec = m_pointTargetTvec.begin();
        // loop over different views
        for (; itRvec != m_pointTargetRvec.end(); ++itRvec, ++itTvec)
        {
            std::vector<cv::Point3f> objectPoints(pointTarget);
            std::vector<cv::Point2f> imagePoints;
            std::vector<cv::Point2f> noisyImagePoints;
            // project calibration target to sensor
            cv::projectPoints(objectPoints, *itRvec, *itTvec,
                m_cameraMatrix, distortionCoeff, imagePoints);
            // remove invisible points
            addNoiseRemoveInvalidPoints(imagePoints, objectPoints,
                noisyImagePoints, pixelNoiseHalfWidth);
            // add data for this view
            viewsNoisyImagePoints.push_back(noisyImagePoints);
            viewsImagePoints.push_back(imagePoints);
            viewsObjectPoints.push_back(objectPoints);
        }
        // output
        std::vector<cv::Mat> outRvecs, outTvecs;
        cv::Mat outCameraMatrix(3, 3, CV_64F, cv::Scalar::all(1)), outDistCoeff;
        // stopping criteria
        cv::TermCriteria stop(
            cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 50000, 1e-14);
        // model choice
        int flag =
            cv::CALIB_FIX_ASPECT_RATIO |
            // cv::CALIB_RATIONAL_MODEL |
            cv::CALIB_FIX_K3 |
            // cv::CALIB_FIX_K6 |
            cv::CALIB_THIN_PRISM_MODEL |
            cv::CALIB_TILTED_MODEL;
        // estimate
        double backProjErr = cv::calibrateCamera(
            viewsObjectPoints, viewsNoisyImagePoints, m_imageSize,
            outCameraMatrix, outDistCoeff, outRvecs, outTvecs, flag, stop);
        EXPECT_LE(backProjErr, pixelNoiseHalfWidth);
#if 0
std::cout << "------ estimate ------\n";
std::cout << "back projection error = " << backProjErr << "\n";
std::cout << "points per view = {" << viewsObjectPoints.front().size();
for (size_t i = 1; i < viewsObjectPoints.size(); ++i)
std::cout << ", " << viewsObjectPoints[i].size();
std::cout << "}\n";
show("fx", m_cameraMatrix(0,0), outCameraMatrix.at<double>(0,0));
show("fy", m_cameraMatrix(1,1), outCameraMatrix.at<double>(1,1));
show("cx", m_cameraMatrix(0,2), outCameraMatrix.at<double>(0,2));
show("cy", m_cameraMatrix(1,2), outCameraMatrix.at<double>(1,2));
showVec("distor", distortionCoeff, outDistCoeff);
#endif
        if (pixelNoiseHalfWidth > 0)
        {
            double tolRvec = pixelNoiseHalfWidth;
            double tolTvec = m_objectDistance*tolRvec;
            // back projection error
            for (size_t i = 0; i < viewsNoisyImagePoints.size(); ++i)
            {
                double dRvec = norm(
                    m_pointTargetRvec[i] -
                    cv::Vec3d(
                        outRvecs[i].at<double>(0),
                        outRvecs[i].at<double>(1),
                        outRvecs[i].at<double>(2)));
                // std::cout << dRvec << " " << tolRvec << "\n";
                EXPECT_LE(dRvec, tolRvec);
                double dTvec = norm(
                    m_pointTargetTvec[i] -
                    cv::Vec3d(
                        outTvecs[i].at<double>(0),
                        outTvecs[i].at<double>(1),
                        outTvecs[i].at<double>(2)));
                // std::cout << dTvec << " " << tolTvec << "\n";
                EXPECT_LE(dTvec, tolTvec);
                std::vector<cv::Point2f> backProjection;
                cv::projectPoints(viewsObjectPoints[i], outRvecs[i], outTvecs[i],
                    outCameraMatrix, outDistCoeff, backProjection);
                EXPECT_MAT_NEAR(backProjection, viewsNoisyImagePoints[i], 1.5*pixelNoiseHalfWidth);
                EXPECT_MAT_NEAR(backProjection, viewsImagePoints[i], 1.5*pixelNoiseHalfWidth);
            }
        }
        pixelNoiseHalfWidth *= .25;
    }
}
modules/core/include/opencv2/core/matx.hpp
View file @
c5d4ecae
...
...
@@ -114,6 +114,10 @@ public:
    Matx(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11); //!< 1x12, 2x6, 3x4, 4x3, 6x2 or 12x1 matrix
    Matx(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11, _Tp v12, _Tp v13); //!< 1x14, 2x7, 7x2 or 14x1 matrix
    Matx(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11,
...
...
@@ -319,6 +323,7 @@ public:
    Vec(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7); //!< 8-element vector constructor
    Vec(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8); //!< 9-element vector constructor
    Vec(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9); //!< 10-element vector constructor
    Vec(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11, _Tp v12, _Tp v13); //!< 14-element vector constructor
    explicit Vec(const _Tp* values);

    Vec(const Vec<_Tp, cn>& v);
...
...
@@ -581,6 +586,17 @@ Matx<_Tp,m,n>::Matx(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp
    for (int i = 12; i < channels; i++) val[i] = _Tp(0);
}

template<typename _Tp, int m, int n> inline
Matx<_Tp, m, n>::Matx(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11, _Tp v12, _Tp v13)
{
    CV_StaticAssert(channels == 14, "Matx should have at least 14 elements.");
    val[0] = v0; val[1] = v1; val[2] = v2; val[3] = v3;
    val[4] = v4; val[5] = v5; val[6] = v6; val[7] = v7;
    val[8] = v8; val[9] = v9; val[10] = v10; val[11] = v11;
    val[12] = v12; val[13] = v13;
}

template<typename _Tp, int m, int n> inline
Matx<_Tp, m, n>::Matx(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11, _Tp v12, _Tp v13, _Tp v14, _Tp v15)
{
...
...
@@ -931,6 +947,10 @@ template<typename _Tp, int cn> inline
template<typename _Tp, int cn> inline
Vec<_Tp, cn>::Vec(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9)
    : Matx<_Tp, cn, 1>(v0, v1, v2, v3, v4, v5, v6, v7, v8, v9)
{}

template<typename _Tp, int cn> inline
Vec<_Tp, cn>::Vec(_Tp v0, _Tp v1, _Tp v2, _Tp v3, _Tp v4, _Tp v5, _Tp v6, _Tp v7, _Tp v8, _Tp v9, _Tp v10, _Tp v11, _Tp v12, _Tp v13)
    : Matx<_Tp, cn, 1>(v0, v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13)
{}

template<typename _Tp, int cn> inline
Vec<_Tp, cn>::Vec(const _Tp* values)
    : Matx<_Tp, cn, 1>(values)
{}
...
...
modules/imgproc/include/opencv2/imgproc.hpp
View file @
c5d4ecae
...
...
@@ -2598,8 +2598,8 @@ the same.
@param dst Output (corrected) image that has the same size and type as src .
@param cameraMatrix Input camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
-\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])\f$ of 4, 5, or 8 elements. If the vector is
-NULL/empty, the zero distortion coefficients are assumed.
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is
+NULL/empty, the zero distortion coefficients are assumed.
@param newCameraMatrix Camera matrix of the distorted image. By default, it is the same as
cameraMatrix but you may additionally scale and shift the result by using a different matrix.
*/
...
...
@@ -2625,8 +2625,28 @@ The function actually builds the maps for the inverse mapping algorithm that is
is, for each pixel \f$(u, v)\f$ in the destination (corrected and rectified) image, the function
computes the corresponding coordinates in the source image (that is, in the original image from
camera). The following process is applied:
-\f[\begin{array}{l} x \leftarrow (u - {c'}_x)/{f'}_x \\ y \leftarrow (v - {c'}_y)/{f'}_y \\{[X\,Y\,W]} ^T \leftarrow R^{-1}*[x \, y \, 1]^T \\ x" \leftarrow x' (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 x' y' + p_2(r^2 + 2 x'^2) \\ y" \leftarrow y' (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \\ map_x(u,v) \leftarrow x" f_x + c_x \\ map_y(u,v) \leftarrow y" f_y + c_y \end{array}\f]
-where \f$(k_1, k_2, p_1, p_2[, k_3])\f$ are the distortion coefficients.
+\f[
+\begin{array}{l}
+x \leftarrow (u - {c'}_x)/{f'}_x \\
+y \leftarrow (v - {c'}_y)/{f'}_y \\
+{[X\,Y\,W]} ^T \leftarrow R^{-1}*[x \, y \, 1]^T \\
+x' \leftarrow X/W \\
+y' \leftarrow Y/W \\
+r^2 \leftarrow x'^2 + y'^2 \\
+x'' \leftarrow x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}
++ 2p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4 \\
+y'' \leftarrow y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}
++ p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\
+s\vecthree{x'''}{y'''}{1} =
+\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
+{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
+{0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1} \\
+map_x(u,v) \leftarrow x''' f_x + c_x \\
+map_y(u,v) \leftarrow y''' f_y + c_y
+\end{array}
+\f]
+where \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+are the distortion coefficients.
In case of a stereo camera, this function is called twice: once for each camera head, after
stereoRectify, which in its turn is called after cv::stereoCalibrate. But if the stereo camera
...
...
@@ -2639,8 +2659,8 @@ where cameraMatrix can be chosen arbitrarily.
@param cameraMatrix Input camera matrix \f$A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
-\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])\f$ of 4, 5, or 8 elements. If the vector is
-NULL/empty, the zero distortion coefficients are assumed.
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is
+NULL/empty, the zero distortion coefficients are assumed.
@param R Optional rectification transformation in the object space (3x3 matrix). R1 or R2 ,
computed by stereoRectify can be passed here. If the matrix is empty, the identity transformation
is assumed. In cvInitUndistortMap R assumed to be an identity matrix.
...
...
@@ -2715,8 +2735,8 @@ The function can be used for both a stereo camera head or a monocular camera (wh
transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
@param cameraMatrix Camera matrix \f$\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
@param distCoeffs Input vector of distortion coefficients
-\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])\f$ of 4, 5, or 8 elements. If the vector is
-NULL/empty, the zero distortion coefficients are assumed.
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is
+NULL/empty, the zero distortion coefficients are assumed.
@param R Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by
cv::stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.
@param P New camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by
...
...
modules/imgproc/include/opencv2/imgproc/detail/distortion_model.hpp
0 → 100644
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_IMGPROC_DETAIL_DISTORTION_MODEL_HPP__
#define __OPENCV_IMGPROC_DETAIL_DISTORTION_MODEL_HPP__

//! @cond IGNORED

namespace cv { namespace detail {
/**
Computes the matrix for the projection onto a tilted image sensor
\param tauX rotation angle around the x-axis
\param tauY rotation angle around the y-axis
\param matTilt if not NULL returns the matrix
\f[
\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
{0}{0}{1} R(\tau_x, \tau_y)
\f]
where
\f[
R(\tau_x, \tau_y) =
\vecthreethree{\cos(\tau_y)}{0}{-\sin(\tau_y)}{0}{1}{0}{\sin(\tau_y)}{0}{\cos(\tau_y)}
\vecthreethree{1}{0}{0}{0}{\cos(\tau_x)}{\sin(\tau_x)}{0}{-\sin(\tau_x)}{\cos(\tau_x)} =
\vecthreethree{\cos(\tau_y)}{\sin(\tau_y)\sin(\tau_x)}{-\sin(\tau_y)\cos(\tau_x)}
{0}{\cos(\tau_x)}{\sin(\tau_x)}
{\sin(\tau_y)}{-\cos(\tau_y)\sin(\tau_x)}{\cos(\tau_y)\cos(\tau_x)}.
\f]
\param dMatTiltdTauX if not NULL it returns the derivative of matTilt with
respect to \f$\tau_x\f$.
\param dMatTiltdTauY if not NULL it returns the derivative of matTilt with
respect to \f$\tau_y\f$.
\param invMatTilt if not NULL it returns the inverse of matTilt
**/
template <typename FLOAT>
void computeTiltProjectionMatrix(FLOAT tauX,
    FLOAT tauY,
    Matx<FLOAT, 3, 3>* matTilt = 0,
    Matx<FLOAT, 3, 3>* dMatTiltdTauX = 0,
    Matx<FLOAT, 3, 3>* dMatTiltdTauY = 0,
    Matx<FLOAT, 3, 3>* invMatTilt = 0)
{
    FLOAT cTauX = cos(tauX);
    FLOAT sTauX = sin(tauX);
    FLOAT cTauY = cos(tauY);
    FLOAT sTauY = sin(tauY);
    Matx<FLOAT, 3, 3> matRotX = Matx<FLOAT, 3, 3>(1, 0, 0, 0, cTauX, sTauX, 0, -sTauX, cTauX);
    Matx<FLOAT, 3, 3> matRotY = Matx<FLOAT, 3, 3>(cTauY, 0, -sTauY, 0, 1, 0, sTauY, 0, cTauY);
    Matx<FLOAT, 3, 3> matRotXY = matRotY * matRotX;
    Matx<FLOAT, 3, 3> matProjZ = Matx<FLOAT, 3, 3>(matRotXY(2, 2), 0, -matRotXY(0, 2),
        0, matRotXY(2, 2), -matRotXY(1, 2),
        0, 0, 1);
    if (matTilt)
    {
        // Matrix for trapezoidal distortion of tilted image sensor
        *matTilt = matProjZ * matRotXY;
    }
    if (dMatTiltdTauX)
    {
        // Derivative with respect to tauX
        Matx<FLOAT, 3, 3> dMatRotXYdTauX = matRotY * Matx<FLOAT, 3, 3>(0, 0, 0, 0, -sTauX, cTauX, 0, -cTauX, -sTauX);
        Matx<FLOAT, 3, 3> dMatProjZdTauX = Matx<FLOAT, 3, 3>(dMatRotXYdTauX(2, 2), 0, -dMatRotXYdTauX(0, 2),
            0, dMatRotXYdTauX(2, 2), -dMatRotXYdTauX(1, 2),
            0, 0, 0);
        *dMatTiltdTauX = (matProjZ * dMatRotXYdTauX) + (dMatProjZdTauX * matRotXY);
    }
    if (dMatTiltdTauY)
    {
        // Derivative with respect to tauY
        Matx<FLOAT, 3, 3> dMatRotXYdTauY = Matx<FLOAT, 3, 3>(-sTauY, 0, -cTauY, 0, 0, 0, cTauY, 0, -sTauY) * matRotX;
        Matx<FLOAT, 3, 3> dMatProjZdTauY = Matx<FLOAT, 3, 3>(dMatRotXYdTauY(2, 2), 0, -dMatRotXYdTauY(0, 2),
            0, dMatRotXYdTauY(2, 2), -dMatRotXYdTauY(1, 2),
            0, 0, 0);
        *dMatTiltdTauY = (matProjZ * dMatRotXYdTauY) + (dMatProjZdTauY * matRotXY);
    }
    if (invMatTilt)
    {
        FLOAT inv = 1./matRotXY(2, 2);
        Matx<FLOAT, 3, 3> invMatProjZ = Matx<FLOAT, 3, 3>(inv, 0, inv*matRotXY(0, 2),
            0, inv, inv*matRotXY(1, 2),
            0, 0, 1);
        *invMatTilt = matRotXY.t()*invMatProjZ;
    }
}

}} // namespace detail, cv

//! @endcond

#endif // __OPENCV_IMGPROC_DETAIL_DISTORTION_MODEL_HPP__
modules/imgproc/src/undistort.cpp
...
...
@@ -41,6 +41,7 @@
 //M*/
 #include "precomp.hpp"
+#include "opencv2/imgproc/detail/distortion_model.hpp"

 cv::Mat cv::getDefaultNewCameraMatrix( InputArray _cameraMatrix, Size imgsize,
                                        bool centerPrincipalPoint )
...
...
@@ -94,7 +95,7 @@ void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoef
         distCoeffs = Mat_<double>(distCoeffs);
     else
     {
-        distCoeffs.create(12, 1, CV_64F);
+        distCoeffs.create(14, 1, CV_64F);
         distCoeffs = 0.;
     }
...
...
@@ -109,7 +110,8 @@ void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoef
     CV_Assert( distCoeffs.size() == Size(1, 4) || distCoeffs.size() == Size(4, 1) ||
                distCoeffs.size() == Size(1, 5) || distCoeffs.size() == Size(5, 1) ||
                distCoeffs.size() == Size(1, 8) || distCoeffs.size() == Size(8, 1) ||
-               distCoeffs.size() == Size(1, 12) || distCoeffs.size() == Size(12, 1));
+               distCoeffs.size() == Size(1, 12) || distCoeffs.size() == Size(12, 1) ||
+               distCoeffs.size() == Size(1, 14) || distCoeffs.size() == Size(14, 1));

     if( distCoeffs.rows != 1 && !distCoeffs.isContinuous() )
         distCoeffs = distCoeffs.t();
...
...
@@ -127,6 +129,12 @@ void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoef
     double s2 = distCoeffs.cols + distCoeffs.rows - 1 >= 12 ? distPtr[9] : 0.;
     double s3 = distCoeffs.cols + distCoeffs.rows - 1 >= 12 ? distPtr[10] : 0.;
     double s4 = distCoeffs.cols + distCoeffs.rows - 1 >= 12 ? distPtr[11] : 0.;
+    double tauX = distCoeffs.cols + distCoeffs.rows - 1 >= 14 ? distPtr[12] : 0.;
+    double tauY = distCoeffs.cols + distCoeffs.rows - 1 >= 14 ? distPtr[13] : 0.;
+
+    // Matrix for trapezoidal distortion of tilted image sensor
+    cv::Matx33d matTilt = cv::Matx33d::eye();
+    cv::detail::computeTiltProjectionMatrix(tauX, tauY, &matTilt);
+
     for( int i = 0; i < size.height; i++ )
     {
...
...
@@ -142,8 +150,12 @@ void cv::initUndistortRectifyMap( InputArray _cameraMatrix, InputArray _distCoef
             double x2 = x*x, y2 = y*y;
             double r2 = x2 + y2, _2xy = 2*x*y;
             double kr = (1 + ((k3*r2 + k2)*r2 + k1)*r2)/(1 + ((k6*r2 + k5)*r2 + k4)*r2);
-            double u = fx*(x*kr + p1*_2xy + p2*(r2 + 2*x2) + s1*r2 + s2*r2*r2) + u0;
-            double v = fy*(y*kr + p1*(r2 + 2*y2) + p2*_2xy + s3*r2 + s4*r2*r2) + v0;
+            double xd = (x*kr + p1*_2xy + p2*(r2 + 2*x2) + s1*r2 + s2*r2*r2);
+            double yd = (y*kr + p1*(r2 + 2*y2) + p2*_2xy + s3*r2 + s4*r2*r2);
+            cv::Vec3d vecTilt = matTilt*cv::Vec3d(xd, yd, 1);
+            double invProj = vecTilt(2) ? 1./vecTilt(2) : 1;
+            double u = fx*invProj*vecTilt(0) + u0;
+            double v = fy*invProj*vecTilt(1) + v0;
             if( m1type == CV_16SC2 )
             {
                 int iu = saturate_cast<int>(u*INTER_TAB_SIZE);
...
...
@@ -266,7 +278,7 @@ void cvUndistortPoints( const CvMat* _src, CvMat* _dst, const CvMat* _cameraMatr
                        const CvMat* _distCoeffs,
                        const CvMat* matR, const CvMat* matP )
{
-    double A[3][3], RR[3][3], k[12]={0,0,0,0,0,0,0,0,0,0,0,0}, fx, fy, ifx, ify, cx, cy;
+    double A[3][3], RR[3][3], k[14]={0,0,0,0,0,0,0,0,0,0,0,0,0,0}, fx, fy, ifx, ify, cx, cy;
     CvMat matA = cvMat( 3, 3, CV_64F, A ), _Dk;
     CvMat _RR = cvMat( 3, 3, CV_64F, RR );
     const CvPoint2D32f* srcf;
...
...
@@ -276,6 +288,7 @@ void cvUndistortPoints( const CvMat* _src, CvMat* _dst, const CvMat* _cameraMatr
     int stype, dtype;
     int sstep, dstep;
     int i, j, n, iters = 1;
+    cv::Matx33d invMatTilt = cv::Matx33d::eye();

     CV_Assert( CV_IS_MAT(_src) && CV_IS_MAT(_dst) &&
         (_src->rows == 1 || _src->cols == 1) &&
...
...
@@ -296,13 +309,16 @@ void cvUndistortPoints( const CvMat* _src, CvMat* _dst, const CvMat* _cameraMatr
             (_distCoeffs->rows*_distCoeffs->cols == 4 ||
              _distCoeffs->rows*_distCoeffs->cols == 5 ||
              _distCoeffs->rows*_distCoeffs->cols == 8 ||
-             _distCoeffs->rows*_distCoeffs->cols == 12));
+             _distCoeffs->rows*_distCoeffs->cols == 12 ||
+             _distCoeffs->rows*_distCoeffs->cols == 14));

         _Dk = cvMat( _distCoeffs->rows, _distCoeffs->cols,
             CV_MAKETYPE(CV_64F, CV_MAT_CN(_distCoeffs->type)), k);

         cvConvert( _distCoeffs, &_Dk );
         iters = 5;
+        if (k[12] != 0 || k[13] != 0)
+            cv::detail::computeTiltProjectionMatrix<double>(k[12], k[13], NULL, NULL, NULL, &invMatTilt);
     }

     if( matR )
...
...
@@ -354,8 +370,14 @@ void cvUndistortPoints( const CvMat* _src, CvMat* _dst, const CvMat* _cameraMatr
             y = srcd[i*sstep].y;
         }
-        x0 = x = (x - cx)*ifx;
-        y0 = y = (y - cy)*ify;
+        x = (x - cx)*ifx;
+        y = (y - cy)*ify;
+
+        // compensate tilt distortion
+        cv::Vec3d vecUntilt = invMatTilt * cv::Vec3d(x, y, 1);
+        double invProj = vecUntilt(2) ? 1./vecUntilt(2) : 1;
+        x0 = x = invProj * vecUntilt(0);
+        y0 = y = invProj * vecUntilt(1);

         // compensate distortion iteratively
         for( j = 0; j < iters; j++ )
...
...
@@ -500,7 +522,7 @@ float cv::initWideAngleProjMap( InputArray _cameraMatrix0, InputArray _distCoeff
                                 OutputArray _map1, OutputArray _map2, int projType, double _alpha )
 {
     Mat cameraMatrix0 = _cameraMatrix0.getMat(), distCoeffs0 = _distCoeffs0.getMat();
-    double k[12] = {0,0,0,0,0,0,0,0,0,0,0,0}, M[9] = {0,0,0,0,0,0,0,0,0};
+    double k[14] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0}, M[9] = {0,0,0,0,0,0,0,0,0};
     Mat distCoeffs(distCoeffs0.rows, distCoeffs0.cols, CV_MAKETYPE(CV_64F, distCoeffs0.channels()), k);
     Mat cameraMatrix(3, 3, CV_64F, M);
     Point2f scenter((float)cameraMatrix.at<double>(0, 2), (float)cameraMatrix.at<double>(1, 2));
...
...
@@ -513,7 +535,7 @@ float cv::initWideAngleProjMap( InputArray _cameraMatrix0, InputArray _distCoeff
     int ndcoeffs = distCoeffs0.cols*distCoeffs0.rows*distCoeffs0.channels();
     CV_Assert((distCoeffs0.cols == 1 || distCoeffs0.rows == 1) &&
-              (ndcoeffs == 4 || ndcoeffs == 5 || ndcoeffs == 8));
+              (ndcoeffs == 4 || ndcoeffs == 5 || ndcoeffs == 8 || ndcoeffs == 12 || ndcoeffs == 14));
     CV_Assert(cameraMatrix0.size() == Size(3, 3));
     distCoeffs0.convertTo(distCoeffs, CV_64F);
     cameraMatrix0.convertTo(cameraMatrix, CV_64F);
...
...
@@ -540,6 +562,8 @@ float cv::initWideAngleProjMap( InputArray _cameraMatrix0, InputArray _distCoeff
     Mat mapxy(dsize, CV_32FC2);
     double k1 = k[0], k2 = k[1], k3 = k[2], p1 = k[3], p2 = k[4], k4 = k[5], k5 = k[6],
            k6 = k[7], s1 = k[8], s2 = k[9], s3 = k[10], s4 = k[11];
     double fx = cameraMatrix.at<double>(0, 0), fy = cameraMatrix.at<double>(1, 1), cx = scenter.x, cy = scenter.y;
+    cv::Matx33d matTilt;
+    cv::detail::computeTiltProjectionMatrix(k[12], k[13], &matTilt);

     for( int y = 0; y < dsize.height; y++ )
     {
...
...
@@ -556,8 +580,12 @@ float cv::initWideAngleProjMap( InputArray _cameraMatrix0, InputArray _distCoeff
             double x2 = q.x*q.x, y2 = q.y*q.y;
             double r2 = x2 + y2, _2xy = 2*q.x*q.y;
             double kr = 1 + ((k3*r2 + k2)*r2 + k1)*r2/(1 + ((k6*r2 + k5)*r2 + k4)*r2);
-            double u = fx*(q.x*kr + p1*_2xy + p2*(r2 + 2*x2) + s1*r2 + s2*r2*r2) + cx;
-            double v = fy*(q.y*kr + p1*(r2 + 2*y2) + p2*_2xy + s3*r2 + s4*r2*r2) + cy;
+            double xd = (q.x*kr + p1*_2xy + p2*(r2 + 2*x2) + s1*r2 + s2*r2*r2);
+            double yd = (q.y*kr + p1*(r2 + 2*y2) + p2*_2xy + s3*r2 + s4*r2*r2);
+            cv::Vec3d vecTilt = matTilt*cv::Vec3d(xd, yd, 1);
+            double invProj = vecTilt(2) ? 1./vecTilt(2) : 1;
+            double u = fx*invProj*vecTilt(0) + cx;
+            double v = fy*invProj*vecTilt(1) + cy;
             mxy[x] = Point2f((float)u, (float)v);
         }
...
...