Commit ab3b5429 authored by Alfonso Sanchez-Beato

reg module: Add sample and README file

README.md file with a short description of the module is added. An example
of how to use the module can now be found in reg/samples/.
parent e1980726
# OpenCV pixel-intensity based registration module
Author and maintainer: Alfonso Sanchez-Beato
alfonsosanchezbeato\_\_\_\_gmail.com

These classes implement an OpenCV module for parametric image registration.
The implemented method is direct alignment, that is, it uses the pixel values
directly to calculate the registration between a pair of images, as opposed
to feature-based registration. The implementation essentially follows the
corresponding part of the paper "Image Alignment and Stitching: A Tutorial"
by Richard Szeliski.

Feature-based methods have some advantages over pixel-based methods when we
are trying to register pictures that have been shot under different lighting
conditions or exposure times, or when the images overlap only partially. On
the other hand, the main advantage of pixel-based methods compared to
feature-based methods is their better precision for some pictures (those shot
under similar lighting conditions and with significant overlap), because all
the information available in the image is used, which makes subpixel accuracy
achievable. This is particularly important for applications such as
multi-frame denoising or super-resolution.

In fact, pixel and feature registration methods can complement each other: an
application could first obtain a coarse registration using features and then
refine it using a pixel-based method on the overlapping area of the images.
The code supports this use case.

The module implements classes derived from the abstract classes cv::reg::Map
and cv::reg::Mapper. The former models a coordinate transformation between
two reference frames, while the latter encapsulates a way of invoking a
method that calculates a Map between two images. Although the objective has
been to implement pixel-based methods, the module could be extended to
support other methods that calculate transformations between images (feature
methods, optical flow, etc.).
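
For illustration, here is a minimal sketch of how the two abstractions
interact, using only the calls exercised by the sample code in reg/samples/
(the function name `roles` is invented for the example):

```cpp
#include "opencv2/reg/mapshift.hpp"
#include "opencv2/reg/mappergradshift.hpp"

using namespace cv;
using namespace cv::reg;

// A Mapper estimates the motion; the resulting Map applies or undoes it.
void roles(const Mat& img1, const Mat& img2)
{
    MapperGradShift mapper;            // a Mapper: estimates a translation
    Ptr<Map> map;
    mapper.calculate(img1, img2, map); // produces a Map (here, a MapShift)

    Mat corrected;
    map->inverseWarp(img2, corrected); // the Map undoes the estimated motion
}
```
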
Each class derived from Map implements a motion model, as follows:
* MapShift: Models a simple translation
* MapAffine: Models an affine transformation
* MapProjec: Models a projective transformation

MapProjec can also be used to model affine motion or translations, but some
operations on it are more costly, which is the reason for defining the other
two classes. The sketch below shows how each motion model is constructed.
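
As a minimal sketch (the constructor calls mirror those in map_test.cpp
below; the function name `motionModels` is invented for the example):

```cpp
#include "opencv2/reg/mapshift.hpp"
#include "opencv2/reg/mapaffine.hpp"
#include "opencv2/reg/mapprojec.hpp"

using namespace cv;
using namespace cv::reg;

void motionModels()
{
    // Pure translation: x' = x + t
    Vec<double, 2> t(5., 5.);
    MapShift shift(t);

    // Affine motion: x' = A*x + t
    Matx<double, 2, 2> A(1., 0.1, -0.01, 1.);
    MapAffine affine(A, t);

    // Projective motion: x' ~ H*x in homogeneous coordinates
    Matx<double, 3, 3> H(1., 0., 0., 0., 1., 0., 0.0001, 0.0001, 1.);
    MapProjec projec(H);

    // Any Map can warp an image according to its motion model
    Mat src = Mat::zeros(100, 100, CV_64FC3), dst;
    affine.warp(src, dst);
}
```
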
The classes derived from Mapper are:
* MapperGradShift: Gradient-based alignment for calculating translations. It
produces a MapShift (two parameters that correspond to the shift vector).
* MapperGradEuclid: Gradient-based alignment for Euclidean motions, that is,
rotations and translations. It calculates three parameters (angle and shift
vector), although the result is stored in a MapAffine object for convenience.
* MapperGradSimilar: Gradient-based alignment for calculating similarities,
which adds scaling to the Euclidean motion. It calculates four parameters (two
for the scaled-rotation matrix and two for the shift vector), although the
result is stored in a MapAffine object for convenience.
* MapperGradAffine: Gradient-based alignment for an affine motion model. The
number of parameters is six and the result is stored in a MapAffine object.
* MapperGradProj: Gradient-based alignment for calculating projective
transformations. The number of parameters is eight and the result is stored
in a MapProjec object.
* MapperPyramid: Implements hierarchical motion estimation using a Gaussian
pyramid. Its constructor accepts as argument any other object that implements
the Mapper interface, and that mapper is the one MapperPyramid calls at each
scale of the pyramid.

If the motion between the images is not very small, the normal way of using
these classes is to create a MapperGrad\* object and use it to create a
MapperPyramid, which in turn is called to perform the calculation. However,
if the motion between the images is small enough, the MapperGrad\* classes
can be used directly. Another possibility is to use a feature-based method
first to perform a coarse registration and then refine it through
MapperPyramid or a MapperGrad\* object directly; the "calculate" method of
the mappers accepts an initial estimation of the motion as input. Both
patterns are sketched below.
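
A minimal sketch of both patterns, distilled from map_test.cpp below (the
helper names `registerPair` and `refine` are invented for the example, and
the initial estimation is assumed to be supplied through the Ptr<Map>
argument of calculate):

```cpp
#include "opencv2/reg/mapprojec.hpp"
#include "opencv2/reg/mappergradproj.hpp"
#include "opencv2/reg/mapperpyramid.hpp"

using namespace cv;
using namespace cv::reg;

// Large motion: wrap the gradient mapper in a Gaussian pyramid.
Ptr<Map> registerPair(const Mat& img1, const Mat& img2)
{
    MapperGradProj mapper;           // projective model, 8 parameters
    MapperPyramid mappPyr(mapper);
    Ptr<Map> mapPtr;                 // left empty: no initial estimation
    mappPyr.calculate(img1, img2, mapPtr);
    return mapPtr;
}

// Coarse-to-fine: refine a homography H obtained elsewhere (e.g. with
// features, as in calcHomographyFeature below) using the pixel-based mapper.
Ptr<Map> refine(const Mat& img1, const Mat& img2,
                const Matx<double, 3, 3>& H)
{
    Ptr<Map> mapPtr(new MapProjec(H)); // starting point for the search
    MapperGradProj mapper;
    MapperPyramid mappPyr(mapper);
    mappPyr.calculate(img1, img2, mapPtr);
    return mapPtr;
}
```
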
When deciding which MapperGrad\* to use, we must take into account that
mappers with more parameters can handle more complex motions, but involve
more calculations and are therefore slower. Also, if we are confident in the
motion model followed by the sequence, increasing the number of parameters
beyond what is needed will decrease the accuracy: it is better to use the
smallest number of degrees of freedom that suffices.

The file map_test.cpp contains examples of how to use this module. There is a
test function for each MapperGrad\*. A motion is simulated on an input image,
and the moved image is then registered using a MapperPyramid created with the
corresponding MapperGrad\*. The difference images of the pictures before and
after registering are displayed, and the ground-truth parameters and the
calculated ones are printed. Additionally, two images from a real video are
registered using first SURF features and then MapperGradProj+MapperPyramid.
The difference between the images and the differences of the registered
images using the two methods are displayed. The differences show that the
pixel-based method achieves higher accuracy.
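
# Build file for the reg module sample (map_test.cpp). Assumes an OpenCV
# installation that includes the reg module.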
cmake_minimum_required(VERSION 2.8)
project(map_test)
find_package(OpenCV REQUIRED)
set(SOURCES map_test.cpp)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(map_test ${SOURCES} ${HEADERS})
target_link_libraries(map_test ${OpenCV_LIBS})
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
// Copyright (C) 2013, Alfonso Sanchez-Beato, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
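// Sample for the reg module: simulates each supported motion model on an
// input image, registers the moved image back with the matching MapperGrad*
// wrapped in a MapperPyramid, and finally compares feature-based (SURF)
// registration with pixel-based registration on two frames of a real video.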
#include <iostream>
#define _USE_MATH_DEFINES
#include <cmath>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
#include <opencv2/imgproc/imgproc.hpp> // OpenCV image transformations
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include "opencv2/reg/mapaffine.hpp"
#include "opencv2/reg/mapshift.hpp"
#include "opencv2/reg/mapprojec.hpp"
#include "opencv2/reg/mappergradshift.hpp"
#include "opencv2/reg/mappergradeuclid.hpp"
#include "opencv2/reg/mappergradsimilar.hpp"
#include "opencv2/reg/mappergradaffine.hpp"
#include "opencv2/reg/mappergradproj.hpp"
#include "opencv2/reg/mapperpyramid.hpp"
static const char* DIFF_IM = "Image difference";
static const char* DIFF_REGPIX_IM = "Image difference: pixel registered";
using namespace cv;
using namespace cv::reg;
using namespace std;
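// Show the per-pixel difference of two images: the difference is halved and
// centered on mid-gray (128) so that deviations of either sign are visible.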
void showDifference(const Mat& image1, const Mat& image2, const char* title)
{
Mat img1, img2;
image1.convertTo(img1, CV_32FC3);
image2.convertTo(img2, CV_32FC3);
if(img1.channels() != 1)
cvtColor(img1, img1, CV_RGB2GRAY);
if(img2.channels() != 1)
cvtColor(img2, img2, CV_RGB2GRAY);
Mat imgDiff;
img1.copyTo(imgDiff);
imgDiff -= img2;
imgDiff /= 2.f;
imgDiff += 128.f;
Mat imgSh;
imgDiff.convertTo(imgSh, CV_8UC3);
imshow(title, imgSh);
}
void testShift(const Mat& img1)
{
Mat img2;
// Warp original image
Vec<double, 2> shift(5., 5.);
MapShift mapTest(shift);
mapTest.warp(img1, img2);
showDifference(img1, img2, DIFF_IM);
// Register
MapperGradShift mapper;
MapperPyramid mappPyr(mapper);
Ptr<Map> mapPtr;
mappPyr.calculate(img1, img2, mapPtr);
// Print result
MapShift* mapShift = dynamic_cast<MapShift*>(mapPtr.get());
cout << endl << "--- Testing shift mapper ---" << endl;
cout << Mat(shift) << endl;
cout << Mat(mapShift->getShift()) << endl;
// Display registration accuracy
Mat dest;
mapShift->inverseWarp(img2, dest);
showDifference(img1, dest, DIFF_REGPIX_IM);
waitKey(0);
destroyWindow(DIFF_IM);
destroyWindow(DIFF_REGPIX_IM);
}
void testEuclidean(const Mat& img1)
{
Mat img2;
// Warp original image
double theta = 3*M_PI/180;
double cosT = cos(theta);
double sinT = sin(theta);
Matx<double, 2, 2> linTr(cosT, -sinT, sinT, cosT);
Vec<double, 2> shift(5., 5.);
MapAffine mapTest(linTr, shift);
mapTest.warp(img1, img2);
showDifference(img1, img2, DIFF_IM);
// Register
MapperGradEuclid mapper;
MapperPyramid mappPyr(mapper);
Ptr<Map> mapPtr;
mappPyr.calculate(img1, img2, mapPtr);
// Print result
MapAffine* mapAff = dynamic_cast<MapAffine*>(mapPtr.get());
cout << endl << "--- Testing Euclidean mapper ---" << endl;
cout << Mat(linTr) << endl;
cout << Mat(shift) << endl;
cout << Mat(mapAff->getLinTr()) << endl;
cout << Mat(mapAff->getShift()) << endl;
// Display registration accuracy
Mat dest;
mapAff->inverseWarp(img2, dest);
showDifference(img1, dest, DIFF_REGPIX_IM);
waitKey(0);
destroyWindow(DIFF_IM);
destroyWindow(DIFF_REGPIX_IM);
}
void testSimilarity(const Mat& img1)
{
Mat img2;
// Warp original image
double theta = 3*M_PI/180;
double scale = 0.95;
double a = scale*cos(theta);
double b = scale*sin(theta);
Matx<double, 2, 2> linTr(a, -b, b, a);
Vec<double, 2> shift(5., 5.);
MapAffine mapTest(linTr, shift);
mapTest.warp(img1, img2);
showDifference(img1, img2, DIFF_IM);
// Register
MapperGradSimilar mapper;
MapperPyramid mappPyr(mapper);
Ptr<Map> mapPtr;
mappPyr.calculate(img1, img2, mapPtr);
// Print result
MapAffine* mapAff = dynamic_cast<MapAffine*>(mapPtr.get());
cout << endl << "--- Testing similarity mapper ---" << endl;
cout << Mat(linTr) << endl;
cout << Mat(shift) << endl;
cout << Mat(mapAff->getLinTr()) << endl;
cout << Mat(mapAff->getShift()) << endl;
// Display registration accuracy
Mat dest;
mapAff->inverseWarp(img2, dest);
showDifference(img1, dest, DIFF_REGPIX_IM);
waitKey(0);
destroyWindow(DIFF_IM);
destroyWindow(DIFF_REGPIX_IM);
}
void testAffine(const Mat& img1)
{
Mat img2;
// Warp original image
Matx<double, 2, 2> linTr(1., 0.1, -0.01, 1.);
Vec<double, 2> shift(1., 1.);
MapAffine mapTest(linTr, shift);
mapTest.warp(img1, img2);
showDifference(img1, img2, DIFF_IM);
// Register
MapperGradAffine mapper;
MapperPyramid mappPyr(mapper);
Ptr<Map> mapPtr;
mappPyr.calculate(img1, img2, mapPtr);
// Print result
MapAffine* mapAff = dynamic_cast<MapAffine*>(mapPtr.get());
cout << endl << "--- Testing affine mapper ---" << endl;
cout << Mat(linTr) << endl;
cout << Mat(shift) << endl;
cout << Mat(mapAff->getLinTr()) << endl;
cout << Mat(mapAff->getShift()) << endl;
// Display registration accuracy
Mat dest;
mapAff->inverseWarp(img2, dest);
showDifference(img1, dest, DIFF_REGPIX_IM);
waitKey(0);
destroyWindow(DIFF_IM);
destroyWindow(DIFF_REGPIX_IM);
}
void testProjective(const Mat& img1)
{
Mat img2;
// Warp original image
Matx<double, 3, 3> projTr(1., 0., 0., 0., 1., 0., 0.0001, 0.0001, 1);
MapProjec mapTest(projTr);
mapTest.warp(img1, img2);
showDifference(img1, img2, DIFF_IM);
// Register
MapperGradProj mapper;
MapperPyramid mappPyr(mapper);
Ptr<Map> mapPtr;
mappPyr.calculate(img1, img2, mapPtr);
// Print result
MapProjec* mapProj = dynamic_cast<MapProjec*>(mapPtr.get());
mapProj->normalize();
cout << endl << "--- Testing projective transformation mapper ---" << endl;
cout << Mat(projTr) << endl;
cout << Mat(mapProj->getProjTr()) << endl;
// Display registration accuracy
Mat dest;
mapProj->inverseWarp(img2, dest);
showDifference(img1, dest, DIFF_REGPIX_IM);
waitKey(0);
destroyWindow(DIFF_IM);
destroyWindow(DIFF_REGPIX_IM);
}
//
// Following an example from
// http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
//
void calcHomographyFeature(const Mat& image1, const Mat& image2)
{
static const char* difffeat = "Difference feature registered";
Mat gray_image1;
Mat gray_image2;
// Convert to Grayscale
if(image1.channels() != 1)
cvtColor(image1, gray_image1, CV_RGB2GRAY);
else
image1.copyTo(gray_image1);
if(image2.channels() != 1)
cvtColor(image2, gray_image2, CV_RGB2GRAY);
else
image2.copyTo(gray_image2);
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector(minHessian);
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect(gray_image1, keypoints_object);
detector.detect(gray_image2, keypoints_scene);
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute(gray_image1, keypoints_object, descriptors_object);
extractor.compute(gray_image2, keypoints_scene, descriptors_scene);
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors_object, descriptors_scene, matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for(int i = 0; i < descriptors_object.rows; i++)
{
double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist)
std::vector<DMatch> good_matches;
for(int i = 0; i < descriptors_object.rows; i++) {
if(matches[i].distance < 3*min_dist) {
good_matches.push_back( matches[i]);
}
}
std::vector< Point2f > obj;
std::vector< Point2f > scene;
for(size_t i = 0; i < good_matches.size(); i++)
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
// Find the Homography Matrix
Mat H = findHomography( obj, scene, CV_RANSAC );
// Use the Homography Matrix to warp the images
Mat result;
Mat Hinv = H.inv();
warpPerspective(image2, result, Hinv, image1.size());
cout << "--- Feature method\n" << H << endl;
Mat imf1, resf;
image1.convertTo(imf1, CV_64FC3);
result.convertTo(resf, CV_64FC3);
showDifference(imf1, resf, difffeat);
}
void calcHomographyPixel(const Mat& img1, const Mat& img2)
{
static const char* diffpixel = "Difference pixel registered";
// Register using pixel differences
MapperGradProj mapper;
MapperPyramid mappPyr(mapper);
Ptr<Map> mapPtr;
mappPyr.calculate(img1, img2, mapPtr);
// Print result
MapProjec* mapProj = dynamic_cast<MapProjec*>(mapPtr.get());
mapProj->normalize();
cout << "--- Pixel-based method\n" << Mat(mapProj->getProjTr()) << endl;
// Display registration accuracy
Mat dest;
mapProj->inverseWarp(img2, dest);
showDifference(img1, dest, diffpixel);
}
void comparePixelVsFeature(const Mat& img1_8b, const Mat& img2_8b)
{
static const char* difforig = "Difference non-registered";
// Show difference of images
Mat img1, img2;
img1_8b.convertTo(img1, CV_64FC3);
img2_8b.convertTo(img2, CV_64FC3);
showDifference(img1, img2, difforig);
cout << endl << "--- Comparing feature-based with pixel difference based ---" << endl;
// Register using SURF keypoints
calcHomographyFeature(img1_8b, img2_8b);
// Register using pixel differences
calcHomographyPixel(img1, img2);
waitKey(0);
}
int main(void)
{
Mat img1;
img1 = imread("home.png", CV_LOAD_IMAGE_UNCHANGED);
if(!img1.data) {
cout << "Could not open or find file" << endl;
return -1;
}
// Convert to double, 3 channels
img1.convertTo(img1, CV_64FC3);
testShift(img1);
testEuclidean(img1);
testSimilarity(img1);
testAffine(img1);
testProjective(img1);
Mat imgcmp1 = imread("LR_05.png", CV_LOAD_IMAGE_UNCHANGED);
if(!imgcmp1.data) {
cout << "Could not open or find file" << endl;
return -1;
}
Mat imgcmp2 = imread("LR_06.png", CV_LOAD_IMAGE_UNCHANGED);
if(!imgcmp2.data) {
cout << "Could not open or find file" << endl;
return -1;
}
comparePixelVsFeature(imgcmp1, imgcmp2);
return 0;
}