Commit 286d6ffe authored by Vadim Pisarevsky's avatar Vadim Pisarevsky

Merge pull request #49 from Bellaktris/gsoc

Structured edge detection & State-of-the-art image inpainting
parents 468d3ecc 63e1c044
.. ximgproc:
Structured forests for fast edge detection
******************************************
Introduction
------------
Today most digital images and imaging devices use 8 bits per channel, thus limiting the dynamic range of the device to two orders of magnitude (actually 256 levels), while the human eye can adapt to lighting conditions varying by ten orders of magnitude. When we take photographs of a real-world scene, bright regions may be overexposed while the dark ones may be underexposed, so we can't capture all details using a single exposure. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing a much wider dynamic range.
There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine these exposures it is useful to know your camera's response function, and there are algorithms to estimate it. After the HDR image has been blended it has to be converted back to 8-bit to view it on usual displays. This process is called tonemapping. Additional complexities arise when objects of the scene or the camera move between shots, since images with different exposures should be registered and aligned.
In this tutorial we show how to generate and display an HDR image from an exposure sequence. In our case the images are already aligned and there are no moving objects. We also demonstrate an alternative approach called exposure fusion that produces a low dynamic range image. Each step of the HDR pipeline can be implemented using different algorithms, so take a look at the reference manual to see them all.
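The following is only a minimal sketch of the merge-and-tonemap pipeline described above; it assumes OpenCV's photo module is available and that ``images`` and ``times`` already hold the aligned exposures and their exposure times:
.. code-block:: cpp
#include <opencv2/photo.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>
// aligned 8-bit exposures and their exposure times, assumed to be filled in elsewhere
std::vector<cv::Mat> images;
std::vector<float> times;
cv::Mat hdr, ldr;
cv::Ptr<cv::MergeDebevec> mergeDebevec = cv::createMergeDebevec();
mergeDebevec->process(images, hdr, times);   // blend exposures into a 32-bit float HDR image
cv::Ptr<cv::Tonemap> tonemap = cv::createTonemap(2.2f);
tonemap->process(hdr, ldr);                  // map back to [0;1] for display on usual screens
cv::imwrite("fusion.png", ldr * 255);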
Examples
--------
.. image:: images/01.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/02.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/03.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/04.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/05.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/06.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/07.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/08.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/09.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/10.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/11.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
.. image:: images/12.jpg
:height: 238pt
:width: 750pt
:alt: First example
:align: center
**Note:** binarization techniques like the Canny edge detector are applicable
to edges produced by both algorithms (``Sobel`` and ``StructuredEdgeDetection::detectEdges``).
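For instance, a minimal sketch of binarizing the floating-point edge map returned by ``detectEdges`` (the variable ``edges`` refers to the output computed in the explanation below):
.. code-block:: cpp
cv::Mat edges8u, edgesBin;
edges.convertTo(edges8u, CV_8U, 255);    // rescale the [0;1] float edge map to 8 bits
cv::Canny(edges8u, edgesBin, 50, 150);   // or: cv::threshold(edges8u, edgesBin, 64, 255, cv::THRESH_BINARY)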
Source Code
-----------
.. literalinclude:: ../../../../modules/ximgproc/samples/cpp/structured_edge_detection.cpp
:language: cpp
:linenos:
:tab-width: 4
Explanation
-----------
1. **Load source color image**
.. code-block:: cpp
cv::Mat image = cv::imread(inFilename, 1);
if ( image.empty() )
{
printf("Cannot read image file: %s\n", inFilename.c_str());
return -1;
}
2. **Convert source image to [0;1] range**
.. code-block:: cpp
image.convertTo(image, cv::DataType<float>::type, 1/255.0);
3. **Run main algorithm**
.. code-block:: cpp
cv::Mat edges(image.size(), image.type());
cv::Ptr<StructuredEdgeDetection> pDollar =
cv::createStructuredEdgeDetection(modelFilename);
pDollar->detectEdges(image, edges);
4. **Show results**
.. code-block:: cpp
if ( outFilename == "" )
{
cv::namedWindow("edges", 1);
cv::imshow("edges", edges);
cv::waitKey(0);
}
else
cv::imwrite(outFilename, 255*edges);
Literature
----------
For more information, refer to the following papers:
.. [Dollar2013] Dollar P., Zitnick C. L., "Structured forests for fast edge detection",
IEEE International Conference on Computer Vision (ICCV), 2013,
pp. 1841-1848. `DOI <http://dx.doi.org/10.1109/ICCV.2013.231>`_
.. [Lim2013] Lim J. J., Zitnick C. L., Dollar P., "Sketch Tokens: A Learned
Mid-level Representation for Contour and Object Detection",
Computer Vision and Pattern Recognition (CVPR), 2013,
pp. 3158-3165. `DOI <http://dx.doi.org/10.1109/CVPR.2013.406>`_
function modelConvert(model, outname)
%% script for converting Piotr's matlab model into YAML format
outfile = fopen(outname, 'w');
fprintf(outfile, '%%YAML:1.0\n\n');
fprintf(outfile, ['options:\n'...
' numberOfTrees: 8\n'...
' numberOfTreesToEvaluate: 4\n'...
' selfsimilarityGridSize: 5\n'...
' stride: 2\n'...
' shrinkNumber: 2\n'...
' patchSize: 32\n'...
' patchInnerSize: 16\n'...
' numberOfGradientOrientations: 4\n'...
' gradientSmoothingRadius: 0\n'...
' regFeatureSmoothingRadius: 2\n'...
' ssFeatureSmoothingRadius: 8\n'...
' gradientNormalizationRadius: 4\n\n']);
fprintf(outfile, 'childs:\n');
printToYML(outfile, model.child', 0);
fprintf(outfile, 'featureIds:\n');
printToYML(outfile, model.fids', 0);
fprintf(outfile, 'thresholds:\n');
printToYML(outfile, model.thrs', 0);
N = 1000;
fprintf(outfile, 'edgeBoundaries:\n');
printToYML(outfile, model.eBnds, N);
fprintf(outfile, 'edgeBins:\n');
printToYML(outfile, model.eBins, N);
fclose(outfile);
gzip(outname);
end
function printToYML(outfile, A, N)
%% append matrix A to outfile as
%% - [a11, a12, a13, a14, ..., a1n]
%% - [a21, a22, a23, a24, ..., a2n]
%% ...
%%
%% if size(A, 2) == 1, A is printed N elements per row
if (length(size(A)) ~= 2)
error('printToYML: second-argument matrix should have two dimensions');
end
if (size(A,2) ~= 1)
for i=1:size(A,1)
fprintf(outfile, ' - [');
fprintf(outfile, '%d,', A(i, 1:end-1));
fprintf(outfile, '%d]\n', A(i, end));
end
else
len = length(A);
for i=1:ceil(len/N)
first = (i-1)*N + 1;
last = min(i*N, len) - 1;
fprintf(outfile, ' - [');
fprintf(outfile, '%d,', A(first:last));
fprintf(outfile, '%d]\n', A(last + 1));
end
end
fprintf(outfile, '\n');
end
\ No newline at end of file
.. ximgproc:
Structured forest training
**************************
Introduction
------------
In this tutorial we show how to train your own structured forest using the authors' original Matlab implementation.
Training pipeline
-----------------
1. Download "Piotr's Toolbox" from `link <http://vision.ucsd.edu/~pdollar/toolbox/doc/index.html>`_
and put it into a separate directory, e.g. PToolbox
2. Download the BSDS500 dataset from `link <http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/>`_
and put it into a separate directory named exactly BSR
3. Add both directories and their subdirectories to the Matlab path.
4. Download the detector code from `link <http://research.microsoft.com/en-us/downloads/389109f6-b4e8-404c-84bf-239f7cbf4e3d/>`_
and put it into the root directory. Now you should have ::
.
BSR
PToolbox
models
private
Contents.m
edgesChns.m
edgesDemo.m
edgesDemoRgbd.m
edgesDetect.m
edgesEval.m
edgesEvalDir.m
edgesEvalImg.m
edgesEvalPlot.m
edgesSweeps.m
edgesTrain.m
license.txt
readme.txt
5. Rename models/forest/modelFinal.mat to models/forest/modelFinal.mat.backup
6. Open edgesChns.m and comment out lines 26--41. After the commented lines, add the following: ::
shrink=opts.shrink;
chns = single(getFeatures( im2double(I) ));
7. Now it is time to compile the promised getFeatures. This can be done with the following code:
.. code-block:: cpp
#include <cv.h>
#include <highgui.h>
#include <mat.h>
#include <mex.h>
#include "MxArray.hpp" // https://github.com/kyamagu/mexopencv
class NewRFFeatureGetter : public cv::RFFeatureGetter
{
public:
NewRFFeatureGetter() : name("NewRFFeatureGetter"){}
virtual void getFeatures(const cv::Mat &src, NChannelsMat &features,
const int gnrmRad, const int gsmthRad,
const int shrink, const int outNum, const int gradNum) const
{
// here goes your feature extraction code; the
// resulting features Mat should be an n-channel, floating point matrix
}
protected:
cv::String name;
};
MEXFUNCTION_LINKAGE void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
if (nlhs != 1) mexErrMsgTxt("nlhs != 1");
if (nrhs != 2) mexErrMsgTxt("nrhs != 2");
cv::Mat src = MxArray(prhs[0]).toMat();
src.convertTo(src, cv::DataType<float>::type);
std::string modelFile = MxArray(prhs[1]).toString();
cv::Ptr<NewRFFeatureGetter> pDollar = cv::makePtr<NewRFFeatureGetter>();
cv::Mat edges;
pDollar->getFeatures(src, edges, 4, 0, 2, 13, 4);
// you can use other numbers here
edges.convertTo(edges, cv::DataType<double>::type);
plhs[0] = MxArray(edges);
}
8. Place the compiled mex file into the root directory and run edgesDemo.
You will need to wait a couple of hours; after that the new model
will appear inside models/forest/.
9. The final step is converting the trained model from Matlab binary format
to YAML, which you can use with our ocv::StructuredEdgeDetection.
For this purpose run opencv_contrib/doc/tutorials/ximpgroc/training/modelConvert(model, "model.yml")
How to use your model
---------------------
Just use the expanded constructor with the class NewRFFeatureGetter defined above:
.. code-block:: cpp
cv::Ptr<NewRFFeatureGetter> featureGetter = makePtr<NewRFFeatureGetter>();
cv::Ptr<cv::StructuredEdgeDetection> pDollar
= cv::createStructuredEdgeDetection( modelName, featureGetter.get() );
set(the_description "Extended image processing module. It includes edge-aware filters and etc.")
ocv_define_module(ximgproc opencv_imgproc opencv_core opencv_highgui)
target_link_libraries(opencv_ximgproc)
\ No newline at end of file
target_link_libraries(opencv_ximgproc)
Structured forests for fast edge detection
******************************************
.. highlight:: cpp
This module contains implementations of modern structured edge detection algorithms,
i.e. algorithms which take into account pixel affinities in natural images.
StructuredEdgeDetection
-----------------------
.. ocv:class:: StructuredEdgeDetection : public Algorithm
Class implementing edge detection algorithm from [Dollar2013]_ ::
/*! \class StructuredEdgeDetection
Prediction part of [P. Dollar and C. L. Zitnick. Structured Forests for Fast Edge Detection, 2013].
*/
class CV_EXPORTS_W StructuredEdgeDetection : public Algorithm
{
public:
/*!
* The function detects edges in src and draws them to dst
*
* \param src : source image (RGB, float, in [0;1]) to detect edges
* \param dst : destination image (grayscale, float, in [0;1])
* where edges are drawn
*/
CV_WRAP virtual void detectEdges(const Mat &src, Mat &dst) const = 0;
};
/*!
* The only available constructor loading data from model file
*
* \param model : name of the file where the model is stored
*/
CV_EXPORTS_W Ptr<StructuredEdgeDetection> createStructuredEdgeDetection(const String &model);
StructuredEdgeDetection::detectEdges
++++++++++++++++++++++++++++++++++++
.. ocv:function:: void detectEdges(const Mat src, Mat dst)
The function detects edges in src and draws them to dst. The algorithm underlying this function
is much more robust to the presence of texture than common approaches, e.g. Sobel.
:param src: source image (RGB, float, in [0;1]) to detect edges
:param dst: destination image (grayscale, float, in [0;1])
where edges are drawn
.. seealso::
:ocv:class:`Sobel`,
:ocv:class:`Canny`
createStructuredEdgeDetection
+++++++++++++++++++++++++++++
.. ocv:function:: Ptr<cv::StructuredEdgeDetection> createStructuredEdgeDetection(String model)
The only available constructor
:param model: model file name
.. [Dollar2013] P. Dollár, C. L. Zitnick, "Structured forests for fast edge detection",
IEEE International Conference on Computer Vision (ICCV), 2013,
pp. 1841-1848. `DOI <http://dx.doi.org/10.1109/ICCV.2013.231>`_
......@@ -8,3 +8,4 @@ ximgproc. Extended image processing module.
:maxdepth: 2
edge_aware_filters
structured_edge_detection
......@@ -38,5 +38,6 @@
#define __OPENCV_XIMGPROC_HPP__
#include "ximgproc/edge_filter.hpp"
#include "ximgproc/structured_edge_detection.hpp"
#endif
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_STRUCTURED_EDGE_DETECTION_HPP__
#define __OPENCV_STRUCTURED_EDGE_DETECTION_HPP__
#ifdef __cplusplus
/*
* structured_edge_detection.hpp
*
* Created on: Jun 17, 2014
* Author: Yury Gitman
*/
#include <opencv2/core.hpp>
/*! \namespace cv
Namespace where all the C++ OpenCV functionality resides
*/
namespace cv
{
namespace ximgproc
{
/*! \class RFFeatureGetter
Helper class for training part of [P. Dollar and C. L. Zitnick. Structured Forests for Fast Edge Detection, 2013].
*/
class CV_EXPORTS_W RFFeatureGetter : public Algorithm
{
public:
/*!
* This function extracts feature channels from src.
* StructuredEdgeDetection then uses this feature space
* to detect edges.
*
* \param src : source image to extract features
* \param features : output n-channel floating point feature matrix.
*
* \param gnrmRad : __rf.options.gradientNormalizationRadius
* \param gsmthRad : __rf.options.gradientSmoothingRadius
* \param shrink : __rf.options.shrinkNumber
* \param outNum : __rf.options.numberOfOutputChannels
* \param gradNum : __rf.options.numberOfGradientOrientations
*/
CV_WRAP virtual void getFeatures(const Mat &src, Mat &features,
const int gnrmRad,
const int gsmthRad,
const int shrink,
const int outNum,
const int gradNum) const = 0;
};
CV_EXPORTS_W Ptr<RFFeatureGetter> createRFFeatureGetter();
/*! \class StructuredEdgeDetection
Prediction part of [P. Dollar and C. L. Zitnick. Structured Forests for Fast Edge Detection, 2013].
*/
class CV_EXPORTS_W StructuredEdgeDetection : public Algorithm
{
public:
/*!
* The function detects edges in src and draws them to dst
*
* \param src : source image (RGB, float, in [0;1]) to detect edges
* \param dst : destination image (grayscale, float, in [0;1])
* where edges are drawn
*/
CV_WRAP virtual void detectEdges(const Mat &src, Mat &dst) const = 0;
};
/*!
* The only constructor
*
* \param model : name of the file where the model is stored
* \param howToGetFeatures : optional object inheriting from RFFeatureGetter.
* You need it only if you would like to train your
* own forest, pass NULL otherwise
*/
CV_EXPORTS_W Ptr<StructuredEdgeDetection> createStructuredEdgeDetection(const String &model,
const RFFeatureGetter *howToGetFeatures = NULL);
}
}
#endif
#endif /* __OPENCV_STRUCTURED_EDGE_DETECTION_HPP__ */
\ No newline at end of file
#include <opencv2/ximgproc.hpp>
#include "opencv2/highgui.hpp"
#include "opencv2/core/utility.hpp"
using namespace cv;
using namespace cv::ximgproc;
const char* keys =
{
"{i || input image name}"
"{m || model name}"
"{o || output image name}"
};
int main( int argc, const char** argv )
{
bool printHelp = ( argc == 1 );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "--help" );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "-h" );
if ( printHelp )
{
printf("\nThis sample demonstrates structured forests for fast edge detection\n"
"Call:\n"
" structured_edge_detection -i=in_image_name -m=model_name [-o=out_image_name]\n\n");
return 0;
}
cv::CommandLineParser parser(argc, argv, keys);
if ( !parser.check() )
{
parser.printErrors();
return -1;
}
std::string modelFilename = parser.get<std::string>("m");
std::string inFilename = parser.get<std::string>("i");
std::string outFilename = parser.get<std::string>("o");
cv::Mat image = cv::imread(inFilename, 1);
if ( image.empty() )
{
printf("Cannot read image file: %s\n", inFilename.c_str());
return -1;
}
image.convertTo(image, cv::DataType<float>::type, 1/255.0);
cv::Mat edges(image.size(), image.type());
cv::Ptr<StructuredEdgeDetection> pDollar =
createStructuredEdgeDetection(modelFilename);
pDollar->detectEdges(image, edges);
if ( outFilename == "" )
{
cv::namedWindow("edges", 1);
cv::imshow("edges", edges);
cv::waitKey(0);
}
else
cv::imwrite(outFilename, 255*edges);
return 0;
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __ADVANCED_TYPES_HPP__
#define __ADVANCED_TYPES_HPP__
#ifdef __cplusplus
#include <opencv2/core.hpp>
/********************* Defines *********************/
#ifndef CV_SQR
# define CV_SQR(x) ((x)*(x))
#endif
#ifndef CV_CUBE
# define CV_CUBE(x) ((x)*(x)*(x))
#endif
#ifndef CV_INIT_VECTOR
# define CV_INIT_VECTOR(vname, type, ...) \
static const type vname##_a[] = __VA_ARGS__; \
std::vector <type> vname(vname##_a, \
vname##_a + sizeof(vname##_a) / sizeof(*vname##_a))
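// illustrative usage (hypothetical example, not from the original header):
//   CV_INIT_VECTOR(weights, float, {0.25f, 0.5f, 0.25f});
// expands to a static array plus a std::vector<float> weights initialized from it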
#endif
/********************* Types *********************/
/*! fictitious type to highlight that function
* can process n-channels arguments */
typedef cv::Mat NChannelsMat;
/********************* Functions *********************/
namespace cv
{
namespace ximgproc
{
template <typename _Tp, typename _Tp2> inline
cv::Size_<_Tp> operator * (const _Tp2 &x, const cv::Size_<_Tp> &sz)
{
return cv::Size_<_Tp>(cv::saturate_cast<_Tp>(x*sz.width), cv::saturate_cast<_Tp>(x*sz.height));
}
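// e.g. 0.5 * cv::Size(640, 480) yields cv::Size(320, 240); each component is saturate_cast'ed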
template <typename _Tp, typename _Tp2> inline
cv::Size_<_Tp> operator / (const cv::Size_<_Tp> &sz, const _Tp2 &x)
{
return cv::Size_<_Tp>(cv::saturate_cast<_Tp>(sz.width/x), cv::saturate_cast<_Tp>(sz.height/x));
}
} // cv
}
#endif
#endif /* __ADVANCED_TYPES_HPP__ */
\ No newline at end of file
This diff is collapsed.
#include "test_precomp.hpp"
CV_TEST_MAIN("")
\ No newline at end of file
CV_TEST_MAIN("ximpgroc")
......@@ -9,12 +9,13 @@
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include <opencv2/ts.hpp>
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgproc/types_c.h"
#include "opencv2/ximgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/ts.hpp"
#include <opencv2/ts/ts_perf.hpp>
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/ximgproc.hpp>
#endif
\ No newline at end of file
#endif
#include "test_precomp.hpp"
namespace cvtest
{
TEST(ximgproc_StructuredEdgeDetection, regression)
{
cv::String dir = cvtest::TS::ptr()->get_data_path();
int nTests = 12;
float threshold = 0.01f;
cv::String modelName = dir + "model.yml.gz";
cv::Ptr<cv::ximgproc::StructuredEdgeDetection> pDollar =
cv::ximgproc::createStructuredEdgeDetection(modelName);
for (int i = 0; i < nTests; ++i)
{
cv::String srcName = dir + cv::format( "sources/%02d.png", i + 1);
cv::Mat src = cv::imread( srcName, 1 );
cv::String previousResultName = dir + cv::format( "results/%02d.png", i + 1 );
cv::Mat previousResult = cv::imread( previousResultName, 0 );
previousResult.convertTo( previousResult, cv::DataType<float>::type, 1/255.0 );
src.convertTo( src, cv::DataType<float>::type, 1/255.0 );
cv::Mat currentResult( src.size(), src.type() );
pDollar->detectEdges( src, currentResult );
cv::Mat sqrError = ( currentResult - previousResult )
.mul( currentResult - previousResult );
cv::Scalar mse = cv::sum(sqrError) / cv::Scalar::all( double( sqrError.total() ) );
EXPECT_LE( mse[0], threshold );
}
}
}
\ No newline at end of file
set(the_description "Addon to basic photo module")
ocv_define_module(xphoto opencv_core opencv_imgproc OPTIONAL opencv_photo opencv_highgui)
\ No newline at end of file
Automatic white balance correction
**********************************
.. highlight:: cpp
balanceWhite
------------
.. ocv:function:: void balanceWhite(const Mat &src, Mat &dst, const int algorithmType, const float inputMin = 0.0f, const float inputMax = 255.0f, const float outputMin = 0.0f, const float outputMax = 255.0f)
The function implements different algorithms of automatic white balance, i.e.
it tries to map the image's white color to perceptual white (this can be violated
due to specific illumination or camera settings).
:param src : source image
:param dst : destination image
:param algorithmType : type of the algorithm to use. Use WHITE_BALANCE_SIMPLE to perform smart histogram adjustments (ignoring 4% pixels with minimal and maximal values) for each channel.
:param inputMin : minimum value in the input image
:param inputMax : maximum value in the input image
:param outputMin : minimum value in the output image
:param outputMax : maximum value in the output image
.. seealso::
:ocv:func:`cvtColor`,
:ocv:func:`equalizeHist`
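A minimal usage sketch (the file name is illustrative; the input is assumed to be an 8-bit, 3-channel image):
.. code-block:: cpp
cv::Mat src = cv::imread("photo.jpg", 1);
cv::Mat dst(src.size(), src.type());
cv::balanceWhite(src, dst, cv::WHITE_BALANCE_SIMPLE);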
\ No newline at end of file
Image denoising techniques
**************************
.. highlight:: cpp
dctDenoising
------------
.. ocv:function:: void dctDenoising(const Mat &src, Mat &dst, const double sigma, const int psize = 16)
The function implements simple dct-based denoising,
link: http://www.ipol.im/pub/art/2011/ys-dct/.
:param src : source image
:param dst : destination image
:param sigma : expected noise standard deviation
:param psize : size of block side where dct is computed
.. seealso::
:ocv:func:`fastNlMeansDenoising`
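A minimal usage sketch (the file name is illustrative; ``sigma`` should roughly match the noise level of the input):
.. code-block:: cpp
cv::Mat src = cv::imread("noisy.png", 1);
cv::Mat dst;
cv::dctDenoising(src, dst, 15.0, 16);   // sigma = 15, DCT computed on 16x16 blocks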
Single image inpainting
***********************
.. highlight:: cpp
Inpainting
----------
.. ocv:function:: void inpaint(const Mat &src, const Mat &mask, Mat &dst, const int algorithmType)
The function implements different single-image inpainting algorithms.
:param src : source image; it can be of any type and have any number of channels from 1 to 4. In the case of 3- and 4-channel images the function expects them to be in the CIELab colorspace or a similar one, where the first component encodes intensity while the second and third encode color. Nonetheless you can try any colorspace.
:param mask : mask (CV_8UC1), where non-zero pixels indicate valid image area, while zero pixels indicate area to be inpainted
:param dst : destination image
:param algorithmType : type of the inpainting algorithm to use, one of the following:
* INPAINT_SHIFTMAP: This algorithm searches for dominant correspondences (transformations) of image patches and tries to seamlessly fill in the area to be inpainted using these transformations. See the original paper [He2012]_ for details.
.. [He2012] K. He, J. Sun, "Statistics of Patch Offsets for Image Completion",
European Conference on Computer Vision (ECCV), 2012,
pp. 16-29. `DOI <http://dx.doi.org/10.1007/978-3-642-33709-3_2>`_
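A minimal usage sketch mirroring the bundled sample (file names are illustrative; the mask must be 8-bit single-channel, with zero pixels marking the area to inpaint):
.. code-block:: cpp
cv::Mat src = cv::imread("image.png", 1);
cv::Mat mask = cv::imread("mask.png", 0);
cv::Mat dst(src.size(), src.type());
cv::cvtColor(src, src, cv::COLOR_RGB2Lab);          // as recommended above, work in CIELab
cv::inpaint(src, mask, dst, cv::INPAINT_SHIFTMAP);
cv::cvtColor(dst, dst, cv::COLOR_Lab2RGB);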
\ No newline at end of file
***********************************
xphoto. Addon to basic photo module
***********************************
.. toctree::
:maxdepth: 2
Color balance <colorbalance/whitebalance>
Denoising <denoising/denoising>
Inpainting <inpainting/inpainting>
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_EDGEDETECTION_HPP__
#define __OPENCV_EDGEDETECTION_HPP__
#include "opencv2/xphoto.hpp"
#include "opencv2/xphoto/inpainting.hpp"
#include "opencv2/xphoto/simple_color_balance.hpp"
#include "opencv2/xphoto/dct_image_denoising.hpp"
#endif
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_DCT_IMAGE_DENOISING_HPP__
#define __OPENCV_DCT_IMAGE_DENOISING_HPP__
/*
* dct_image_denoising.hpp
*
* Created on: Jun 26, 2014
* Author: Yury Gitman
*/
#include <opencv2/core.hpp>
/*! \namespace cv
Namespace where all the C++ OpenCV functionality resides
*/
namespace cv
{
/*! This function implements simple dct-based image denoising,
* link: http://www.ipol.im/pub/art/2011/ys-dct/
*
* \param src : source image
* \param dst : destination image
* \param sigma : expected noise standard deviation
* \param psize : size of block side where dct is computed
*/
CV_EXPORTS_W void dctDenoising(const Mat &src, Mat &dst, const double sigma, const int psize = 16);
}
#endif // __OPENCV_DCT_IMAGE_DENOISING_HPP__
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_INPAINTING_HPP__
#define __OPENCV_INPAINTING_HPP__
/*
* inpainting.hpp
*
* Created on: Jul 22, 2014
* Author: Yury Gitman
*/
#include <opencv2/core.hpp>
/*! \namespace cv
Namespace where all the C++ OpenCV functionality resides
*/
namespace cv
{
//! various inpainting algorithms
enum
{
INPAINT_SHIFTMAP = 0
};
/*! The function reconstructs the selected image area from known area.
* \param src : source image.
* \param mask : inpainting mask, 8-bit 1-channel image. Zero pixels indicate the area that needs to be inpainted.
* \param dst : destination image.
* \param algorithmType : inpainting method.
*/
CV_EXPORTS_W void inpaint(const Mat &src, const Mat &mask, Mat &dst, const int algorithmType);
}
#endif // __OPENCV_INPAINTING_HPP__
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_SIMPLE_COLOR_BALANCE_HPP__
#define __OPENCV_SIMPLE_COLOR_BALANCE_HPP__
/*
* simple_color_balance.hpp
*
* Created on: Jun 26, 2014
* Author: Yury Gitman
*/
#include <opencv2/core.hpp>
/*! \namespace cv
Namespace where all the C++ OpenCV functionality resides
*/
namespace cv
{
//! various white balance algorithms
enum
{
WHITE_BALANCE_SIMPLE = 0,
WHITE_BALANCE_GRAYWORLD = 1
};
/*! This function implements different white balance algorithms
* \param src : source image
* \param dst : destination image
* \param algorithmType : type of the algorithm to use
* \param inputMin : minimum input value
* \param inputMax : maximum input value
* \param outputMin : minimum output value
* \param outputMax : maximum output value
*/
CV_EXPORTS_W void balanceWhite(const Mat &src, Mat &dst, const int algorithmType,
const float inputMin = 0.0f, const float inputMax = 255.0f,
const float outputMin = 0.0f, const float outputMax = 255.0f);
}
#endif // __OPENCV_SIMPLE_COLOR_BALANCE_HPP__
\ No newline at end of file
#include "opencv2/xphoto.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/imgproc/types_c.h"
const char* keys =
{
"{i || input image name}"
"{o || output image name}"
"{sigma || expected noise standard deviation}"
"{psize |16| expected noise standard deviation}"
};
int main( int argc, const char** argv )
{
bool printHelp = ( argc == 1 );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "--help" );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "-h" );
if ( printHelp )
{
printf("\nThis sample demonstrates dct-based image denoising\n"
"Call:\n"
" dct_image_denoising -i=<string> -sigma=<double> -psize=<int> [-o=<string>]\n\n");
return 0;
}
cv::CommandLineParser parser(argc, argv, keys);
if ( !parser.check() )
{
parser.printErrors();
return -1;
}
std::string inFilename = parser.get<std::string>("i");
std::string outFilename = parser.get<std::string>("o");
cv::Mat src = cv::imread(inFilename, 1);
if ( src.empty() )
{
printf("Cannot read image file: %s\n", inFilename.c_str());
return -1;
}
double sigma = parser.get<double>("sigma");
if (sigma == 0.0)
sigma = 15.0;
int psize = parser.get<int>("psize");
if (psize == 0)
psize = 16;
cv::Mat res(src.size(), src.type());
cv::dctDenoising(src, res, sigma, psize);
if ( outFilename == "" )
{
cv::namedWindow("denoising result", 1);
cv::imshow("denoising result", res);
cv::waitKey(0);
}
else
cv::imwrite(outFilename, res);
return 0;
}
\ No newline at end of file
#include "opencv2/xphoto.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/imgproc/types_c.h"
const char* keys =
{
"{i || input image name}"
"{m || mask image name}"
"{o || output image name}"
};
int main( int argc, const char** argv )
{
bool printHelp = ( argc == 1 );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "--help" );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "-h" );
if ( printHelp )
{
printf("\nThis sample demonstrates shift-map image inpainting\n"
"Call:\n"
" inpainting -i=<string> -m=<string> [-o=<string>]\n\n");
return 0;
}
cv::CommandLineParser parser(argc, argv, keys);
if ( !parser.check() )
{
parser.printErrors();
return -1;
}
std::string inFilename = parser.get<std::string>("i");
std::string maskFilename = parser.get<std::string>("m");
std::string outFilename = parser.get<std::string>("o");
cv::Mat src = cv::imread(inFilename, -1);
if ( src.empty() )
{
printf( "Cannot read image file: %s\n", inFilename.c_str() );
return -1;
}
cv::cvtColor(src, src, CV_RGB2Lab);
cv::Mat mask = cv::imread(maskFilename, 0);
if ( mask.empty() )
{
printf( "Cannot read image file: %s\n", maskFilename.c_str() );
return -1;
}
cv::Mat res(src.size(), src.type());
cv::inpaint( src, mask, res, cv::INPAINT_SHIFTMAP );
cv::cvtColor(res, res, CV_Lab2RGB);
if ( outFilename == "" )
{
cv::namedWindow("inpainting result", 1);
cv::imshow("inpainting result", res);
cv::waitKey(0);
}
else
cv::imwrite(outFilename, res);
return 0;
}
\ No newline at end of file
#include "opencv2/xphoto.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/core/utility.hpp"
#include "opencv2/imgproc/types_c.h"
const char* keys =
{
"{i || input image name}"
"{o || output image name}"
};
int main( int argc, const char** argv )
{
bool printHelp = ( argc == 1 );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "--help" );
printHelp = printHelp || ( argc == 2 && std::string(argv[1]) == "-h" );
if ( printHelp )
{
printf("\nThis sample demonstrates simple color balance algorithm\n"
"Call:\n"
" simple_color_blance -i=in_image_name [-o=out_image_name]\n\n");
return 0;
}
cv::CommandLineParser parser(argc, argv, keys);
if ( !parser.check() )
{
parser.printErrors();
return -1;
}
std::string inFilename = parser.get<std::string>("i");
std::string outFilename = parser.get<std::string>("o");
cv::Mat src = cv::imread(inFilename, 1);
if ( src.empty() )
{
printf("Cannot read image file: %s\n", inFilename.c_str());
return -1;
}
cv::Mat res(src.size(), src.type());
cv::balanceWhite(src, res, cv::WHITE_BALANCE_SIMPLE);
if ( outFilename == "" )
{
cv::namedWindow("after white balance", 1);
cv::imshow("after white balance", res);
cv::waitKey(0);
}
else
cv::imwrite(outFilename, res);
return 0;
}
\ No newline at end of file
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __ANNF_HPP__
#define __ANNF_HPP__
#include "norm2.hpp"
#include "whs.hpp"
/************************* KDTree class *************************/
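// fills the range starting at it with the consecutive integers first, first + 1, ..., last - 1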
template <typename ForwardIterator> void
generate_seq(ForwardIterator it, int first, int last)
{
for (int i = first; i < last; ++i, ++it)
*it = i;
}
/////////////////////////////////////////////////////
/////////////////////////////////////////////////////
template <typename Tp, int cn> class KDTree
{
private:
class KDTreeComparator
{
const KDTree <Tp, cn> *main; // main class
int dimIdx; // dimension to compare
public:
bool operator () (const int &x, const int &y) const
{
cv::Vec <Tp, cn> u = main->data[main->idx[x]];
cv::Vec <Tp, cn> v = main->data[main->idx[y]];
return u[dimIdx] < v[dimIdx];
}
KDTreeComparator(const KDTree <Tp, cn> *_main, int _dimIdx)
: main(_main), dimIdx(_dimIdx) {}
};
const int height, width;
const int leafNumber; // maximum number of point per leaf
const int zeroThresh; // radius of prohibited shifts
std::vector <cv::Vec <Tp, cn> > data;
std::vector <int> idx;
std::vector <cv::Point2i> nodes;
int getMaxSpreadN(const int left, const int right) const;
void operator =(const KDTree <Tp, cn> &) const {};
public:
void updateDist(const int leaf, const int &idx0, int &bestIdx, double &dist);
KDTree(const cv::Mat &data, const int leafNumber = 8, const int zeroThresh = 16);
~KDTree(){};
};
template <typename Tp, int cn> int KDTree <Tp, cn>::
getMaxSpreadN(const int _left, const int _right) const
{
cv::Vec<Tp, cn> maxValue = data[ idx[_left] ],
minValue = data[ idx[_left] ];
for (int i = _left + 1; i < _right; i += cn)
for (int j = 0; j < cn; ++j)
{
minValue[j] = std::min( minValue[j], data[idx[i]][j] );
maxValue[j] = std::max( maxValue[j], data[idx[i]][j] );
}
cv::Vec<Tp, cn> spread = maxValue - minValue;
Tp *begIt = &spread[0];
return int(std::max_element(begIt, begIt + cn) - begIt);
}
template <typename Tp, int cn> KDTree <Tp, cn>::
KDTree(const cv::Mat &img, const int _leafNumber, const int _zeroThresh)
: height(img.rows), width(img.cols),
leafNumber(_leafNumber), zeroThresh(_zeroThresh)
///////////////////////////////////////////////////
{
CV_Assert( img.isContinuous() );
std::copy( (cv::Vec <Tp, cn> *) img.data,
(cv::Vec <Tp, cn> *) img.data + img.total(),
std::back_inserter(data) );
generate_seq( std::back_inserter(idx), 0, int(data.size()) );
std::fill_n( std::back_inserter(nodes),
int(data.size()), cv::Point2i(0, 0) );
std::stack <int> left, right;
left.push( 0 );
right.push( int(idx.size()) );
while ( !left.empty() )
{
int _left = left.top(); left.pop();
int _right = right.top(); right.pop();
if ( _right - _left <= leafNumber)
{
for (int i = _left; i < _right; ++i)
nodes[idx[i]] = cv::Point2i(_left, _right);
continue;
}
int nth = _left + (_right - _left)/2;
int dimIdx = getMaxSpreadN(_left, _right);
KDTreeComparator comp( this, dimIdx );
std::nth_element(/**/
idx.begin() + _left,
idx.begin() + nth,
idx.begin() + _right, comp
/**/);
left.push(_left); right.push(nth + 1);
left.push(nth + 1); right.push(_right);
}
}
template <typename Tp, int cn> void KDTree <Tp, cn>::
updateDist(const int leaf, const int &idx0, int &bestIdx, double &dist)
{
for (int k = nodes[leaf].x; k < nodes[leaf].y; ++k)
{
int y = idx0/width, ny = idx[k]/width;
int x = idx0%width, nx = idx[k]%width;
if (abs(ny - y) < zeroThresh &&
abs(nx - x) < zeroThresh)
continue;
if (nx > width - 1 || nx < 1 ||
ny > height - 1 || ny < 1 )
continue;
double ndist = norm2(data[idx0], data[idx[k]]);
if (ndist < dist)
{
dist = ndist;
bestIdx = idx[k];
}
}
}
/************************** ANNF search **************************/
static void dominantTransforms(const cv::Mat &img, std::vector <cv::Matx33f> &transforms,
const int nTransform, const int psize)
{
/** Walsh-Hadamard Transformation **/
std::vector <cv::Mat> channels;
cv::split(img, channels);
int cncase = std::max(img.channels() - 2, 0);
const int np[] = {cncase == 0 ? 12 : (cncase == 1 ? 16 : 10),
cncase == 0 ? 12 : (cncase == 1 ? 04 : 02),
cncase == 0 ? 00 : (cncase == 1 ? 04 : 02),
cncase == 0 ? 00 : (cncase == 1 ? 00 : 10)};
for (int i = 0; i < img.channels(); ++i)
rgb2whs(channels[i], channels[i], np[i], psize);
cv::Mat whs; // Walsh-Hadamard series
cv::merge(channels, whs);
KDTree <float, 24> kdTree(whs, 16, 32);
std::vector <int> annf( whs.total(), 0 );
/** Propagation-assisted kd-tree search **/
for (int i = 0; i < whs.rows; ++i)
for (int j = 0; j < whs.cols; ++j)
{
double dist = std::numeric_limits <double>::max();
int current = i*whs.cols + j;
int dy[] = {0, 1, 0}, dx[] = {0, 0, 1};
for (int k = 0; k < int( sizeof(dy)/sizeof(int) ); ++k)
if (i - dy[k] >= 0 && j - dx[k] >= 0)
{
int neighbor = (i - dy[k])*whs.cols + (j - dx[k]);
int leafIdx = k == 0 ? neighbor :
annf[neighbor] + dy[k]*whs.cols + dx[k];
kdTree.updateDist(leafIdx, current,
annf[i*whs.cols + j], dist);
}
}
/** Local maxima extraction **/
cv::Mat_<double> annfHist(2*whs.rows - 1, 2*whs.cols - 1, 0.0),
_annfHist(2*whs.rows - 1, 2*whs.cols - 1, 0.0);
for (size_t i = 0; i < annf.size(); ++i)
++annfHist( (annf[i] - int(i))/whs.cols + whs.rows - 1,
(annf[i] - int(i))%whs.cols + whs.cols - 1 );
cv::GaussianBlur( annfHist, annfHist,
cv::Size(0, 0), std::sqrt(2.0), 0.0, cv::BORDER_CONSTANT);
cv::dilate( annfHist, _annfHist,
cv::Matx<uchar, 9, 9>::ones() );
std::vector < std::pair<double, int> > amount;
std::vector <cv::Point2i> shiftM;
for (int i = 0, t = 0; i < annfHist.rows; ++i)
{
double *pAnnfHist = annfHist.ptr<double>(i);
double *_pAnnfHist = _annfHist.ptr<double>(i);
for (int j = 0; j < annfHist.cols; ++j)
if ( pAnnfHist[j] != 0 && pAnnfHist[j] == _pAnnfHist[j] )
{
amount.push_back( std::make_pair(pAnnfHist[j], t++) );
shiftM.push_back( cv::Point2i(j - whs.cols + 1,
i - whs.rows + 1) );
}
}
std::partial_sort( amount.begin(), amount.begin() + nTransform,
amount.end(), std::greater< std::pair<double, int> >() );
transforms.resize(nTransform);
for (int i = 0; i < nTransform; ++i)
{
int idx = amount[i].second;
transforms[i] = cv::Matx33f(1, 0, float(shiftM[idx].x),
0, 1, float(shiftM[idx].y),
0, 0, 1 );
}
}
#endif /* __ANNF_HPP__ */
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>
#include "opencv2/xphoto.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/core.hpp"
#include "opencv2/core/core_c.h"
#include "opencv2/core/types.hpp"
#include "opencv2/core/types_c.h"
namespace cv
{
void grayDctDenoising(const Mat &, Mat &, const double, const int);
void rgbDctDenoising(const Mat &, Mat &, const double, const int);
void dctDenoising(const Mat &, Mat &, const double, const int);
struct grayDctDenoisingInvoker : public ParallelLoopBody
{
public:
grayDctDenoisingInvoker(const Mat &src, std::vector <Mat> &patches, const double sigma, const int psize);
~grayDctDenoisingInvoker(){};
void operator() (const Range &range) const;
private:
const Mat &src;
std::vector <Mat> &patches; // image decomposition into sliding patches
const int psize; // size of block to compute dct
const double sigma; // expected noise standard deviation
const double thresh; // thresholding estimate
void operator =(const grayDctDenoisingInvoker&) const {};
};
grayDctDenoisingInvoker::grayDctDenoisingInvoker(const Mat &_src, std::vector <Mat> &_patches,
const double _sigma, const int _psize)
: src(_src), patches(_patches), psize(_psize), sigma(_sigma), thresh(3*_sigma) {}
void grayDctDenoisingInvoker::operator() (const Range &range) const
{
for (int i = range.start; i <= range.end - 1; ++i)
{
int y = i / (src.cols - psize);
int x = i % (src.cols - psize);
Rect patchNum( x, y, psize, psize );
Mat patch(psize, psize, CV_32FC1);
src(patchNum).copyTo( patch );
dct(patch, patch);
float *data = (float *) patch.data;
for (int k = 0; k < psize*psize; ++k)
data[k] *= fabs(data[k]) > thresh;
idct(patch, patches[i]);
}
}
void grayDctDenoising(const Mat &src, Mat &dst, const double sigma, const int psize)
{
CV_Assert( src.type() == CV_MAKE_TYPE(CV_32F, 1) );
int npixels = (src.rows - psize)*(src.cols - psize);
std::vector <Mat> patches;
for (int i = 0; i < npixels; ++i)
patches.push_back( Mat(psize, psize, CV_32FC1) );
parallel_for_( cv::Range(0, npixels),
grayDctDenoisingInvoker(src, patches, sigma, psize) );
Mat res( src.size(), CV_32FC1, 0.0f ),
num( src.size(), CV_32FC1, 0.0f );
for (int k = 0; k < npixels; ++k)
{
int i = k / (src.cols - psize);
int j = k % (src.cols - psize);
res( Rect(j, i, psize, psize) ) += patches[k];
num( Rect(j, i, psize, psize) ) += Mat::ones(psize, psize, CV_32FC1);
}
res /= num;
res.convertTo( dst, src.type() );
}
void rgbDctDenoising(const Mat &src, Mat &dst, const double sigma, const int psize)
{
CV_Assert( src.type() == CV_MAKE_TYPE(CV_32F, 3) );
cv::Matx33f mt(cvInvSqrt(3), cvInvSqrt(3), cvInvSqrt(3),
cvInvSqrt(2), 0.0f, -cvInvSqrt(2),
cvInvSqrt(6), -2.0f*cvInvSqrt(6), cvInvSqrt(6));
cv::transform(src, dst, mt);
std::vector <Mat> mv;
split(dst, mv);
for (size_t i = 0; i < mv.size(); ++i)
grayDctDenoising(mv[i], mv[i], sigma, psize);
merge(mv, dst);
cv::transform( dst, dst, mt.inv() );
}
/*! This function implements simple dct-based image denoising,
* link: http://www.ipol.im/pub/art/2011/ys-dct/
*
* \param src : source image (rgb, or gray)
* \param dst : destination image
* \param sigma : expected noise standard deviation
* \param psize : size of block side where dct is computed
*/
void dctDenoising(const Mat &src, Mat &dst, const double sigma, const int psize)
{
CV_Assert( src.channels() == 3 || src.channels() == 1 );
int xtype = CV_MAKE_TYPE( CV_32F, src.channels() );
Mat img( src.size(), xtype );
src.convertTo(img, xtype);
if ( img.type() == CV_32FC3 )
rgbDctDenoising( img, img, sigma, psize );
else if ( img.type() == CV_32FC1 )
grayDctDenoising( img, img, sigma, psize );
else
CV_Error_( CV_StsNotImplemented,
("Unsupported source image format (=%d)", img.type()) );
img.convertTo( dst, src.type() );
}
}
This diff is collapsed.
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include <vector>
#include <stack>
#include <limits>
#include <algorithm>
#include <iterator>
#include <iostream>
#include <time.h>
#include <functional>
#include "opencv2/xphoto.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/core.hpp"
#include "opencv2/core/core_c.h"
#include "opencv2/core/types.hpp"
#include "opencv2/core/types_c.h"
#include "opencv2/highgui.hpp"
namespace xphotoInternal
{
# include "photomontage.hpp"
# include "annf.hpp"
}
namespace cv
{
template <typename Tp, unsigned int cn>
static void shiftMapInpaint(const Mat &src, const Mat &mask, Mat &dst,
const int nTransform = 60, const int psize = 8)
{
/** Preparing input **/
cv::Mat img;
src.convertTo( img, CV_32F );
img.setTo(0, 255 - mask);
/** ANNF computation: estimate the nTransform dominant patch transforms of the image
    via the approximate nearest-neighbor field (see annf.hpp) **/
std::vector <Matx33f> transforms( nTransform );
xphotoInternal::dominantTransforms(img,
transforms, nTransform, psize);
/** Warping **/
std::vector <Mat> images( nTransform + 1 ); // source image transformed with transforms
std::vector <Mat> masks( nTransform + 1 ); // definition domain for current shift
Mat_<uchar> invMask = 255 - mask;
dilate(invMask, invMask, Mat(), Point(-1,-1), 2);
img.copyTo( images[0] );
mask.copyTo( masks[0] );
for (int i = 0; i < nTransform; ++i)
{
warpPerspective( images[0], images[i + 1], transforms[i],
images[0].size(), INTER_LINEAR );
warpPerspective( masks[0], masks[i + 1], transforms[i],
masks[0].size(), INTER_NEAREST);
masks[i + 1] &= invMask;
}
/** Stitching **/
Mat photomontageResult;
xphotoInternal::Photomontage < cv::Vec <float, cn> >( images, masks )
.assignResImage(photomontageResult);
/** Writing result **/
photomontageResult.convertTo( dst, dst.type() );
}
template <typename Tp, unsigned int cn>
void inpaint(const Mat &src, const Mat &mask, Mat &dst, const int algorithmType)
{
dst.create( src.size(), src.type() );
switch ( algorithmType )
{
case INPAINT_SHIFTMAP:
shiftMapInpaint <Tp, cn>(src, mask, dst);
break;
default:
CV_Error_( CV_StsNotImplemented,
("Unsupported algorithm type (=%d)", algorithmType) );
break;
}
}
/*! The function reconstructs the selected image area from the known image area.
 * \param src : source image.
 * \param mask : inpainting mask, 8-bit 1-channel image. Zero pixels indicate the area that needs to be inpainted.
 * \param dst : destination image.
 * \param algorithmType : inpainting method.
 */
void inpaint(const Mat &src, const Mat &mask, Mat &dst, const int algorithmType)
{
CV_Assert( mask.channels() == 1 && mask.depth() == CV_8U );
CV_Assert( src.rows == mask.rows && src.cols == mask.cols );
switch ( src.type() )
{
case CV_8SC1:
inpaint <char, 1>( src, mask, dst, algorithmType );
break;
case CV_8SC2:
inpaint <char, 2>( src, mask, dst, algorithmType );
break;
case CV_8SC3:
inpaint <char, 3>( src, mask, dst, algorithmType );
break;
case CV_8SC4:
inpaint <char, 4>( src, mask, dst, algorithmType );
break;
case CV_8UC1:
inpaint <uchar, 1>( src, mask, dst, algorithmType );
break;
case CV_8UC2:
inpaint <uchar, 2>( src, mask, dst, algorithmType );
break;
case CV_8UC3:
inpaint <uchar, 3>( src, mask, dst, algorithmType );
break;
case CV_8UC4:
inpaint <uchar, 4>( src, mask, dst, algorithmType );
break;
case CV_16SC1:
inpaint <short, 1>( src, mask, dst, algorithmType );
break;
case CV_16SC2:
inpaint <short, 2>( src, mask, dst, algorithmType );
break;
case CV_16SC3:
inpaint <short, 3>( src, mask, dst, algorithmType );
break;
case CV_16SC4:
inpaint <short, 4>( src, mask, dst, algorithmType );
break;
case CV_16UC1:
inpaint <ushort, 1>( src, mask, dst, algorithmType );
break;
case CV_16UC2:
inpaint <ushort, 2>( src, mask, dst, algorithmType );
break;
case CV_16UC3:
inpaint <ushort, 3>( src, mask, dst, algorithmType );
break;
case CV_16UC4:
inpaint <ushort, 4>( src, mask, dst, algorithmType );
break;
case CV_32SC1:
inpaint <int, 1>( src, mask, dst, algorithmType );
break;
case CV_32SC2:
inpaint <int, 2>( src, mask, dst, algorithmType );
break;
case CV_32SC3:
inpaint <int, 3>( src, mask, dst, algorithmType );
break;
case CV_32SC4:
inpaint <int, 4>( src, mask, dst, algorithmType );
break;
case CV_32FC1:
inpaint <float, 1>( src, mask, dst, algorithmType );
break;
case CV_32FC2:
inpaint <float, 2>( src, mask, dst, algorithmType );
break;
case CV_32FC3:
inpaint <float, 3>( src, mask, dst, algorithmType );
break;
case CV_32FC4:
inpaint <float, 4>( src, mask, dst, algorithmType );
break;
case CV_64FC1:
inpaint <double, 1>( src, mask, dst, algorithmType );
break;
case CV_64FC2:
inpaint <double, 2>( src, mask, dst, algorithmType );
break;
case CV_64FC3:
inpaint <double, 3>( src, mask, dst, algorithmType );
break;
case CV_64FC4:
inpaint <double, 4>( src, mask, dst, algorithmType );
break;
default:
CV_Error_( CV_StsNotImplemented,
("Unsupported source image format (=%d)",
src.type()) );
break;
}
}
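/* Usage sketch (illustrative only, not part of this file): the dispatcher above is the
 * public entry point declared in opencv2/xphoto.hpp; in this revision it lives in the
 * cv namespace. Note that, unlike the photo module, zero mask pixels mark the region to
 * be reconstructed. The file names below are assumptions:
 *
 *     cv::Mat src  = cv::imread("corrupted.png", 1);      // 8-bit, 3 channels
 *     cv::Mat mask = cv::imread("mask.png", 0);           // CV_8UC1, 0 = area to inpaint
 *     cv::Mat dst;
 *     cv::inpaint(src, mask, dst, cv::INPAINT_SHIFTMAP);  // shift-map inpainting
 *     cv::imwrite("restored.png", dst);
 */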
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __NORM2_HPP__
#define __NORM2_HPP__
/************************ General template *************************/
template <typename Tp> static inline Tp sqr(Tp x) { return x*x; }
template <typename Tp, int cn> static inline Tp sqr( cv::Vec<Tp, cn> x) { return x.dot(x); }
template <typename Tp> static inline Tp norm2(const Tp &a, const Tp &b) { return sqr(a - b); }
template <typename Tp, int cn> static inline
Tp norm2(const cv::Vec <Tp, cn> &a, const cv::Vec<Tp, cn> &b) { return sqr(a - b); }
/******************* uchar, char, ushort, uint *********************/
static inline int norm2(const uchar &a, const uchar &b) { return sqr(int(a) - int(b)); }
template <int cn> static inline
int norm2(const cv::Vec <uchar, cn> &a, const cv::Vec<uchar, cn> &b)
{
return sqr( cv::Vec<int, cn>(a) - cv::Vec<int, cn>(b) );
}
static inline int norm2(const char &a, const char &b) { return sqr(int(a) - int(b)); }
template <int cn> static inline
int norm2(const cv::Vec <char, cn> &a, const cv::Vec<char, cn> &b)
{
return sqr( cv::Vec<int, cn>(a) - cv::Vec<int, cn>(b) );
}
static inline short norm2(const ushort &a, const ushort &b) { return sqr <short>(short(a) - short(b)); }
template <int cn> static inline
short norm2(const cv::Vec <ushort, cn> &a, const cv::Vec<ushort, cn> &b)
{
return sqr( cv::Vec<short, cn>(a) - cv::Vec<short, cn>(b) );
}
static inline int norm2(const uint &a, const uint &b) { return sqr(int(a) - int(b)); }
template <int cn> static inline
int norm2(const cv::Vec <uint, cn> &a, const cv::Vec<uint, cn> &b)
{
return sqr( cv::Vec<int, cn>(a) - cv::Vec<int, cn>(b) );
}
#endif /* __NORM2_HPP__ */
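/* Usage sketch (illustrative only): norm2 returns the squared distance between two
 * pixels in the integer-promoted type, e.g. for two 8-bit BGR pixels:
 *
 *     cv::Vec3b a(10, 20, 30), b(13, 24, 30);
 *     int d = norm2(a, b);   // (10-13)^2 + (20-24)^2 + (30-30)^2 = 25
 */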
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __PHOTOMONTAGE_HPP__
#define __PHOTOMONTAGE_HPP__
#include "norm2.hpp"
#include "gcgraph.hpp"
#define GCInfinity 10*1000*1000*1000.0
#define eps 0.02
template <typename Tp> static int min_idx(std::vector <Tp> vec)
{
return int( std::min_element(vec.begin(), vec.end()) - vec.begin() );
}
////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////
template <typename Tp> class Photomontage
{
private:
const std::vector <cv::Mat> &images; // vector of images for different labels
const std::vector <cv::Mat> &masks; // vector of definition domains for each image
std::vector <cv::Mat> labelings; // vector of labelings for different expansions
std::vector <double> distances; // vector of max-flow costs for different labelings
const int height;
const int width;
const int type;
const int channels;
const int lsize;
cv::Mat x_i; // current best labeling
double singleExpansion(const int alpha); // single neighbor computing
void gradientDescent(); // gradient descent in alpha-expansion topology
class ParallelExpansion : public cv::ParallelLoopBody
{
public:
Photomontage <Tp> *main;
ParallelExpansion(Photomontage <Tp> *_main) : main(_main){}
~ParallelExpansion(){};
void operator () (const cv::Range &range) const
{
for (int i = range.start; i < range.end; ++i)
main->distances[i] = main->singleExpansion(i);
}
};
void operator =(const Photomontage <Tp>&) const {};
protected:
virtual double dist(const Tp &l1p1, const Tp &l1p2, const Tp &l2p1, const Tp &l2p2);
virtual void setWeights(GCGraph <double> &graph, const cv::Point &pA,
const cv::Point &pB, const int lA, const int lB, const int lX);
public:
Photomontage(const std::vector <cv::Mat> &images, const std::vector <cv::Mat> &masks);
virtual ~Photomontage(){};
void assignLabeling(cv::Mat &img);
void assignResImage(cv::Mat &img);
};
template <typename Tp> inline double Photomontage <Tp>::
dist(const Tp &l1p1, const Tp &l1p2, const Tp &l2p1, const Tp &l2p2)
{
return norm2(l1p1, l2p1) + norm2(l1p2, l2p2);
}
template <typename Tp> void Photomontage <Tp>::
setWeights(GCGraph <double> &graph, const cv::Point &pA, const cv::Point &pB, const int lA, const int lB, const int lX)
{
if (lA == lB)
{
/** Link from A to B **/
double weightAB = dist( images[lA].template at<Tp>(pA),
images[lA].template at<Tp>(pB),
images[lX].template at<Tp>(pA),
images[lX].template at<Tp>(pB) );
graph.addEdges( int(pA.y*width + pA.x), int(pB.y*width + pB.x), weightAB, weightAB);
}
else
{
int X = graph.addVtx();
/** Link from X to sink **/
double weightXS = dist( images[lA].template at<Tp>(pA),
images[lA].template at<Tp>(pB),
images[lB].template at<Tp>(pA),
images[lB].template at<Tp>(pB) );
graph.addTermWeights(X, 0, weightXS);
/** Link from A to X **/
double weightAX = dist( images[lA].template at<Tp>(pA),
images[lA].template at<Tp>(pB),
images[lX].template at<Tp>(pA),
images[lX].template at<Tp>(pB) );
graph.addEdges( int(pA.y*width + pA.x), X, weightAX, weightAX);
/** Link from X to B **/
double weightXB = dist( images[lX].template at<Tp>(pA),
images[lX].template at<Tp>(pB),
images[lB].template at<Tp>(pA),
images[lB].template at<Tp>(pB) );
graph.addEdges(X, int(pB.y*width + pB.x), weightXB, weightXB);
}
}
template <typename Tp> double Photomontage <Tp>::
singleExpansion(const int alpha)
{
int actualEdges = (height - 1)*width + height*(width - 1);
GCGraph <double> graph(actualEdges + height*width, 2*actualEdges);
/** Terminal links **/
for (int i = 0; i < height; ++i)
{
const uchar *maskAlphaRow = masks[alpha].template ptr <uchar>(i);
const int *labelRow = (const int *) x_i.template ptr <int>(i);
for (int j = 0; j < width; ++j)
graph.addTermWeights( graph.addVtx(),
maskAlphaRow[j] ? 0 : GCInfinity,
masks[ labelRow[j] ].template at<uchar>(i, j) ? 0 : GCInfinity );
}
/** Neighbor links **/
for (int i = 0; i < height - 1; ++i)
{
const int *currentRow = (const int *) x_i.template ptr <int>(i);
const int *nextRow = (const int *) x_i.template ptr <int>(i + 1);
for (int j = 0; j < width - 1; ++j)
{
setWeights( graph, cv::Point(j, i), cv::Point(j + 1, i), currentRow[j], currentRow[j + 1], alpha );
setWeights( graph, cv::Point(j, i), cv::Point(j, i + 1), currentRow[j], nextRow[j], alpha );
}
setWeights( graph, cv::Point(width - 1, i), cv::Point(width - 1, i + 1),
currentRow[width - 1], nextRow[width - 1], alpha );
}
const int *currentRow = (const int *) x_i.template ptr <int>(height - 1);
for (int i = 0; i < width - 1; ++i)
setWeights( graph, cv::Point(i, height - 1), cv::Point(i + 1, height - 1),
currentRow[i], currentRow[i + 1], alpha );
/** Max-flow computation **/
double result = graph.maxFlow();
/** Writing results **/
labelings[alpha].create( height, width, CV_32SC1 );
for (int i = 0; i < height; ++i)
{
const int *inRow = (const int *) x_i.template ptr <int>(i);
int *outRow = (int *) labelings[alpha].template ptr <int>(i);
for (int j = 0; j < width; ++j)
outRow[j] = graph.inSourceSegment(i*width + j)
? inRow[j] : alpha;
}
return result;
}
template <typename Tp> void Photomontage <Tp>::
gradientDescent()
{
double optValue = std::numeric_limits<double>::max();
for (int num = -1; /**/; num = -1)
{
// run one alpha-expansion per label in parallel and collect their max-flow costs
parallel_for_( cv::Range(0, lsize),
grayDctDenoisingInvoker_dummy_placeholder_never_used
}
}
template <typename Tp> void Photomontage <Tp>::
assignLabeling(cv::Mat &img)
{
x_i.setTo(0);
gradientDescent();
x_i.copyTo(img);
}
template <typename Tp> void Photomontage <Tp>::
assignResImage(cv::Mat &img)
{
cv::Mat optimalLabeling;
assignLabeling(optimalLabeling);
img.create( height, width, type );
for (int i = 0; i < height; ++i)
for (int j = 0; j < width; ++j)
{
cv::Mat M = images[optimalLabeling.template at<int>(i, j)];
img.template at<Tp>(i, j) = M.template at<Tp>(i, j);
}
}
template <typename Tp> Photomontage <Tp>::
Photomontage(const std::vector <cv::Mat> &_images, const std::vector <cv::Mat> &_masks)
:
images(_images), masks(_masks), labelings(images.size()), distances(images.size()),
height(int(images[0].rows)), width(int(images[0].cols)), type(images[0].type()),
channels(images[0].channels()), lsize(int(images.size())), x_i(height, width, CV_32SC1){}
#endif /* __PHOTOMONTAGE_HPP__ */
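/* Usage sketch (illustrative only): Photomontage solves a multi-label stitching problem
 * by iterated alpha-expansion with graph cuts. Given N same-sized images and N 8-bit
 * masks (non-zero pixels mark where a label may be used), it assigns one source label to
 * every pixel and composes the result, e.g. for CV_32FC3 inputs:
 *
 *     std::vector<cv::Mat> images = ..., masks = ...;   // filled by the caller
 *     cv::Mat result;
 *     Photomontage< cv::Vec<float, 3> >(images, masks).assignResImage(result);
 */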
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>
#include "opencv2/xphoto.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/core.hpp"
#include "opencv2/core/core_c.h"
#include "opencv2/core/types.hpp"
#include "opencv2/core/types_c.h"
namespace cv
{
template <typename T>
void balanceWhite(std::vector < Mat_<T> > &src, Mat &dst,
const float inputMin, const float inputMax,
const float outputMin, const float outputMax, const int algorithmType)
{
switch ( algorithmType )
{
case WHITE_BALANCE_SIMPLE:
{
/********************* Simple white balance *********************/
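/* The simple white balance below discards roughly the s1% darkest and s2% brightest
 * values per channel: a histogram "tree" of depth `depth` with `bins` bins per level
 * progressively refines the low and high quantile estimates, and the surviving range
 * [minValue, maxValue] is then stretched linearly to [outputMin, outputMax]. */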
float s1 = 2.0f; // low quantile
float s2 = 2.0f; // high quantile
int depth = 2; // depth of histogram tree
if (src[0].depth() != CV_8U)
++depth;
int bins = 16; // number of bins at each histogram level
int nElements = int( pow(bins, depth) );
// number of elements in histogram tree
for (size_t i = 0; i < src.size(); ++i)
{
std::vector <int> hist(nElements, 0);
typename Mat_<T>::iterator beginIt = src[i].begin();
typename Mat_<T>::iterator endIt = src[i].end();
for (typename Mat_<T>::iterator it = beginIt; it != endIt; ++it)
// histogram filling
{
int pos = 0;
float minValue = inputMin - 0.5f;
float maxValue = inputMax + 0.5f;
T val = *it;
float interval = float(maxValue - minValue) / bins;
for (int j = 0; j < depth; ++j)
{
int currentBin = int( (val - minValue + 1e-4f) / interval );
++hist[pos + currentBin];
pos = (pos + currentBin)*bins;
minValue = minValue + currentBin*interval;
maxValue = minValue + interval;
interval /= bins;
}
}
int total = int( src[i].total() );
int p1 = 0, p2 = bins - 1;
int n1 = 0, n2 = total;
float minValue = inputMin - 0.5f;
float maxValue = inputMax + 0.5f;
float interval = (maxValue - minValue) / float(bins);
for (int j = 0; j < depth; ++j)
// searching for s1 and s2
{
while (n1 + hist[p1] < s1 * total / 100.0f)
{
n1 += hist[p1++];
minValue += interval;
}
p1 *= bins;
while (n2 - hist[p2] > (100.0f - s2) * total / 100.0f)
{
n2 -= hist[p2--];
maxValue -= interval;
}
p2 = p2*bins - 1;
interval /= bins;
}
src[i] = (outputMax - outputMin) * (src[i] - minValue)
/ (maxValue - minValue) + outputMin;
}
/****************************************************************/
break;
}
default:
CV_Error_( CV_StsNotImplemented,
("Unsupported algorithm type (=%d)", algorithmType) );
}
dst.create(/**/ src[0].size(), CV_MAKETYPE( src[0].depth(), int( src.size() ) ) /**/);
cv::merge(src, dst);
}
/*!
 * Wrapper over the different white balance algorithms
 *
 * \param src : source image (RGB)
 * \param dst : destination image
 * \param algorithmType : type of the algorithm to use
 *
 * \param inputMin : minimum input value
 * \param inputMax : maximum input value
 * \param outputMin : minimum output value
 * \param outputMax : maximum output value
 */
void balanceWhite(const Mat &src, Mat &dst, const int algorithmType,
const float inputMin, const float inputMax,
const float outputMin, const float outputMax)
{
switch ( src.depth() )
{
case CV_8U:
{
std::vector < Mat_<uchar> > mv;
split(src, mv);
balanceWhite(mv, dst, inputMin, inputMax, outputMin, outputMax, algorithmType);
break;
}
case CV_16S:
{
std::vector < Mat_<short> > mv;
split(src, mv);
balanceWhite(mv, dst, inputMin, inputMax, outputMin, outputMax, algorithmType);
break;
}
case CV_32S:
{
std::vector < Mat_<int> > mv;
split(src, mv);
balanceWhite(mv, dst, inputMin, inputMax, outputMin, outputMax, algorithmType);
break;
}
case CV_32F:
{
std::vector < Mat_<float> > mv;
split(src, mv);
balanceWhite(mv, dst, inputMin, inputMax, outputMin, outputMax, algorithmType);
break;
}
default:
CV_Error_( CV_StsNotImplemented,
("Unsupported source image format (=%d)", src.type()) );
break;
}
}
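/* Usage sketch (illustrative only, not part of this file): the regression test in this
 * change calls the wrapper the same way, relying on the default range arguments declared
 * in opencv2/xphoto.hpp. The file names below are assumptions:
 *
 *     cv::Mat src = cv::imread("photo.png", 1);
 *     cv::Mat dst;
 *     cv::balanceWhite(src, dst, cv::WHITE_BALANCE_SIMPLE);
 *     cv::imwrite("balanced.png", dst);
 */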
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of Intel Corporation may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __WHS_HPP__
#define __WHS_HPP__
// population count: number of set bits in x
static inline int hl(int x)
{
int res = 0;
while (x)
{
res += x&1;
x >>= 1;
}
return res;
}
// round x up to the nearest power of two
static inline int rp2(int x)
{
int res = 1;
while (res < x)
res <<= 1;
return res;
}
template <typename ForwardIterator>
static void generate_snake(ForwardIterator snake, const int n)
{
cv::Point previous;
if (n > 0)
{
previous = cv::Point(0, 0);
*snake = previous;
}
for (int k = 1, num = 1; num <= n; ++k)
{
const cv::Point2i dv[] = { cv::Point2i( !(k&1), (k&1) ),
cv::Point2i( -(k&1), -!(k&1) ) };
*snake = previous = previous - dv[1];
++num;
for (int i = 0; i < 2; ++i)
for (int j = 0; j < k && num < n; ++j)
{
*snake = previous = previous + dv[i];
++num;
}
}
}
static void nextProjection(std::vector <cv::Mat> &projections, const cv::Point &A,
const cv::Point &B, const int psize)
{
int xsign = (A.x != B.x)*(hl(A.x&B.x) + (B.x > A.x))&1;
int ysign = (A.y != B.y)*(hl(A.y&B.y) + (B.y > A.y))&1;
bool plusToMinusUpdate = xsign || ysign;
int dx = (A.x != B.x) << ( hl(psize - 1) - hl(A.x ^ B.x) );
int dy = (A.y != B.y) << ( hl(psize - 1) - hl(A.y ^ B.y) );
cv::Mat proj = projections[projections.size() - 1],
nproj = -proj.clone();
for (int i = dy; i < nproj.rows; ++i)
{
float *vxNext = nproj.ptr<float>(i - dy);
float *vNext = nproj.ptr<float>(i);
float *vxCurrent = proj.ptr<float>(i - dy);
if (plusToMinusUpdate)
for (int j = dx; j < nproj.cols; ++j)
vNext[j] += vxCurrent[j - dx] - vxNext[j - dx];
else
for (int j = dx; j < nproj.cols; ++j)
vNext[j] -= vxCurrent[j - dx] - vxNext[j - dx];
}
projections.push_back(nproj);
}
static void rgb2whs(const cv::Mat &src, cv::Mat &dst, const int nProjections, const int psize)
{
CV_Assert(nProjections <= psize*psize && src.type() == CV_32FC1);
const int npsize = rp2(psize);
std::vector <cv::Mat> projections;
cv::Mat img, proj;
cv::copyMakeBorder(src, img, npsize, npsize, npsize, npsize,
cv::BORDER_CONSTANT, 0);
cv::boxFilter(img, proj, CV_32F, cv::Size(npsize, npsize),
cv::Point(-1, -1), true, cv::BORDER_REFLECT);
projections.push_back(proj);
std::vector <cv::Point2i> snake_idx;
generate_snake(std::back_inserter(snake_idx), nProjections);
for (int i = 1; i < nProjections; ++i)
nextProjection(projections, snake_idx[i - 1],
snake_idx[i], npsize);
cv::merge(projections, img);
img(cv::Rect(npsize, npsize, src.cols, src.rows)).copyTo(dst);
}
#endif /* __WHS_HPP__ */
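/* Note (illustrative only): despite its name, rgb2whs expects a single-channel CV_32FC1
 * image (see the assertion above) and packs the first nProjections Walsh-Hadamard-like
 * patch projections into the channels of dst, e.g.:
 *
 *     cv::Mat gray32f, features;
 *     img.convertTo(gray32f, CV_32F);     // img is a hypothetical CV_8UC1 image
 *     rgb2whs(gray32f, features, 3, 8);   // 3-channel feature image, 8x8 patches
 */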
#include "test_precomp.hpp"
namespace cvtest
{
TEST(xphoto_dctimagedenoising, regression)
{
cv::String dir = cvtest::TS::ptr()->get_data_path() + "dct_image_denoising/";
int nTests = 1;
double thresholds[] = {0.1};
int psize[] = {8};
double sigma[] = {9.0};
for (int i = 0; i < nTests; ++i)
{
cv::String srcName = dir + cv::format( "sources/%02d.png", i + 1);
cv::Mat src = cv::imread( srcName, 1 );
cv::String previousResultName = dir + cv::format( "results/%02d.png", i + 1 );
cv::Mat previousResult = cv::imread( previousResultName, 1 );
cv::Mat currentResult, fastNlMeansResult;
cv::dctDenoising(src, currentResult, sigma[i], psize[i]);
cv::Mat sqrError = ( currentResult - previousResult )
.mul( currentResult - previousResult );
cv::Scalar mse = cv::sum(sqrError) / cv::Scalar::all( double(sqrError.total()*sqrError.channels()) );
EXPECT_LE( mse[0] + mse[1] + mse[2] + mse[3], thresholds[i] );
}
}
}
#include "test_precomp.hpp"
namespace cvtest
{
TEST(xphoto_simplecolorbalance, regression)
{
cv::String dir = cvtest::TS::ptr()->get_data_path() + "simple_white_balance/";
int nTests = 12;
float threshold = 0.005f;
for (int i = 0; i < nTests; ++i)
{
cv::String srcName = dir + cv::format( "sources/%02d.png", i + 1);
cv::Mat src = cv::imread( srcName, 1 );
cv::String previousResultName = dir + cv::format( "results/%02d.png", i + 1 );
cv::Mat previousResult = cv::imread( previousResultName, 1 );
cv::Mat currentResult;
cv::balanceWhite(src, currentResult, cv::WHITE_BALANCE_SIMPLE);
cv::Mat sqrError = ( currentResult - previousResult )
.mul( currentResult - previousResult );
cv::Scalar mse = cv::sum(sqrError) / cv::Scalar::all( double( sqrError.total()*sqrError.channels() ) );
EXPECT_LE( mse[0]+mse[1]+mse[2]+mse[3], threshold );
}
}
}
#include "test_precomp.hpp"
CV_TEST_MAIN("xphoto")
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-declarations"
# if defined __clang__ || defined __APPLE__
# pragma GCC diagnostic ignored "-Wmissing-prototypes"
# pragma GCC diagnostic ignored "-Wextra"
# endif
#endif
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgproc/types_c.h"
#include "opencv2/highgui.hpp"
#include "opencv2/photo.hpp"
#include "opencv2/xphoto.hpp"
#include "opencv2/ts.hpp"
#include <ctime>
#include <iostream>
#endif