Commit 30d7e1c3 authored by Roman Donchenko's avatar Roman Donchenko

Merge remote-tracking branch 'origin/master' into merge-2.4

Conflicts:
	doc/tutorials/bioinspired/retina_model/retina_model.rst~
parents 6133aeed 4c7ecf20
......@@ -196,6 +196,7 @@ OCV_OPTION(ENABLE_SSSE3 "Enable SSSE3 instructions"
OCV_OPTION(ENABLE_SSE41 "Enable SSE4.1 instructions" OFF IF ((CV_ICC OR CMAKE_COMPILER_IS_GNUCXX) AND (X86 OR X86_64)) )
OCV_OPTION(ENABLE_SSE42 "Enable SSE4.2 instructions" OFF IF (CMAKE_COMPILER_IS_GNUCXX AND (X86 OR X86_64)) )
OCV_OPTION(ENABLE_AVX "Enable AVX instructions" OFF IF ((MSVC OR CMAKE_COMPILER_IS_GNUCXX) AND (X86 OR X86_64)) )
OCV_OPTION(ENABLE_NEON "Enable NEON instructions" OFF IF (CMAKE_COMPILER_IS_GNUCXX AND ARM) )
OCV_OPTION(ENABLE_NOISY_WARNINGS "Show all warnings even if they are too noisy" OFF )
OCV_OPTION(OPENCV_WARNINGS_ARE_ERRORS "Treat warnings as errors" OFF )
OCV_OPTION(ENABLE_WINRT_MODE "Build with Windows Runtime support" OFF IF WIN32 )
......
......@@ -161,6 +161,10 @@ if(CMAKE_COMPILER_IS_GNUCXX)
endif()
endif()
if(ENABLE_NEON)
add_extra_compiler_option(-mfpu=neon)
endif()
# Profiling?
if(ENABLE_PROFILING)
add_extra_compiler_option("-pg -g")
......
.. _hdrimaging:
High Dynamic Range Imaging
***************************************
Introduction
------------------
Today most digital images and imaging devices use 8 bits per channel, which limits the dynamic range of the device to about two orders of magnitude (256 levels), while the human eye can adapt to lighting conditions varying by ten orders of magnitude. When we take photographs of a real-world scene, bright regions may be overexposed while dark ones are underexposed, so we cannot capture all details with a single exposure. HDR imaging works with images that use more than 8 bits per channel (usually 32-bit float values), allowing a much wider dynamic range.
There are different ways to obtain HDR images, but the most common one is to use photographs of the scene taken with different exposure values. To combine these exposures it is useful to know your camera's response function, and there are algorithms to estimate it. After the HDR image has been merged it has to be converted back to 8 bits to view it on usual displays. This process is called tonemapping. Additional complexity arises when objects in the scene or the camera move between shots, since images with different exposures should be registered and aligned.
In this tutorial we show how to generate and display an HDR image from an exposure sequence. In our case the images are already aligned and there are no moving objects. We also demonstrate an alternative approach called exposure fusion, which produces a low dynamic range image directly. Each step of the HDR pipeline can be implemented with different algorithms, so take a look at the reference manual to see them all.
Exposure sequence
------------------
.. image:: images/memorial.png
:height: 357pt
:width: 242pt
:alt: Exposure sequence
:align: center
Source Code
===========
.. literalinclude:: ../../../../samples/cpp/tutorial_code/photo/hdr_imaging/hdr_imaging.cpp
:language: cpp
:linenos:
:tab-width: 4
Explanation
===========
1. **Load images and exposure times**
.. code-block:: cpp
vector<Mat> images;
vector<float> times;
loadExposureSeq(argv[1], images, times);
First we load the input images and exposure times from a user-defined folder. The folder should contain images and *list.txt*, a file that lists the file names together with the inverse exposure times.
For our image sequence the list is as follows:
.. code-block:: none
memorial00.png 0.03125
memorial01.png 0.0625
...
memorial15.png 1024
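The helper ``loadExposureSeq`` lives in the sample source included above. A minimal sketch of such a loader, assuming ``using namespace cv``/``std`` and ``<fstream>`` as in the sample, could look like this (the listed values are inverse exposure times, so their reciprocals are stored):
.. code-block:: cpp

    void loadExposureSeq(String path, vector<Mat>& images, vector<float>& times)
    {
        path += "/";
        ifstream list_file((path + "list.txt").c_str());
        string name;
        float val;
        while(list_file >> name >> val) {
            images.push_back(imread(path + name));
            times.push_back(1 / val); // list.txt stores inverse exposure times
        }
        list_file.close();
    }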
2. **Estimate camera response**
.. code-block:: cpp
Mat response;
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
calibrate->process(images, response, times);
Many HDR construction algorithms need to know the camera response function (CRF). We use one of the calibration algorithms to estimate the inverse CRF for all 256 pixel values.
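This merge also adds a second calibration algorithm (Robertson et al., 1999) behind the same ``CalibrateCRF`` interface; as a sketch, it could be swapped in like this:
.. code-block:: cpp

    // Alternative: Robertson calibration, exposed by this change set.
    Ptr<CalibrateRobertson> calibrate_r = createCalibrateRobertson();
    calibrate_r->process(images, response, times);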
3. **Make HDR image**
.. code-block:: cpp
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
We use Debevec's weighting scheme to construct an HDR image, using the response calculated in the previous step.
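A Robertson-style merge is exposed by this change set as well; a sketch of the equivalent call through the common ``MergeExposures`` interface:
.. code-block:: cpp

    // Alternative merge (Robertson et al., 1999) added alongside MergeDebevec.
    Mat hdr_robertson;
    Ptr<MergeRobertson> merge_robertson = createMergeRobertson();
    merge_robertson->process(images, hdr_robertson, times, response);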
4. **Tonemap HDR image**
.. code-block:: cpp
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
Since we want to see our results on a common LDR display, we have to map the HDR image to an 8-bit range while preserving most details. That is the main goal of tonemapping methods. We use a tonemapper with bilateral filtering and set 2.2 as the value for gamma correction.
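Other tonemapping operators (Drago, Reinhard, Mantiuk) are declared in the same header and follow the same pattern; for instance, a sketch using the Drago operator with the same gamma:
.. code-block:: cpp

    // Sketch: Drago tonemapping with gamma 2.2; remaining parameters keep their defaults.
    Mat ldr_drago;
    Ptr<TonemapDrago> tonemap_drago = createTonemapDrago(2.2f);
    tonemap_drago->process(hdr, ldr_drago);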
5. **Perform exposure fusion**
.. code-block:: cpp
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
There is an alternative way to merge our exposures when we don't need an HDR image. This process is called exposure fusion and produces an LDR image that doesn't require gamma correction. It also doesn't use the exposure values of the photographs.
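The fusion weights can be tuned through the factory parameters; the defaults declared in *photo.hpp* are contrast and saturation weights of 1.0 and an exposure weight of 0.0, so the call above is equivalent to this explicit sketch:
.. code-block:: cpp

    // Explicit weights matching the declared defaults.
    Ptr<MergeMertens> merge_mertens = createMergeMertens(1.0f, 1.0f, 0.0f);
    merge_mertens->process(images, fusion);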
6. **Write results**
.. code-block:: cpp
imwrite("fusion.png", fusion * 255);
imwrite("ldr.png", ldr * 255);
imwrite("hdr.hdr", hdr);
Now it's time to look at the results. Note that an HDR image can't be stored in one of the common image formats, so we save it as a Radiance image (.hdr). Also, all HDR imaging functions return results in the [0, 1] range, so we should multiply the result by 255.
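The Radiance codec added by this merge loads such files back as 32-bit float images; reading with a negative flag, as the new regression test does, should preserve the float data:
.. code-block:: cpp

    // Reload the Radiance file; a negative flag keeps the decoder's CV_32FC3 output.
    Mat hdr_reloaded = imread("hdr.hdr", -1);
    CV_Assert(hdr_reloaded.type() == CV_32FC3);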
Results
=======
Tonemapped image
------------------
.. image:: images/ldr.png
:height: 357pt
:width: 242pt
:alt: Tonemapped image
:align: center
Exposure fusion
------------------
.. image:: images/fusion.png
:height: 357pt
:width: 242pt
:alt: Exposure fusion
:align: center
.. _Table-Of-Content-Photo:
*photo* module. Computational photography
-----------------------------------------------------------
Use OpenCV for advanced photo processing.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
============ ==============================================
|HDR| **Title:** :ref:`hdrimaging`
*Compatibility:* > OpenCV 3.0
*Author:* Fedor Morozov
Learn how to create and process high dynamic range images.
============ ==============================================
.. |HDR| image:: images/hdr.png
:height: 90pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../hdr_imaging/hdr_imaging
......@@ -132,7 +132,7 @@ As always, we would be happy to hear your comments and receive your contribution
.. cssclass:: toctableopencv
=========== =======================================================
|ml| Use the powerfull machine learning classes for statistical classification, regression and clustering of data.
|ml| Use the powerful machine learning classes for statistical classification, regression and clustering of data.
=========== =======================================================
......@@ -141,6 +141,21 @@ As always, we would be happy to hear your comments and receive your contribution
:width: 80pt
:alt: ml Icon
* :ref:`Table-Of-Content-Photo`
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=========== =======================================================
|photo| Use OpenCV for advanced photo processing.
=========== =======================================================
.. |photo| image:: images/photo.png
:height: 80pt
:width: 80pt
:alt: photo Icon
* :ref:`Table-Of-Content-GPU`
.. tabularcolumns:: m{100pt} m{300pt}
......@@ -233,6 +248,7 @@ As always, we would be happy to hear your comments and receive your contribution
video/table_of_content_video/table_of_content_video
objdetect/table_of_content_objdetect/table_of_content_objdetect
ml/table_of_content_ml/table_of_content_ml
photo/table_of_content_photo/table_of_content_photo
gpu/table_of_content_gpu/table_of_content_gpu
bioinspired/table_of_content_bioinspired/table_of_content_bioinspired
ios/table_of_content_ios/table_of_content_ios
......
......@@ -47,26 +47,37 @@
#include "precomp.hpp"
typedef double polyfit_type;
void cv::polyfit(const Mat& src_x, const Mat& src_y, Mat& dst, int order)
{
CV_Assert((src_x.rows>0)&&(src_y.rows>0)&&(src_x.cols==1)&&(src_y.cols==1)
&&(dst.cols==1)&&(dst.rows==(order+1))&&(order>=1));
Mat X;
X = Mat::zeros(src_x.rows, order+1,CV_32FC1);
Mat copy;
for(int i = 0; i <=order;i++)
const int wdepth = DataType<polyfit_type>::depth;
int npoints = src_x.checkVector(1);
int nypoints = src_y.checkVector(1);
CV_Assert(npoints == nypoints && npoints >= order+1);
Mat srcX = Mat_<polyfit_type>(src_x), srcY = Mat_<polyfit_type>(src_y);
Mat X = Mat::zeros(order + 1, npoints, wdepth);
polyfit_type* pSrcX = (polyfit_type*)srcX.data;
polyfit_type* pXData = (polyfit_type*)X.data;
int stepX = (int)(X.step/X.elemSize1());
for (int y = 0; y < order + 1; ++y)
{
copy = src_x.clone();
pow(copy,i,copy);
Mat M1 = X.col(i);
copy.col(0).copyTo(M1);
for (int x = 0; x < npoints; ++x)
{
if (y == 0)
pXData[x] = 1;
else if (y == 1)
pXData[x + stepX] = pSrcX[x];
else pXData[x + y*stepX] = pSrcX[x]* pXData[x + (y-1)*stepX];
}
}
Mat X_t, X_inv;
transpose(X,X_t);
Mat temp = X_t*X;
Mat temp2;
invert (temp,temp2);
Mat temp3 = temp2*X_t;
Mat W = temp3*src_y;
W.copyTo(dst);
Mat A, b, w;
mulTransposed(X, A, false);
b = X*srcY;
solve(A, b, w, DECOMP_SVD);
w.convertTo(dst, std::max(std::max(src_x.depth(), src_y.depth()), CV_32F));
}
......@@ -50,6 +50,8 @@ file(GLOB grfmt_hdrs src/grfmt*.hpp)
file(GLOB grfmt_srcs src/grfmt*.cpp)
list(APPEND grfmt_hdrs src/bitstrm.hpp)
list(APPEND grfmt_srcs src/bitstrm.cpp)
list(APPEND grfmt_hdrs src/rgbe.hpp)
list(APPEND grfmt_srcs src/rgbe.cpp)
source_group("Src\\grfmts" FILES ${grfmt_hdrs} ${grfmt_srcs})
......
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
#include "grfmt_hdr.hpp"
#include "rgbe.hpp"
namespace cv
{
HdrDecoder::HdrDecoder()
{
m_signature = "#?RGBE";
m_signature_alt = "#?RADIANCE";
file = NULL;
m_type = CV_32FC3;
}
HdrDecoder::~HdrDecoder()
{
}
size_t HdrDecoder::signatureLength() const
{
return m_signature.size() > m_signature_alt.size() ?
m_signature.size() : m_signature_alt.size();
}
bool HdrDecoder::readHeader()
{
file = fopen(m_filename.c_str(), "rb");
if(!file) {
return false;
}
RGBE_ReadHeader(file, &m_width, &m_height, NULL);
if(m_width <= 0 || m_height <= 0) {
fclose(file);
file = NULL;
return false;
}
return true;
}
bool HdrDecoder::readData(Mat& _img)
{
Mat img(m_height, m_width, CV_32FC3);
if(!file) {
if(!readHeader()) {
return false;
}
}
RGBE_ReadPixels_RLE(file, const_cast<float*>(img.ptr<float>()), img.cols, img.rows);
fclose(file); file = NULL;
if(_img.depth() == img.depth()) {
img.convertTo(_img, _img.type());
} else {
img.convertTo(_img, _img.type(), 255);
}
return true;
}
bool HdrDecoder::checkSignature( const String& signature ) const
{
if(signature.size() >= m_signature.size() &&
(!memcmp(signature.c_str(), m_signature.c_str(), m_signature.size()) ||
!memcmp(signature.c_str(), m_signature_alt.c_str(), m_signature_alt.size())))
return true;
return false;
}
ImageDecoder HdrDecoder::newDecoder() const
{
return makePtr<HdrDecoder>();
}
HdrEncoder::HdrEncoder()
{
m_description = "Radiance HDR (*.hdr;*.pic)";
}
HdrEncoder::~HdrEncoder()
{
}
bool HdrEncoder::write( const Mat& input_img, const std::vector<int>& params )
{
Mat img;
CV_Assert(input_img.channels() == 3 || input_img.channels() == 1);
if(input_img.channels() == 1) {
std::vector<Mat> splitted(3, input_img);
merge(splitted, img);
} else {
input_img.copyTo(img);
}
if(img.depth() != CV_32F) {
img.convertTo(img, CV_32FC3, 1/255.0f);
}
CV_Assert(params.empty() || params[0] == HDR_NONE || params[0] == HDR_RLE);
FILE *fout = fopen(m_filename.c_str(), "wb");
if(!fout) {
return false;
}
RGBE_WriteHeader(fout, img.cols, img.rows, NULL);
if(params.empty() || params[0] == HDR_RLE) {
RGBE_WritePixels_RLE(fout, const_cast<float*>(img.ptr<float>()), img.cols, img.rows);
} else {
RGBE_WritePixels(fout, const_cast<float*>(img.ptr<float>()), img.cols * img.rows);
}
fclose(fout);
return true;
}
ImageEncoder HdrEncoder::newEncoder() const
{
return makePtr<HdrEncoder>();
}
bool HdrEncoder::isFormatSupported( int depth ) const {
return depth != CV_64F;
}
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _GRFMT_HDR_H_
#define _GRFMT_HDR_H_
#include "grfmt_base.hpp"
namespace cv
{
enum HdrCompression
{
HDR_NONE = 0,
HDR_RLE = 1
};
// Radiance rgbe (.hdr) reader
class HdrDecoder : public BaseImageDecoder
{
public:
HdrDecoder();
~HdrDecoder();
bool readHeader();
bool readData( Mat& img );
bool checkSignature( const String& signature ) const;
ImageDecoder newDecoder() const;
size_t signatureLength() const;
protected:
String m_signature_alt;
FILE *file;
};
// ... writer
class HdrEncoder : public BaseImageEncoder
{
public:
HdrEncoder();
~HdrEncoder();
bool write( const Mat& img, const std::vector<int>& params );
ImageEncoder newEncoder() const;
bool isFormatSupported( int depth ) const;
protected:
};
}
#endif/*_GRFMT_HDR_H_*/
\ No newline at end of file
......@@ -47,6 +47,7 @@
#include "precomp.hpp"
#include "grfmt_tiff.hpp"
#include <opencv2/imgproc.hpp>
namespace cv
{
......@@ -71,6 +72,7 @@ TiffDecoder::TiffDecoder()
TIFFSetErrorHandler( GrFmtSilentTIFFErrorHandler );
TIFFSetWarningHandler( GrFmtSilentTIFFErrorHandler );
}
m_hdr = false;
}
......@@ -133,6 +135,14 @@ bool TiffDecoder::readHeader()
m_width = wdth;
m_height = hght;
if((bpp == 32 && ncn == 3) || photometric == PHOTOMETRIC_LOGLUV)
{
m_type = CV_32FC3;
m_hdr = true;
return true;
}
m_hdr = false;
if( bpp > 8 &&
((photometric != 2 && photometric != 1) ||
(ncn != 1 && ncn != 3 && ncn != 4)))
......@@ -171,6 +181,10 @@ bool TiffDecoder::readHeader()
bool TiffDecoder::readData( Mat& img )
{
if(m_hdr && img.type() == CV_32FC3)
{
return readHdrData(img);
}
bool result = false;
bool color = img.channels() > 1;
uchar* data = img.data;
......@@ -380,6 +394,37 @@ bool TiffDecoder::readData( Mat& img )
return result;
}
bool TiffDecoder::readHdrData(Mat& img)
{
int rows_per_strip = 0, photometric = 0;
if(!m_tif)
{
return false;
}
TIFF *tif = static_cast<TIFF*>(m_tif);
TIFFGetField(tif, TIFFTAG_ROWSPERSTRIP, &rows_per_strip);
TIFFGetField( tif, TIFFTAG_PHOTOMETRIC, &photometric );
TIFFSetField(tif, TIFFTAG_SGILOGDATAFMT, SGILOGDATAFMT_FLOAT);
int size = 3 * m_width * m_height * sizeof (float);
int strip_size = 3 * m_width * rows_per_strip;
float *ptr = img.ptr<float>();
for (size_t i = 0; i < TIFFNumberOfStrips(tif); i++, ptr += strip_size)
{
TIFFReadEncodedStrip(tif, i, ptr, size);
size -= strip_size * sizeof(float);
}
close();
if(photometric == PHOTOMETRIC_LOGLUV)
{
cvtColor(img, img, COLOR_XYZ2BGR);
}
else
{
cvtColor(img, img, COLOR_RGB2BGR);
}
return true;
}
#endif
//////////////////////////////////////////////////////////////////////////////////////////
......@@ -405,7 +450,11 @@ ImageEncoder TiffEncoder::newEncoder() const
bool TiffEncoder::isFormatSupported( int depth ) const
{
#ifdef HAVE_TIFF
return depth == CV_8U || depth == CV_16U || depth == CV_32F;
#else
return depth == CV_8U || depth == CV_16U;
#endif
}
void TiffEncoder::writeTag( WLByteStream& strm, TiffTag tag,
......@@ -557,6 +606,33 @@ bool TiffEncoder::writeLibTiff( const Mat& img, const std::vector<int>& params)
return true;
}
bool TiffEncoder::writeHdr(const Mat& _img)
{
Mat img;
cvtColor(_img, img, COLOR_BGR2XYZ);
TIFF* tif = TIFFOpen(m_filename.c_str(), "w");
if (!tif)
{
return false;
}
TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, img.cols);
TIFFSetField(tif, TIFFTAG_IMAGELENGTH, img.rows);
TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_SGILOG);
TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_LOGLUV);
TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
TIFFSetField(tif, TIFFTAG_SGILOGDATAFMT, SGILOGDATAFMT_FLOAT);
TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 1);
int strip_size = 3 * img.cols;
float *ptr = const_cast<float*>(img.ptr<float>());
for (int i = 0; i < img.rows; i++, ptr += strip_size)
{
TIFFWriteEncodedStrip(tif, i, ptr, strip_size * sizeof(float));
}
TIFFClose(tif);
return true;
}
#endif
#ifdef HAVE_TIFF
......@@ -568,6 +644,12 @@ bool TiffEncoder::write( const Mat& img, const std::vector<int>& /*params*/)
int channels = img.channels();
int width = img.cols, height = img.rows;
int depth = img.depth();
#ifdef HAVE_TIFF
if(img.type() == CV_32FC3)
{
return writeHdr(img);
}
#endif
if (depth != CV_8U && depth != CV_16U)
return false;
......
......@@ -108,6 +108,8 @@ public:
protected:
void* m_tif;
int normalizeChannelsNumber(int channels) const;
bool readHdrData(Mat& img);
bool m_hdr;
};
#endif
......@@ -130,6 +132,7 @@ protected:
int count, int value );
bool writeLibTiff( const Mat& img, const std::vector<int>& params );
bool writeHdr( const Mat& img );
};
}
......
......@@ -52,5 +52,6 @@
#include "grfmt_jpeg2000.hpp"
#include "grfmt_exr.hpp"
#include "grfmt_webp.hpp"
#include "grfmt_hdr.hpp"
#endif/*_GRFMTS_H_*/
......@@ -47,6 +47,7 @@
#include "grfmts.hpp"
#undef min
#undef max
#include <iostream>
/****************************************************************************************\
* Image Codecs *
......@@ -60,6 +61,8 @@ struct ImageCodecInitializer
{
decoders.push_back( makePtr<BmpDecoder>() );
encoders.push_back( makePtr<BmpEncoder>() );
decoders.push_back( makePtr<HdrDecoder>() );
encoders.push_back( makePtr<HdrEncoder>() );
#ifdef HAVE_JPEG
decoders.push_back( makePtr<JpegDecoder>() );
encoders.push_back( makePtr<JpegEncoder>() );
......@@ -203,7 +206,6 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )
decoder->setSource(filename);
if( !decoder->readHeader() )
return 0;
CvSize size;
size.width = decoder->width();
size.height = decoder->height();
......@@ -271,7 +273,6 @@ static bool imwrite_( const String& filename, const Mat& image,
ImageEncoder encoder = findEncoder( filename );
if( !encoder )
CV_Error( CV_StsError, "could not find a writer for the specified extension" );
if( !encoder->isFormatSupported(image.depth()) )
{
CV_Assert( encoder->isFormatSupported(CV_8U) );
......
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef _RGBE_HDR_H_
#define _RGBE_HDR_H_
// posted to http://www.graphics.cornell.edu/~bjw/
// written by Bruce Walter (bjw@graphics.cornell.edu) 5/26/95
// based on code written by Greg Ward
#include <stdio.h>
typedef struct {
int valid; /* indicate which fields are valid */
char programtype[16]; /* listed at beginning of file to identify it
* after "#?". defaults to "RGBE" */
float gamma; /* image has already been gamma corrected with
* given gamma. defaults to 1.0 (no correction) */
float exposure; /* a value of 1.0 in an image corresponds to
* <exposure> watts/steradian/m^2.
* defaults to 1.0 */
} rgbe_header_info;
/* flags indicating which fields in an rgbe_header_info are valid */
#define RGBE_VALID_PROGRAMTYPE 0x01
#define RGBE_VALID_GAMMA 0x02
#define RGBE_VALID_EXPOSURE 0x04
/* return codes for rgbe routines */
#define RGBE_RETURN_SUCCESS 0
#define RGBE_RETURN_FAILURE -1
/* read or write headers */
/* you may set rgbe_header_info to null if you want to */
int RGBE_WriteHeader(FILE *fp, int width, int height, rgbe_header_info *info);
int RGBE_ReadHeader(FILE *fp, int *width, int *height, rgbe_header_info *info);
/* read or write pixels */
/* can read or write pixels in chunks of any size including single pixels*/
int RGBE_WritePixels(FILE *fp, float *data, int numpixels);
int RGBE_ReadPixels(FILE *fp, float *data, int numpixels);
/* read or write run length encoded files */
/* must be called to read or write whole scanlines */
int RGBE_WritePixels_RLE(FILE *fp, float *data, int scanline_width,
int num_scanlines);
int RGBE_ReadPixels_RLE(FILE *fp, float *data, int scanline_width,
int num_scanlines);
#endif/*_RGBE_HDR_H_*/
......@@ -430,11 +430,11 @@ TEST(Highgui_Tiff, decode_tile16384x16384)
TEST(Highgui_WebP, encode_decode_lossless_webp)
{
cvtest::TS& ts = *cvtest::TS::ptr();
std::string input = std::string(ts.get_data_path()) + "../cv/shared/lena.png";
string input = string(ts.get_data_path()) + "../cv/shared/lena.png";
cv::Mat img = cv::imread(input);
ASSERT_FALSE(img.empty());
std::string output = cv::tempfile(".webp");
string output = cv::tempfile(".webp");
EXPECT_NO_THROW(cv::imwrite(output, img)); // lossless
cv::Mat img_webp = cv::imread(output);
......@@ -525,3 +525,28 @@ TEST(Highgui_WebP, encode_decode_with_alpha_webp)
}
#endif
TEST(Highgui_Hdr, regression)
{
string folder = string(cvtest::TS::ptr()->get_data_path()) + "/readwrite/";
string name_rle = folder + "rle.hdr";
string name_no_rle = folder + "no_rle.hdr";
Mat img_rle = imread(name_rle, -1);
ASSERT_FALSE(img_rle.empty()) << "Could not open " << name_rle;
Mat img_no_rle = imread(name_no_rle, -1);
ASSERT_FALSE(img_no_rle.empty()) << "Could not open " << name_no_rle;
double min = 0.0, max = 1.0;
minMaxLoc(abs(img_rle - img_no_rle), &min, &max);
ASSERT_FALSE(max > DBL_EPSILON);
string tmp_file_name = tempfile(".hdr");
vector<int>param(1);
for(int i = 0; i < 2; i++) {
param[0] = i;
imwrite(tmp_file_name, img_rle, param);
Mat written_img = imread(tmp_file_name, -1);
ASSERT_FALSE(written_img.empty()) << "Could not open " << tmp_file_name;
minMaxLoc(abs(img_rle - written_img), &min, &max);
ASSERT_FALSE(max > DBL_EPSILON);
}
}
......@@ -509,11 +509,11 @@ Line segment detector class, following the algorithm described at [Rafael12]_.
.. ocv:class:: LineSegmentDetector : public Algorithm
createLineSegmentDetectorPtr
----------------------------
createLineSegmentDetector
-------------------------
Creates a smart pointer to a LineSegmentDetector object and initializes it.
.. ocv:function:: Ptr<LineSegmentDetector> createLineSegmentDetectorPtr(int _refine = LSD_REFINE_STD, double _scale = 0.8, double _sigma_scale = 0.6, double _quant = 2.0, double _ang_th = 22.5, double _log_eps = 0, double _density_th = 0.7, int _n_bins = 1024)
.. ocv:function:: Ptr<LineSegmentDetector> createLineSegmentDetector(int _refine = LSD_REFINE_STD, double _scale = 0.8, double _sigma_scale = 0.6, double _quant = 2.0, double _ang_th = 22.5, double _log_eps = 0, double _density_th = 0.7, int _n_bins = 1024)
:param _refine: The way found lines will be refined:
......
......@@ -904,7 +904,7 @@ protected:
Point2f bottomRight;
};
class LineSegmentDetector : public Algorithm
class CV_EXPORTS_W LineSegmentDetector : public Algorithm
{
public:
/**
......@@ -926,7 +926,7 @@ public:
* * 1 corresponds to 0.1 mean false alarms
* This vector will be calculated _only_ when the object's type is REFINE_ADV
*/
virtual void detect(InputArray _image, OutputArray _lines,
CV_WRAP virtual void detect(InputArray _image, OutputArray _lines,
OutputArray width = noArray(), OutputArray prec = noArray(),
OutputArray nfa = noArray()) = 0;
......@@ -937,7 +937,7 @@ public:
* Should have the size of the image, where the lines were found
* @param lines The lines that need to be drawn
*/
virtual void drawSegments(InputOutputArray _image, InputArray lines) = 0;
CV_WRAP virtual void drawSegments(InputOutputArray _image, InputArray lines) = 0;
/**
* Draw both vectors on the image canvas. Uses blue for lines 1 and red for lines 2.
......@@ -949,13 +949,13 @@ public:
* Should have the size of the image, where the lines were found
* @return The number of mismatching pixels between lines1 and lines2.
*/
virtual int compareSegments(const Size& size, InputArray lines1, InputArray lines2, InputOutputArray _image = noArray()) = 0;
CV_WRAP virtual int compareSegments(const Size& size, InputArray lines1, InputArray lines2, InputOutputArray _image = noArray()) = 0;
virtual ~LineSegmentDetector() {};
};
//! Returns a pointer to a LineSegmentDetector class.
CV_EXPORTS Ptr<LineSegmentDetector> createLineSegmentDetectorPtr(
CV_EXPORTS_W Ptr<LineSegmentDetector> createLineSegmentDetector(
int _refine = LSD_REFINE_STD, double _scale = 0.8,
double _sigma_scale = 0.6, double _quant = 2.0, double _ang_th = 22.5,
double _log_eps = 0, double _density_th = 0.7, int _n_bins = 1024);
......
......@@ -388,7 +388,7 @@ private:
/////////////////////////////////////////////////////////////////////////////////////////
CV_EXPORTS Ptr<LineSegmentDetector> createLineSegmentDetectorPtr(
CV_EXPORTS Ptr<LineSegmentDetector> createLineSegmentDetector(
int _refine, double _scale, double _sigma_scale, double _quant, double _ang_th,
double _log_eps, double _density_th, int _n_bins)
{
......
......@@ -110,7 +110,7 @@ TEST_F(Imgproc_LSD_ADV, whiteNoise)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateWhiteNoise(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_ADV);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_ADV);
detector->detect(test_image, lines);
if(40u >= lines.size()) ++passedtests;
......@@ -123,7 +123,7 @@ TEST_F(Imgproc_LSD_ADV, constColor)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateConstColor(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_ADV);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_ADV);
detector->detect(test_image, lines);
if(0u == lines.size()) ++passedtests;
......@@ -137,7 +137,7 @@ TEST_F(Imgproc_LSD_ADV, lines)
{
const unsigned int numOfLines = 1;
GenerateLines(test_image, numOfLines);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_ADV);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_ADV);
detector->detect(test_image, lines);
if(numOfLines * 2 == lines.size()) ++passedtests; // * 2 because of Gibbs effect
......@@ -150,7 +150,7 @@ TEST_F(Imgproc_LSD_ADV, rotatedRect)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateRotatedRect(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_ADV);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_ADV);
detector->detect(test_image, lines);
if(2u <= lines.size()) ++passedtests;
......@@ -163,7 +163,7 @@ TEST_F(Imgproc_LSD_STD, whiteNoise)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateWhiteNoise(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_STD);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_STD);
detector->detect(test_image, lines);
if(50u >= lines.size()) ++passedtests;
......@@ -176,7 +176,7 @@ TEST_F(Imgproc_LSD_STD, constColor)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateConstColor(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_STD);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_STD);
detector->detect(test_image, lines);
if(0u == lines.size()) ++passedtests;
......@@ -190,7 +190,7 @@ TEST_F(Imgproc_LSD_STD, lines)
{
const unsigned int numOfLines = 1;
GenerateLines(test_image, numOfLines);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_STD);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_STD);
detector->detect(test_image, lines);
if(numOfLines * 2 == lines.size()) ++passedtests; // * 2 because of Gibbs effect
......@@ -203,7 +203,7 @@ TEST_F(Imgproc_LSD_STD, rotatedRect)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateRotatedRect(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_STD);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_STD);
detector->detect(test_image, lines);
if(4u <= lines.size()) ++passedtests;
......@@ -216,7 +216,7 @@ TEST_F(Imgproc_LSD_NONE, whiteNoise)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateWhiteNoise(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_STD);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_STD);
detector->detect(test_image, lines);
if(50u >= lines.size()) ++passedtests;
......@@ -229,7 +229,7 @@ TEST_F(Imgproc_LSD_NONE, constColor)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateConstColor(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_NONE);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_NONE);
detector->detect(test_image, lines);
if(0u == lines.size()) ++passedtests;
......@@ -243,7 +243,7 @@ TEST_F(Imgproc_LSD_NONE, lines)
{
const unsigned int numOfLines = 1;
GenerateLines(test_image, numOfLines);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_NONE);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_NONE);
detector->detect(test_image, lines);
if(numOfLines * 2 == lines.size()) ++passedtests; // * 2 because of Gibbs effect
......@@ -256,7 +256,7 @@ TEST_F(Imgproc_LSD_NONE, rotatedRect)
for (int i = 0; i < EPOCHS; ++i)
{
GenerateRotatedRect(test_image);
Ptr<LineSegmentDetector> detector = createLineSegmentDetectorPtr(LSD_REFINE_NONE);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_NONE);
detector->detect(test_image, lines);
if(8u <= lines.size()) ++passedtests;
......
......@@ -9,3 +9,4 @@ photo. Computational Photography
inpainting
denoising
hdr_imaging
\ No newline at end of file
......@@ -80,6 +80,214 @@ CV_EXPORTS_W void fastNlMeansDenoisingColoredMulti( InputArrayOfArrays srcImgs,
float h = 3, float hColor = 3,
int templateWindowSize = 7, int searchWindowSize = 21);
enum { LDR_SIZE = 256 };
class CV_EXPORTS_W Tonemap : public Algorithm
{
public:
CV_WRAP virtual void process(InputArray src, OutputArray dst) = 0;
CV_WRAP virtual float getGamma() const = 0;
CV_WRAP virtual void setGamma(float gamma) = 0;
};
CV_EXPORTS_W Ptr<Tonemap> createTonemap(float gamma = 1.0f);
// "Adaptive Logarithmic Mapping For Displaying HighContrast Scenes", Drago et al., 2003
class CV_EXPORTS_W TonemapDrago : public Tonemap
{
public:
CV_WRAP virtual float getSaturation() const = 0;
CV_WRAP virtual void setSaturation(float saturation) = 0;
CV_WRAP virtual float getBias() const = 0;
CV_WRAP virtual void setBias(float bias) = 0;
};
CV_EXPORTS_W Ptr<TonemapDrago> createTonemapDrago(float gamma = 1.0f, float saturation = 1.0f, float bias = 0.85f);
// "Fast Bilateral Filtering for the Display of High-Dynamic-Range Images", Durand, Dorsey, 2002
class CV_EXPORTS_W TonemapDurand : public Tonemap
{
public:
CV_WRAP virtual float getSaturation() const = 0;
CV_WRAP virtual void setSaturation(float saturation) = 0;
CV_WRAP virtual float getContrast() const = 0;
CV_WRAP virtual void setContrast(float contrast) = 0;
CV_WRAP virtual float getSigmaSpace() const = 0;
CV_WRAP virtual void setSigmaSpace(float sigma_space) = 0;
CV_WRAP virtual float getSigmaColor() const = 0;
CV_WRAP virtual void setSigmaColor(float sigma_color) = 0;
};
CV_EXPORTS_W Ptr<TonemapDurand>
createTonemapDurand(float gamma = 1.0f, float contrast = 4.0f, float saturation = 1.0f, float sigma_space = 2.0f, float sigma_color = 2.0f);
// "Dynamic Range Reduction Inspired by Photoreceptor Physiology", Reinhard, Devlin, 2005
class CV_EXPORTS_W TonemapReinhard : public Tonemap
{
public:
CV_WRAP virtual float getIntensity() const = 0;
CV_WRAP virtual void setIntensity(float intensity) = 0;
CV_WRAP virtual float getLightAdaptation() const = 0;
CV_WRAP virtual void setLightAdaptation(float light_adapt) = 0;
CV_WRAP virtual float getColorAdaptation() const = 0;
CV_WRAP virtual void setColorAdaptation(float color_adapt) = 0;
};
CV_EXPORTS_W Ptr<TonemapReinhard>
createTonemapReinhard(float gamma = 1.0f, float intensity = 0.0f, float light_adapt = 1.0f, float color_adapt = 0.0f);
// "Perceptual Framework for Contrast Processing of High Dynamic Range Images", Mantiuk et al., 2006
class CV_EXPORTS_W TonemapMantiuk : public Tonemap
{
public:
CV_WRAP virtual float getScale() const = 0;
CV_WRAP virtual void setScale(float scale) = 0;
CV_WRAP virtual float getSaturation() const = 0;
CV_WRAP virtual void setSaturation(float saturation) = 0;
};
CV_EXPORTS_W Ptr<TonemapMantiuk>
createTonemapMantiuk(float gamma = 1.0f, float scale = 0.7f, float saturation = 1.0f);
class CV_EXPORTS_W AlignExposures : public Algorithm
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, std::vector<Mat>& dst,
InputArray times, InputArray response) = 0;
};
// "Fast, Robust Image Registration for Compositing High Dynamic Range Photographs from Handheld Exposures", Ward, 2003
class CV_EXPORTS_W AlignMTB : public AlignExposures
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, std::vector<Mat>& dst,
InputArray times, InputArray response) = 0;
CV_WRAP virtual void process(InputArrayOfArrays src, std::vector<Mat>& dst) = 0;
CV_WRAP virtual Point calculateShift(InputArray img0, InputArray img1) = 0;
CV_WRAP virtual void shiftMat(InputArray src, OutputArray dst, const Point shift) = 0;
CV_WRAP virtual void computeBitmaps(InputArray img, OutputArray tb, OutputArray eb) = 0;
CV_WRAP virtual int getMaxBits() const = 0;
CV_WRAP virtual void setMaxBits(int max_bits) = 0;
CV_WRAP virtual int getExcludeRange() const = 0;
CV_WRAP virtual void setExcludeRange(int exclude_range) = 0;
CV_WRAP virtual bool getCut() const = 0;
CV_WRAP virtual void setCut(bool value) = 0;
};
CV_EXPORTS_W Ptr<AlignMTB> createAlignMTB(int max_bits = 6, int exclude_range = 4, bool cut = true);
class CV_EXPORTS_W CalibrateCRF : public Algorithm
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst, InputArray times) = 0;
};
// "Recovering High Dynamic Range Radiance Maps from Photographs", Debevec, Malik, 1997
class CV_EXPORTS_W CalibrateDebevec : public CalibrateCRF
{
public:
CV_WRAP virtual float getLambda() const = 0;
CV_WRAP virtual void setLambda(float lambda) = 0;
CV_WRAP virtual int getSamples() const = 0;
CV_WRAP virtual void setSamples(int samples) = 0;
CV_WRAP virtual bool getRandom() const = 0;
CV_WRAP virtual void setRandom(bool random) = 0;
};
CV_EXPORTS_W Ptr<CalibrateDebevec> createCalibrateDebevec(int samples = 70, float lambda = 10.0f, bool random = false);
// "Dynamic range improvement through multiple exposures", Robertson et al., 1999
class CV_EXPORTS_W CalibrateRobertson : public CalibrateCRF
{
public:
CV_WRAP virtual int getMaxIter() const = 0;
CV_WRAP virtual void setMaxIter(int max_iter) = 0;
CV_WRAP virtual float getThreshold() const = 0;
CV_WRAP virtual void setThreshold(float threshold) = 0;
CV_WRAP virtual Mat getRadiance() const = 0;
};
CV_EXPORTS_W Ptr<CalibrateRobertson> createCalibrateRobertson(int max_iter = 30, float threshold = 0.01f);
class CV_EXPORTS_W MergeExposures : public Algorithm
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst,
InputArray times, InputArray response) = 0;
};
// "Recovering High Dynamic Range Radiance Maps from Photographs", Debevec, Malik, 1997
class CV_EXPORTS_W MergeDebevec : public MergeExposures
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst,
InputArray times, InputArray response) = 0;
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst, InputArray times) = 0;
};
CV_EXPORTS_W Ptr<MergeDebevec> createMergeDebevec();
// "Exposure Fusion", Mertens et al., 2007
class CV_EXPORTS_W MergeMertens : public MergeExposures
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst,
InputArray times, InputArray response) = 0;
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst) = 0;
CV_WRAP virtual float getContrastWeight() const = 0;
CV_WRAP virtual void setContrastWeight(float contrast_weight) = 0;
CV_WRAP virtual float getSaturationWeight() const = 0;
CV_WRAP virtual void setSaturationWeight(float saturation_weight) = 0;
CV_WRAP virtual float getExposureWeight() const = 0;
CV_WRAP virtual void setExposureWeight(float exposure_weight) = 0;
};
CV_EXPORTS_W Ptr<MergeMertens>
createMergeMertens(float contrast_weight = 1.0f, float saturation_weight = 1.0f, float exposure_weight = 0.0f);
// "Dynamic range improvement through multiple exposures", Robertson et al., 1999
class CV_EXPORTS_W MergeRobertson : public MergeExposures
{
public:
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst,
InputArray times, InputArray response) = 0;
CV_WRAP virtual void process(InputArrayOfArrays src, OutputArray dst, InputArray times) = 0;
};
CV_EXPORTS_W Ptr<MergeRobertson> createMergeRobertson();
} // cv
#endif
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
#include "opencv2/photo.hpp"
#include "opencv2/imgproc.hpp"
#include "hdr_common.hpp"
namespace cv
{
class AlignMTBImpl : public AlignMTB
{
public:
AlignMTBImpl(int _max_bits, int _exclude_range, bool _cut) :
name("AlignMTB"),
max_bits(_max_bits),
exclude_range(_exclude_range),
cut(_cut)
{
}
void process(InputArrayOfArrays src, std::vector<Mat>& dst,
InputArray, InputArray)
{
process(src, dst);
}
void process(InputArrayOfArrays _src, std::vector<Mat>& dst)
{
std::vector<Mat> src;
_src.getMatVector(src);
checkImageDimensions(src);
dst.resize(src.size());
size_t pivot = src.size() / 2;
dst[pivot] = src[pivot];
Mat gray_base;
cvtColor(src[pivot], gray_base, COLOR_RGB2GRAY);
std::vector<Point> shifts;
for(size_t i = 0; i < src.size(); i++) {
if(i == pivot) {
shifts.push_back(Point(0, 0));
continue;
}
Mat gray;
cvtColor(src[i], gray, COLOR_RGB2GRAY);
Point shift = calculateShift(gray_base, gray);
shifts.push_back(shift);
shiftMat(src[i], dst[i], shift);
}
if(cut) {
Point max(0, 0), min(0, 0);
for(size_t i = 0; i < shifts.size(); i++) {
if(shifts[i].x > max.x) {
max.x = shifts[i].x;
}
if(shifts[i].y > max.y) {
max.y = shifts[i].y;
}
if(shifts[i].x < min.x) {
min.x = shifts[i].x;
}
if(shifts[i].y < min.y) {
min.y = shifts[i].y;
}
}
Point size = dst[0].size();
for(size_t i = 0; i < dst.size(); i++) {
dst[i] = dst[i](Rect(max, min + size));
}
}
}
Point calculateShift(InputArray _img0, InputArray _img1)
{
Mat img0 = _img0.getMat();
Mat img1 = _img1.getMat();
CV_Assert(img0.channels() == 1 && img0.type() == img1.type());
CV_Assert(img0.size() == img1.size());
int maxlevel = static_cast<int>(log((double)max(img0.rows, img0.cols)) / log(2.0)) - 1;
maxlevel = min(maxlevel, max_bits - 1);
std::vector<Mat> pyr0;
std::vector<Mat> pyr1;
buildPyr(img0, pyr0, maxlevel);
buildPyr(img1, pyr1, maxlevel);
Point shift(0, 0);
for(int level = maxlevel; level >= 0; level--) {
shift *= 2;
Mat tb1, tb2, eb1, eb2;
computeBitmaps(pyr0[level], tb1, eb1);
computeBitmaps(pyr1[level], tb2, eb2);
int min_err = pyr0[level].total();
Point new_shift(shift);
for(int i = -1; i <= 1; i++) {
for(int j = -1; j <= 1; j++) {
Point test_shift = shift + Point(i, j);
Mat shifted_tb2, shifted_eb2, diff;
shiftMat(tb2, shifted_tb2, test_shift);
shiftMat(eb2, shifted_eb2, test_shift);
bitwise_xor(tb1, shifted_tb2, diff);
bitwise_and(diff, eb1, diff);
bitwise_and(diff, shifted_eb2, diff);
int err = countNonZero(diff);
if(err < min_err) {
new_shift = test_shift;
min_err = err;
}
}
}
shift = new_shift;
}
return shift;
}
void shiftMat(InputArray _src, OutputArray _dst, const Point shift)
{
Mat src = _src.getMat();
_dst.create(src.size(), src.type());
Mat dst = _dst.getMat();
Mat res = Mat::zeros(src.size(), src.type());
int width = src.cols - abs(shift.x);
int height = src.rows - abs(shift.y);
Rect dst_rect(max(shift.x, 0), max(shift.y, 0), width, height);
Rect src_rect(max(-shift.x, 0), max(-shift.y, 0), width, height);
src(src_rect).copyTo(res(dst_rect));
res.copyTo(dst);
}
int getMaxBits() const { return max_bits; }
void setMaxBits(int val) { max_bits = val; }
int getExcludeRange() const { return exclude_range; }
void setExcludeRange(int val) { exclude_range = val; }
bool getCut() const { return cut; }
void setCut(bool val) { cut = val; }
void write(FileStorage& fs) const
{
fs << "name" << name
<< "max_bits" << max_bits
<< "exclude_range" << exclude_range
<< "cut" << static_cast<int>(cut);
}
void read(const FileNode& fn)
{
FileNode n = fn["name"];
CV_Assert(n.isString() && String(n) == name);
max_bits = fn["max_bits"];
exclude_range = fn["exclude_range"];
int cut_val = fn["cut"];
cut = (cut_val != 0);
}
void computeBitmaps(InputArray _img, OutputArray _tb, OutputArray _eb)
{
Mat img = _img.getMat();
_tb.create(img.size(), CV_8U);
_eb.create(img.size(), CV_8U);
Mat tb = _tb.getMat(), eb = _eb.getMat();
int median = getMedian(img);
compare(img, median, tb, CMP_GT);
compare(abs(img - median), exclude_range, eb, CMP_GT);
}
protected:
String name;
int max_bits, exclude_range;
bool cut;
void downsample(Mat& src, Mat& dst)
{
dst = Mat(src.rows / 2, src.cols / 2, CV_8UC1);
int offset = src.cols * 2;
uchar *src_ptr = src.ptr();
uchar *dst_ptr = dst.ptr();
for(int y = 0; y < dst.rows; y ++) {
uchar *ptr = src_ptr;
for(int x = 0; x < dst.cols; x++) {
dst_ptr[0] = ptr[0];
dst_ptr++;
ptr += 2;
}
src_ptr += offset;
}
}
void buildPyr(Mat& img, std::vector<Mat>& pyr, int maxlevel)
{
pyr.resize(maxlevel + 1);
pyr[0] = img.clone();
for(int level = 0; level < maxlevel; level++) {
downsample(pyr[level], pyr[level + 1]);
}
}
int getMedian(Mat& img)
{
int channels = 0;
Mat hist;
int hist_size = LDR_SIZE;
float range[] = {0, LDR_SIZE} ;
const float* ranges[] = {range};
calcHist(&img, 1, &channels, Mat(), hist, 1, &hist_size, ranges);
float *ptr = hist.ptr<float>();
int median = 0, sum = 0;
int thresh = img.total() / 2;
while(sum < thresh && median < LDR_SIZE) {
sum += static_cast<int>(ptr[median]);
median++;
}
return median;
}
};
Ptr<AlignMTB> createAlignMTB(int max_bits, int exclude_range, bool cut)
{
return makePtr<AlignMTBImpl>(max_bits, exclude_range, cut);
}
}
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
#include "opencv2/photo.hpp"
#include "opencv2/imgproc.hpp"
//#include "opencv2/highgui.hpp"
#include "hdr_common.hpp"
namespace cv
{
class CalibrateDebevecImpl : public CalibrateDebevec
{
public:
CalibrateDebevecImpl(int _samples, float _lambda, bool _random) :
name("CalibrateDebevec"),
samples(_samples),
lambda(_lambda),
random(_random),
w(tringleWeights())
{
}
void process(InputArrayOfArrays src, OutputArray dst, InputArray _times)
{
std::vector<Mat> images;
src.getMatVector(images);
Mat times = _times.getMat();
CV_Assert(images.size() == times.total());
checkImageDimensions(images);
CV_Assert(images[0].depth() == CV_8U);
int channels = images[0].channels();
int CV_32FCC = CV_MAKETYPE(CV_32F, channels);
dst.create(LDR_SIZE, 1, CV_32FCC);
Mat result = dst.getMat();
std::vector<Point> sample_points;
if(random) {
for(int i = 0; i < samples; i++) {
sample_points.push_back(Point(rand() % images[0].cols, rand() % images[0].rows));
}
} else {
int x_points = static_cast<int>(sqrt(static_cast<double>(samples) * images[0].cols / images[0].rows));
int y_points = samples / x_points;
int step_x = images[0].cols / x_points;
int step_y = images[0].rows / y_points;
for(int i = 0, x = step_x / 2; i < x_points; i++, x += step_x) {
for(int j = 0, y = step_y; j < y_points; j++, y += step_y) {
sample_points.push_back(Point(x, y));
}
}
}
std::vector<Mat> result_split(channels);
for(int channel = 0; channel < channels; channel++) {
Mat A = Mat::zeros(sample_points.size() * images.size() + LDR_SIZE + 1, LDR_SIZE + sample_points.size(), CV_32F);
Mat B = Mat::zeros(A.rows, 1, CV_32F);
int eq = 0;
for(size_t i = 0; i < sample_points.size(); i++) {
for(size_t j = 0; j < images.size(); j++) {
int val = images[j].ptr()[3*(sample_points[i].y * images[j].cols + sample_points[i].x) + channel];
A.at<float>(eq, val) = w.at<float>(val);
A.at<float>(eq, LDR_SIZE + i) = -w.at<float>(val);
B.at<float>(eq, 0) = w.at<float>(val) * log(times.at<float>(j));
eq++;
}
}
A.at<float>(eq, LDR_SIZE / 2) = 1;
eq++;
for(int i = 0; i < 254; i++) {
A.at<float>(eq, i) = lambda * w.at<float>(i + 1);
A.at<float>(eq, i + 1) = -2 * lambda * w.at<float>(i + 1);
A.at<float>(eq, i + 2) = lambda * w.at<float>(i + 1);
eq++;
}
Mat solution;
solve(A, B, solution, DECOMP_SVD);
solution.rowRange(0, LDR_SIZE).copyTo(result_split[channel]);
}
merge(result_split, result);
exp(result, result);
}
int getSamples() const { return samples; }
void setSamples(int val) { samples = val; }
float getLambda() const { return lambda; }
void setLambda(float val) { lambda = val; }
bool getRandom() const { return random; }
void setRandom(bool val) { random = val; }
void write(FileStorage& fs) const
{
fs << "name" << name
<< "samples" << samples
<< "lambda" << lambda
<< "random" << static_cast<int>(random);
}
void read(const FileNode& fn)
{
FileNode n = fn["name"];
CV_Assert(n.isString() && String(n) == name);
samples = fn["samples"];
lambda = fn["lambda"];
int random_val = fn["random"];
random = (random_val != 0);
}
protected:
String name;
int samples;
float lambda;
bool random;
Mat w;
};
Ptr<CalibrateDebevec> createCalibrateDebevec(int samples, float lambda, bool random)
{
return makePtr<CalibrateDebevecImpl>(samples, lambda, random);
}
class CalibrateRobertsonImpl : public CalibrateRobertson
{
public:
CalibrateRobertsonImpl(int _max_iter, float _threshold) :
name("CalibrateRobertson"),
max_iter(_max_iter),
threshold(_threshold),
weight(RobertsonWeights())
{
}
void process(InputArrayOfArrays src, OutputArray dst, InputArray _times)
{
std::vector<Mat> images;
src.getMatVector(images);
Mat times = _times.getMat();
CV_Assert(images.size() == times.total());
checkImageDimensions(images);
CV_Assert(images[0].depth() == CV_8U);
int channels = images[0].channels();
int CV_32FCC = CV_MAKETYPE(CV_32F, channels);
dst.create(LDR_SIZE, 1, CV_32FCC);
Mat response = dst.getMat();
response = linearResponse(3) / (LDR_SIZE / 2.0f);
Mat card = Mat::zeros(LDR_SIZE, 1, CV_32FCC);
for(size_t i = 0; i < images.size(); i++) {
uchar *ptr = images[i].ptr();
for(size_t pos = 0; pos < images[i].total(); pos++) {
for(int c = 0; c < channels; c++, ptr++) {
card.at<Vec3f>(*ptr)[c] += 1;
}
}
}
card = 1.0 / card;
Ptr<MergeRobertson> merge = createMergeRobertson();
for(int iter = 0; iter < max_iter; iter++) {
radiance = Mat::zeros(images[0].size(), CV_32FCC);
merge->process(images, radiance, times, response);
Mat new_response = Mat::zeros(LDR_SIZE, 1, CV_32FC3);
for(size_t i = 0; i < images.size(); i++) {
uchar *ptr = images[i].ptr();
float* rad_ptr = radiance.ptr<float>();
for(size_t pos = 0; pos < images[i].total(); pos++) {
for(int c = 0; c < channels; c++, ptr++, rad_ptr++) {
new_response.at<Vec3f>(*ptr)[c] += times.at<float>(i) * *rad_ptr;
}
}
}
new_response = new_response.mul(card);
for(int c = 0; c < 3; c++) {
float middle = new_response.at<Vec3f>(LDR_SIZE / 2)[c];
for(int i = 0; i < LDR_SIZE; i++) {
new_response.at<Vec3f>(i)[c] /= middle;
}
}
float diff = static_cast<float>(sum(sum(abs(new_response - response)))[0] / channels);
new_response.copyTo(response);
if(diff < threshold) {
break;
}
}
}
int getMaxIter() const { return max_iter; }
void setMaxIter(int val) { max_iter = val; }
float getThreshold() const { return threshold; }
void setThreshold(float val) { threshold = val; }
Mat getRadiance() const { return radiance; }
void write(FileStorage& fs) const
{
fs << "name" << name
<< "max_iter" << max_iter
<< "threshold" << threshold;
}
void read(const FileNode& fn)
{
FileNode n = fn["name"];
CV_Assert(n.isString() && String(n) == name);
max_iter = fn["max_iter"];
threshold = fn["threshold"];
}
protected:
String name;
int max_iter;
float threshold;
Mat weight, radiance;
};
Ptr<CalibrateRobertson> createCalibrateRobertson(int max_iter, float threshold)
{
return makePtr<CalibrateRobertsonImpl>(max_iter, threshold);
}
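/* Illustrative usage sketch (not part of the library source): Robertson calibration
   is iterative, so max_iter and threshold bound its run time. "images" and "times"
   are assumed as in the Debevec sketch above; the values shown are the documented
   defaults.

   Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson(30, 0.01f);
   Mat response;
   calibrate->process(images, response, times);
   Mat radiance = calibrate->getRadiance();   // radiance map from the last iteration
*/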
}
......@@ -116,7 +116,7 @@ static void fastNlMeansDenoisingMultiCheckPreconditions(
int imgToDenoiseIndex, int temporalWindowSize,
int templateWindowSize, int searchWindowSize)
{
int src_imgs_size = (int)srcImgs.size();
int src_imgs_size = static_cast<int>(srcImgs.size());
if (src_imgs_size == 0) {
CV_Error(Error::StsBadArg, "Input images vector should not be empty!");
}
......@@ -198,7 +198,7 @@ void cv::fastNlMeansDenoisingColoredMulti( InputArrayOfArrays _srcImgs, OutputAr
_dst.create(srcImgs[0].size(), srcImgs[0].type());
Mat dst = _dst.getMat();
int src_imgs_size = (int)srcImgs.size();
int src_imgs_size = static_cast<int>(srcImgs.size());
if (srcImgs[0].type() != CV_8UC3) {
CV_Error(Error::StsBadArg, "Type of input images should be CV_8UC3!");
......
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "precomp.hpp"
#include "opencv2/photo.hpp"
#include "hdr_common.hpp"
namespace cv
{
void checkImageDimensions(const std::vector<Mat>& images)
{
CV_Assert(!images.empty());
int width = images[0].cols;
int height = images[0].rows;
int type = images[0].type();
for(size_t i = 0; i < images.size(); i++) {
CV_Assert(images[i].cols == width && images[i].rows == height);
CV_Assert(images[i].type() == type);
}
}
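// Triangle ("hat") weighting used by the Debevec routines: w(z) rises linearly
// from 1 at the extremes to a peak at mid-range, so well-exposed pixels dominate.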
Mat tringleWeights()
{
Mat w(LDR_SIZE, 1, CV_32F);
int half = LDR_SIZE / 2;
for(int i = 0; i < LDR_SIZE; i++) {
w.at<float>(i) = i < half ? i + 1.0f : LDR_SIZE - i;
}
return w;
}
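// Gaussian weighting used by the Robertson scheme: w(z) = exp(-(z / q - 2)^2)
// with q = (LDR_SIZE - 1) / 4, peaking at mid-range and decaying towards 0 and 255.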
Mat RobertsonWeights()
{
Mat weight(LDR_SIZE, 1, CV_32FC3);
float q = (LDR_SIZE - 1) / 4.0f;
for(int i = 0; i < LDR_SIZE; i++) {
float value = i / q - 2.0f;
value = exp(-value * value);
weight.at<Vec3f>(i) = Vec3f::all(value);
}
return weight;
}
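// Reapply color after luminance-only tonemapping: each channel is divided by the
// old luminance, raised to the saturation exponent, and scaled by the new luminance.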
void mapLuminance(Mat src, Mat dst, Mat lum, Mat new_lum, float saturation)
{
std::vector<Mat> channels(3);
split(src, channels);
for(int i = 0; i < 3; i++) {
channels[i] = channels[i].mul(1.0f / lum);
pow(channels[i], saturation, channels[i]);
channels[i] = channels[i].mul(new_lum);
}
merge(channels, dst);
}
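// Identity camera response, response(z) = z for every channel; the Robertson
// calibration above seeds its iterations with a scaled copy of this.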
Mat linearResponse(int channels)
{
Mat response = Mat(LDR_SIZE, 1, CV_MAKETYPE(CV_32F, channels));
for(int i = 0; i < LDR_SIZE; i++) {
response.at<Vec3f>(i) = Vec3f::all(static_cast<float>(i));
}
return response;
}
};
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#ifndef __OPENCV_HDR_COMMON_HPP__
#define __OPENCV_HDR_COMMON_HPP__
#include "precomp.hpp"
#include "opencv2/photo.hpp"
namespace cv
{
void checkImageDimensions(const std::vector<Mat>& images);
Mat tringleWeights();
void mapLuminance(Mat src, Mat dst, Mat lum, Mat new_lum, float saturation);
Mat RobertsonWeights();
Mat linearResponse(int channels);
};
#endif
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
#include "test_precomp.hpp"
#include <string>
#include <algorithm>
#include <fstream>
using namespace cv;
using namespace std;
void loadImage(string path, Mat &img)
{
img = imread(path, -1);
ASSERT_FALSE(img.empty()) << "Could not load input image " << path;
}
void checkEqual(Mat img0, Mat img1, double threshold)
{
double max = 1.0;
minMaxLoc(abs(img0 - img1), NULL, &max);
ASSERT_FALSE(max > threshold) << max;
}
static vector<float> DEFAULT_VECTOR;
void loadExposureSeq(String path, vector<Mat>& images, vector<float>& times = DEFAULT_VECTOR)
{
ifstream list_file((path + "list.txt").c_str());
ASSERT_TRUE(list_file.is_open());
string name;
float val;
while(list_file >> name >> val) {
Mat img = imread(path + name);
ASSERT_FALSE(img.empty()) << "Could not load input image " << path + name;
images.push_back(img);
times.push_back(1 / val);
}
list_file.close();
}
void loadResponseCSV(String path, Mat& response)
{
response = Mat(256, 1, CV_32FC3);
ifstream resp_file(path.c_str());
for(int i = 0; i < 256; i++) {
for(int c = 0; c < 3; c++) {
resp_file >> response.at<Vec3f>(i)[c];
resp_file.ignore(1);
}
}
resp_file.close();
}
TEST(Photo_Tonemap, regression)
{
string test_path = string(cvtest::TS::ptr()->get_data_path()) + "hdr/tonemap/";
Mat img, expected, result;
loadImage(test_path + "image.hdr", img);
float gamma = 2.2f;
Ptr<Tonemap> linear = createTonemap(gamma);
linear->process(img, result);
loadImage(test_path + "linear.png", expected);
result.convertTo(result, CV_8UC3, 255);
checkEqual(result, expected, 3);
Ptr<TonemapDrago> drago = createTonemapDrago(gamma);
drago->process(img, result);
loadImage(test_path + "drago.png", expected);
result.convertTo(result, CV_8UC3, 255);
checkEqual(result, expected, 3);
Ptr<TonemapDurand> durand = createTonemapDurand(gamma);
durand->process(img, result);
loadImage(test_path + "durand.png", expected);
result.convertTo(result, CV_8UC3, 255);
checkEqual(result, expected, 3);
Ptr<TonemapReinhard> reinhard = createTonemapReinhard(gamma);
reinhard->process(img, result);
loadImage(test_path + "reinhard.png", expected);
result.convertTo(result, CV_8UC3, 255);
checkEqual(result, expected, 3);
Ptr<TonemapMantiuk> mantiuk = createTonemapMantiuk(gamma);
mantiuk->process(img, result);
loadImage(test_path + "mantiuk.png", expected);
result.convertTo(result, CV_8UC3, 255);
checkEqual(result, expected, 3);
}
TEST(Photo_AlignMTB, regression)
{
const int TESTS_COUNT = 100;
string folder = string(cvtest::TS::ptr()->get_data_path()) + "shared/";
string file_name = folder + "lena.png";
Mat img;
loadImage(file_name, img);
cvtColor(img, img, COLOR_RGB2GRAY);
int max_bits = 5;
int max_shift = 32;
srand(static_cast<unsigned>(time(0)));
int errors = 0;
Ptr<AlignMTB> align = createAlignMTB(max_bits);
for(int i = 0; i < TESTS_COUNT; i++) {
Point shift(rand() % max_shift, rand() % max_shift);
Mat res;
align->shiftMat(img, res, shift);
Point calc = align->calculateShift(img, res);
errors += (calc != -shift);
}
ASSERT_TRUE(errors < 5) << errors << " errors";
}
TEST(Photo_MergeMertens, regression)
{
string test_path = string(cvtest::TS::ptr()->get_data_path()) + "hdr/";
vector<Mat> images;
loadExposureSeq(test_path + "exposures/", images);
Ptr<MergeMertens> merge = createMergeMertens();
Mat result, expected;
loadImage(test_path + "merge/mertens.png", expected);
merge->process(images, result);
result.convertTo(result, CV_8UC3, 255);
checkEqual(expected, result, 3);
}
TEST(Photo_MergeDebevec, regression)
{
string test_path = string(cvtest::TS::ptr()->get_data_path()) + "hdr/";
vector<Mat> images;
vector<float> times;
Mat response;
loadExposureSeq(test_path + "exposures/", images, times);
loadResponseCSV(test_path + "exposures/response.csv", response);
Ptr<MergeDebevec> merge = createMergeDebevec();
Mat result, expected;
loadImage(test_path + "merge/debevec.hdr", expected);
merge->process(images, result, times, response);
Ptr<Tonemap> map = createTonemap();
map->process(result, result);
map->process(expected, expected);
checkEqual(expected, result, 1e-2f);
}
TEST(Photo_MergeRobertson, regression)
{
string test_path = string(cvtest::TS::ptr()->get_data_path()) + "hdr/";
vector<Mat> images;
vector<float> times;
loadExposureSeq(test_path + "exposures/", images, times);
Ptr<MergeRobertson> merge = createMergeRobertson();
Mat result, expected;
loadImage(test_path + "merge/robertson.hdr", expected);
merge->process(images, result, times);
Ptr<Tonemap> map = createTonemap();
map->process(result, result);
map->process(expected, expected);
checkEqual(expected, result, 1e-2f);
}
TEST(Photo_CalibrateDebevec, regression)
{
string test_path = string(cvtest::TS::ptr()->get_data_path()) + "hdr/";
vector<Mat> images;
vector<float> times;
Mat response, expected;
loadExposureSeq(test_path + "exposures/", images, times);
loadResponseCSV(test_path + "calibrate/debevec.csv", expected);
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
calibrate->process(images, response, times);
Mat diff = abs(response - expected);
diff = diff.mul(1.0f / response);
double max;
minMaxLoc(diff, NULL, &max);
ASSERT_FALSE(max > 0.1);
}
TEST(Photo_CalibrateRobertson, regression)
{
string test_path = string(cvtest::TS::ptr()->get_data_path()) + "hdr/";
vector<Mat> images;
vector<float> times;
Mat response, expected;
loadExposureSeq(test_path + "exposures/", images, times);
loadResponseCSV(test_path + "calibrate/robertson.csv", expected);
Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
calibrate->process(images, response, times);
checkEqual(expected, response, 1e-3f);
}
......@@ -135,8 +135,22 @@ typedef Ptr<StereoMatcher> Ptr_StereoMatcher;
typedef Ptr<StereoBM> Ptr_StereoBM;
typedef Ptr<StereoSGBM> Ptr_StereoSGBM;
typedef Ptr<Tonemap> Ptr_Tonemap;
typedef Ptr<TonemapDrago> Ptr_TonemapDrago;
typedef Ptr<TonemapReinhard> Ptr_TonemapReinhard;
typedef Ptr<TonemapDurand> Ptr_TonemapDurand;
typedef Ptr<TonemapMantiuk> Ptr_TonemapMantiuk;
typedef Ptr<AlignMTB> Ptr_AlignMTB;
typedef Ptr<CalibrateDebevec> Ptr_CalibrateDebevec;
typedef Ptr<CalibrateRobertson> Ptr_CalibrateRobertson;
typedef Ptr<MergeDebevec> Ptr_MergeDebevec;
typedef Ptr<MergeRobertson> Ptr_MergeRobertson;
typedef Ptr<MergeMertens> Ptr_MergeMertens;
typedef Ptr<cv::softcascade::ChannelFeatureBuilder> Ptr_ChannelFeatureBuilder;
typedef Ptr<CLAHE> Ptr_CLAHE;
typedef Ptr<LineSegmentDetector > Ptr_LineSegmentDetector;
typedef SimpleBlobDetector::Params SimpleBlobDetector_Params;
......
......@@ -30,9 +30,9 @@ int main(int argc, char** argv)
// Create an LSD detector with standard or no refinement.
#if 1
Ptr<LineSegmentDetector> ls = createLineSegmentDetectorPtr(LSD_REFINE_STD);
Ptr<LineSegmentDetector> ls = createLineSegmentDetector(LSD_REFINE_STD);
#else
Ptr<LineSegmentDetector> ls = createLineSegmentDetectorPtr(LSD_REFINE_NONE);
Ptr<LineSegmentDetector> ls = createLineSegmentDetector(LSD_REFINE_NONE);
#endif
double start = double(getTickCount());
......
#include <opencv2/photo.hpp>
#include <opencv2/highgui.hpp>
#include <vector>
#include <iostream>
#include <fstream>
using namespace cv;
using namespace std;
void loadExposureSeq(String, vector<Mat>&, vector<float>&);
int main(int, char**argv)
{
vector<Mat> images;
vector<float> times;
loadExposureSeq(argv[1], images, times);
Mat response;
Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
calibrate->process(images, response, times);
Mat hdr;
Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
merge_debevec->process(images, hdr, times, response);
Mat ldr;
Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
tonemap->process(hdr, ldr);
Mat fusion;
Ptr<MergeMertens> merge_mertens = createMergeMertens();
merge_mertens->process(images, fusion);
imwrite("fusion.png", fusion * 255);
imwrite("ldr.png", ldr * 255);
imwrite("hdr.hdr", hdr);
return 0;
}
void loadExposureSeq(String path, vector<Mat>& images, vector<float>& times)
{
path = path + std::string("/");
ifstream list_file((path + "list.txt").c_str());
string name;
float val;
while(list_file >> name >> val) {
Mat img = imread(path + name);
images.push_back(img);
times.push_back(1 / val);
}
list_file.close();
}
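// A possible variation (not part of the sample above): any other Tonemap subclass
// can replace the Durand operator, for example
//     Ptr<TonemapReinhard> tonemap = createTonemapReinhard(2.2f);
//     tonemap->process(hdr, ldr);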