Commit 5146e0d2 authored by Vladislav Samsonov's avatar Vladislav Samsonov
parents 17831add 159534a2
<!--
If you have a question rather than a bug report, please go to http://answers.opencv.org, where you will get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => :grey_question:
- Operating System / Platform => :grey_question:
- Compiler => :grey_question:
##### Detailed description
<!-- your description -->
##### Steps to reproduce
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
-->
<!-- Please use this line to close one or multiple issues when this pull request gets merged
You can add another line right under the first one:
resolves #1234
resolves #1235
-->
### This pull request changes
<!-- Please describe what your pull request is changing -->
...@@ -4,7 +4,7 @@ compiler:
- clang
before_script:
- cd ../
- git clone https://github.com/opencv/opencv.git
- mkdir build-opencv
- cd build-opencv
- cmake -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules ../opencv
......
## Contributing guidelines
All guidelines for contributing to the OpenCV repository can be found at [`How to contribute guideline`](https://github.com/opencv/opencv/wiki/How_to_contribute).
cmake_minimum_required(VERSION 2.8)
if(APPLE_FRAMEWORK OR WINRT
  OR AARCH64 # protobuf doesn't know this platform
)
  ocv_module_disable(dnn)
endif()
......
...@@ -3,7 +3,7 @@ Build opencv_contrib with dnn module {#tutorial_dnn_build}
Introduction
------------
opencv_dnn module is placed in the secondary [opencv_contrib](https://github.com/opencv/opencv_contrib) repository,
which isn't distributed in binary form; therefore you need to build it manually.
To do this you need to have installed: [CMake](http://www.cmake.org/download), git, and a build system (*gcc* with *make* for Linux or *MS Visual Studio* for Windows).
...@@ -12,12 +12,12 @@ Steps
-----
-# Create a directory, for example **opencv_root**
-# Clone [opencv](https://github.com/opencv/opencv) and [opencv_contrib](https://github.com/opencv/opencv_contrib) repos to the **opencv_root**.
You can do it from the terminal like this:
@code
cd opencv_root
git clone https://github.com/opencv/opencv
git clone https://github.com/opencv/opencv_contrib
@endcode
-# Run [CMake-gui] and set source and build directories:
......
...@@ -305,6 +305,25 @@ public:
/** @copybrief getGradientDescentIterations @see getGradientDescentIterations */
CV_WRAP virtual void setVariationalRefinementIterations(int val) = 0;
/** @brief Weight of the smoothness term
@see setVariationalRefinementAlpha */
CV_WRAP virtual float getVariationalRefinementAlpha() const = 0;
/** @copybrief getVariationalRefinementAlpha @see getVariationalRefinementAlpha */
CV_WRAP virtual void setVariationalRefinementAlpha(float val) = 0;
/** @brief Weight of the color constancy term
@see setVariationalRefinementDelta */
CV_WRAP virtual float getVariationalRefinementDelta() const = 0;
/** @copybrief getVariationalRefinementDelta @see getVariationalRefinementDelta */
CV_WRAP virtual void setVariationalRefinementDelta(float val) = 0;
/** @brief Weight of the gradient constancy term
@see setVariationalRefinementGamma */
CV_WRAP virtual float getVariationalRefinementGamma() const = 0;
/** @copybrief getVariationalRefinementGamma @see getVariationalRefinementGamma */
CV_WRAP virtual void setVariationalRefinementGamma(float val) = 0;
/** @brief Whether to use mean-normalization of patches when computing patch distance. It is turned on
by default as it typically provides a noticeable quality boost because of increased robustness to
illumination variations. Turn it off if you are certain that your sequence doesn't contain any changes
......
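The new alpha/delta/gamma accessors expose the weights of the energy minimized by the variational refinement stage. As a hedged sketch (following the formulation used in the DIS paper by Kroeger et al.; the implementation's exact normalization may differ), the energy has the form:

```latex
E(U) = \int_{\Omega} \delta\,\Psi(E_I) + \gamma\,\Psi(E_G) + \alpha\,\Psi(E_S)\,\mathrm{d}x
```

where \(E_I\) is the color-constancy (data) term weighted by delta, \(E_G\) the gradient-constancy term weighted by gamma, \(E_S\) the smoothness term weighted by alpha, and \(\Psi\) a robust penalizer — matching the @brief descriptions of the three getters above.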
...@@ -65,6 +65,9 @@ class DISOpticalFlowImpl : public DISOpticalFlow
int patch_stride;
int grad_descent_iter;
int variational_refinement_iter;
float variational_refinement_alpha;
float variational_refinement_gamma;
float variational_refinement_delta;
bool use_mean_normalization;
bool use_spatial_propagation;
...@@ -84,6 +87,13 @@ class DISOpticalFlowImpl : public DISOpticalFlow
void setGradientDescentIterations(int val) { grad_descent_iter = val; }
int getVariationalRefinementIterations() const { return variational_refinement_iter; }
void setVariationalRefinementIterations(int val) { variational_refinement_iter = val; }
float getVariationalRefinementAlpha() const { return variational_refinement_alpha; }
void setVariationalRefinementAlpha(float val) { variational_refinement_alpha = val; }
float getVariationalRefinementDelta() const { return variational_refinement_delta; }
void setVariationalRefinementDelta(float val) { variational_refinement_delta = val; }
float getVariationalRefinementGamma() const { return variational_refinement_gamma; }
void setVariationalRefinementGamma(float val) { variational_refinement_gamma = val; }
bool getUseMeanNormalization() const { return use_mean_normalization; }
void setUseMeanNormalization(bool val) { use_mean_normalization = val; }
bool getUseSpatialPropagation() const { return use_spatial_propagation; }
...@@ -161,6 +171,10 @@ DISOpticalFlowImpl::DISOpticalFlowImpl()
patch_stride = 4;
grad_descent_iter = 16;
variational_refinement_iter = 5;
variational_refinement_alpha = 20.f;
variational_refinement_gamma = 10.f;
variational_refinement_delta = 5.f;
border_size = 16;
use_mean_normalization = true;
use_spatial_propagation = true;
...@@ -234,9 +248,9 @@ void DISOpticalFlowImpl::prepareBuffers(Mat &I0, Mat &I1)
spatialGradient(I0s[i], I0xs[i], I0ys[i]);
Ux[i].create(cur_rows, cur_cols);
Uy[i].create(cur_rows, cur_cols);
variational_refinement_processors[i]->setAlpha(variational_refinement_alpha);
variational_refinement_processors[i]->setDelta(variational_refinement_delta);
variational_refinement_processors[i]->setGamma(variational_refinement_gamma);
variational_refinement_processors[i]->setSorIterations(5);
variational_refinement_processors[i]->setFixedPointIterations(variational_refinement_iter);
}
......
...@@ -1074,9 +1074,10 @@ void VariationalRefinementImpl::RedBlackSOR_ParBody::operator()(const Range &ran
void VariationalRefinementImpl::calc(InputArray I0, InputArray I1, InputOutputArray flow)
{
CV_Assert(!I0.empty() && I0.channels() == 1);
CV_Assert(!I1.empty() && I1.channels() == 1);
CV_Assert(I0.sameSize(I1));
CV_Assert((I0.depth() == CV_8U && I1.depth() == CV_8U) || (I0.depth() == CV_32F && I1.depth() == CV_32F));
CV_Assert(!flow.empty() && flow.depth() == CV_32F && flow.channels() == 2);
CV_Assert(I0.sameSize(flow));
...@@ -1089,9 +1090,10 @@ void VariationalRefinementImpl::calc(InputArray I0, InputArray I1, InputOutputAr
void VariationalRefinementImpl::calcUV(InputArray I0, InputArray I1, InputOutputArray flow_u, InputOutputArray flow_v)
{
CV_Assert(!I0.empty() && I0.channels() == 1);
CV_Assert(!I1.empty() && I1.channels() == 1);
CV_Assert(I0.sameSize(I1));
CV_Assert((I0.depth() == CV_8U && I1.depth() == CV_8U) || (I0.depth() == CV_32F && I1.depth() == CV_32F));
CV_Assert(!flow_u.empty() && flow_u.depth() == CV_32F && flow_u.channels() == 1);
CV_Assert(!flow_v.empty() && flow_v.depth() == CV_32F && flow_v.channels() == 1);
CV_Assert(I0.sameSize(flow_u));
......
...@@ -92,7 +92,7 @@ grouping horizontally aligned text, and the method proposed by Lluis Gomez and D
in [Gomez13][Gomez14] for grouping arbitrarily oriented text (see erGrouping).
To see the text detector at work, have a look at the textdetection demo:
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp>
@defgroup text_recognize Scene Text Recognition
@}
......
...@@ -345,7 +345,7 @@ single vector\<Point\>, the function separates them in two different vectors (th
ERStats were extracted from two different channels).
An example of MSERsToERStats in use can be found in the text detection webcam_demo:
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
*/
CV_EXPORTS void MSERsToERStats(InputArray image, std::vector<std::vector<Point> > &contours,
std::vector<std::vector<ERStat> > &regions);
......
...@@ -81,10 +81,10 @@ Notice that it is compiled only when tesseract-ocr is correctly installed.
@note
- (C++) An example of OCRTesseract recognition combined with scene text detection can be found
at the end_to_end_recognition demo:
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/end_to_end_recognition.cpp>
- (C++) Another example of OCRTesseract recognition combined with scene text detection can be
found at the webcam_demo:
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
*/
class CV_EXPORTS_W OCRTesseract : public BaseOCR
{
...@@ -152,7 +152,7 @@ enum decoder_mode
@note
- (C++) An example on using OCRHMMDecoder recognition combined with scene text detection can
be found at the webcam_demo sample:
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
*/
class CV_EXPORTS_W OCRHMMDecoder : public BaseOCR
{
...@@ -165,7 +165,7 @@ public:
The default character classifier and feature extractor can be loaded using the utility function
loadOCRHMMClassifierNM and KNN model provided in
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_knn_model_data.xml.gz>.
*/
class CV_EXPORTS_W ClassifierCallback
{
...@@ -321,7 +321,7 @@ CV_EXPORTS_W Ptr<OCRHMMDecoder::ClassifierCallback> loadOCRHMMClassifierCNN(cons
* The function calculates frequency statistics of character pairs from the given lexicon and fills the output transition_probabilities_table with them. The transition_probabilities_table can be used as input in the OCRHMMDecoder::create() and OCRBeamSearchDecoder::create() methods.
* @note
* - (C++) An alternative would be to load the default generic language transition table provided in the text module samples folder (created from ispell 42869 English words list):
* <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_transitions_table.xml>
**/
CV_EXPORTS void createOCRHMMTransitionsTable(std::string& vocabulary, std::vector<std::string>& lexicon, OutputArray transition_probabilities_table);
...@@ -335,7 +335,7 @@ CV_EXPORTS_W Mat createOCRHMMTransitionsTable(const String& vocabulary, std::vec
@note
- (C++) An example on using OCRBeamSearchDecoder recognition combined with scene text detection can
be found at the demo sample:
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/word_recognition.cpp>
*/
class CV_EXPORTS_W OCRBeamSearchDecoder : public BaseOCR
{
...@@ -348,7 +348,7 @@ public:
The default character classifier and feature extractor can be loaded using the utility function
loadOCRBeamSearchClassifierCNN with all its parameters provided in
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRBeamSearch_CNN_model_data.xml.gz>.
*/
class CV_EXPORTS_W ClassifierCallback
{
......
...@@ -2820,12 +2820,12 @@ bool guo_hall_thinning(const Mat1b & img, Mat& skeleton)
p8 = (skeleton.data[row * skeleton.cols + col-1]) > 0;
p9 = (skeleton.data[(row-1) * skeleton.cols + col-1]) > 0;
int C = (!p2 && (p3 || p4)) + (!p4 && (p5 || p6)) +
        (!p6 && (p7 || p8)) + (!p8 && (p9 || p2));
int N1 = (p9 || p2) + (p3 || p4) + (p5 || p6) + (p7 || p8);
int N2 = (p2 || p3) + (p4 || p5) + (p6 || p7) + (p8 || p9);
int N = N1 < N2 ? N1 : N2;
int m = iter == 0 ? ((p6 || p7 || !p9) && p8) : ((p2 || p3 || !p5) && p4);
if ((C == 1) && (N >= 2) && (N <= 3) && (m == 0))
{
......
...@@ -1206,7 +1206,7 @@ the output transition_probabilities_table with them.
The transition_probabilities_table can be used as input in the OCRHMMDecoder::create() and OCRBeamSearchDecoder::create() methods.
@note
- (C++) An alternative would be to load the default generic language transition table provided in the text module samples folder (created from ispell 42869 English words list):
<https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_transitions_table.xml>
*/
void createOCRHMMTransitionsTable(string& vocabulary, vector<string>& lexicon, OutputArray _transitions)
{
......
...@@ -28,8 +28,8 @@ Explanation
as shown in help. In the help, it means that the image files are numbered with 4 digits
(e.g. the file naming will be 0001.jpg, 0002.jpg, and so on).
You can find video samples in opencv_extra/testdata/cv/tracking
<https://github.com/opencv/opencv_extra/tree/master/testdata/cv/tracking>
-# **Declares the required variables**
......
...@@ -9,3 +9,4 @@ Extended Image Processing
6. Superpixels
7. Graph segmentation
8. Selective search from segmentation
10. Paillou Filter
...@@ -166,3 +166,13 @@
year={2014},
organization={IEEE}
}
@article{paillou1997detecting,
title={Detecting step edges in noisy SAR images: a new linear operator},
author={Paillou, Philippe},
journal={IEEE transactions on geoscience and remote sensing},
volume={35},
number={1},
pages={191--196},
year={1997}
}
...@@ -48,6 +48,8 @@
#include "ximgproc/weighted_median_filter.hpp"
#include "ximgproc/slic.hpp"
#include "ximgproc/lsc.hpp"
#include "ximgproc/paillou_filter.hpp"
/** @defgroup ximgproc Extended Image Processing
@{
......
/*
* By downloading, copying, installing or using the software you agree to this license.
* If you do not agree to this license, do not download, install,
* copy or use the software.
*
*
* License Agreement
* For Open Source Computer Vision Library
* (3 - clause BSD License)
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met :
*
* *Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and / or other materials provided with the distribution.
*
* * Neither the names of the copyright holders nor the names of the contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* This software is provided by the copyright holders and contributors "as is" and
* any express or implied warranties, including, but not limited to, the implied
* warranties of merchantability and fitness for a particular purpose are disclaimed.
* In no event shall copyright holders or contributors be liable for any direct,
* indirect, incidental, special, exemplary, or consequential damages
* (including, but not limited to, procurement of substitute goods or services;
* loss of use, data, or profits; or business interruption) however caused
* and on any theory of liability, whether in contract, strict liability,
* or tort(including negligence or otherwise) arising in any way out of
* the use of this software, even if advised of the possibility of such damage.
*/
#ifndef __OPENCV_PAILLOUFILTER_HPP__
#define __OPENCV_PAILLOUFILTER_HPP__
#ifdef __cplusplus
#include <opencv2/core.hpp>
namespace cv {
namespace ximgproc {
//! @addtogroup ximgproc_filters
//! @{
/**
* @brief Applies Paillou filter to an image.
*
* For more details about this implementation, please see @cite paillou1997detecting
*
* @param op Source 8-bit or 16-bit image, 1-channel or 3-channel image.
* @param _dst Result CV_32F image with the same number of channels as op.
* @param omega double, see the paper
* @param alpha double, see the paper
*
* @sa GradientPaillouX, GradientPaillouY
*/
CV_EXPORTS void GradientPaillouY(InputArray op, OutputArray _dst, double alpha, double omega);
CV_EXPORTS void GradientPaillouX(InputArray op, OutputArray _dst, double alpha, double omega);
}
}
#endif
#endif
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/ximgproc.hpp>
#include "opencv2/ximgproc/paillou_filter.hpp"
using namespace cv;
using namespace cv::ximgproc;
#include <iostream>
using namespace std;
int aa = 100, ww = 10;
Mat dx, dy;
UMat img;
const char* window_name = "Gradient Modulus";
static void DisplayImage(Mat x,string s)
{
vector<Mat> sx;
split(x, sx);
vector<double> minVal(3), maxVal(3);
for (int i = 0; i < static_cast<int>(sx.size()); i++)
{
minMaxLoc(sx[i], &minVal[i], &maxVal[i]);
}
maxVal[0] = *max_element(maxVal.begin(), maxVal.end());
minVal[0] = *min_element(minVal.begin(), minVal.end());
Mat uc;
x.convertTo(uc, CV_8U,255/(maxVal[0]-minVal[0]),-255*minVal[0]/(maxVal[0]-minVal[0]));
imshow(s, uc);
}
/**
* @function PaillouFilter
* @brief Trackbar callback
*/
static void PaillouFilter(int, void*)
{
Mat dst;
double a=aa/100.0,w=ww/100.0;
Mat rx,ry;
GradientPaillouX(img,rx,a,w);
GradientPaillouY(img,ry,a,w);
DisplayImage(rx, "Gx");
DisplayImage(ry, "Gy");
add(rx.mul(rx),ry.mul(ry),dst);
sqrt(dst,dst);
DisplayImage(dst, window_name );
}
int main(int argc, char* argv[])
{
    if (argc == 2)
        imread(argv[1]).copyTo(img);
    if (img.empty())
    {
        cout << "File not found or empty image\n";
        return -1;
    }
imshow("Original",img);
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Create a Trackbar for user to enter threshold
createTrackbar( "a:",window_name, &aa, 400, PaillouFilter );
createTrackbar( "w:", window_name, &ww, 400, PaillouFilter );
PaillouFilter(0,NULL);
waitKey();
return 0;
}
/**************************************************************************************
The structured edge demo requires you to provide a model.
This model can be found at the opencv_extra repository on GitHub at the following link:
https://github.com/opencv/opencv_extra/blob/master/testdata/cv/ximgproc/model.yml.gz
***************************************************************************************/
#include <opencv2/ximgproc.hpp>
......
...@@ -27,7 +27,7 @@ Source Stereoscopic Image
Source Code
-----------
We will be using snippets from the example application, which can be downloaded [here](https://github.com/opencv/opencv_contrib/blob/master/modules/ximgproc/samples/disparity_filtering.cpp).
Explanation
-----------
......
...@@ -19,7 +19,7 @@ file(GLOB ${the_target}_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)
add_executable(${the_target} ${${the_target}_SOURCES})
ocv_target_link_libraries(${the_target} ${OPENCV_${the_target}_DEPS})
set_target_properties(${the_target} PROPERTIES
DEBUG_POSTFIX "${OPENCV_DEBUG_POSTFIX}"
......