Commit bb6f65c1 authored by Suleyman TURKMEN's avatar Suleyman TURKMEN

Update documentation ( tutorials )

parent 55d09451
@@ -25,51 +25,13 @@ By varying \f$\alpha\f$ from \f$0 \rightarrow 1\f$ this operator can be used to
*cross-dissolve* between two images or videos, as seen in slide shows and film productions (cool,
eh?)

Source Code
-----------
As usual, after the not-so-lengthy explanation, let's go to the code:
@code{.cpp}
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
int main( int argc, char** argv )
{
double alpha = 0.5; double beta; double input;
Mat src1, src2, dst;
/// Ask the user enter alpha
std::cout<<" Simple Linear Blender "<<std::endl;
std::cout<<"-----------------------"<<std::endl;
std::cout<<"* Enter alpha [0-1]: ";
std::cin>>input;
/// We use the alpha provided by the user if it is between 0 and 1
if( input >= 0.0 && input <= 1.0 )
{ alpha = input; }
/// Read image ( same size, same type )
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
/// Create Windows
namedWindow("Linear Blend", 1);
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst ); Download the source code from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/core/AddingImages/AddingImages.cpp).
@include cpp/tutorial_code/core/AddingImages/AddingImages.cpp
waitKey(0);
return 0;
}
@endcode
Explanation
-----------

@@ -78,25 +40,21 @@ Explanation

\f[g(x) = (1 - \alpha)f_{0}(x) + \alpha f_{1}(x)\f]

We need two source images (\f$f_{0}(x)\f$ and \f$f_{1}(x)\f$). So, we load them in the usual way:
@snippet cpp/tutorial_code/core/AddingImages/AddingImages.cpp load
@code{.cpp}
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
@endcode
**warning**

Since we are *adding* *src1* and *src2*, they both have to be of the same size (width and
height) and type.

-# Now we need to generate the `g(x)` image. For this, the function @ref cv::addWeighted comes in quite handy:
@snippet cpp/tutorial_code/core/AddingImages/AddingImages.cpp blend_images
@code{.cpp}
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
@endcode
since @ref cv::addWeighted produces:

\f[dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma\f]

In this case, `gamma` is the argument \f$0.0\f$ in the code above.

-# Create windows, show the images and wait for the user to end the program (a minimal standalone sketch of the whole pipeline follows this list).
@snippet cpp/tutorial_code/core/AddingImages/AddingImages.cpp display
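For reference, here is a minimal, self-contained sketch of the whole blending pipeline. The image paths and the window title are placeholders for this example, not part of the sample:
@code{.cpp}
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    double alpha = 0.5;                       // blending weight for the first image
    cv::Mat src1 = cv::imread("img1.jpg");    // placeholder paths
    cv::Mat src2 = cv::imread("img2.jpg");    // must match src1 in size and type
    if( src1.empty() || src2.empty() )
    {
        std::cout << "Could not load the input images" << std::endl;
        return -1;
    }
    cv::Mat dst;
    // dst = alpha*src1 + (1 - alpha)*src2 + 0
    cv::addWeighted( src1, alpha, src2, 1.0 - alpha, 0.0, dst );
    cv::imshow( "Linear Blend", dst );
    cv::waitKey(0);
    return 0;
}
@endcode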
Result
------

...
@@ -48,69 +48,26 @@ Code
- This code is in your OpenCV sample folder. Otherwise you can grab it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/core/Matrix/Drawing_1.cpp)
@include samples/cpp/tutorial_code/core/Matrix/Drawing_1.cpp
Explanation
-----------

-# Since we plan to draw two examples (an atom and a rook), we have to create two images and two
windows to display them.
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp create_images
@code{.cpp}
/// Windows names
char atom_window[] = "Drawing 1: Atom";
char rook_window[] = "Drawing 2: Rook";
/// Create black empty images
Mat atom_image = Mat::zeros( w, w, CV_8UC3 );
Mat rook_image = Mat::zeros( w, w, CV_8UC3 );
@endcode
-# We created functions to draw different geometric shapes. For instance, to draw the atom we used
*MyEllipse* and *MyFilledCircle*:
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp draw_atom
@code{.cpp}
/// 1. Draw a simple atom:
/// 1.a. Creating ellipses
MyEllipse( atom_image, 90 );
MyEllipse( atom_image, 0 );
MyEllipse( atom_image, 45 );
MyEllipse( atom_image, -45 );
/// 1.b. Creating circles
MyFilledCircle( atom_image, Point( w/2.0, w/2.0) );
@endcode
-# And to draw the rook we employed *MyLine*, *rectangle* and *MyPolygon*:
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp draw_rook
@code{.cpp}
/// 2. Draw a rook
/// 2.a. Create a convex polygon
MyPolygon( rook_image );
/// 2.b. Creating rectangles
rectangle( rook_image,
Point( 0, 7*w/8.0 ),
Point( w, w),
Scalar( 0, 255, 255 ),
-1,
8 );
/// 2.c. Create a few lines
MyLine( rook_image, Point( 0, 15*w/16 ), Point( w, 15*w/16 ) );
MyLine( rook_image, Point( w/4, 7*w/8 ), Point( w/4, w ) );
MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) );
MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) );
@endcode
-# Let's check what is inside each of these functions:
- *MyLine*
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp myline
@code{.cpp}
void MyLine( Mat img, Point start, Point end )
{
int thickness = 2;
int lineType = 8;
line( img, start, end,
Scalar( 0, 0, 0 ),
thickness,
lineType );
}
@endcode
As we can see, *MyLine* just calls the function @ref cv::line , which does the following:

- Draw a line from Point **start** to Point **end**

@@ -120,95 +77,31 @@ Explanation
- The line thickness is set to **thickness** (in this case 2)
- The line is an 8-connected one (**lineType** = 8)
- *MyEllipse*
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp myellipse
@code{.cpp}
void MyEllipse( Mat img, double angle )
{
int thickness = 2;
int lineType = 8;
ellipse( img,
Point( w/2.0, w/2.0 ),
Size( w/4.0, w/16.0 ),
angle,
0,
360,
Scalar( 255, 0, 0 ),
thickness,
lineType );
}
@endcode
From the code above, we can observe that the function @ref cv::ellipse draws an ellipse such
that:

- The ellipse is displayed in the image **img**
- The ellipse center is located at the point **(w/2, w/2)** and is enclosed in a box
of size **(w/4, w/16)**
- The ellipse is rotated **angle** degrees
- The ellipse extends an arc between **0** and **360** degrees
- The color of the figure will be **Scalar( 255, 0, 0)**, which means blue in BGR value.
- The ellipse's **thickness** is 2.
- *MyFilledCircle*
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp myfilledcircle
@code{.cpp}
void MyFilledCircle( Mat img, Point center )
{
int thickness = -1;
int lineType = 8;
circle( img,
center,
w/32.0,
Scalar( 0, 0, 255 ),
thickness,
lineType );
}
@endcode
Similar to the ellipse function, we can observe that *circle* receives as arguments:

- The image where the circle will be displayed (**img**)
- The center of the circle denoted as the Point **center**
- The radius of the circle: **w/32**
- The color of the circle: **Scalar(0, 0, 255)**, which means *red* in BGR
- Since **thickness** = -1, the circle will be drawn filled.
- *MyPolygon*
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp mypolygon
@code{.cpp}
void MyPolygon( Mat img )
{
int lineType = 8;
/* Create some points */
Point rook_points[1][20];
rook_points[0][0] = Point( w/4.0, 7*w/8.0 );
rook_points[0][1] = Point( 3*w/4.0, 7*w/8.0 );
rook_points[0][2] = Point( 3*w/4.0, 13*w/16.0 );
rook_points[0][3] = Point( 11*w/16.0, 13*w/16.0 );
rook_points[0][4] = Point( 19*w/32.0, 3*w/8.0 );
rook_points[0][5] = Point( 3*w/4.0, 3*w/8.0 );
rook_points[0][6] = Point( 3*w/4.0, w/8.0 );
rook_points[0][7] = Point( 26*w/40.0, w/8.0 );
rook_points[0][8] = Point( 26*w/40.0, w/4.0 );
rook_points[0][9] = Point( 22*w/40.0, w/4.0 );
rook_points[0][10] = Point( 22*w/40.0, w/8.0 );
rook_points[0][11] = Point( 18*w/40.0, w/8.0 );
rook_points[0][12] = Point( 18*w/40.0, w/4.0 );
rook_points[0][13] = Point( 14*w/40.0, w/4.0 );
rook_points[0][14] = Point( 14*w/40.0, w/8.0 );
rook_points[0][15] = Point( w/4.0, w/8.0 );
rook_points[0][16] = Point( w/4.0, 3*w/8.0 );
rook_points[0][17] = Point( 13*w/32.0, 3*w/8.0 );
rook_points[0][18] = Point( 5*w/16.0, 13*w/16.0 );
rook_points[0][19] = Point( w/4.0, 13*w/16.0) ;
const Point* ppt[1] = { rook_points[0] };
int npt[] = { 20 };
fillPoly( img,
ppt,
npt,
1,
Scalar( 255, 255, 255 ),
lineType );
}
@endcode
To draw a filled polygon we use the function @ref cv::fillPoly . We note that:

- The polygon will be drawn on **img**

@@ -218,22 +111,17 @@ Explanation
- The color of the polygon is defined by **Scalar( 255, 255, 255)**, which is the BGR
value for *white*
- *rectangle*
@snippet cpp/tutorial_code/core/Matrix/Drawing_1.cpp rectangle
@code{.cpp}
rectangle( rook_image,
Point( 0, 7*w/8.0 ),
Point( w, w),
Scalar( 0, 255, 255 ),
-1, 8 );
@endcode
Finally we have the @ref cv::rectangle function (we did not create a special function for
this guy). We note that:

- The rectangle will be drawn on **rook_image**
- Two opposite vertices of the rectangle are defined by **Point( 0, 7*w/8 )**
and **Point( w, w )**
- The color of the rectangle is given by **Scalar(0, 255, 255)**, which is the BGR value
for *yellow*
- Since the thickness value is given by **FILLED (-1)**, the rectangle will be filled.

A short standalone sketch combining several of these drawing calls is given below.
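As a quick illustration of the calls discussed above, the following standalone sketch draws an ellipse, a filled circle and a filled rectangle on a blank canvas. The canvas size and window name are arbitrary choices for this example:
@code{.cpp}
#include <opencv2/opencv.hpp>

int main()
{
    const int w = 400;                                    // canvas size, arbitrary
    cv::Mat canvas = cv::Mat::zeros( w, w, CV_8UC3 );     // black BGR image

    // Blue ellipse centered in the canvas, rotated 45 degrees
    cv::ellipse( canvas, cv::Point( w/2, w/2 ), cv::Size( w/4, w/16 ),
                 45, 0, 360, cv::Scalar( 255, 0, 0 ), 2, cv::LINE_8 );

    // Red filled circle (thickness = -1 means filled)
    cv::circle( canvas, cv::Point( w/2, w/2 ), w/32, cv::Scalar( 0, 0, 255 ), -1, cv::LINE_8 );

    // Yellow filled rectangle along the bottom of the canvas
    cv::rectangle( canvas, cv::Point( 0, 7*w/8 ), cv::Point( w, w ),
                   cv::Scalar( 0, 255, 255 ), cv::FILLED );

    cv::imshow( "Drawing sketch", canvas );
    cv::waitKey(0);
    return 0;
}
@endcode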
Result
------

...
@@ -17,105 +17,8 @@ Code

This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerSubPix_Demo.cpp)
@include samples/cpp/tutorial_code/TrackingMotion/cornerSubPix_Demo.cpp
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int maxCorners = 10;
int maxTrackbar = 25;
RNG rng(12345);
char* source_window = "Image";
/// Function header
void goodFeaturesToTrack_Demo( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create Window
namedWindow( source_window, WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo);
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/*
* @function goodFeaturesToTrack_Demo.cpp
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
/// Draw corners detected
cout<<"** Number of corners detected: "<<corners.size()<<endl;
int r = 4;
for( int i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, WINDOW_AUTOSIZE );
imshow( source_window, copy );
/// Set the needed parameters to find the refined corners
Size winSize = Size( 5, 5 );
Size zeroZone = Size( -1, -1 );
TermCriteria criteria = TermCriteria( TermCriteria::EPS + TermCriteria::MAX_ITER, 40, 0.001 );
/// Calculate the refined corner locations
cornerSubPix( src_gray, corners, winSize, zeroZone, criteria );
/// Write them down
for( int i = 0; i < corners.size(); i++ )
{ cout<<" -- Refined Corner ["<<i<<"] ("<<corners[i].x<<","<<corners[i].y<<")"<<endl; }
}
@endcode
Explanation
-----------

...
@@ -16,95 +16,8 @@ Code

This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/goodFeaturesToTrack_Demo.cpp)
@include samples/cpp/tutorial_code/TrackingMotion/goodFeaturesToTrack_Demo.cpp
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int maxCorners = 23;
int maxTrackbar = 100;
RNG rng(12345);
char* source_window = "Image";
/// Function header
void goodFeaturesToTrack_Demo( int, void* );
/*
* @function main
*/
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create Window
namedWindow( source_window, WINDOW_AUTOSIZE );
/// Create Trackbar to set the number of corners
createTrackbar( "Max corners:", source_window, &maxCorners, maxTrackbar, goodFeaturesToTrack_Demo );
imshow( source_window, src );
goodFeaturesToTrack_Demo( 0, 0 );
waitKey(0);
return(0);
}
/*
* @function goodFeaturesToTrack_Demo.cpp
* @brief Apply Shi-Tomasi corner detector
*/
void goodFeaturesToTrack_Demo( int, void* )
{
if( maxCorners < 1 ) { maxCorners = 1; }
/// Parameters for Shi-Tomasi algorithm
vector<Point2f> corners;
double qualityLevel = 0.01;
double minDistance = 10;
int blockSize = 3;
bool useHarrisDetector = false;
double k = 0.04;
/// Copy the source image
Mat copy;
copy = src.clone();
/// Apply corner detection
goodFeaturesToTrack( src_gray,
corners,
maxCorners,
qualityLevel,
minDistance,
Mat(),
blockSize,
useHarrisDetector,
k );
/// Draw corners detected
cout<<"** Number of corners detected: "<<corners.size()<<endl;
int r = 4;
for( size_t i = 0; i < corners.size(); i++ )
{ circle( copy, corners[i], r, Scalar(rng.uniform(0,255), rng.uniform(0,255),
rng.uniform(0,255)), -1, 8, 0 ); }
/// Show what you got
namedWindow( source_window, WINDOW_AUTOSIZE );
imshow( source_window, copy );
}
@endcode
Explanation
-----------

...
@@ -120,79 +120,8 @@ Code

This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/TrackingMotion/cornerHarris_Demo.cpp)
@include samples/cpp/tutorial_code/TrackingMotion/cornerHarris_Demo.cpp
@code{.cpp}
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
/// Global variables
Mat src, src_gray;
int thresh = 200;
int max_thresh = 255;
char* source_window = "Source image";
char* corners_window = "Corners detected";
/// Function header
void cornerHarris_demo( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
cvtColor( src, src_gray, COLOR_BGR2GRAY );
/// Create a window and a trackbar
namedWindow( source_window, WINDOW_AUTOSIZE );
createTrackbar( "Threshold: ", source_window, &thresh, max_thresh, cornerHarris_demo );
imshow( source_window, src );
cornerHarris_demo( 0, 0 );
waitKey(0);
return(0);
}
/* @function cornerHarris_demo */
void cornerHarris_demo( int, void* )
{
Mat dst, dst_norm, dst_norm_scaled;
dst = Mat::zeros( src.size(), CV_32FC1 );
/// Detector parameters
int blockSize = 2;
int apertureSize = 3;
double k = 0.04;
/// Detecting corners
cornerHarris( src_gray, dst, blockSize, apertureSize, k, BORDER_DEFAULT );
/// Normalizing
normalize( dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
convertScaleAbs( dst_norm, dst_norm_scaled );
/// Drawing a circle around corners
for( int j = 0; j < dst_norm.rows ; j++ )
{ for( int i = 0; i < dst_norm.cols; i++ )
{
if( (int) dst_norm.at<float>(j,i) > thresh )
{
circle( dst_norm_scaled, Point( i, j ), 5, Scalar(0), 2, 8, 0 );
}
}
}
/// Showing the result
namedWindow( corners_window, WINDOW_AUTOSIZE );
imshow( corners_window, dst_norm_scaled );
}
@endcode
Explanation
-----------

...
@@ -4,7 +4,7 @@ Adding a Trackbar to our applications! {#tutorial_trackbar}
- In the previous tutorials (about *linear blending* and the *brightness and contrast
adjustments*) you might have noted that we needed to give some **input** to our programs, such
as \f$\alpha\f$ and \f$\beta\f$. We accomplished that by entering this data using the Terminal.
- Well, it is time to use some fancy GUI tools. OpenCV provides some GUI utilities (*highgui.hpp*)
for you. An example of this is a **Trackbar**.

![](images/Adding_Trackbars_Tutorial_Trackbar.png)

@@ -24,104 +24,36 @@ Code
Let's modify the program made in the tutorial @ref tutorial_adding_images. We will let the user enter the
\f$\alpha\f$ value by using the Trackbar.

This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/HighGUI/AddingImagesTrackbar.cpp)
@include cpp/tutorial_code/HighGUI/AddingImagesTrackbar.cpp
@code{.cpp}
#include <opencv2/opencv.hpp>
using namespace cv;
/// Global Variables
const int alpha_slider_max = 100;
int alpha_slider;
double alpha;
double beta;
/// Matrices to store images
Mat src1;
Mat src2;
Mat dst;
/*
* @function on_trackbar
* @brief Callback for trackbar
*/
void on_trackbar( int, void* )
{
alpha = (double) alpha_slider/alpha_slider_max ;
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
}
int main( int argc, char** argv )
{
/// Read image ( same size, same type )
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
/// Initialize values
alpha_slider = 0;
/// Create Windows
namedWindow("Linear Blend", 1);
/// Create Trackbars
char TrackbarName[50];
sprintf( TrackbarName, "Alpha x %d", alpha_slider_max );
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
/// Show some stuff
on_trackbar( alpha_slider, 0 );
/// Wait until user press some key
waitKey(0);
return 0;
}
@endcode
Explanation
-----------

We only analyze the code that is related to the Trackbar:

-# First, we load two images, which are going to be blended.
@snippet cpp/tutorial_code/HighGUI/AddingImagesTrackbar.cpp load
@code{.cpp}
src1 = imread("../../images/LinuxLogo.jpg");
src2 = imread("../../images/WindowsLogo.jpg");
@endcode
-# To create a trackbar, first we have to create the window in which it is going to be located. So:
@snippet cpp/tutorial_code/HighGUI/AddingImagesTrackbar.cpp window
@code{.cpp}
namedWindow("Linear Blend", 1);
@endcode
-# Now we can create the Trackbar:
@snippet cpp/tutorial_code/HighGUI/AddingImagesTrackbar.cpp create_trackbar
@code{.cpp}
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
@endcode
Note the following:

- Our Trackbar has a label **TrackbarName**
- The Trackbar is located in the window named **Linear Blend**
- The Trackbar values will be in the range from \f$0\f$ to **alpha_slider_max** (the minimum
limit is always **zero**).
- The numerical value of the Trackbar is stored in **alpha_slider**
- Whenever the user moves the Trackbar, the callback function **on_trackbar** is called

-# Finally, we have to define the callback function **on_trackbar**
@snippet cpp/tutorial_code/HighGUI/AddingImagesTrackbar.cpp on_trackbar
@code{.cpp}
void on_trackbar( int, void* )
{
alpha = (double) alpha_slider/alpha_slider_max ;
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
}
@endcode
Note that:

- We use the value of **alpha_slider** (integer) to get a double value for **alpha**.
- **alpha_slider** is updated each time the trackbar is displaced by the user.

A minimal standalone sketch of the trackbar-driven blend is given below.
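For readers who do not have the sample file at hand, a minimal sketch of the trackbar-driven blend might look like the following. The image paths are placeholders:
@code{.cpp}
#include <opencv2/opencv.hpp>
#include <cstdio>

const int alpha_slider_max = 100;
int alpha_slider = 0;
cv::Mat src1, src2, dst;

// Called every time the slider position changes
static void on_trackbar( int, void* )
{
    double alpha = (double) alpha_slider / alpha_slider_max;
    cv::addWeighted( src1, alpha, src2, 1.0 - alpha, 0.0, dst );
    cv::imshow( "Linear Blend", dst );
}

int main()
{
    src1 = cv::imread("img1.jpg");   // placeholder paths; same size and type required
    src2 = cv::imread("img2.jpg");
    if( src1.empty() || src2.empty() ) { std::printf("Error loading images\n"); return -1; }

    cv::namedWindow( "Linear Blend", cv::WINDOW_AUTOSIZE );
    cv::createTrackbar( "Alpha x 100", "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
    on_trackbar( 0, 0 );             // show the initial blend
    cv::waitKey(0);
    return 0;
}
@endcode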
@@ -135,7 +67,7 @@ Result

![](images/Adding_Trackbars_Tutorial_Result_0.jpg)

- As a manner of practice, you can also add two trackbars for the program made in
@ref tutorial_basic_linear_transform. One trackbar to set \f$\alpha\f$ and another for \f$\beta\f$. The output might
look like:

...
@@ -35,17 +35,13 @@ How to Read Raster Data using GDAL

This demonstration uses the default OpenCV imread function. The primary difference is that in order
to force GDAL to load the image, you must use the appropriate flag.
@snippet cpp/tutorial_code/imgcodecs/GDAL_IO/gdal-image.cpp load1
@code{.cpp}
cv::Mat image = cv::imread( argv[1], cv::IMREAD_LOAD_GDAL );
@endcode
When loading digital elevation models, the actual numeric value of each pixel is essential and
cannot be scaled or truncated. For example, with image data a pixel represented as a double with a
value of 1 has an equal appearance to a pixel which is represented as an unsigned character with a
value of 255. With terrain data, the pixel value represents the elevation in meters. In order to
ensure that OpenCV preserves the native value, use the GDAL flag in imread with the ANYDEPTH flag.
@snippet cpp/tutorial_code/imgcodecs/GDAL_IO/gdal-image.cpp load2
@code{.cpp}
cv::Mat dem = cv::imread( argv[2], cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH );
@endcode
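A minimal sketch of the two load calls described above. The file names are placeholders, and GDAL support must be compiled into OpenCV for IMREAD_LOAD_GDAL to take effect; the assertion is only an illustration of the type check discussed next:
@code{.cpp}
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Ordinary raster image forced through the GDAL decoder
    cv::Mat image = cv::imread( "overlay.tif", cv::IMREAD_LOAD_GDAL );

    // Digital elevation model: keep the native depth so elevation values are not truncated
    cv::Mat dem = cv::imread( "terrain.dem", cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH );

    if( image.empty() || dem.empty() ) { std::cout << "Failed to load inputs" << std::endl; return -1; }

    // Sanity check on the DEM depth; elevation data is commonly stored as 16-bit integers or floats
    CV_Assert( dem.depth() == CV_16S || dem.depth() == CV_16U || dem.depth() == CV_32F );
    std::cout << "DEM type: " << dem.type() << std::endl;
    return 0;
}
@endcode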
If you know beforehand the type of DEM model you are loading, then it may be a safe bet to test the
Mat::type() or Mat::depth() using an assert or other mechanism. NASA or DOD specification documents
can provide the input types for various elevation models. The major types, SRTM and DTED, are both
...
@@ -71,7 +71,7 @@ Explanation

- Load an image (can be BGR or grayscale)
- Create two windows (one for dilation output, the other for erosion)
- Create a set of two Trackbars for each operation:
- The first trackbar "Element" returns either **erosion_elem** or **dilation_elem**
- The second trackbar "Kernel size" returns **erosion_size** or **dilation_size** for the
corresponding operation.

@@ -81,23 +81,8 @@ Explanation
Let's analyze these two functions:

-# **erosion:**
@snippet cpp/tutorial_code/ImgProc/Morphology_1.cpp erosion
@code{.cpp}
/* @function Erosion */
void Erosion( int, void* )
{
int erosion_type;
if( erosion_elem == 0 ){ erosion_type = MORPH_RECT; }
else if( erosion_elem == 1 ){ erosion_type = MORPH_CROSS; }
else if( erosion_elem == 2) { erosion_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
/// Apply the erosion operation
erode( src, erosion_dst, element );
imshow( "Erosion Demo", erosion_dst );
}
@endcode
- The function that performs the *erosion* operation is @ref cv::erode . As we can see, it
receives three arguments:
- *src*: The source image

@@ -105,11 +90,8 @@ Explanation
- *element*: This is the kernel we will use to perform the operation. If we do not
specify it, the default is a simple `3x3` matrix. Otherwise, we can specify its
shape. For this, we need to use the function cv::getStructuringElement :
@snippet cpp/tutorial_code/ImgProc/Morphology_1.cpp kernel
@code{.cpp}
Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) );
@endcode
We can choose any of three shapes for our kernel:

- Rectangular box: MORPH_RECT

@@ -129,23 +111,7 @@ Reference for more details.

The code is below. As you can see, it is completely similar to the snippet of code for **erosion**.
Here we also have the option of defining our kernel, its anchor point and the size of the operator
to be used (a minimal standalone sketch of both operations follows the code below).
@snippet cpp/tutorial_code/ImgProc/Morphology_1.cpp dilation
@code{.cpp}
/* @function Dilation */
void Dilation( int, void* )
{
int dilation_type;
if( dilation_elem == 0 ){ dilation_type = MORPH_RECT; }
else if( dilation_elem == 1 ){ dilation_type = MORPH_CROSS; }
else if( dilation_elem == 2) { dilation_type = MORPH_ELLIPSE; }
Mat element = getStructuringElement( dilation_type,
Size( 2*dilation_size + 1, 2*dilation_size+1 ),
Point( dilation_size, dilation_size ) );
/// Apply the dilation operation
dilate( src, dilation_dst, element );
imshow( "Dilation Demo", dilation_dst );
}
@endcode
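As mentioned above, a minimal standalone sketch of both operations with a user-defined kernel could look like this. The input path and kernel size are placeholders:
@code{.cpp}
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.jpg");        // placeholder path
    if( src.empty() ) return -1;

    int size = 3;                                 // element is (2*size+1) x (2*size+1)
    cv::Mat element = cv::getStructuringElement( cv::MORPH_RECT,
                                                 cv::Size( 2*size + 1, 2*size + 1 ),
                                                 cv::Point( size, size ) );
    cv::Mat eroded, dilated;
    cv::erode( src, eroded, element );
    cv::dilate( src, dilated, element );

    cv::imshow( "Erosion", eroded );
    cv::imshow( "Dilation", dilated );
    cv::waitKey(0);
    return 0;
}
@endcode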
Results
-------

...
@@ -16,8 +16,7 @@ Theory
------

@note The explanation below belongs to the book [Computer Vision: Algorithms and
Applications](http://szeliski.org/Book/) by Richard Szeliski and to *LearningOpenCV*

- *Smoothing*, also called *blurring*, is a simple and frequently used image processing
operation.
@@ -96,96 +95,7 @@ Code

- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Smoothing.cpp)
- **Code at glance:**
@include samples/cpp/tutorial_code/ImgProc/Smoothing.cpp
@code{.cpp}
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
using namespace std;
using namespace cv;
/// Global Variables
int DELAY_CAPTION = 1500;
int DELAY_BLUR = 100;
int MAX_KERNEL_LENGTH = 31;
Mat src; Mat dst;
char window_name[] = "Filter Demo 1";
/// Function headers
int display_caption( char* caption );
int display_dst( int delay );
/*
* function main
*/
int main( int argc, char** argv )
{
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Load the source image
src = imread( "../images/lena.jpg", 1 );
if( display_caption( "Original Image" ) != 0 ) { return 0; }
dst = src.clone();
if( display_dst( DELAY_CAPTION ) != 0 ) { return 0; }
/// Applying Homogeneous blur
if( display_caption( "Homogeneous Blur" ) != 0 ) { return 0; }
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ blur( src, dst, Size( i, i ), Point(-1,-1) );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
/// Applying Gaussian blur
if( display_caption( "Gaussian Blur" ) != 0 ) { return 0; }
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ GaussianBlur( src, dst, Size( i, i ), 0, 0 );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
/// Applying Median blur
if( display_caption( "Median Blur" ) != 0 ) { return 0; }
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ medianBlur ( src, dst, i );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
/// Applying Bilateral Filter
if( display_caption( "Bilateral Blur" ) != 0 ) { return 0; }
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ bilateralFilter ( src, dst, i, i*2, i/2 );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
/// Wait until user press a key
display_caption( "End: Press a key!" );
waitKey(0);
return 0;
}
int display_caption( char* caption )
{
dst = Mat::zeros( src.size(), src.type() );
putText( dst, caption,
Point( src.cols/4, src.rows/2),
FONT_HERSHEY_COMPLEX, 1, Scalar(255, 255, 255) );
imshow( window_name, dst );
int c = waitKey( DELAY_CAPTION );
if( c >= 0 ) { return -1; }
return 0;
}
int display_dst( int delay )
{
imshow( window_name, dst );
int c = waitKey ( delay );
if( c >= 0 ) { return -1; }
return 0;
}
@endcode
Explanation
-----------

@@ -195,11 +105,8 @@ Explanation

-# **Normalized Block Filter:**
OpenCV offers the function @ref cv::blur to perform smoothing with this filter.
@snippet cpp/tutorial_code/ImgProc/Smoothing.cpp blur
@code{.cpp}
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ blur( src, dst, Size( i, i ), Point(-1,-1) );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
@endcode
We specify 4 arguments (for more details, check the Reference):

- *src*: Source image

@@ -213,11 +120,8 @@ Explanation

-# **Gaussian Filter:**
It is performed by the function @ref cv::GaussianBlur :
@snippet cpp/tutorial_code/ImgProc/Smoothing.cpp gaussianblur
@code{.cpp}
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ GaussianBlur( src, dst, Size( i, i ), 0, 0 );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
@endcode
Here we use 4 arguments (for more details, check the OpenCV reference):

- *src*: Source image

@@ -233,11 +137,8 @@ Explanation

-# **Median Filter:**
This filter is provided by the @ref cv::medianBlur function:
@snippet cpp/tutorial_code/ImgProc/Smoothing.cpp medianblur
@code{.cpp}
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ medianBlur ( src, dst, i );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
@endcode
We use three arguments:

- *src*: Source image

@@ -247,11 +148,8 @@ Explanation

-# **Bilateral Filter**
Provided by the OpenCV function @ref cv::bilateralFilter
@snippet cpp/tutorial_code/ImgProc/Smoothing.cpp bilateralfilter
@code{.cpp}
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ bilateralFilter ( src, dst, i, i*2, i/2 );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
@endcode
We use 5 arguments:

- *src*: Source image

...
@@ -81,17 +81,8 @@ Explanation
-----------

-# Create some needed variables:
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp variables
@code{.cpp}
Mat src, src_gray;
Mat dst, detected_edges;
int edgeThresh = 1;
int lowThreshold;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
char* window_name = "Edge Map";
@endcode
Note the following:

-# We establish a ratio of lower:upper threshold of 3:1 (with the variable *ratio*)

@@ -100,29 +91,16 @@ Explanation
-# We set a maximum value for the lower Threshold of \f$100\f$.
-# Load the source image:
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp load
@code{.cpp}
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
@endcode
-# Create a matrix of the same type and size as *src* (to be *dst*)
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp create_mat
@code{.cpp}
dst.create( src.size(), src.type() );
@endcode
-# Convert the image to grayscale (using the function @ref cv::cvtColor):
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp convert_to_gray
@code{.cpp}
cvtColor( src, src_gray, COLOR_BGR2GRAY );
@endcode
-# Create a window to display the results
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp create_window
@code{.cpp}
namedWindow( window_name, WINDOW_AUTOSIZE );
@endcode
-# Create a Trackbar for the user to enter the lower threshold for our Canny detector:
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp create_trackbar
@code{.cpp}
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
@endcode
Observe the following:

-# The variable to be controlled by the Trackbar is *lowThreshold* with a limit of

@@ -132,13 +110,9 @@ Explanation

-# Let's check the *CannyThreshold* function, step by step:
-# First, we blur the image with a filter of kernel size 3:
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp reduce_noise
@code{.cpp}
blur( src_gray, detected_edges, Size(3,3) );
@endcode
-# Second, we apply the OpenCV function @ref cv::Canny :
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp canny
@code{.cpp}
Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
@endcode
where the arguments are:

- *detected_edges*: Source image, grayscale

@@ -150,23 +124,16 @@ Explanation
internally)
-# We fill a *dst* image with zeros (meaning the image is completely black).
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp fill
@code{.cpp}
dst = Scalar::all(0);
@endcode
-# Finally, we will use the function @ref cv::Mat::copyTo to map only the areas of the image that are
identified as edges (on a black background).
@code{.cpp}
src.copyTo( dst, detected_edges);
@endcode
@ref cv::Mat::copyTo copies the *src* image onto *dst*. However, it will only copy the pixels in the
locations where they have non-zero values. Since the output of the Canny detector is the edge
contours on a black background, the resulting *dst* will be black in all of the area but the
detected edges.
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp copyto
-# We display our result (a minimal standalone sketch of the whole *CannyThreshold* chain follows this list):
@snippet cpp/tutorial_code/ImgTrans/CannyDetector_Demo.cpp display
@code{.cpp}
imshow( window_name, dst );
@endcode
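Putting the steps of *CannyThreshold* together, a minimal sketch of the whole chain (without the trackbar) might look like this; the input path and the threshold value are placeholders:
@code{.cpp}
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.jpg");            // placeholder path
    if( src.empty() ) return -1;

    cv::Mat src_gray, detected_edges, dst;
    int lowThreshold = 50;                            // placeholder; the sample drives this from a trackbar
    const int ratio = 3, kernel_size = 3;

    cv::cvtColor( src, src_gray, cv::COLOR_BGR2GRAY );
    cv::blur( src_gray, detected_edges, cv::Size(3,3) );                               // reduce noise first
    cv::Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );

    dst = cv::Mat::zeros( src.size(), src.type() );   // start from a black image
    src.copyTo( dst, detected_edges );                // copy only the pixels marked as edges

    cv::imshow( "Edge Map", dst );
    cv::waitKey(0);
    return 0;
}
@endcode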
Result
------

...
@@ -53,61 +53,29 @@ Explanation
-----------

-# First we declare the variables we are going to use:
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp variables
@code{.cpp}
Mat src, dst;
int top, bottom, left, right;
int borderType;
Scalar value;
char* window_name = "copyMakeBorder Demo";
RNG rng(12345);
@endcode
The variable *rng* deserves special attention; it is a random number generator. We use it to
generate the random border color, as we will see soon.

-# As usual we load our source image *src*:
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp load
@code{.cpp}
src = imread( argv[1] );
if( !src.data )
{ return -1;
printf(" No data entered, please enter the path to an image file \n");
}
@endcode
-# After giving a short intro of how to use the program, we create a window:
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp create_window
@code{.cpp}
namedWindow( window_name, WINDOW_AUTOSIZE );
@endcode
-# Now we initialize the arguments that define the size of the borders (*top*, *bottom*, *left* and
*right*). We give them a value of 5% of the size of *src*.
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp init_arguments
@code{.cpp}
top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows);
left = (int) (0.05*src.cols); right = (int) (0.05*src.cols);
@endcode
-# The program runs in a **for** loop. If the user presses 'c' or 'r', the *borderType* variable
takes the value of *BORDER_CONSTANT* or *BORDER_REPLICATE* respectively:
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp check_keypress
@code{.cpp}
while( true )
{
c = waitKey(500);
if( (char)c == 27 )
{ break; }
else if( (char)c == 'c' )
{ borderType = BORDER_CONSTANT; }
else if( (char)c == 'r' )
{ borderType = BORDER_REPLICATE; }
@endcode
-# In each iteration (after 0.5 seconds), the variable *value* is updated...
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp update_value
@code{.cpp}
value = Scalar( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
@endcode
with a random value generated by the **RNG** variable *rng*. This value is a number picked
randomly in the range \f$[0,255]\f$.

-# Finally, we call the function @ref cv::copyMakeBorder to apply the respective padding:
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp copymakeborder
@code{.cpp}
copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
@endcode
The arguments are:

-# *src*: Source image

@@ -120,9 +88,7 @@ Explanation
pixels.
-# We display our output image in the window created previously (a minimal sketch of a single call follows this list):
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp display
@code{.cpp}
imshow( window_name, dst );
@endcode
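A minimal sketch of a single call, without the interactive loop. The input path and the chosen border type are placeholders:
@code{.cpp}
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.jpg");   // placeholder path
    if( src.empty() ) return -1;

    int top  = (int)(0.05 * src.rows), bottom = top;   // 5% of the image size on each side
    int left = (int)(0.05 * src.cols), right  = left;

    cv::RNG rng(12345);
    cv::Scalar value( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );

    cv::Mat dst;
    // BORDER_CONSTANT pads with 'value'; BORDER_REPLICATE would copy the outermost rows/columns instead
    cv::copyMakeBorder( src, dst, top, bottom, left, right, cv::BORDER_CONSTANT, value );

    cv::imshow( "copyMakeBorder Demo", dst );
    cv::waitKey(0);
    return 0;
}
@endcode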
Results
-------

...
@@ -63,100 +63,25 @@ Code

-# The tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/filter2D_demo.cpp)
@include cpp/tutorial_code/ImgTrans/filter2D_demo.cpp
@code{.cpp}
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/* @function main */
int main ( int argc, char** argv )
{
/// Declare variables
Mat src, dst;
Mat kernel;
Point anchor;
double delta;
int ddepth;
int kernel_size;
char* window_name = "filter2D Demo";
int c;
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
/// Create window
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Initialize arguments for the filter
anchor = Point( -1, -1 );
delta = 0;
ddepth = -1;
/// Loop - Will filter the image with different kernel sizes each 0.5 seconds
int ind = 0;
while( true )
{
c = waitKey(500);
/// Press 'ESC' to exit the program
if( (char)c == 27 )
{ break; }
/// Update kernel size for a normalized box filter
kernel_size = 3 + 2*( ind%5 );
kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
/// Apply filter
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
imshow( window_name, dst );
ind++;
}
return 0;
}
@endcode
Explanation
-----------

-# Load an image
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp load
@code{.cpp}
src = imread( argv[1] );
if( !src.data )
{ return -1; }
@endcode
-# Create a window to display the result
@code{.cpp}
namedWindow( window_name, WINDOW_AUTOSIZE );
@endcode
-# Initialize the arguments for the linear filter
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp init_arguments
@code{.cpp}
anchor = Point( -1, -1 );
delta = 0;
ddepth = -1;
@endcode
-# Perform an infinite loop updating the kernel size and applying our linear filter to the input
image. Let's analyze that more in detail:
-# First we define the kernel our filter is going to use. Here it is:
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp update_kernel
@code{.cpp}
kernel_size = 3 + 2*( ind%5 );
kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
@endcode
The first line updates the *kernel_size* to odd values in the range \f$[3,11]\f$. The second
line actually builds the kernel by setting its value to a matrix filled with \f$1's\f$ and
normalizing it by dividing it by the number of elements.

-# After setting the kernel, we can apply the filter by using the function @ref cv::filter2D :
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp apply_filter
@code{.cpp}
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
@endcode
The arguments denote:

-# *src*: Source image

...
@@ -48,63 +48,33 @@ Explanation
-----------

-# Load an image
@snippet samples/cpp/houghcircles.cpp load
@code{.cpp}
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
@endcode
-# Convert it to grayscale:
@snippet samples/cpp/houghcircles.cpp convert_to_gray
@code{.cpp}
cvtColor( src, src_gray, COLOR_BGR2GRAY );
@endcode
-# Apply a Median blur to reduce noise and avoid false circle detection:
@snippet samples/cpp/houghcircles.cpp reduce_noise
-# Proceed to apply the Hough Circle Transform:
@snippet samples/cpp/houghcircles.cpp houghcircles
@code{.cpp}
vector<Vec3f> circles;
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );
@endcode
with the arguments:

- *gray*: Input image (grayscale).
- *circles*: A vector that stores sets of 3 values: \f$x_{c}, y_{c}, r\f$ for each detected
circle.
- *HOUGH_GRADIENT*: Defines the detection method. Currently this is the only one available in
OpenCV.
- *dp = 1*: The inverse ratio of resolution.
- *min_dist = gray.rows/16*: Minimum distance between detected centers.
- *param_1 = 200*: Upper threshold for the internal Canny edge detector.
- *param_2 = 100*: Threshold for center detection.
- *min_radius = 0*: Minimum radius to be detected. If unknown, put zero as default.
- *max_radius = 0*: Maximum radius to be detected. If unknown, put zero as default.

-# Draw the detected circles:
@snippet samples/cpp/houghcircles.cpp draw
@code{.cpp}
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
// circle center
circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
@endcode
You can see that we will draw the circle(s) in red and the center(s) with a small green dot.

-# Display the detected circle(s) and wait for the user to exit the program (a minimal standalone sketch of the whole detection pipeline follows this list):
@snippet samples/cpp/houghcircles.cpp display
@code{.cpp}
namedWindow( "Hough Circle Transform Demo", WINDOW_AUTOSIZE );
imshow( "Hough Circle Transform Demo", src );
waitKey(0);
@endcode
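A minimal sketch of the detection and drawing steps described above. The input path and parameter values are placeholders and may differ from the ones used by the sample:
@code{.cpp}
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("coins.jpg");        // placeholder path
    if( src.empty() ) return -1;

    cv::Mat gray;
    cv::cvtColor( src, gray, cv::COLOR_BGR2GRAY );
    cv::medianBlur( gray, gray, 5 );              // reduce noise to avoid false circles

    std::vector<cv::Vec3f> circles;
    cv::HoughCircles( gray, circles, cv::HOUGH_GRADIENT, 1,
                      gray.rows/16,               // minimum distance between centers
                      200, 100,                   // Canny upper threshold, accumulator threshold
                      0, 0 );                     // min/max radius: 0 = unknown

    for( size_t i = 0; i < circles.size(); i++ )
    {
        cv::Point center( cvRound(circles[i][0]), cvRound(circles[i][1]) );
        int radius = cvRound(circles[i][2]);
        cv::circle( src, center, 3, cv::Scalar(0,255,0), -1 );        // green center dot
        cv::circle( src, center, radius, cv::Scalar(0,0,255), 3 );    // red outline
    }

    cv::imshow( "Hough Circle Transform Demo", src );
    cv::waitKey(0);
    return 0;
}
@endcode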
Result
------

...
@@ -58,33 +58,15 @@ Explanation
-----------

-# Create some needed variables:
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp variables
@code{.cpp}
Mat src, src_gray, dst;
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
char* window_name = "Laplace Demo";
@endcode
-# Load the source image:
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp load
@code{.cpp}
src = imread( argv[1] );
if( !src.data )
{ return -1; }
@endcode
-# Apply a Gaussian blur to reduce noise:
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp reduce_noise
@code{.cpp}
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
@endcode
-# Convert the image to grayscale using @ref cv::cvtColor
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp convert_to_gray
@code{.cpp}
cvtColor( src, src_gray, COLOR_RGB2GRAY );
@endcode
-# Apply the Laplacian operator to the grayscale image:
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp laplacian
@code{.cpp}
Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );
@endcode
where the arguments are:

- *src_gray*: The input image.

@@ -96,13 +78,9 @@ Explanation
- *scale*, *delta* and *BORDER_DEFAULT*: We leave them as default values.
-# Convert the output from the Laplacian operator to a *CV_8U* image:
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp convert
@code{.cpp}
convertScaleAbs( dst, abs_dst );
@endcode
-# Display the result in a window (a minimal standalone sketch of the whole pipeline follows this list):
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp display
@code{.cpp}
imshow( window_name, abs_dst );
@endcode
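A minimal standalone sketch of the whole Laplacian pipeline; the input path is a placeholder:
@code{.cpp}
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.jpg");    // placeholder path
    if( src.empty() ) return -1;

    cv::Mat src_gray, dst, abs_dst;
    int kernel_size = 3, scale = 1, delta = 0, ddepth = CV_16S;

    cv::GaussianBlur( src, src, cv::Size(3,3), 0, 0, cv::BORDER_DEFAULT );   // reduce noise
    cv::cvtColor( src, src_gray, cv::COLOR_BGR2GRAY );

    // 16-bit output avoids overflow of the second derivatives
    cv::Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, cv::BORDER_DEFAULT );
    cv::convertScaleAbs( dst, abs_dst );      // back to CV_8U for display

    cv::imshow( "Laplace Demo", abs_dst );
    cv::waitKey(0);
    return 0;
}
@endcode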
Results
-------

...
@@ -115,40 +115,16 @@ Explanation
-----------

-# First we declare the variables we are going to use:
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp variables
@code{.cpp}
Mat src, src_gray;
Mat grad;
char* window_name = "Sobel Demo - Simple Edge Detector";
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
@endcode
-# As usual we load our source image *src*:
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp load
@code{.cpp}
src = imread( argv[1] );
if( !src.data )
{ return -1; }
@endcode
-# First, we apply a @ref cv::GaussianBlur to our image to reduce the noise (kernel size = 3)
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp reduce_noise
@code{.cpp}
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
@endcode
-# Now we convert our filtered image to grayscale:
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp convert_to_gray
@code{.cpp}
cvtColor( src, src_gray, COLOR_RGB2GRAY );
@endcode
-# Second, we calculate the "*derivatives*" in *x* and *y* directions. For this, we use the
function @ref cv::Sobel as shown below:
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp sobel
@code{.cpp}
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
/// Gradient Y
Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
@endcode
The function takes the following arguments: The function takes the following arguments:
- *src_gray*: In our example, the input image. Here it is *CV_8U* - *src_gray*: In our example, the input image. Here it is *CV_8U*
...@@ -162,19 +138,12 @@ Explanation ...@@ -162,19 +138,12 @@ Explanation
\f$y_{order} = 0\f$. We proceed analogously for the *y* direction. \f$y_{order} = 0\f$. We proceed analogously for the *y* direction.
-# We convert our partial results back to *CV_8U*: -# We convert our partial results back to *CV_8U*:
@code{.cpp} @snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp convert
convertScaleAbs( grad_x, abs_grad_x );
convertScaleAbs( grad_y, abs_grad_y );
@endcode
-# Finally, we try to approximate the *gradient* by adding both directional gradients (note that -# Finally, we try to approximate the *gradient* by adding both directional gradients (note that
this is not an exact calculation, but it is good enough for our purposes). this is not an exact calculation, but it is good enough for our purposes).
@code{.cpp} @snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp blend
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
@endcode
-# Finally, we show our result: -# Finally, we show our result:
@code{.cpp} @snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp display
imshow( window_name, grad );
@endcode
Results Results
------- -------
......
...@@ -81,76 +81,7 @@ Code ...@@ -81,76 +81,7 @@ Code
This tutorial's code is shown in the lines below. You can also download it from This tutorial's code is shown in the lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Morphology_2.cpp) [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Morphology_2.cpp)
@code{.cpp} @include cpp/tutorial_code/ImgProc/Morphology_2.cpp
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/// Global variables
Mat src, dst;
int morph_elem = 0;
int morph_size = 0;
int morph_operator = 0;
int const max_operator = 4;
int const max_elem = 2;
int const max_kernel_size = 21;
char* window_name = "Morphology Transformations Demo";
/* Function Headers */
void Morphology_Operations( int, void* );
/* @function main */
int main( int argc, char** argv )
{
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
/// Create window
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Create Trackbar to select Morphology operation
createTrackbar("Operator:\n 0: Opening - 1: Closing \n 2: Gradient - 3: Top Hat \n 4: Black Hat", window_name, &morph_operator, max_operator, Morphology_Operations );
/// Create Trackbar to select kernel type
createTrackbar( "Element:\n 0: Rect - 1: Cross - 2: Ellipse", window_name,
&morph_elem, max_elem,
Morphology_Operations );
/// Create Trackbar to choose kernel size
createTrackbar( "Kernel size:\n 2n +1", window_name,
&morph_size, max_kernel_size,
Morphology_Operations );
/// Default start
Morphology_Operations( 0, 0 );
waitKey(0);
return 0;
}
/*
* @function Morphology_Operations
*/
void Morphology_Operations( int, void* )
{
// Since MORPH_X : 2,3,4,5 and 6
int operation = morph_operator + 2;
Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) );
/// Apply the specified morphology operation
morphologyEx( src, dst, operation, element );
imshow( window_name, dst );
}
@endcode
Explanation Explanation
----------- -----------
...@@ -158,47 +89,23 @@ Explanation ...@@ -158,47 +89,23 @@ Explanation
-# Let's check the general structure of the program: -# Let's check the general structure of the program:
- Load an image - Load an image
- Create a window to display results of the Morphological operations - Create a window to display results of the Morphological operations
- Create 03 Trackbars for the user to enter parameters: - Create three Trackbars for the user to enter parameters:
- The first trackbar **"Operator"** returns the kind of morphology operation to use - The first trackbar **Operator** returns the kind of morphology operation to use
(**morph_operator**). (**morph_operator**).
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Morphology_2.cpp create_trackbar1
createTrackbar("Operator:\n 0: Opening - 1: Closing \n 2: Gradient - 3: Top Hat \n 4: Black Hat",
window_name, &morph_operator, max_operator, - The second trackbar **Element** returns **morph_elem**, which indicates what kind of
Morphology_Operations );
@endcode
- The second trackbar **"Element"** returns **morph_elem**, which indicates what kind of
structure our kernel is: structure our kernel is:
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Morphology_2.cpp create_trackbar2
createTrackbar( "Element:\n 0: Rect - 1: Cross - 2: Ellipse", window_name,
&morph_elem, max_elem, - The final trackbar **Kernel Size** returns the size of the kernel to be used
Morphology_Operations );
@endcode
- The final trackbar **"Kernel Size"** returns the size of the kernel to be used
(**morph_size**) (**morph_size**)
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Morphology_2.cpp create_trackbar3
createTrackbar( "Kernel size:\n 2n +1", window_name,
&morph_size, max_kernel_size,
Morphology_Operations );
@endcode
- Every time we move any slider, the user's function **Morphology_Operations** will be called - Every time we move any slider, the user's function **Morphology_Operations** will be called
to perform a new morphology operation, and it will update the output image based on the to perform a new morphology operation, and it will update the output image based on the
current trackbar values. current trackbar values.
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Morphology_2.cpp morphology_operations
/*
* @function Morphology_Operations
*/
void Morphology_Operations( int, void* )
{
// Since MORPH_X : 2,3,4,5 and 6
int operation = morph_operator + 2;
Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) );
/// Apply the specified morphology operation
morphologyEx( src, dst, operation, element );
imshow( window_name, dst );
}
@endcode
We can observe that the key function to perform the morphology transformations is @ref We can observe that the key function to perform the morphology transformations is @ref
cv::morphologyEx . In this example we use four arguments (leaving the rest as defaults): cv::morphologyEx . In this example we use four arguments (leaving the rest as defaults):
...@@ -216,9 +123,7 @@ Explanation ...@@ -216,9 +123,7 @@ Explanation
As you can see, the values range from \<2-6\>, which is why we add (+2) to the values As you can see, the values range from \<2-6\>, which is why we add (+2) to the values
entered by the Trackbar: entered by the Trackbar:
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Morphology_2.cpp operation
int operation = morph_operator + 2;
@endcode
- **element**: The kernel to be used. We use the function @ref cv::getStructuringElement - **element**: The kernel to be used. We use the function @ref cv::getStructuringElement
to define our own structure. to define our own structure.
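To make the call sequence concrete, a minimal sketch without the trackbar GUI could look like this. The input path, the choice of **MORPH_GRADIENT** and the 5 x 5 elliptical kernel are illustration values only, not fixed by the tutorial sample.
@code{.cpp}
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat src = imread("baboon.jpg", IMREAD_COLOR);   // placeholder path
    if (src.empty()) { return -1; }

    // morph_size = 2 gives a 5x5 elliptical kernel anchored at its center
    int morph_size = 2;
    Mat element = getStructuringElement(MORPH_ELLIPSE,
                                        Size(2*morph_size + 1, 2*morph_size + 1),
                                        Point(morph_size, morph_size));

    // MORPH_GRADIENT (= 4) is one of the operations selectable through the trackbar
    Mat dst;
    morphologyEx(src, dst, MORPH_GRADIENT, element);

    imshow("Morphology sketch", dst);
    waitKey(0);
    return 0;
}
@endcode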
......
...@@ -77,13 +77,7 @@ Let's check the general structure of the program: ...@@ -77,13 +77,7 @@ Let's check the general structure of the program:
- Load an image (in this case it is defined in the program; the user does not have to enter it - Load an image (in this case it is defined in the program; the user does not have to enter it
as an argument) as an argument)
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Pyramids.cpp load
/// Test image - Make sure it s divisible by 2^{n}
src = imread( "../images/chicky_512.jpg" );
if( !src.data )
{ printf(" No data! -- Exiting the program \n");
return -1; }
@endcode
- Create a Mat object to store the result of the operations (*dst*) and one to save temporary - Create a Mat object to store the result of the operations (*dst*) and one to save temporary
results (*tmp*). results (*tmp*).
...@@ -95,40 +89,15 @@ Let's check the general structure of the program: ...@@ -95,40 +89,15 @@ Let's check the general structure of the program:
@endcode @endcode
- Create a window to display the result - Create a window to display the result
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Pyramids.cpp create_window
namedWindow( window_name, WINDOW_AUTOSIZE );
imshow( window_name, dst );
@endcode
- Perform an infinite loop waiting for user input. - Perform an infinite loop waiting for user input.
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Pyramids.cpp infinite_loop
while( true )
{
int c;
c = waitKey(10);
if( (char)c == 27 )
{ break; }
if( (char)c == 'u' )
{ pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) );
printf( "** Zoom In: Image x 2 \n" );
}
else if( (char)c == 'd' )
{ pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) );
printf( "** Zoom Out: Image / 2 \n" );
}
imshow( window_name, dst );
tmp = dst;
}
@endcode
Our program exits if the user presses *ESC*. In addition, it has two options: Our program exits if the user presses *ESC*. In addition, it has two options:
- **Perform upsampling (after pressing 'u')** - **Perform upsampling (after pressing 'u')**
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Pyramids.cpp pyrup
pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) We use the function @ref cv::pyrUp with three arguments:
@endcode
We use the function @ref cv::pyrUp with 03 arguments:
- *tmp*: The current image; it is initialized with the *src* original image. - *tmp*: The current image; it is initialized with the *src* original image.
- *dst*: The destination image (to be shown on screen, supposedly the double of the - *dst*: The destination image (to be shown on screen, supposedly the double of the
...@@ -136,11 +105,8 @@ Let's check the general structure of the program: ...@@ -136,11 +105,8 @@ Let's check the general structure of the program:
- *Size( tmp.cols\*2, tmp.rows\*2 )* : The destination size. Since we are upsampling, - *Size( tmp.cols\*2, tmp.rows\*2 )* : The destination size. Since we are upsampling,
@ref cv::pyrUp expects a size twice that of the input image (in this case *tmp*). @ref cv::pyrUp expects a size twice that of the input image (in this case *tmp*).
- **Perform downsampling (after pressing 'd')** - **Perform downsampling (after pressing 'd')**
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Pyramids.cpp pyrdown
pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) Similarly as with @ref cv::pyrUp , we use the function @ref cv::pyrDown with three arguments:
@endcode
Similarly as with @ref cv::pyrUp , we use the function @ref cv::pyrDown with 03
arguments:
- *tmp*: The current image; it is initialized with the *src* original image. - *tmp*: The current image; it is initialized with the *src* original image.
- *dst*: The destination image (to be shown on screen, supposedly half the input - *dst*: The destination image (to be shown on screen, supposedly half the input
...@@ -151,15 +117,13 @@ Let's check the general structure of the program: ...@@ -151,15 +117,13 @@ Let's check the general structure of the program:
both dimensions). Otherwise, an error will be shown. both dimensions). Otherwise, an error will be shown.
- Finally, we update the input image **tmp** with the current image displayed, so the - Finally, we update the input image **tmp** with the current image displayed, so the
subsequent operations are performed on it. subsequent operations are performed on it.
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Pyramids.cpp update_tmp
tmp = dst;
@endcode
Results Results
------- -------
- After compiling the code above, we can test it. The program calls an image **chicky_512.png** - After compiling the code above, we can test it. The program calls an image **chicky_512.png**
that comes in the *tutorial_code/image* folder. Notice that this image is \f$512 \times 512\f$, that comes in the *samples/data* folder. Notice that this image is \f$512 \times 512\f$,
hence a downsample won't generate any error (\f$512 = 2^{9}\f$). The original image is shown below: hence a downsample won't generate any error (\f$512 = 2^{9}\f$). The original image is shown below:
![](images/Pyramids_Tutorial_Original_Image.jpg) ![](images/Pyramids_Tutorial_Original_Image.jpg)
......
...@@ -106,51 +106,23 @@ Explanation ...@@ -106,51 +106,23 @@ Explanation
-# Let's check the general structure of the program: -# Let's check the general structure of the program:
- Load an image. If it is BGR we convert it to Grayscale. For this, remember that we can use - Load an image. If it is BGR we convert it to Grayscale. For this, remember that we can use
the function @ref cv::cvtColor : the function @ref cv::cvtColor :
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Threshold.cpp load
src = imread( argv[1], 1 );
/// Convert the image to Gray
cvtColor( src, src_gray, COLOR_BGR2GRAY );
@endcode
- Create a window to display the result - Create a window to display the result
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Threshold.cpp window
namedWindow( window_name, WINDOW_AUTOSIZE );
@endcode
- Create two trackbars for the user to enter input: - Create two trackbars for the user to enter input:
- **Type of thresholding**: Binary, To Zero, etc... - **Type of thresholding**: Binary, To Zero, etc...
- **Threshold value** - **Threshold value**
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Threshold.cpp trackbar
createTrackbar( trackbar_type,
window_name, &threshold_type,
max_type, Threshold_Demo );
createTrackbar( trackbar_value,
window_name, &threshold_value,
max_value, Threshold_Demo );
@endcode
- Wait until the user enters the threshold value and the type of thresholding (or until the - Wait until the user enters the threshold value and the type of thresholding (or until the
program exits) program exits)
- Whenever the user changes the value of any of the Trackbars, the function *Threshold_Demo* - Whenever the user changes the value of any of the Trackbars, the function *Threshold_Demo*
is called: is called:
@code{.cpp} @snippet cpp/tutorial_code/ImgProc/Threshold.cpp Threshold_Demo
/*
* @function Threshold_Demo
*/
void Threshold_Demo( int, void* )
{
/* 0: Binary
1: Binary Inverted
2: Threshold Truncated
3: Threshold to Zero
4: Threshold to Zero Inverted
*/
threshold( src_gray, dst, threshold_value, max_BINARY_value,threshold_type );
imshow( window_name, dst );
}
@endcode
As you can see, the function @ref cv::threshold is invoked. We give \f$5\f$ parameters: As you can see, the function @ref cv::threshold is invoked. We give \f$5\f$ parameters:
- *src_gray*: Our input image - *src_gray*: Our input image
......
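The remaining parameters visible in the sample's @ref cv::threshold call are *dst*, *threshold_value*, *max_BINARY_value* and *threshold_type*. As a minimal sketch without the GUI (the input path, the threshold of 128 and the choice of **THRESH_BINARY** are illustration values only):
@code{.cpp}
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat src = imread("stuff.jpg", IMREAD_COLOR);   // placeholder path
    if (src.empty()) { return -1; }

    Mat gray, dst;
    cvtColor(src, gray, COLOR_BGR2GRAY);

    // threshold( src, dst, threshold_value, max_BINARY_value, threshold_type )
    // Here: pixels above 128 become 255, the rest become 0 (THRESH_BINARY)
    threshold(gray, dst, 128, 255, THRESH_BINARY);

    imshow("Threshold sketch", dst);
    waitKey(0);
    return 0;
}
@endcode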
...@@ -21,92 +21,8 @@ This tutorial code's is shown lines below. You can also download it from ...@@ -21,92 +21,8 @@ This tutorial code's is shown lines below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/objectDetection/objectDetection.cpp) [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/objectDetection/objectDetection.cpp)
. The second version (using LBP for face detection) can be [found . The second version (using LBP for face detection) can be [found
here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/objectDetection/objectDetection2.cpp) here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/objectDetection/objectDetection2.cpp)
@code{.cpp} @include samples/cpp/tutorial_code/objectDetection/objectDetection.cpp
#include "opencv2/objdetect.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/* Function Headers */
void detectAndDisplay( Mat frame );
/* Global variables */
String face_cascade_name = "haarcascade_frontalface_alt.xml";
String eyes_cascade_name = "haarcascade_eye_tree_eyeglasses.xml";
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
String window_name = "Capture - Face detection";
/* @function main */
int main( void )
{
VideoCapture capture;
Mat frame;
//-- 1. Load the cascades
if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; };
if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading eyes cascade\n"); return -1; };
//-- 2. Read the video stream
capture.open( -1 );
if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
while ( capture.read(frame) )
{
if( frame.empty() )
{
printf(" --(!) No captured frame -- Break!");
break;
}
//-- 3. Apply the classifier to the frame
detectAndDisplay( frame );
int c = waitKey(10);
if( (char)c == 27 ) { break; } // escape
}
return 0;
}
/* @function detectAndDisplay */
void detectAndDisplay( Mat frame )
{
std::vector<Rect> faces;
Mat frame_gray;
cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect faces
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
for( size_t i = 0; i < faces.size(); i++ )
{
Point center( faces[i].x + faces[i].width/2, faces[i].y + faces[i].height/2 );
ellipse( frame, center, Size( faces[i].width/2, faces[i].height/2), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );
Mat faceROI = frame_gray( faces[i] );
std::vector<Rect> eyes;
//-- In each face, detect eyes
eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CASCADE_SCALE_IMAGE, Size(30, 30) );
for( size_t j = 0; j < eyes.size(); j++ )
{
Point eye_center( faces[i].x + eyes[j].x + eyes[j].width/2, faces[i].y + eyes[j].y + eyes[j].height/2 );
int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 );
circle( frame, eye_center, radius, Scalar( 255, 0, 0 ), 4, 8, 0 );
}
}
//-- Show what you got
imshow( window_name, frame );
}
@endcode
Explanation Explanation
----------- -----------
......
...@@ -24,39 +24,48 @@ int main(int argc, char** argv) ...@@ -24,39 +24,48 @@ int main(int argc, char** argv)
help(); help();
return 0; return 0;
} }
//![load]
string filename = parser.get<string>("@image"); string filename = parser.get<string>("@image");
if (filename.empty()) Mat img = imread(filename, IMREAD_COLOR);
{
help();
cout << "no image_name provided" << endl;
return -1;
}
Mat img = imread(filename, 0);
if(img.empty()) if(img.empty())
{ {
help(); help();
cout << "can not open " << filename << endl; cout << "can not open " << filename << endl;
return -1; return -1;
} }
//![load]
//![convert_to_gray]
Mat gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
//![convert_to_gray]
Mat cimg; //![reduce_noise]
medianBlur(img, img, 5); medianBlur(gray, gray, 5);
cvtColor(img, cimg, COLOR_GRAY2BGR); //![reduce_noise]
//![houghcircles]
vector<Vec3f> circles; vector<Vec3f> circles;
HoughCircles(img, circles, HOUGH_GRADIENT, 1, 10, HoughCircles(gray, circles, HOUGH_GRADIENT, 1,
gray.rows/16, // change this value to detect circles with different distances to each other
100, 30, 1, 30 // change the last two parameters 100, 30, 1, 30 // change the last two parameters
// (min_radius & max_radius) to detect larger circles // (min_radius & max_radius) to detect larger circles
); );
//![houghcircles]
//![draw]
for( size_t i = 0; i < circles.size(); i++ ) for( size_t i = 0; i < circles.size(); i++ )
{ {
Vec3i c = circles[i]; Vec3i c = circles[i];
circle( cimg, Point(c[0], c[1]), c[2], Scalar(0,0,255), 3, LINE_AA); circle( img, Point(c[0], c[1]), c[2], Scalar(0,0,255), 3, LINE_AA);
circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, LINE_AA); circle( img, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, LINE_AA);
} }
//![draw]
imshow("detected circles", cimg); //![display]
imshow("detected circles", img);
waitKey(); waitKey();
//![display]
return 0; return 0;
} }
...@@ -21,6 +21,7 @@ Mat src1; ...@@ -21,6 +21,7 @@ Mat src1;
Mat src2; Mat src2;
Mat dst; Mat dst;
//![on_trackbar]
/** /**
* @function on_trackbar * @function on_trackbar
* @brief Callback for trackbar * @brief Callback for trackbar
...@@ -35,7 +36,7 @@ static void on_trackbar( int, void* ) ...@@ -35,7 +36,7 @@ static void on_trackbar( int, void* )
imshow( "Linear Blend", dst ); imshow( "Linear Blend", dst );
} }
//![on_trackbar]
/** /**
* @function main * @function main
...@@ -43,9 +44,11 @@ static void on_trackbar( int, void* ) ...@@ -43,9 +44,11 @@ static void on_trackbar( int, void* )
*/ */
int main( void ) int main( void )
{ {
/// Read image ( same size, same type ) //![load]
/// Read images ( both have to be of the same size and type )
src1 = imread("../data/LinuxLogo.jpg"); src1 = imread("../data/LinuxLogo.jpg");
src2 = imread("../data/WindowsLogo.jpg"); src2 = imread("../data/WindowsLogo.jpg");
//![load]
if( src1.empty() ) { printf("Error loading src1 \n"); return -1; } if( src1.empty() ) { printf("Error loading src1 \n"); return -1; }
if( src2.empty() ) { printf("Error loading src2 \n"); return -1; } if( src2.empty() ) { printf("Error loading src2 \n"); return -1; }
...@@ -53,13 +56,15 @@ int main( void ) ...@@ -53,13 +56,15 @@ int main( void )
/// Initialize values /// Initialize values
alpha_slider = 0; alpha_slider = 0;
/// Create Windows //![window]
namedWindow("Linear Blend", 1); namedWindow("Linear Blend", WINDOW_AUTOSIZE); // Create Window
//![window]
/// Create Trackbars //![create_trackbar]
char TrackbarName[50]; char TrackbarName[50];
sprintf( TrackbarName, "Alpha x %d", alpha_slider_max ); sprintf( TrackbarName, "Alpha x %d", alpha_slider_max );
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar ); createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
//![create_trackbar]
/// Show some stuff /// Show some stuff
on_trackbar( alpha_slider, 0 ); on_trackbar( alpha_slider, 0 );
......
...@@ -66,6 +66,7 @@ int main( int, char** argv ) ...@@ -66,6 +66,7 @@ int main( int, char** argv )
return 0; return 0;
} }
//![erosion]
/** /**
* @function Erosion * @function Erosion
*/ */
...@@ -76,14 +77,19 @@ void Erosion( int, void* ) ...@@ -76,14 +77,19 @@ void Erosion( int, void* )
else if( erosion_elem == 1 ){ erosion_type = MORPH_CROSS; } else if( erosion_elem == 1 ){ erosion_type = MORPH_CROSS; }
else if( erosion_elem == 2) { erosion_type = MORPH_ELLIPSE; } else if( erosion_elem == 2) { erosion_type = MORPH_ELLIPSE; }
//![kernel]
Mat element = getStructuringElement( erosion_type, Mat element = getStructuringElement( erosion_type,
Size( 2*erosion_size + 1, 2*erosion_size+1 ), Size( 2*erosion_size + 1, 2*erosion_size+1 ),
Point( erosion_size, erosion_size ) ); Point( erosion_size, erosion_size ) );
//![kernel]
/// Apply the erosion operation /// Apply the erosion operation
erode( src, erosion_dst, element ); erode( src, erosion_dst, element );
imshow( "Erosion Demo", erosion_dst ); imshow( "Erosion Demo", erosion_dst );
} }
//![erosion]
//![dilation]
/** /**
* @function Dilation * @function Dilation
*/ */
...@@ -97,7 +103,9 @@ void Dilation( int, void* ) ...@@ -97,7 +103,9 @@ void Dilation( int, void* )
Mat element = getStructuringElement( dilation_type, Mat element = getStructuringElement( dilation_type,
Size( 2*dilation_size + 1, 2*dilation_size+1 ), Size( 2*dilation_size + 1, 2*dilation_size+1 ),
Point( dilation_size, dilation_size ) ); Point( dilation_size, dilation_size ) );
/// Apply the dilation operation /// Apply the dilation operation
dilate( src, dilation_dst, element ); dilate( src, dilation_dst, element );
imshow( "Dilation Demo", dilation_dst ); imshow( "Dilation Demo", dilation_dst );
} }
//![dilation]
...@@ -31,27 +31,35 @@ void Morphology_Operations( int, void* ); ...@@ -31,27 +31,35 @@ void Morphology_Operations( int, void* );
*/ */
int main( int, char** argv ) int main( int, char** argv )
{ {
/// Load an image //![load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ return -1; } { return -1; }
//![load]
/// Create window //![window]
namedWindow( window_name, WINDOW_AUTOSIZE ); namedWindow( window_name, WINDOW_AUTOSIZE ); // Create window
//![window]
//![create_trackbar1]
/// Create Trackbar to select Morphology operation /// Create Trackbar to select Morphology operation
createTrackbar("Operator:\n 0: Opening - 1: Closing \n 2: Gradient - 3: Top Hat \n 4: Black Hat", window_name, &morph_operator, max_operator, Morphology_Operations ); createTrackbar("Operator:\n 0: Opening - 1: Closing \n 2: Gradient - 3: Top Hat \n 4: Black Hat", window_name, &morph_operator, max_operator, Morphology_Operations );
//![create_trackbar1]
//![create_trackbar2]
/// Create Trackbar to select kernel type /// Create Trackbar to select kernel type
createTrackbar( "Element:\n 0: Rect - 1: Cross - 2: Ellipse", window_name, createTrackbar( "Element:\n 0: Rect - 1: Cross - 2: Ellipse", window_name,
&morph_elem, max_elem, &morph_elem, max_elem,
Morphology_Operations ); Morphology_Operations );
//![create_trackbar2]
//![create_trackbar3]
/// Create Trackbar to choose kernel size /// Create Trackbar to choose kernel size
createTrackbar( "Kernel size:\n 2n +1", window_name, createTrackbar( "Kernel size:\n 2n +1", window_name,
&morph_size, max_kernel_size, &morph_size, max_kernel_size,
Morphology_Operations ); Morphology_Operations );
//![create_trackbar3]
/// Default start /// Default start
Morphology_Operations( 0, 0 ); Morphology_Operations( 0, 0 );
...@@ -60,13 +68,16 @@ int main( int, char** argv ) ...@@ -60,13 +68,16 @@ int main( int, char** argv )
return 0; return 0;
} }
//![morphology_operations]
/** /**
* @function Morphology_Operations * @function Morphology_Operations
*/ */
void Morphology_Operations( int, void* ) void Morphology_Operations( int, void* )
{ {
// Since MORPH_X : 2,3,4,5 and 6 // Since MORPH_X : 2,3,4,5 and 6
//![operation]
int operation = morph_operator + 2; int operation = morph_operator + 2;
//![operation]
Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) ); Mat element = getStructuringElement( morph_elem, Size( 2*morph_size + 1, 2*morph_size+1 ), Point( morph_size, morph_size ) );
...@@ -74,3 +85,4 @@ void Morphology_Operations( int, void* ) ...@@ -74,3 +85,4 @@ void Morphology_Operations( int, void* )
morphologyEx( src, dst, operation, element ); morphologyEx( src, dst, operation, element );
imshow( window_name, dst ); imshow( window_name, dst );
} }
//![morphology_operations]
...@@ -28,39 +28,50 @@ int main( void ) ...@@ -28,39 +28,50 @@ int main( void )
printf( " * [d] -> Zoom out \n" ); printf( " * [d] -> Zoom out \n" );
printf( " * [ESC] -> Close program \n \n" ); printf( " * [ESC] -> Close program \n \n" );
/// Test image - Make sure it s divisible by 2^{n} //![load]
src = imread( "../data/chicky_512.png" ); src = imread( "../data/chicky_512.png" ); // Loads the test image
if( src.empty() ) if( src.empty() )
{ printf(" No data! -- Exiting the program \n"); { printf(" No data! -- Exiting the program \n");
return -1; } return -1; }
//![load]
tmp = src; tmp = src;
dst = tmp; dst = tmp;
/// Create window //![create_window]
namedWindow( window_name, WINDOW_AUTOSIZE );
imshow( window_name, dst ); imshow( window_name, dst );
//![create_window]
/// Loop //![infinite_loop]
for(;;) for(;;)
{ {
int c; int c;
c = waitKey(10); c = waitKey(0);
if( (char)c == 27 ) if( (char)c == 27 )
{ break; } { break; }
if( (char)c == 'u' ) if( (char)c == 'u' )
{ pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) ); {
//![pyrup]
pyrUp( tmp, dst, Size( tmp.cols*2, tmp.rows*2 ) );
//![pyrup]
printf( "** Zoom In: Image x 2 \n" ); printf( "** Zoom In: Image x 2 \n" );
} }
else if( (char)c == 'd' ) else if( (char)c == 'd' )
{ pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) ); {
//![pyrdown]
pyrDown( tmp, dst, Size( tmp.cols/2, tmp.rows/2 ) );
//![pyrdown]
printf( "** Zoom Out: Image / 2 \n" ); printf( "** Zoom Out: Image / 2 \n" );
} }
imshow( window_name, dst ); imshow( window_name, dst );
//![update_tmp]
tmp = dst; tmp = dst;
//![update_tmp]
} }
//![infinite_loop]
return 0; return 0;
} }
...@@ -43,33 +43,38 @@ int main( void ) ...@@ -43,33 +43,38 @@ int main( void )
/// Applying Homogeneous blur /// Applying Homogeneous blur
if( display_caption( "Homogeneous Blur" ) != 0 ) { return 0; } if( display_caption( "Homogeneous Blur" ) != 0 ) { return 0; }
//![blur]
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 ) for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ blur( src, dst, Size( i, i ), Point(-1,-1) ); { blur( src, dst, Size( i, i ), Point(-1,-1) );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } } if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
//![blur]
/// Applying Gaussian blur /// Applying Gaussian blur
if( display_caption( "Gaussian Blur" ) != 0 ) { return 0; } if( display_caption( "Gaussian Blur" ) != 0 ) { return 0; }
//![gaussianblur]
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 ) for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ GaussianBlur( src, dst, Size( i, i ), 0, 0 ); { GaussianBlur( src, dst, Size( i, i ), 0, 0 );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } } if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
//![gaussianblur]
/// Applying Median blur /// Applying Median blur
if( display_caption( "Median Blur" ) != 0 ) { return 0; } if( display_caption( "Median Blur" ) != 0 ) { return 0; }
//![medianblur]
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 ) for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ medianBlur ( src, dst, i ); { medianBlur ( src, dst, i );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } } if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
//![medianblur]
/// Applying Bilateral Filter /// Applying Bilateral Filter
if( display_caption( "Bilateral Blur" ) != 0 ) { return 0; } if( display_caption( "Bilateral Blur" ) != 0 ) { return 0; }
//![bilateralfilter]
for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 ) for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
{ bilateralFilter ( src, dst, i, i*2, i/2 ); { bilateralFilter ( src, dst, i, i*2, i/2 );
if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } } if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
//![bilateralfilter]
/// Wait until user press a key /// Wait until user press a key
display_caption( "End: Press a key!" ); display_caption( "End: Press a key!" );
......
...@@ -32,29 +32,30 @@ void Threshold_Demo( int, void* ); ...@@ -32,29 +32,30 @@ void Threshold_Demo( int, void* );
*/ */
int main( int, char** argv ) int main( int, char** argv )
{ {
/// Load an image //! [load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ return -1; } { return -1; }
/// Convert the image to Gray cvtColor( src, src_gray, COLOR_BGR2GRAY ); // Convert the image to Gray
cvtColor( src, src_gray, COLOR_BGR2GRAY ); //! [load]
/// Create a window to display results //! [window]
namedWindow( window_name, WINDOW_AUTOSIZE ); namedWindow( window_name, WINDOW_AUTOSIZE ); // Create a window to display results
//! [window]
/// Create Trackbar to choose type of Threshold //! [trackbar]
createTrackbar( trackbar_type, createTrackbar( trackbar_type,
window_name, &threshold_type, window_name, &threshold_type,
max_type, Threshold_Demo ); max_type, Threshold_Demo ); // Create Trackbar to choose type of Threshold
createTrackbar( trackbar_value, createTrackbar( trackbar_value,
window_name, &threshold_value, window_name, &threshold_value,
max_value, Threshold_Demo ); max_value, Threshold_Demo ); // Create Trackbar to choose Threshold value
//! [trackbar]
/// Call the function to initialize Threshold_Demo( 0, 0 ); // Call the function to initialize
Threshold_Demo( 0, 0 );
/// Wait until user finishes program /// Wait until user finishes program
for(;;) for(;;)
...@@ -67,7 +68,7 @@ int main( int, char** argv ) ...@@ -67,7 +68,7 @@ int main( int, char** argv )
} }
//![Threshold_Demo]
/** /**
* @function Threshold_Demo * @function Threshold_Demo
*/ */
...@@ -84,3 +85,4 @@ void Threshold_Demo( int, void* ) ...@@ -84,3 +85,4 @@ void Threshold_Demo( int, void* )
imshow( window_name, dst ); imshow( window_name, dst );
} }
//![Threshold_Demo]
...@@ -10,8 +10,7 @@ ...@@ -10,8 +10,7 @@
using namespace cv; using namespace cv;
/// Global variables //![variables]
Mat src, src_gray; Mat src, src_gray;
Mat dst, detected_edges; Mat dst, detected_edges;
...@@ -21,6 +20,7 @@ int const max_lowThreshold = 100; ...@@ -21,6 +20,7 @@ int const max_lowThreshold = 100;
int ratio = 3; int ratio = 3;
int kernel_size = 3; int kernel_size = 3;
const char* window_name = "Edge Map"; const char* window_name = "Edge Map";
//![variables]
/** /**
* @function CannyThreshold * @function CannyThreshold
...@@ -28,17 +28,28 @@ const char* window_name = "Edge Map"; ...@@ -28,17 +28,28 @@ const char* window_name = "Edge Map";
*/ */
static void CannyThreshold(int, void*) static void CannyThreshold(int, void*)
{ {
//![reduce_noise]
/// Reduce noise with a kernel 3x3 /// Reduce noise with a kernel 3x3
blur( src_gray, detected_edges, Size(3,3) ); blur( src_gray, detected_edges, Size(3,3) );
//![reduce_noise]
//![canny]
/// Canny detector /// Canny detector
Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size ); Canny( detected_edges, detected_edges, lowThreshold, lowThreshold*ratio, kernel_size );
//![canny]
/// Using Canny's output as a mask, we display our result /// Using Canny's output as a mask, we display our result
//![fill]
dst = Scalar::all(0); dst = Scalar::all(0);
//![fill]
//![copyto]
src.copyTo( dst, detected_edges); src.copyTo( dst, detected_edges);
//![copyto]
//![display]
imshow( window_name, dst ); imshow( window_name, dst );
//![display]
} }
...@@ -47,23 +58,30 @@ static void CannyThreshold(int, void*) ...@@ -47,23 +58,30 @@ static void CannyThreshold(int, void*)
*/ */
int main( int, char** argv ) int main( int, char** argv )
{ {
/// Load an image //![load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ return -1; } { return -1; }
//![load]
//![create_mat]
/// Create a matrix of the same type and size as src (for dst) /// Create a matrix of the same type and size as src (for dst)
dst.create( src.size(), src.type() ); dst.create( src.size(), src.type() );
//![create_mat]
/// Convert the image to grayscale //![convert_to_gray]
cvtColor( src, src_gray, COLOR_BGR2GRAY ); cvtColor( src, src_gray, COLOR_BGR2GRAY );
//![convert_to_gray]
/// Create a window //![create_window]
namedWindow( window_name, WINDOW_AUTOSIZE ); namedWindow( window_name, WINDOW_AUTOSIZE );
//![create_window]
//![create_trackbar]
/// Create a Trackbar for user to enter threshold /// Create a Trackbar for user to enter threshold
createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold ); createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );
//![create_trackbar]
/// Show the image /// Show the image
CannyThreshold(0, 0); CannyThreshold(0, 0);
......
...@@ -15,39 +15,45 @@ using namespace cv; ...@@ -15,39 +15,45 @@ using namespace cv;
*/ */
int main( int, char** argv ) int main( int, char** argv )
{ {
//![variables]
Mat src, src_gray, dst; Mat src, src_gray, dst;
int kernel_size = 3; int kernel_size = 3;
int scale = 1; int scale = 1;
int delta = 0; int delta = 0;
int ddepth = CV_16S; int ddepth = CV_16S;
const char* window_name = "Laplace Demo"; const char* window_name = "Laplace Demo";
//![variables]
/// Load an image //![load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ return -1; } { return -1; }
//![load]
/// Remove noise by blurring with a Gaussian filter //![reduce_noise]
/// Reduce noise by blurring with a Gaussian filter
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT ); GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
//![reduce_noise]
/// Convert the image to grayscale //![convert_to_gray]
cvtColor( src, src_gray, COLOR_RGB2GRAY ); cvtColor( src, src_gray, COLOR_BGR2GRAY ); // Convert the image to grayscale
//![convert_to_gray]
/// Create window
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Apply Laplace function /// Apply Laplace function
Mat abs_dst; Mat abs_dst;
//![laplacian]
Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT ); Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );
//![laplacian]
//![convert]
convertScaleAbs( dst, abs_dst ); convertScaleAbs( dst, abs_dst );
//![convert]
/// Show what you got //![display]
imshow( window_name, abs_dst ); imshow( window_name, abs_dst );
waitKey(0); waitKey(0);
//![display]
return 0; return 0;
} }
/** /**
* @file Sobel_Demo.cpp * @file Sobel_Demo.cpp
* @brief Sample code using Sobel and/orScharr OpenCV functions to make a simple Edge Detector * @brief Sample code using Sobel and/or Scharr OpenCV functions to make a simple Edge Detector
* @author OpenCV team * @author OpenCV team
*/ */
...@@ -15,28 +15,31 @@ using namespace cv; ...@@ -15,28 +15,31 @@ using namespace cv;
*/ */
int main( int, char** argv ) int main( int, char** argv )
{ {
//![variables]
Mat src, src_gray; Mat src, src_gray;
Mat grad; Mat grad;
const char* window_name = "Sobel Demo - Simple Edge Detector"; const char* window_name = "Sobel Demo - Simple Edge Detector";
int scale = 1; int scale = 1;
int delta = 0; int delta = 0;
int ddepth = CV_16S; int ddepth = CV_16S;
//![variables]
/// Load an image //![load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ return -1; } { return -1; }
//![load]
//![reduce_noise]
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT ); GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
//![reduce_noise]
/// Convert it to gray //![convert_to_gray]
cvtColor( src, src_gray, COLOR_RGB2GRAY ); cvtColor( src, src_gray, COLOR_BGR2GRAY );
//![convert_to_gray]
/// Create window
namedWindow( window_name, WINDOW_AUTOSIZE );
//![sobel]
/// Generate grad_x and grad_y /// Generate grad_x and grad_y
Mat grad_x, grad_y; Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y; Mat abs_grad_x, abs_grad_y;
...@@ -44,19 +47,26 @@ int main( int, char** argv ) ...@@ -44,19 +47,26 @@ int main( int, char** argv )
/// Gradient X /// Gradient X
//Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT ); //Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT ); Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_x, abs_grad_x );
/// Gradient Y /// Gradient Y
//Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT ); //Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT ); Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
//![sobel]
//![convert]
convertScaleAbs( grad_x, abs_grad_x );
convertScaleAbs( grad_y, abs_grad_y ); convertScaleAbs( grad_y, abs_grad_y );
//![convert]
//![blend]
/// Total Gradient (approximate) /// Total Gradient (approximate)
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad ); addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
//![blend]
//![display]
imshow( window_name, grad ); imshow( window_name, grad );
waitKey(0); waitKey(0);
//![display]
return 0; return 0;
} }
...@@ -10,12 +10,13 @@ ...@@ -10,12 +10,13 @@
using namespace cv; using namespace cv;
/// Global Variables //![variables]
Mat src, dst; Mat src, dst;
int top, bottom, left, right; int top, bottom, left, right;
int borderType; int borderType;
const char* window_name = "copyMakeBorder Demo"; const char* window_name = "copyMakeBorder Demo";
RNG rng(12345); RNG rng(12345);
//![variables]
/** /**
* @function main * @function main
...@@ -25,14 +26,15 @@ int main( int, char** argv ) ...@@ -25,14 +26,15 @@ int main( int, char** argv )
int c; int c;
/// Load an image //![load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ {
printf(" No data entered, please enter the path to an image file \n"); printf(" No data entered, please enter the path to an image file \n");
return -1; return -1;
} }
//![load]
/// Brief how-to for this program /// Brief how-to for this program
printf( "\n \t copyMakeBorder Demo: \n" ); printf( "\n \t copyMakeBorder Demo: \n" );
...@@ -41,18 +43,22 @@ int main( int, char** argv ) ...@@ -41,18 +43,22 @@ int main( int, char** argv )
printf( " ** Press 'r' to set the border to be replicated \n"); printf( " ** Press 'r' to set the border to be replicated \n");
printf( " ** Press 'ESC' to exit the program \n"); printf( " ** Press 'ESC' to exit the program \n");
/// Create window //![create_window]
namedWindow( window_name, WINDOW_AUTOSIZE ); namedWindow( window_name, WINDOW_AUTOSIZE );
//![create_window]
//![init_arguments]
/// Initialize arguments for the filter /// Initialize arguments for the filter
top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows); top = (int) (0.05*src.rows); bottom = (int) (0.05*src.rows);
left = (int) (0.05*src.cols); right = (int) (0.05*src.cols); left = (int) (0.05*src.cols); right = (int) (0.05*src.cols);
dst = src; //![init_arguments]
dst = src;
imshow( window_name, dst ); imshow( window_name, dst );
for(;;) for(;;)
{ {
//![check_keypress]
c = waitKey(500); c = waitKey(500);
if( (char)c == 27 ) if( (char)c == 27 )
...@@ -61,11 +67,19 @@ int main( int, char** argv ) ...@@ -61,11 +67,19 @@ int main( int, char** argv )
{ borderType = BORDER_CONSTANT; } { borderType = BORDER_CONSTANT; }
else if( (char)c == 'r' ) else if( (char)c == 'r' )
{ borderType = BORDER_REPLICATE; } { borderType = BORDER_REPLICATE; }
//![check_keypress]
//![update_value]
Scalar value( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) ); Scalar value( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
//![update_value]
//![copymakeborder]
copyMakeBorder( src, dst, top, bottom, left, right, borderType, value ); copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
//![copymakeborder]
//![display]
imshow( window_name, dst ); imshow( window_name, dst );
//![display]
} }
return 0; return 0;
......
...@@ -27,19 +27,19 @@ int main ( int, char** argv ) ...@@ -27,19 +27,19 @@ int main ( int, char** argv )
int c; int c;
/// Load an image //![load]
src = imread( argv[1], IMREAD_COLOR ); src = imread( argv[1], IMREAD_COLOR ); // Load an image
if( src.empty() ) if( src.empty() )
{ return -1; } { return -1; }
//![load]
/// Create window //![init_arguments]
namedWindow( window_name, WINDOW_AUTOSIZE );
/// Initialize arguments for the filter /// Initialize arguments for the filter
anchor = Point( -1, -1 ); anchor = Point( -1, -1 );
delta = 0; delta = 0;
ddepth = -1; ddepth = -1;
//![init_arguments]
/// Loop - Will filter the image with different kernel sizes each 0.5 seconds /// Loop - Will filter the image with different kernel sizes each 0.5 seconds
int ind = 0; int ind = 0;
...@@ -50,12 +50,15 @@ int main ( int, char** argv ) ...@@ -50,12 +50,15 @@ int main ( int, char** argv )
if( (char)c == 27 ) if( (char)c == 27 )
{ break; } { break; }
//![update_kernel]
/// Update kernel size for a normalized box filter /// Update kernel size for a normalized box filter
kernel_size = 3 + 2*( ind%5 ); kernel_size = 3 + 2*( ind%5 );
kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size); kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
//![update_kernel]
/// Apply filter //![apply_filter]
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT ); filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
//![apply_filter]
imshow( window_name, dst ); imshow( window_name, dst );
ind++; ind++;
} }
......
...@@ -16,7 +16,6 @@ using namespace cv; ...@@ -16,7 +16,6 @@ using namespace cv;
*/ */
int main( void ) int main( void )
{ {
double alpha = 0.5; double beta; double input; double alpha = 0.5; double beta; double input;
Mat src1, src2, dst; Mat src1, src2, dst;
...@@ -27,26 +26,28 @@ int main( void ) ...@@ -27,26 +26,28 @@ int main( void )
std::cout<<"* Enter alpha [0-1]: "; std::cout<<"* Enter alpha [0-1]: ";
std::cin>>input; std::cin>>input;
// We use the alpha provided by the user iff it is between 0 and 1 // We use the alpha provided by the user if it is between 0 and 1
if( input >= 0 && input <= 1 ) if( input >= 0 && input <= 1 )
{ alpha = input; } { alpha = input; }
/// Read image ( same size, same type ) //![load]
/// Read images ( both have to be of the same size and type )
src1 = imread("../data/LinuxLogo.jpg"); src1 = imread("../data/LinuxLogo.jpg");
src2 = imread("../data/WindowsLogo.jpg"); src2 = imread("../data/WindowsLogo.jpg");
//![load]
if( src1.empty() ) { std::cout<< "Error loading src1"<<std::endl; return -1; } if( src1.empty() ) { std::cout<< "Error loading src1"<<std::endl; return -1; }
if( src2.empty() ) { std::cout<< "Error loading src2"<<std::endl; return -1; } if( src2.empty() ) { std::cout<< "Error loading src2"<<std::endl; return -1; }
/// Create Windows //![blend_images]
namedWindow("Linear Blend", 1);
beta = ( 1.0 - alpha ); beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst); addWeighted( src1, alpha, src2, beta, 0.0, dst);
//![blend_images]
//![display]
imshow( "Linear Blend", dst ); imshow( "Linear Blend", dst );
waitKey(0); waitKey(0);
//![display]
return 0; return 0;
} }
...@@ -23,6 +23,7 @@ void MyLine( Mat img, Point start, Point end ); ...@@ -23,6 +23,7 @@ void MyLine( Mat img, Point start, Point end );
*/ */
int main( void ){ int main( void ){
//![create_images]
/// Windows names /// Windows names
char atom_window[] = "Drawing 1: Atom"; char atom_window[] = "Drawing 1: Atom";
char rook_window[] = "Drawing 2: Rook"; char rook_window[] = "Drawing 2: Rook";
...@@ -30,10 +31,12 @@ int main( void ){ ...@@ -30,10 +31,12 @@ int main( void ){
/// Create black empty images /// Create black empty images
Mat atom_image = Mat::zeros( w, w, CV_8UC3 ); Mat atom_image = Mat::zeros( w, w, CV_8UC3 );
Mat rook_image = Mat::zeros( w, w, CV_8UC3 ); Mat rook_image = Mat::zeros( w, w, CV_8UC3 );
//![create_images]
/// 1. Draw a simple atom: /// 1. Draw a simple atom:
/// ----------------------- /// -----------------------
//![draw_atom]
/// 1.a. Creating ellipses /// 1.a. Creating ellipses
MyEllipse( atom_image, 90 ); MyEllipse( atom_image, 90 );
MyEllipse( atom_image, 0 ); MyEllipse( atom_image, 0 );
...@@ -42,26 +45,31 @@ int main( void ){ ...@@ -42,26 +45,31 @@ int main( void ){
/// 1.b. Creating circles /// 1.b. Creating circles
MyFilledCircle( atom_image, Point( w/2, w/2) ); MyFilledCircle( atom_image, Point( w/2, w/2) );
//![draw_atom]
/// 2. Draw a rook /// 2. Draw a rook
/// ------------------ /// ------------------
//![draw_rook]
/// 2.a. Create a convex polygon /// 2.a. Create a convex polygon
MyPolygon( rook_image ); MyPolygon( rook_image );
//![rectangle]
/// 2.b. Creating rectangles /// 2.b. Creating rectangles
rectangle( rook_image, rectangle( rook_image,
Point( 0, 7*w/8 ), Point( 0, 7*w/8 ),
Point( w, w), Point( w, w),
Scalar( 0, 255, 255 ), Scalar( 0, 255, 255 ),
-1, FILLED,
8 ); LINE_8 );
//![rectangle]
/// 2.c. Create a few lines /// 2.c. Create a few lines
MyLine( rook_image, Point( 0, 15*w/16 ), Point( w, 15*w/16 ) ); MyLine( rook_image, Point( 0, 15*w/16 ), Point( w, 15*w/16 ) );
MyLine( rook_image, Point( w/4, 7*w/8 ), Point( w/4, w ) ); MyLine( rook_image, Point( w/4, 7*w/8 ), Point( w/4, w ) );
MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) ); MyLine( rook_image, Point( w/2, 7*w/8 ), Point( w/2, w ) );
MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) ); MyLine( rook_image, Point( 3*w/4, 7*w/8 ), Point( 3*w/4, w ) );
//![draw_rook]
/// 3. Display your stuff! /// 3. Display your stuff!
imshow( atom_window, atom_image ); imshow( atom_window, atom_image );
...@@ -75,6 +83,7 @@ int main( void ){ ...@@ -75,6 +83,7 @@ int main( void ){
/// Function Declaration /// Function Declaration
//![myellipse]
/** /**
* @function MyEllipse * @function MyEllipse
* @brief Draw a fixed-size ellipse with different angles * @brief Draw a fixed-size ellipse with different angles
...@@ -94,31 +103,32 @@ void MyEllipse( Mat img, double angle ) ...@@ -94,31 +103,32 @@ void MyEllipse( Mat img, double angle )
thickness, thickness,
lineType ); lineType );
} }
//![myellipse]
//![myfilledcircle]
/** /**
* @function MyFilledCircle * @function MyFilledCircle
* @brief Draw a fixed-size filled circle * @brief Draw a fixed-size filled circle
*/ */
void MyFilledCircle( Mat img, Point center ) void MyFilledCircle( Mat img, Point center )
{ {
int thickness = -1;
int lineType = 8;
circle( img, circle( img,
center, center,
w/32, w/32,
Scalar( 0, 0, 255 ), Scalar( 0, 0, 255 ),
thickness, FILLED,
lineType ); LINE_8 );
} }
//![myfilledcircle]
//![mypolygon]
/** /**
* @function MyPolygon * @function MyPolygon
* @function Draw a simple concave polygon (rook) * @brief Draw a simple concave polygon (rook)
*/ */
void MyPolygon( Mat img ) void MyPolygon( Mat img )
{ {
int lineType = 8; int lineType = LINE_8;
/** Create some points */ /** Create some points */
Point rook_points[1][20]; Point rook_points[1][20];
...@@ -149,11 +159,13 @@ void MyPolygon( Mat img ) ...@@ -149,11 +159,13 @@ void MyPolygon( Mat img )
fillPoly( img, fillPoly( img,
ppt, ppt,
npt, npt,
1, 1,
Scalar( 255, 255, 255 ), Scalar( 255, 255, 255 ),
lineType ); lineType );
} }
//![mypolygon]
//![myline]
/** /**
* @function MyLine * @function MyLine
* @brief Draw a simple line * @brief Draw a simple line
...@@ -161,7 +173,7 @@ void MyPolygon( Mat img ) ...@@ -161,7 +173,7 @@ void MyPolygon( Mat img )
void MyLine( Mat img, Point start, Point end ) void MyLine( Mat img, Point start, Point end )
{ {
int thickness = 2; int thickness = 2;
int lineType = 8; int lineType = LINE_8;
line( img, line( img,
start, start,
end, end,
...@@ -169,3 +181,4 @@ void MyLine( Mat img, Point start, Point end ) ...@@ -169,3 +181,4 @@ void MyLine( Mat img, Point start, Point end )
thickness, thickness,
lineType ); lineType );
} }
//![myline]
...@@ -166,10 +166,14 @@ int main( int argc, char* argv[] ){ ...@@ -166,10 +166,14 @@ int main( int argc, char* argv[] ){
// load the image (note that we don't have the projection information. You will // load the image (note that we don't have the projection information. You will
// need to load that yourself or use the full GDAL driver. The values are pre-defined // need to load that yourself or use the full GDAL driver. The values are pre-defined
// at the top of this file // at the top of this file
//![load1]
cv::Mat image = cv::imread(argv[1], cv::IMREAD_LOAD_GDAL | cv::IMREAD_COLOR ); cv::Mat image = cv::imread(argv[1], cv::IMREAD_LOAD_GDAL | cv::IMREAD_COLOR );
//![load1]
//![load2]
// load the dem model // load the dem model
cv::Mat dem = cv::imread(argv[2], cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH ); cv::Mat dem = cv::imread(argv[2], cv::IMREAD_LOAD_GDAL | cv::IMREAD_ANYDEPTH );
//![load2]
// create our output products // create our output products
cv::Mat output_dem( image.size(), CV_8UC3 ); cv::Mat output_dem( image.size(), CV_8UC3 );
......