Commit 6c6900a6 authored by Vadim Pisarevsky's avatar Vadim Pisarevsky

Merge pull request #9424 from Cartucho:update_imgproc_tutorials

parents 21bd834a 9a2317e0
Smoothing Images {#tutorial_gausian_median_blur_bilateral_filter}
================
@next_tutorial{tutorial_erosion_dilatation}
Goal
----

In this tutorial you will learn how to apply diverse linear filters to smooth images using OpenCV
functions such as:

- **blur()**
- **GaussianBlur()**
- **medianBlur()**
- **bilateralFilter()**

Theory
------
Code
----
- Loads an image
- Applies 4 different kinds of filters (explained in Theory) and shows the filtered images
  sequentially
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgProc/Smoothing/Smoothing.cpp)
- **Code at glance:**
@include samples/cpp/tutorial_code/ImgProc/Smoothing/Smoothing.cpp
@end_toggle
@add_toggle_java
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgProc/Smoothing/Smoothing.java)
- **Code at glance:**
@include samples/java/tutorial_code/ImgProc/Smoothing/Smoothing.java
@end_toggle

@add_toggle_python
- **Downloadable code**: Click
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/imgProc/Smoothing/smoothing.py)
- **Code at glance:**
@include samples/python/tutorial_code/imgProc/Smoothing/smoothing.py
@end_toggle
Explanation
-----------

Let's check the OpenCV functions that involve only the smoothing procedure, since the rest is
already known by now.
#### Normalized Block Filter:

- OpenCV offers the function **blur()** to perform smoothing with this filter.
  We specify 4 arguments (more details, check the Reference):
  - *src*: Source image
  - *dst*: Destination image
  - *Size( w, h )*: Defines the size of the kernel to be used (of width *w* pixels and height
    *h* pixels)
  - *Point(-1, -1)*: Indicates where the anchor point (the pixel evaluated) is located with
    respect to the neighborhood. If there is a negative value, then the center of the kernel is
    considered the anchor point.
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Smoothing/Smoothing.cpp blur
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/Smoothing/Smoothing.java blur
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/Smoothing/smoothing.py blur
@end_toggle
#### Gaussian Filter:
- It is performed by the function **GaussianBlur()** :
Here we use 4 arguments (more details, check the OpenCV reference):
  - *src*: Source image
  - *dst*: Destination image
  - *Size(w, h)*: The size of the kernel to be used (the neighbors to be considered). \f$w\f$ and
    \f$h\f$ have to be odd and positive numbers; otherwise the size will be calculated using the
    \f$\sigma_{x}\f$ and \f$\sigma_{y}\f$ arguments.
  - \f$\sigma_{x}\f$: The standard deviation in x. Writing \f$0\f$ implies that \f$\sigma_{x}\f$ is
    calculated using kernel size.
  - \f$\sigma_{y}\f$: The standard deviation in y. Writing \f$0\f$ implies that \f$\sigma_{y}\f$ is
    calculated using kernel size.
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Smoothing/Smoothing.cpp gaussianblur
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/Smoothing/Smoothing.java gaussianblur
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/Smoothing/smoothing.py gaussianblur
@end_toggle
#### Median Filter:
- This filter is provided by the **medianBlur()** function:
We use three arguments:
  - *src*: Source image
  - *dst*: Destination image, must be the same type as *src*
  - *i*: Size of the kernel (only one because we use a square window). Must be odd.
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Smoothing/Smoothing.cpp medianblur
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/Smoothing/Smoothing.java medianblur
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/Smoothing/smoothing.py medianblur
@end_toggle
#### Bilateral Filter
- Provided by OpenCV function **bilateralFilter()**
We use 5 arguments:
  - *src*: Source image
  - *dst*: Destination image
  - *d*: The diameter of each pixel neighborhood.
  - \f$\sigma_{Color}\f$: Standard deviation in the color space.
  - \f$\sigma_{Space}\f$: Standard deviation in the coordinate space (in pixel terms)
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Smoothing/Smoothing.cpp bilateralfilter
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/Smoothing/Smoothing.java bilateralfilter
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/Smoothing/smoothing.py bilateralfilter
@end_toggle
Results
-------

- The code opens an image (in this case [lena.jpg](https://raw.githubusercontent.com/opencv/opencv/master/samples/data/lena.jpg))
  and displays it under the effects of the 4 filters explained.
- Here is a snapshot of the image smoothed using *medianBlur*:

![](images/Smoothing_Tutorial_Result_Median_Filter.jpg)
Hit-or-Miss {#tutorial_hitOrMiss}
=================================
@prev_tutorial{tutorial_opening_closing_hats}
@next_tutorial{tutorial_morph_lines_detection}
Goal
----

In this tutorial you will learn how to find a given configuration or pattern in a binary image by using the Hit-or-Miss transform (also known as Hit-and-Miss transform).
This transform is also the basis of more advanced morphological operations such as thinning or pruning.

We will use the OpenCV function **morphologyEx()**.

Hit-or-Miss theory
-------------------

Morphological operators process images based on their shape. These operators apply one or more *structuring elements* to an input image to obtain the output image.
The two basic morphological operations are *erosion* and *dilation*. The combination of these two operations generates advanced morphological transformations such as *opening*, *closing*, or the *top-hat* transform.
To know more about these and other basic morphological operations refer to the previous tutorials (@ref tutorial_erosion_dilatation "Eroding and Dilating") and (@ref tutorial_opening_closing_hats "More Morphology Transformations").

The Hit-or-Miss transformation is useful to find patterns in binary images. In particular, it finds those pixels whose neighbourhood matches the shape of a first structuring element \f$B_1\f$
while not matching the shape of a second structuring element \f$B_2\f$ at the same time. Mathematically, the operation applied to an image \f$A\f$ can be expressed as follows:
You can see that the pattern is found in just one location within the image.
Code
----

The code corresponding to the previous example is shown below.

@add_toggle_cpp
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgProc/HitMiss/HitMiss.cpp)
@include samples/cpp/tutorial_code/ImgProc/HitMiss/HitMiss.cpp
@end_toggle
@add_toggle_java
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgProc/HitMiss/HitMiss.java)
@include samples/java/tutorial_code/ImgProc/HitMiss/HitMiss.java
@end_toggle
@add_toggle_python
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/imgProc/HitMiss/hit_miss.py)
@include samples/python/tutorial_code/imgProc/HitMiss/hit_miss.py
@end_toggle
As you can see, it is as simple as using the function **morphologyEx()** with the operation type **MORPH_HITMISS** and the chosen kernel.
Other examples
--------------
Adding borders to your images {#tutorial_copyMakeBorder}
=============================
@prev_tutorial{tutorial_filter_2d}
@next_tutorial{tutorial_sobel_derivatives}
Goal
----

In this tutorial you will learn how to:

- Use the OpenCV function **copyMakeBorder()** to set the borders (extra padding to your
  image).
Theory
------
This will be seen more clearly in the Code section.

- **What does this program do?**
    - Load an image
    - Let the user choose what kind of padding to use in the input image. There are two options:
      The user chooses either option by pressing 'c' (constant) or 'r' (replicate)
    - The program finishes when the user presses 'ESC'
Code
----

The tutorial code is shown in the lines below.
@add_toggle_cpp
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp)
@include samples/cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp
@end_toggle
@add_toggle_java
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java)
@include samples/java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java
@end_toggle
@add_toggle_python
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py)
@include samples/python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py
@end_toggle
Explanation
-----------
#### Declare the variables

First we declare the variables we are going to use:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp variables
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java variables
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py variables
@end_toggle
Special attention should be given to the variable *rng*, which is a random number generator. We use it to
generate the random border color, as we will see soon.
#### Load an image
As usual we load our source image *src*:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp load
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java load
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py load
@end_toggle
#### Create a window
After giving a short intro of how to use the program, we create a window:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp create_window
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java create_window
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py create_window
@end_toggle
#### Initialize arguments

Now we initialize the arguments that define the size of the borders (*top*, *bottom*, *left* and
*right*). We give them a value of 5% of the size of *src*.
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp init_arguments
@end_toggle

@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java init_arguments
@end_toggle

@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py init_arguments
@end_toggle
#### Loop
The program runs in an infinite loop while the key **ESC** isn't pressed.
If the user presses '**c**' or '**r**', the *borderType* variable
takes the value of *BORDER_CONSTANT* or *BORDER_REPLICATE* respectively:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp check_keypress
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java check_keypress
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py check_keypress
@end_toggle
#### Random color
In each iteration (after 0.5 seconds), the random border color (*value*) is updated...
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp update_value
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java update_value
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py update_value
@end_toggle
This value is a set of three numbers picked randomly in the range \f$[0,255]\f$.
#### Form a border around the image
Finally, we call the function **copyMakeBorder()** to apply the respective padding:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp copymakeborder
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java copymakeborder
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py copymakeborder
@end_toggle
- The arguments are:
    -# *src*: Source image
    -# *dst*: Destination image
    -# *value*: If *borderType* is *BORDER_CONSTANT*, this is the value used to fill the border
       pixels.
#### Display the results

We display our output image in the window created previously:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/copyMakeBorder_demo.cpp display
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/MakeBorder/CopyMakeBorder.java display
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/MakeBorder/copy_make_border.py display
@end_toggle
Results
-------
Making your own linear filters! {#tutorial_filter_2d}
===============================
@prev_tutorial{tutorial_threshold_inRange}
@next_tutorial{tutorial_copyMakeBorder}
Goal
----

In this tutorial you will learn how to:

- Use the OpenCV function **filter2D()** to create your own linear filters.

Theory
------
Expressing the procedure above in the form of an equation we would have:
\f[H(x,y) = \sum_{i=0}^{M_{i} - 1} \sum_{j=0}^{M_{j}-1} I(x+i - a_{i}, y + j - a_{j})K(i,j)\f]
Fortunately, OpenCV provides you with the function **filter2D()** so you do not have to code all
these operations.

### What does this program do?

- Loads an image
- Performs a *normalized box filter*. For instance, for a kernel of size \f$size = 3\f$, the
  kernel would be:
\f[K = \dfrac{1}{3 \cdot 3} \begin{bmatrix}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{bmatrix}\f]
  The program will perform the filter operation with kernels of sizes 3, 5, 7, 9 and 11.
- The filter output (with each kernel) will be shown for 500 milliseconds

Code
----
The tutorial code is shown in the lines below.

@add_toggle_cpp
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/filter2D_demo.cpp)
@include cpp/tutorial_code/ImgTrans/filter2D_demo.cpp
@end_toggle
@add_toggle_java
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgTrans/Filter2D/Filter2D_Demo.java)
@include java/tutorial_code/ImgTrans/Filter2D/Filter2D_Demo.java
@end_toggle
@add_toggle_python
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/ImgTrans/Filter2D/filter2D.py)
@include python/tutorial_code/ImgTrans/Filter2D/filter2D.py
@end_toggle
Explanation
-----------
#### Load an image

@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp load
@end_toggle

@add_toggle_java
@snippet java/tutorial_code/ImgTrans/Filter2D/Filter2D_Demo.java load
@end_toggle

@add_toggle_python
@snippet python/tutorial_code/ImgTrans/Filter2D/filter2D.py load
@end_toggle

#### Initialize the arguments

@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp init_arguments
@end_toggle

@add_toggle_java
@snippet java/tutorial_code/ImgTrans/Filter2D/Filter2D_Demo.java init_arguments
@end_toggle

@add_toggle_python
@snippet python/tutorial_code/ImgTrans/Filter2D/filter2D.py init_arguments
@end_toggle
##### Loop
Perform an infinite loop updating the kernel size and applying our linear filter to the input
image. Let's analyze that more in detail:
- First we define the kernel our filter is going to use. Here it is:
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp update_kernel
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/Filter2D/Filter2D_Demo.java update_kernel
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/Filter2D/filter2D.py update_kernel
@end_toggle
The first line is to update the *kernel_size* to odd values in the range: \f$[3,11]\f$.
The second line actually builds the kernel by setting its value to a matrix filled with
\f$1's\f$ and normalizing it by dividing it between the number of elements.
- After setting the kernel, we can generate the filter by using the function **filter2D()** :
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/filter2D_demo.cpp apply_filter
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgTrans/Filter2D/Filter2D_Demo.java apply_filter
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/ImgTrans/Filter2D/filter2D.py apply_filter
@end_toggle
- The arguments denote:
- *src*: Source image
- *dst*: Destination image
- *ddepth*: The depth of *dst*. A negative value (such as \f$-1\f$) indicates that the depth is
    the same as the source.
  - *kernel*: The kernel to be scanned through the image
  - *anchor*: The position of the anchor relative to its kernel. The location *Point(-1, -1)*
    indicates the center by default.
  - *delta*: A value to be added to each pixel during the correlation. By default it is \f$0\f$
  - *BORDER_DEFAULT*: We leave this value by default (more details in the following tutorial)
- Our program will run a *while* loop; every 500 ms the kernel size of our filter will be
  updated in the range indicated.
Results
-------

After compiling the code above, you can execute it giving an image path as an argument. The
result should be a window that shows an image blurred by a normalized filter. Each 0.5 seconds
the kernel size should change, as can be seen in the series of snapshots below:

![](images/filter_2d_tutorial_result.jpg)
Hough Circle Transform {#tutorial_hough_circle}
======================
@prev_tutorial{tutorial_hough_lines}
@next_tutorial{tutorial_remap}
Goal
----

In this tutorial you will learn how to:

- Use the OpenCV function **HoughCircles()** to detect circles in an image.

Theory
------
the best radius for each candidate center. For more details, please check the book *Learning
OpenCV* or your favorite Computer Vision bibliography
#### What does this program do?

- Loads an image and blurs it to reduce the noise
- Applies the *Hough Circle Transform* to the blurred image.
- Displays the detected circle in a window.
Code
----

@add_toggle_cpp
The sample code that we will explain can be downloaded from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp).
A slightly fancier version (which shows trackbars for changing the threshold values) can be found
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/HoughCircle_Demo.cpp).
@include samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp
@end_toggle

@add_toggle_java
The sample code that we will explain can be downloaded from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java).
@include samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java
@end_toggle
@add_toggle_python
The sample code that we will explain can be downloaded from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py).
@include samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py
@end_toggle
Explanation
-----------

The image we used can be found [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/data/smarties.png)

#### Load an image:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp load
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java load
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py load
@end_toggle
#### Convert it to grayscale:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp convert_to_gray
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java convert_to_gray
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py convert_to_gray
@end_toggle
#### Apply a Median blur to reduce noise and avoid false circle detection:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp reduce_noise
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java reduce_noise
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py reduce_noise
@end_toggle
#### Proceed to apply Hough Circle Transform:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp houghcircles
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java houghcircles
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py houghcircles
@end_toggle
- with the arguments:
    - *gray*: Input image (grayscale).
    - *circles*: A vector that stores sets of 3 values: \f$x_{c}, y_{c}, r\f$ for each detected
      circle.
    - *min_radius = 0*: Minimum radius to be detected. If unknown, put zero as default.
    - *max_radius = 0*: Maximum radius to be detected. If unknown, put zero as default.
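The role of these parameters is easier to see in the voting scheme itself. Below is a minimal pure-Python sketch of how edge points vote for candidate circle centers — this is our own illustration, not OpenCV's implementation (the helper name and the fixed radius are hypothetical; the real *HOUGH_GRADIENT* method also uses edge gradients to limit the votes):

```python
import math

def hough_circle_votes(edge_points, radius, width, height):
    """Accumulate center votes for a fixed radius: each edge point votes
    for every candidate center (a, b) lying at `radius` from it."""
    acc = {}
    for (x, y) in edge_points:
        for deg in range(0, 360, 5):
            t = math.radians(deg)
            a = int(round(x - radius * math.cos(t)))
            b = int(round(y - radius * math.sin(t)))
            if 0 <= a < width and 0 <= b < height:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    return acc

# Synthetic edge points on a circle of radius 10 centered at (20, 20):
pts = [(20 + round(10 * math.cos(math.radians(d))),
        20 + round(10 * math.sin(math.radians(d)))) for d in range(0, 360, 10)]
votes = hough_circle_votes(pts, 10, 40, 40)
center = max(votes, key=votes.get)  # the true center collects the most votes
```

The accumulator peak plays roughly the role of the accumulator threshold in **HoughCircles()**: requiring more votes rejects weaker (false) circles.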
#### Draw the detected circles:

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp draw
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java draw
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py draw
@end_toggle
You can see that we will draw the circle(s) in red and the center(s) with a small green dot.
#### Display the detected circle(s) and wait for the user to exit the program:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp display
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughCircle/HoughCircles.java display
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughCircle/hough_circle.py display
@end_toggle
Result
------

The result of running the code above with a test image is shown below:

![](images/Hough_Circle_Tutorial_Result.png)
Hough Line Transform {#tutorial_hough_lines}
====================
@prev_tutorial{tutorial_canny_detector}
@next_tutorial{tutorial_hough_circle}
Goal
----

In this tutorial you will learn how to:
- Use the OpenCV functions **HoughLines()** and **HoughLinesP()** to detect lines in an
  image.
Theory
------

a. **The Standard Hough Transform**

- It consists in pretty much what we just explained in the previous section. It gives you as
  result a vector of couples \f$(\theta, r_{\theta})\f$
- In OpenCV it is implemented with the function **HoughLines()**

b. **The Probabilistic Hough Line Transform**

- A more efficient implementation of the Hough Line Transform. It gives as output the extremes
  of the detected lines \f$(x_{0}, y_{0}, x_{1}, y_{1})\f$
- In OpenCV it is implemented with the function **HoughLinesP()**
### What does this program do?
- Loads an image
- Applies a *Standard Hough Line Transform* and a *Probabilistic Line Transform*.
- Displays the original image and the detected lines in three windows.
Code
----
@add_toggle_cpp
The sample code that we will explain can be downloaded from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/houghlines.cpp).
A slightly fancier version (which shows both Hough standard and probabilistic
with trackbars for changing the threshold values) can be found
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp).
@include samples/cpp/tutorial_code/ImgTrans/houghlines.cpp
@end_toggle
@add_toggle_java
The sample code that we will explain can be downloaded from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java).
@include samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java
@end_toggle
@add_toggle_python
The sample code that we will explain can be downloaded from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py).
@include samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py
@end_toggle
Explanation
-----------
#### Load an image:

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp load
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java load
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py load
@end_toggle

#### Detect the edges of the image by using a Canny detector:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp edge_detection
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java edge_detection
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py edge_detection
@end_toggle
Now we will apply the Hough Line Transform. We will explain how to use both OpenCV functions
available for this purpose.
#### Standard Hough Line Transform:
First, you apply the Transform:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp hough_lines
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java hough_lines
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py hough_lines
@end_toggle
- with the following arguments:
    - *dst*: Output of the edge detector. It should be a grayscale image (although in fact it
      is a binary one)
    - *threshold*: The minimum number of intersections to "*detect*" a line
    - *srn* and *stn*: Default parameters to zero. Check OpenCV reference for more info.
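To make *rho*, *theta* and *threshold* concrete, here is a small pure-Python sketch of the underlying voting in \f$(\theta, r)\f$ space. This is our own illustration, not OpenCV's code; the helper name and the tiny point set are hypothetical:

```python
import math

def hough_lines(points, theta_step_deg=1, threshold=4):
    """Vote in (theta, rho) space: each point (x, y) lies on every line
    rho = x*cos(theta) + y*sin(theta), so it votes for all such cells;
    cells with at least `threshold` votes are reported as lines."""
    acc = {}
    for (x, y) in points:
        for deg in range(0, 180, theta_step_deg):
            t = math.radians(deg)
            rho = round(x * math.cos(t) + y * math.sin(t))
            acc[(deg, rho)] = acc.get((deg, rho), 0) + 1
    return [cell for cell, n in acc.items() if n >= threshold]

# Five collinear points on the horizontal line y = 3: the cell
# (theta = 90 deg, rho = 3) collects one vote from each of them.
lines = hough_lines([(0, 3), (1, 3), (2, 3), (3, 3), (4, 3)], threshold=5)
```

Only cells that collect at least `threshold` votes become detected lines, which is why a higher threshold yields fewer lines.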
And then you display the result by drawing the lines.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp draw_lines
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java draw_lines
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py draw_lines
@end_toggle
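The drawing step only has to turn each \f$(r, \theta)\f$ pair back into two drawable points. A hypothetical pure-Python helper (our own sketch, mirroring the arithmetic used in the samples) looks like this:

```python
import math

def line_endpoints(rho, theta, length=1000):
    """Turn a (rho, theta) pair into two far-apart points on the line for
    drawing: (x0, y0) is the point closest to the origin, and we step
    +/- `length` along the line direction (-sin(theta), cos(theta))."""
    a, b = math.cos(theta), math.sin(theta)
    x0, y0 = a * rho, b * rho
    pt1 = (round(x0 + length * (-b)), round(y0 + length * a))
    pt2 = (round(x0 - length * (-b)), round(y0 - length * a))
    return pt1, pt2

# The horizontal line y = 3 corresponds to (rho = 3, theta = pi/2):
pt1, pt2 = line_endpoints(3, math.pi / 2)
```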
#### Probabilistic Hough Line Transform

First you apply the transform:

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp hough_lines_p
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java hough_lines_p
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py hough_lines_p
@end_toggle
- with the arguments:
    - *dst*: Output of the edge detector. It should be a grayscale image (although in fact it
      is a binary one)
    - *minLineLength*: The minimum number of points that can form a line. Lines with less than
      this number of points are disregarded.
    - *maxLineGap*: The maximum gap between two points to be considered in the same line.
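A one-dimensional sketch (our own simplification, not OpenCV's implementation) shows how *minLineLength* and *maxLineGap* act on collinear edge pixels:

```python
def segments_1d(xs, min_line_length, max_line_gap):
    """Group sorted collinear pixel coordinates into segments: a gap wider
    than max_line_gap splits a segment, and segments shorter than
    min_line_length are discarded."""
    segments, start, prev = [], None, None
    for x in sorted(xs):
        if start is None:
            start = prev = x
        elif x - prev <= max_line_gap:
            prev = x
        else:
            if prev - start >= min_line_length:
                segments.append((start, prev))
            start = prev = x
    if start is not None and prev - start >= min_line_length:
        segments.append((start, prev))
    return segments

# Pixels 0..9 and 20..24 on one line: the jump of 11 splits the run,
# and the short right-hand piece is dropped by min_line_length = 8.
segs = segments_1d(list(range(10)) + list(range(20, 25)),
                   min_line_length=8, max_line_gap=3)
```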
And then you display the result by drawing the lines.

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp draw_lines_p
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java draw_lines_p
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py draw_lines_p
@end_toggle
#### Display the original image and the detected lines:

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp imshow
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java imshow
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py imshow
@end_toggle
#### Wait until the user exits the program
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgTrans/houghlines.cpp exit
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/HoughLine/HoughLines.java exit
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/HoughLine/hough_lines.py exit
@end_toggle
Result
------

@note The results below are obtained using the slightly fancier version we mentioned in the *Code*
section. It still implements the same stuff as above, only adding the Trackbar for the
Threshold.

Using an input image such as a [sudoku image](https://raw.githubusercontent.com/opencv/opencv/master/samples/data/sudoku.png).

We get the following result by using the Standard Hough Line Transform:

![](images/hough_lines_result1.png)

And by using the Probabilistic Hough Line Transform:

![](images/hough_lines_result2.png)
You may observe that the number of lines detected varies while you change the *threshold*. The
explanation is sort of evident: If you establish a higher threshold, fewer lines will be detected
Laplace Operator {#tutorial_laplace_operator}
================
@prev_tutorial{tutorial_sobel_derivatives}
@next_tutorial{tutorial_canny_detector}
Goal
----

In this tutorial you will learn how to:
- Use the OpenCV function **Laplacian()** to implement a discrete analog of the *Laplacian
  operator*.

Theory
------
\f[Laplace(f) = \dfrac{\partial^{2} f}{\partial x^{2}} + \dfrac{\partial^{2} f}{\partial y^{2}}\f]

-# The Laplacian operator is implemented in OpenCV by the function **Laplacian()**. In fact,
   since the Laplacian uses the gradient of images, it calls internally the *Sobel* operator to
   perform its computation.
Code
----

- Applies a Laplacian operator to the grayscale image and stores the output image
- Displays the result in a window
@add_toggle_cpp
-# The tutorial code is shown below. You can also download it from
   [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp)
@include samples/cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp
@end_toggle
@add_toggle_java
-# The tutorial code is shown below. You can also download it from
   [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java)
@include samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java
@end_toggle
@add_toggle_python
-# The tutorial code is shown below. You can also download it from
   [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py)
@include samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py
@end_toggle
Explanation
-----------
#### Declare variables

@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp variables
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java variables
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py variables
@end_toggle
#### Load source image
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp load
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java load
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py load
@end_toggle
#### Reduce noise
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp reduce_noise
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java reduce_noise
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py reduce_noise
@end_toggle
#### Grayscale
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp convert_to_gray
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java convert_to_gray
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py convert_to_gray
@end_toggle
#### Laplacian operator
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp laplacian
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java laplacian
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py laplacian
@end_toggle
- The arguments are:
    - *src_gray*: The input image.
    - *dst*: Destination (output) image
    - *ddepth*: Depth of the destination image. Since our input is *CV_8U* we define *ddepth* =
      *CV_16S* to avoid overflow
    - *kernel_size*: The kernel size of the Sobel operator applied internally. We use 3 in
      this example.
    - *scale*, *delta* and *BORDER_DEFAULT*: We leave them as default values.
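The discrete Laplacian itself is just a small convolution. A pure-Python sketch (our own, ignoring borders and the Sobel-based path OpenCV uses internally) with the classic \f$3 \times 3\f$ kernel:

```python
def laplacian(img):
    """Apply the 3x3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]] to a 2-D
    list of intensities (interior pixels only, no border handling)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] +
                         img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    return out

# A flat region gives zero response; an isolated bright pixel does not,
# which is exactly why the Laplacian highlights edges and spots.
flat = [[5] * 5 for _ in range(5)]
spike = [row[:] for row in flat]
spike[2][2] = 25
resp = laplacian(spike)
```

Note the strong negative response at the spike itself and positive responses around it — the sign change is the zero-crossing behaviour discussed in the theory.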
#### Convert output to a *CV_8U* image

@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp convert
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java convert
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py convert
@end_toggle
#### Display the result
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgTrans/Laplace_Demo.cpp display
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgTrans/LaPlace/LaplaceDemo.java display
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/ImgTrans/LaPlace/laplace_demo.py display
@end_toggle
Results
-------
Sobel Derivatives {#tutorial_sobel_derivatives}
=================
@prev_tutorial{tutorial_copyMakeBorder}
@next_tutorial{tutorial_laplace_operator}
Goal
----

In this tutorial you will learn how to:

- Use the OpenCV function **Sobel()** to calculate the derivatives from an image.
- Use the OpenCV function **Scharr()** to calculate a more accurate derivative for a kernel of
  size \f$3 \cdot 3\f$

Theory
------
@note
When the size of the kernel is `3`, the Sobel kernel shown above may produce noticeable
inaccuracies (after all, Sobel is only an approximation of the derivative). OpenCV addresses
this inaccuracy for kernels of size 3 by using the **Scharr()** function. This is as fast
but more accurate than the standard Sobel function. It implements the following kernels:
\f[G_{x} = \begin{bmatrix}
-3 & 0 & +3 \\
-10 & 0 & +10 \\
-3 & 0 & +3
\end{bmatrix}\f]

\f[G_{y} = \begin{bmatrix}
-3 & -10 & -3 \\
0 & 0 & 0 \\
+3 & +10 & +3
\end{bmatrix}\f]
@note
You can check out more information on this function in the OpenCV reference - **Scharr()** .
Also, in the sample code below, you will notice that above the code for the **Sobel()** function
there is also commented-out code for the **Scharr()** function. Uncommenting it (and obviously
commenting out the Sobel code) should give you an idea of how this function works.
Code
----

- Applies the *Sobel Operator* and generates as output an image with the detected *edges*
  bright on a darker background.
-# The tutorial code is shown below.

@add_toggle_cpp
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp)
@include samples/cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp
@end_toggle
@add_toggle_java
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgTrans/SobelDemo/SobelDemo.java)
@include samples/java/tutorial_code/ImgTrans/SobelDemo/SobelDemo.java
@end_toggle
@add_toggle_python
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/ImgTrans/SobelDemo/sobel_demo.py)
@include samples/python/tutorial_code/ImgTrans/SobelDemo/sobel_demo.py
@end_toggle
Explanation
-----------
#### Declare variables

@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp variables

#### Load source image

@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp load

#### Reduce noise

@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp reduce_noise
#### Grayscale
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp convert_to_gray
#### Sobel Operator
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp sobel
- We calculate the "derivatives" in *x* and *y* directions. For this, we use the
  function **Sobel()** as shown above.
  The function takes the following arguments:

    - *src_gray*: In our example, the input image. Here it is *CV_8U*
    - *grad_x* / *grad_y*: The output image.
    - *ddepth*: The depth of the output image. We set it to *CV_16S* to avoid overflow.
    - *x_order*: The order of the derivative in **x** direction.
    - *y_order*: The order of the derivative in **y** direction.

  Notice that to calculate the gradient in *x* direction we use: \f$x_{order}= 1\f$ and
  \f$y_{order} = 0\f$. We do analogously for the *y* direction.
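The same idea in a self-contained pure-Python sketch (our own, interior pixels only), including the \f$|G| \approx |G_x| + |G_y|\f$ approximation used later in the tutorial:

```python
def convolve3x3(img, k):
    """Correlate a 3x3 kernel with a 2-D list of intensities
    (interior pixels only, no border handling)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # x_order = 1, y_order = 0
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # x_order = 0, y_order = 1

# A vertical step edge: strong x-gradient, zero y-gradient.
img = [[0, 0, 10, 10]] * 4
gx = convolve3x3(img, SOBEL_X)
gy = convolve3x3(img, SOBEL_Y)
grad = [[abs(a) + abs(b) for a, b in zip(ra, rb)] for ra, rb in zip(gx, gy)]
```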
#### Convert output to a CV_8U image

@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp convert

#### Gradient

@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp blend
We try to approximate the *gradient* by adding both directional gradients (note that
this is not an exact calculation at all, but it is good for our purposes).
#### Show results
@snippet cpp/tutorial_code/ImgTrans/Sobel_Demo.cpp display
Results
-------
Extract horizontal and vertical lines by using morphological operations {#tutorial_morph_lines_detection}
=============
@prev_tutorial{tutorial_hitOrMiss}
@next_tutorial{tutorial_pyramids}
Goal
----

In this tutorial you will learn how to:

- Apply two very common morphology operators (i.e. Dilation and Erosion), with the creation of custom kernels, in order to extract straight lines on the horizontal and vertical axes. For this purpose, you will use the following OpenCV functions:
- @ref cv::erode - **erode()**
- @ref cv::dilate - **dilate()**
- @ref cv::getStructuringElement - **getStructuringElement()**
  in an example where your goal will be to extract the music notes from a music sheet.
Code
----

This tutorial's code is shown below.
@add_toggle_cpp
You can also download it from [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp).
@include samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp
@end_toggle
@add_toggle_java
You can also download it from [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java).
@include samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java
@end_toggle
@add_toggle_python
You can also download it from [here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py).
@include samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py
@end_toggle
Explanation / Result
--------------------

Get the image from [here](https://raw.githubusercontent.com/opencv/opencv/master/doc/tutorials/imgproc/morph_lines_detection/images/src.png).

#### Load Image
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp load_image
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java load_image
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py load_image
@end_toggle
![](images/src.png)
#### Grayscale
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp gray
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java gray
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py gray
@end_toggle
![](images/gray.png)
#### Grayscale to Binary image
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp bin
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java bin
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py bin
@end_toggle
![](images/binary.png)
#### Output images
Now we are ready to apply morphological operations in order to extract the horizontal and vertical lines and as a consequence to separate the music notes from the music sheet, but first let's initialize the output images that we will use for that reason:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp init
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java init
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py init
@end_toggle
#### Structure elements
As specified in the theory, in order to extract the object we desire, we need to create the corresponding structure element. Since we want to extract the horizontal lines, a structure element for that purpose will have the following shape:
![](images/linear_horiz.png)
and in the source code this is represented by the following code snippet:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp horiz
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java horiz
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py horiz
@end_toggle
![](images/horiz.png)
The same applies for the vertical lines, with the corresponding structure element:
![](images/linear_vert.png)
and again this is represented as follows:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp vert
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java vert
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py vert
@end_toggle
![](images/vert.png)
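The erode-then-dilate pair applied above is a morphological *opening*: only shapes that can contain the structuring element survive. A hedged pure-Python sketch with a flat 1 x k horizontal element (names are ours; borders are zero-padded for simplicity, whereas OpenCV replicates the border by default):

```python
# Opening with a horizontal 1 x k structuring element: long horizontal
# runs survive, isolated pixels and vertical strokes are erased.
def erode_h(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    def at(y, x):
        return img[y][x] if 0 <= x < w else 0   # zero padding (simplified)
    return [[min(at(y, x + d) for d in range(-r, r + 1)) for x in range(w)]
            for y in range(h)]

def dilate_h(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    def at(y, x):
        return img[y][x] if 0 <= x < w else 0
    return [[max(at(y, x + d) for d in range(-r, r + 1)) for x in range(w)]
            for y in range(h)]

grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1],   # horizontal "staff line": survives the opening
        [0, 0, 1, 0, 0]]   # isolated "note" pixel: erased
opened = dilate_h(erode_h(grid))
print(opened)
```

Swapping the element to a 1-column, k-row shape yields the vertical-line extraction used for the notes.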
#### Refine edges / Result

As you can see we are almost there. However, at this point you will notice that the edges of the notes are a bit rough. For that reason we need to refine the edges in order to obtain a smoother result:

@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.cpp smooth
@end_toggle

@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/morph_lines_detection/Morphology_3.java smooth
@end_toggle

@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/morph_lines_detection/morph_lines_detection.py smooth
@end_toggle

![](images/smooth.png)
Image Pyramids {#tutorial_pyramids}
==============

@prev_tutorial{tutorial_morph_lines_detection}
@next_tutorial{tutorial_threshold}

Goal
----

In this tutorial you will learn how to:

- Use the OpenCV functions **pyrUp()** and **pyrDown()** to downsample or upsample a given
  image.

Theory
------
    -# *Upsize* the image (zoom in) or
    -# *Downsize* it (zoom out).
- Although there is a *geometric transformation* function in OpenCV that -literally- resizes an
  image (**resize()**, which we will show in a future tutorial), in this section we first analyze
  the use of **Image Pyramids**, which are widely applied in a huge range of vision
  applications.
  predecessor. Iterating this process on the input image \f$G_{0}\f$ (original image) produces the
  entire pyramid.
- The procedure above was useful to downsample an image. What if we want to make it bigger?:
    - First, upsize the image to twice the original in each dimension, with the new even rows and
      columns filled with zeros (\f$0\f$)
    - Perform a convolution with the same kernel shown above (multiplied by 4) to approximate the
      values of the "missing pixels"
- These two procedures (downsampling and upsampling as explained above) are implemented by the
  OpenCV functions **pyrUp()** and **pyrDown()**, as we will see in an example with the
  code below:

@note When we reduce the size of an image, we are actually *losing* information of the image.
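The downsampling procedure above can be sketched in one dimension: smooth with the 5-tap kernel, then drop every other sample. This is a pure-Python illustration under our own naming (`pyr_down_1d` is not an OpenCV function), with border replication:

```python
# 1-D sketch of one pyrDown step: convolve with the Gaussian-like kernel
# (1/16)[1, 4, 6, 4, 1], then reject the odd-indexed samples.
KERNEL = [1, 4, 6, 4, 1]

def pyr_down_1d(signal):
    n = len(signal)
    def at(i):                         # replicate the border samples
        return signal[min(max(i, 0), n - 1)]
    smoothed = [sum(KERNEL[j] * at(i + j - 2) for j in range(5)) / 16.0
                for i in range(n)]
    return smoothed[::2]               # keep every second sample

print(pyr_down_1d([10, 10, 10, 10, 10, 10, 10, 10]))  # [10.0, 10.0, 10.0, 10.0]
```

The kernel sums to 16, so a constant signal passes through unchanged; only the high-frequency detail discarded by the decimation is lost.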
Code
----

This tutorial's code is shown below.
@add_toggle_cpp
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp)
@include samples/cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp
@end_toggle
@add_toggle_java
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/java/tutorial_code/ImgProc/Pyramids/Pyramids.java)
@include samples/java/tutorial_code/ImgProc/Pyramids/Pyramids.java
@end_toggle
@add_toggle_python
You can also download it from
[here](https://raw.githubusercontent.com/opencv/opencv/master/samples/python/tutorial_code/imgProc/Pyramids/pyramids.py)
@include samples/python/tutorial_code/imgProc/Pyramids/pyramids.py
@end_toggle
Explanation
-----------

Let's check the general structure of the program:

#### Load an image

@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp load
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgProc/Pyramids/Pyramids.java load
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/imgProc/Pyramids/pyramids.py load
@end_toggle
#### Create window
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp show_image
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgProc/Pyramids/Pyramids.java show_image
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/imgProc/Pyramids/pyramids.py show_image
@end_toggle
#### Loop

@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp loop
@end_toggle

@add_toggle_java
@snippet java/tutorial_code/ImgProc/Pyramids/Pyramids.java loop
@end_toggle

@add_toggle_python
@snippet python/tutorial_code/imgProc/Pyramids/pyramids.py loop
@end_toggle

The program performs an infinite loop waiting for user input.
It exits if the user presses **ESC**. Besides, it has two options:

- **Perform upsampling - Zoom 'i'n (after pressing 'i')**

  We use the function **pyrUp()** with three arguments:

  - *src*: The current and destination image (to be shown on screen, supposedly double the size of the
    input image)
  - *Size( src.cols\*2, src.rows\*2 )* : The destination size. Since we are upsampling,
    **pyrUp()** expects a size double that of the input image (in this case *src*).
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp pyrup
@end_toggle

@add_toggle_java
@snippet java/tutorial_code/ImgProc/Pyramids/Pyramids.java pyrup
@end_toggle

@add_toggle_python
@snippet python/tutorial_code/imgProc/Pyramids/pyramids.py pyrup
@end_toggle

- **Perform downsampling - Zoom 'o'ut (after pressing 'o')**

  We use the function **pyrDown()** with three arguments (similarly to **pyrUp()**):

  - *src*: The current and destination image (to be shown on screen, supposedly half the input
    image)
  - *Size( src.cols/2, src.rows/2 )* : The destination size. Since we are downsampling,
    **pyrDown()** expects half the size of the input image (in this case *src*).
@add_toggle_cpp
@snippet cpp/tutorial_code/ImgProc/Pyramids/Pyramids.cpp pyrdown
@end_toggle
@add_toggle_java
@snippet java/tutorial_code/ImgProc/Pyramids/Pyramids.java pyrdown
@end_toggle
@add_toggle_python
@snippet python/tutorial_code/imgProc/Pyramids/pyramids.py pyrdown
@end_toggle
Notice that it is important that the input image can be divided by a factor of two (in both dimensions).
Otherwise, an error will be shown.
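The divisibility requirement can be checked with a few lines of arithmetic; this small helper (our own, for illustration) counts how many times a dimension can be halved exactly:

```python
# How many successive pyrDown steps keep an integer image dimension?
def exact_halvings(n):
    count = 0
    while n > 1 and n % 2 == 0:
        n //= 2
        count += 1
    return count

print(exact_halvings(512))   # 9, since 512 = 2**9
print(exact_halvings(500))   # 2, since 500 = 4 * 125
```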
Results
-------

- The program calls by default an image [chicky_512.png](https://raw.githubusercontent.com/opencv/opencv/master/samples/data/chicky_512.png)
  that comes in the `samples/data` folder. Notice that this image is \f$512 \times 512\f$,
  hence a downsample won't generate any error (\f$512 = 2^{9}\f$). The original image is shown below:

  ![](images/Pyramids_Tutorial_Original_Image.jpg)

- First we apply two successive **pyrDown()** operations by pressing 'o'. Our output is:

  ![](images/Pyramids_Tutorial_PyrDown_Result.jpg)

- Note that we lose some resolution due to the fact that we are diminishing the size
  of the image. This becomes evident after we apply **pyrUp()** twice (by pressing 'i'). Our output
  is now:

  ![](images/Pyramids_Tutorial_PyrUp_Result.jpg)
- @subpage tutorial_gausian_median_blur_bilateral_filter

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

    Here we investigate different morphology operators

- @subpage tutorial_hitOrMiss

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.4

    Learn how to find patterns in binary images using the Hit-or-Miss operation

- @subpage tutorial_morph_lines_detection

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

- @subpage tutorial_pyramids

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

- @subpage tutorial_filter_2d

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

- @subpage tutorial_copyMakeBorder

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

- @subpage tutorial_sobel_derivatives

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

- @subpage tutorial_laplace_operator

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

- @subpage tutorial_hough_lines

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán

- @subpage tutorial_hough_circle

    *Languages:* C++, Java, Python

    *Compatibility:* \> OpenCV 2.0

    *Author:* Ana Huamán
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
static void help()
{
cout << "\nThis program demonstrates line finding with the Hough transform.\n"
"Usage:\n"
"./houghlines <image_name>, Default is ../data/pic1.png\n" << endl;
}
int main(int argc, char** argv)
{
cv::CommandLineParser parser(argc, argv,
"{help h||}{@image|../data/pic1.png|}"
);
if (parser.has("help"))
{
help();
return 0;
}
string filename = parser.get<string>("@image");
if (filename.empty())
{
help();
cout << "no image_name provided" << endl;
return -1;
}
Mat src = imread(filename, 0);
if(src.empty())
{
help();
cout << "can not open " << filename << endl;
return -1;
}
Mat dst, cdst;
Canny(src, dst, 50, 200, 3);
cvtColor(dst, cdst, COLOR_GRAY2BGR);
#if 0
vector<Vec2f> lines;
HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
        line( cdst, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
}
#else
vector<Vec4i> lines;
HoughLinesP(dst, lines, 1, CV_PI/180, 50, 50, 10 );
for( size_t i = 0; i < lines.size(); i++ )
{
Vec4i l = lines[i];
line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, LINE_AA);
}
#endif
imshow("source", src);
imshow("detected lines", cdst);
waitKey();
return 0;
}
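The standard-Hough branch above turns each detected `(rho, theta)` pair into two drawable endpoints. That arithmetic can be checked in isolation; the following pure-Python sketch mirrors it (the function name is ours, for illustration):

```python
# A line in Hough space satisfies x*cos(theta) + y*sin(theta) = rho.
# (x0, y0) is the point of the line closest to the origin; we then walk
# +/- `length` pixels along the line direction (-sin(theta), cos(theta)).
import math

def hough_line_endpoints(rho, theta, length=1000):
    a, b = math.cos(theta), math.sin(theta)
    x0, y0 = a * rho, b * rho
    pt1 = (round(x0 + length * (-b)), round(y0 + length * a))
    pt2 = (round(x0 - length * (-b)), round(y0 - length * a))
    return pt1, pt2

# theta = 0 is a vertical line x = rho:
print(hough_line_endpoints(50, 0.0))   # ((50, 1000), (50, -1000))
```

The probabilistic variant (`HoughLinesP`, used in the `#else` branch) skips this step entirely by returning segment endpoints directly.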
    Mat output_image;
    morphologyEx(input_image, output_image, MORPH_HITMISS, kernel);

    const int rate = 50;
    kernel = (kernel + 1) * 127;
    kernel.convertTo(kernel, CV_8U);

    resize(kernel, kernel, Size(), rate, rate, INTER_NEAREST);
    imshow("kernel", kernel);
    moveWindow("kernel", 0, 0);

    resize(input_image, input_image, Size(), rate, rate, INTER_NEAREST);
    imshow("Original", input_image);
    moveWindow("Original", 0, 200);

    resize(output_image, output_image, Size(), rate, rate, INTER_NEAREST);
    imshow("Hit or Miss", output_image);
    moveWindow("Hit or Miss", 500, 200);

    waitKey(0);
    return 0;
}
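The `MORPH_HITMISS` operation used above fires only where the kernel's pattern matches exactly: entries of 1 must land on foreground, -1 on background, and 0 are "don't care". A hedged pure-Python sketch of that rule (our own naming; interior pixels only, for brevity):

```python
# Toy hit-or-miss transform: a pixel is set in the output only if every
# kernel constraint (1 = must be foreground, -1 = must be background,
# 0 = don't care) is satisfied at that position.
def hit_or_miss(img, kernel):
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ry, rx = kh // 2, kw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(ry, h - ry):
        for x in range(rx, w - rx):
            ok = True
            for ky in range(kh):
                for kx in range(kw):
                    k = kernel[ky][kx]
                    v = img[y + ky - ry][x + kx - rx]
                    if (k == 1 and v == 0) or (k == -1 and v == 1):
                        ok = False
            out[y][x] = 1 if ok else 0
    return out

# Find isolated pixels: a foreground centre surrounded by background.
kernel = [[-1, -1, -1],
          [-1,  1, -1],
          [-1, -1, -1]]
img = [[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
print(hit_or_miss(img, kernel))
```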
/**
* @file Pyramids.cpp
* @brief Sample code of image pyramids (pyrDown and pyrUp)
* @author OpenCV team
*/
#include <iostream>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
using namespace std;
using namespace cv;
const char* window_name = "Pyramids Demo";
/**
* @function main
*/
int main( int argc, char** argv )
{
/// General instructions
cout << "\n Zoom In-Out demo \n "
"------------------ \n"
" * [i] -> Zoom in \n"
" * [o] -> Zoom out \n"
" * [ESC] -> Close program \n" << endl;
//![load]
const char* filename = argc >=2 ? argv[1] : "../data/chicky_512.png";
// Loads an image
Mat src = imread( filename );
// Check if image is loaded fine
if(src.empty()){
printf(" Error opening image\n");
printf(" Program Arguments: [image_name -- default ../data/chicky_512.png] \n");
return -1;
}
//![load]
//![loop]
for(;;)
{
//![show_image]
imshow( window_name, src );
//![show_image]
char c = (char)waitKey(0);
if( c == 27 )
{ break; }
//![pyrup]
else if( c == 'i' )
{ pyrUp( src, src, Size( src.cols*2, src.rows*2 ) );
printf( "** Zoom In: Image x 2 \n" );
}
//![pyrup]
//![pyrdown]
else if( c == 'o' )
{ pyrDown( src, src, Size( src.cols/2, src.rows/2 ) );
printf( "** Zoom Out: Image / 2 \n" );
}
//![pyrdown]
}
//![loop]
return 0;
}
 * author OpenCV team
 */

#include <iostream>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
/**
 * function main
 */
int main( int argc, char ** argv )
{
    namedWindow( window_name, WINDOW_AUTOSIZE );

    /// Load the source image
    const char* filename = argc >= 2 ? argv[1] : "../data/lena.jpg";

    src = imread( filename, IMREAD_COLOR );
    if(src.empty()){
        printf(" Error opening image\n");
        printf(" Usage: ./Smoothing [image_name -- default ../data/lena.jpg] \n");
        return -1;
    }

    if( display_caption( "Original Image" ) != 0 ) { return 0; }

    dst = src.clone();
    if( display_dst( DELAY_CAPTION ) != 0 ) { return 0; }

    /// Applying Homogeneous blur
    if( display_caption( "Homogeneous Blur" ) != 0 ) { return 0; }

    //![blur]
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    { blur( src, dst, Size( i, i ), Point(-1,-1) );
      if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
    //![blur]

    /// Applying Gaussian blur
    if( display_caption( "Gaussian Blur" ) != 0 ) { return 0; }

    //![gaussianblur]
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    { GaussianBlur( src, dst, Size( i, i ), 0, 0 );
      if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
    //![gaussianblur]

    /// Applying Median blur
    if( display_caption( "Median Blur" ) != 0 ) { return 0; }

    //![medianblur]
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    { medianBlur ( src, dst, i );
      if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
    //![medianblur]

    /// Applying Bilateral Filter
    if( display_caption( "Bilateral Blur" ) != 0 ) { return 0; }

    //![bilateralfilter]
    for ( int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2 )
    { bilateralFilter ( src, dst, i, i*2, i/2 );
      if( display_dst( DELAY_BLUR ) != 0 ) { return 0; } }
    //![bilateralfilter]

    /// Done
    display_caption( "Done!" );

    return 0;
}

/**
 * @function display_caption
 */
int display_caption( const char* caption )
{
    dst = Mat::zeros( src.size(), src.type() );
    putText( dst, caption,
             Point( src.cols/4, src.rows/2),
             FONT_HERSHEY_COMPLEX, 1, Scalar(255, 255, 255) );

    return display_dst(DELAY_CAPTION);
}

/**
 * @function display_dst
 */
int display_dst( int delay )
{
    imshow( window_name, dst );
    int c = waitKey ( delay );
    if( c >= 0 ) { return -1; }
    return 0;
}
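The simplest of the four filters applied above, the normalized box filter of `blur()`, replaces each pixel by the plain average of its k x k neighborhood. A minimal pure-Python sketch of that idea (our own naming; border handled by clamping, which mimics replication), not the OpenCV implementation:

```python
# Toy normalized box filter: every output pixel is the mean of a
# k x k window around it.
def box_blur(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    def at(y, x):                       # clamp = replicate the border
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    return [[sum(at(y + dy, x + dx) for dy in range(-r, r + 1)
                                    for dx in range(-r, r + 1)) / (k * k)
             for x in range(w)] for y in range(h)]

flat = [[9.0] * 4 for _ in range(4)]
print(box_blur(flat)[0][0])   # 9.0 -- a flat region is unchanged
```

The Gaussian, median, and bilateral filters differ only in how the window values are combined: weighted by distance, by rank, or by both distance and intensity similarity.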
 * @author OpenCV team
 */

#include <iostream>
#include <opencv2/opencv.hpp>

void show_wait_destroy(const char* winname, cv::Mat img);

using namespace std;
using namespace cv;
int main(int, char** argv) int main(int, char** argv)
{ {
//! [load_image] //! [load_image]
// Load the image // Load the image
Mat src = imread(argv[1]); Mat src = imread(argv[1]);
// Check if image is loaded fine // Check if image is loaded fine
if(!src.data) if(src.empty()){
cerr << "Problem loading image!!!" << endl; printf(" Error opening image\n");
printf(" Program Arguments: [image_path]\n");
return -1;
}
// Show source image // Show source image
imshow("src", src); imshow("src", src);
//! [load_image] //! [load_image]
//! [gray] //! [gray]
// Transform source image to gray if it is not // Transform source image to gray if it is not already
Mat gray; Mat gray;
if (src.channels() == 3) if (src.channels() == 3)
...@@ -38,58 +42,58 @@ int main(int, char** argv) ...@@ -38,58 +42,58 @@ int main(int, char** argv)
} }
// Show gray image // Show gray image
imshow("gray", gray); show_wait_destroy("gray", gray);
//! [gray] //! [gray]
//! [bin] //! [bin]
// Apply adaptiveThreshold at the bitwise_not of gray, notice the ~ symbol // Apply adaptiveThreshold at the bitwise_not of gray, notice the ~ symbol
Mat bw; Mat bw;
adaptiveThreshold(~gray, bw, 255, CV_ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 15, -2); adaptiveThreshold(~gray, bw, 255, CV_ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 15, -2);
// Show binary image // Show binary image
imshow("binary", bw); show_wait_destroy("binary", bw);
//! [bin] //! [bin]
//! [init] //! [init]
// Create the images that will use to extract the horizontal and vertical lines // Create the images that will use to extract the horizontal and vertical lines
Mat horizontal = bw.clone(); Mat horizontal = bw.clone();
Mat vertical = bw.clone(); Mat vertical = bw.clone();
//! [init] //! [init]
//! [horiz]
// Specify size on horizontal axis
int horizontal_size = horizontal.cols / 30;

// Create structure element for extracting horizontal lines through morphology operations
Mat horizontalStructure = getStructuringElement(MORPH_RECT, Size(horizontal_size, 1));

// Apply morphology operations
erode(horizontal, horizontal, horizontalStructure, Point(-1, -1));
dilate(horizontal, horizontal, horizontalStructure, Point(-1, -1));

// Show extracted horizontal lines
show_wait_destroy("horizontal", horizontal);
//! [horiz]

//! [vert]
// Specify size on vertical axis
int vertical_size = vertical.rows / 30;

// Create structure element for extracting vertical lines through morphology operations
Mat verticalStructure = getStructuringElement(MORPH_RECT, Size(1, vertical_size));

// Apply morphology operations
erode(vertical, vertical, verticalStructure, Point(-1, -1));
dilate(vertical, vertical, verticalStructure, Point(-1, -1));

// Show extracted vertical lines
show_wait_destroy("vertical", vertical);
//! [vert]

//! [smooth]
// Inverse vertical image
bitwise_not(vertical, vertical);
show_wait_destroy("vertical_bit", vertical);

// Extract edges and smooth image according to the logic
// 1. extract edges
@@ -101,12 +105,12 @@ int main(int, char** argv)
// Step 1
Mat edges;
adaptiveThreshold(vertical, edges, 255, CV_ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 3, -2);
show_wait_destroy("edges", edges);

// Step 2
Mat kernel = Mat::ones(2, 2, CV_8UC1);
dilate(edges, edges, kernel);
show_wait_destroy("dilate", edges);

// Step 3
Mat smooth;
@@ -119,9 +123,15 @@ int main(int, char** argv)
smooth.copyTo(vertical, edges);

// Show final result
show_wait_destroy("smooth - final", vertical);
//! [smooth]

return 0;
}

void show_wait_destroy(const char* winname, cv::Mat img) {
    imshow(winname, img);
    moveWindow(winname, 500, 0);
    waitKey(0);
    destroyWindow(winname);
}
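The structuring-element trick in the `horiz`/`vert` blocks removes any stroke shorter than the element. As a minimal sketch of the 1-D analogue, the opening (erode then dilate) of a binary row with a 1 x k element can be written without OpenCV; the helper name `openRow` is mine, for illustration only:

```cpp
#include <cassert>
#include <vector>

// Opening (erode then dilate) of a binary row with a 1 x k structuring element.
// Runs of foreground pixels shorter than k are removed entirely, while longer
// runs survive; this is why a Size(horizontal_size, 1) element keeps only
// long horizontal strokes such as table or staff lines.
std::vector<int> openRow(const std::vector<int>& row, int k)
{
    int n = (int)row.size();
    std::vector<int> eroded(n, 0), out(n, 0);
    // Erosion: mark a window start only if the whole k-window is foreground
    // (anchor at the left edge of the window, for simplicity).
    for (int i = 0; i + k <= n; i++) {
        bool all = true;
        for (int j = 0; j < k; j++) all = all && (row[i + j] == 1);
        if (all) eroded[i] = 1;
    }
    // Dilation: grow every surviving window start back to full width k.
    for (int i = 0; i < n; i++)
        if (eroded[i] == 1)
            for (int j = 0; j < k; j++) out[i + j] = 1;
    return out;
}
```

With k = 3, a run of length 2 disappears and a run of length 4 is kept intact, mirroring what the 2-D sample does to short vs. long line segments.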
@@ -15,50 +15,53 @@ using namespace cv;
*/
int main( int argc, char** argv )
{
//![variables]
// Declare the variables we are going to use
Mat src, src_gray, dst;
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
const char* window_name = "Laplace Demo";
//![variables]

//![load]
const char* imageName = argc >= 2 ? argv[1] : "../data/lena.jpg";

src = imread( imageName, IMREAD_COLOR ); // Load an image

// Check if image is loaded fine
if(src.empty()){
printf(" Error opening image\n");
printf(" Program Arguments: [image_name -- default ../data/lena.jpg] \n");
return -1;
}
//![load]

//![reduce_noise]
// Reduce noise by blurring with a Gaussian filter ( kernel size = 3 )
GaussianBlur( src, src, Size(3, 3), 0, 0, BORDER_DEFAULT );
//![reduce_noise]

//![convert_to_gray]
cvtColor( src, src_gray, COLOR_BGR2GRAY ); // Convert the image to grayscale
//![convert_to_gray]

/// Apply Laplace function
Mat abs_dst;
//![laplacian]
Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );
//![laplacian]

//![convert]
// converting back to CV_8U
convertScaleAbs( dst, abs_dst );
//![convert]

//![display]
imshow( window_name, abs_dst );
waitKey(0);
//![display]

return 0;
}
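The Laplacian is computed in CV_16S precisely so that negative responses survive until the `convert` step. The per-element effect of `convertScaleAbs` (scale, absolute value, saturate to 8 bits) can be sketched as a plain function; this is an illustrative re-implementation, not the library code:

```cpp
#include <cassert>
#include <cmath>

// Illustrative per-element model of convertScaleAbs: apply scale and delta,
// take the absolute value, then saturate into the 8-bit range [0, 255].
unsigned char scaleAbsSaturate(int v, double scale = 1.0, double delta = 0.0)
{
    double r = std::fabs(scale * v + delta);
    if (r > 255.0) return 255;
    return (unsigned char)(r + 0.5); // round to nearest
}
```

A negative Laplacian response of -7 maps to 7, and anything beyond the 8-bit range clips to 255.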
@@ -30,6 +30,7 @@ int main( int argc, char** argv )
cout << "\nPress 'ESC' to exit program.\nPress 'R' to reset values ( ksize will be -1 equal to Scharr function )";

//![variables]
// First we declare the variables we are going to use
Mat image, src, src_gray;
Mat grad;
const String window_name = "Sobel Demo - Simple Edge Detector";
@@ -40,11 +41,14 @@ int main( int argc, char** argv )
//![variables]

//![load]
String imageName = parser.get<String>("@input");
// As usual we load our source image (src)
image = imread( imageName, IMREAD_COLOR ); // Load an image

// Check if image is loaded fine
if( image.empty() )
{
printf("Error opening image: %s\n", imageName.c_str());
return 1;
}
//![load]
@@ -52,10 +56,12 @@ int main( int argc, char** argv )
for (;;)
{
//![reduce_noise]
// Remove noise by blurring with a Gaussian filter ( kernel size = 3 )
GaussianBlur(image, src, Size(3, 3), 0, 0, BORDER_DEFAULT);
//![reduce_noise]

//![convert_to_gray]
// Convert the image to grayscale
cvtColor(src, src_gray, COLOR_BGR2GRAY);
//![convert_to_gray]
@@ -72,6 +78,7 @@ int main( int argc, char** argv )
//![sobel]

//![convert]
// converting back to CV_8U
convertScaleAbs(grad_x, abs_grad_x);
convertScaleAbs(grad_y, abs_grad_y);
//![convert]
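The `sobel` block elided from this hunk correlates the image with a 3x3 derivative kernel in each direction. The per-pixel operation behind the x-derivative can be isolated as a dependency-free sketch; the helper `sobelXAt` is my own, not part of the sample:

```cpp
#include <cassert>

// Correlation of the 3x3 Sobel x-kernel with one 3x3 neighborhood: the
// per-pixel operation behind Sobel(..., grad_x, ...) with ksize = 3.
// A vertical edge (dark left, bright right) gives a large positive response;
// a flat patch gives zero, which is why smoothing first reduces noise spikes.
int sobelXAt(const int n[3][3])
{
    static const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    int sum = 0;
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            sum += kx[r][c] * n[r][c];
    return sum;
}
```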
@@ -11,9 +11,10 @@
using namespace cv;

//![variables]
// Declare the variables
Mat src, dst;
int top, bottom, left, right;
int borderType = BORDER_CONSTANT;
const char* window_name = "copyMakeBorder Demo";
RNG rng(12345);
//![variables]
@@ -23,65 +24,61 @@ RNG rng(12345);
*/
int main( int argc, char** argv )
{
//![load]
const char* imageName = argc >= 2 ? argv[1] : "../data/lena.jpg";

// Loads an image
src = imread( imageName, IMREAD_COLOR ); // Load an image

// Check if image is loaded fine
if( src.empty()) {
printf(" Error opening image\n");
printf(" Program Arguments: [image_name -- default ../data/lena.jpg] \n");
return -1;
}
//![load]

// Brief how-to for this program
printf( "\n \t copyMakeBorder Demo: \n" );
printf( "\t -------------------- \n" );
printf( " ** Press 'c' to set the border to a random constant value \n");
printf( " ** Press 'r' to set the border to be replicated \n");
printf( " ** Press 'ESC' to exit the program \n");

//![create_window]
namedWindow( window_name, WINDOW_AUTOSIZE );
//![create_window]

//![init_arguments]
// Initialize arguments for the filter
top = (int) (0.05*src.rows); bottom = top;
left = (int) (0.05*src.cols); right = left;
//![init_arguments]

for(;;)
{
//![update_value]
Scalar value( rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255) );
//![update_value]

//![copymakeborder]
copyMakeBorder( src, dst, top, bottom, left, right, borderType, value );
//![copymakeborder]

//![display]
imshow( window_name, dst );
//![display]

//![check_keypress]
char c = (char)waitKey(500);
if( c == 27 )
{ break; }
else if( c == 'c' )
{ borderType = BORDER_CONSTANT; }
else if( c == 'r' )
{ borderType = BORDER_REPLICATE; }
//![check_keypress]
}

return 0;
}
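The difference between the two border modes in this demo comes down to how an out-of-range coordinate is resolved: BORDER_CONSTANT ignores the source and uses the (here random) fill value, while BORDER_REPLICATE clamps the coordinate to the nearest valid index. A small sketch of the replicate rule, with a helper name of my own choosing:

```cpp
#include <cassert>

// BORDER_REPLICATE coordinate rule: an index outside [0, len) is clamped to
// the nearest edge, so the outermost row/column appears to extend outward.
int replicateIndex(int i, int len)
{
    if (i < 0) return 0;
    if (i >= len) return len - 1;
    return i;
}
```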
@@ -15,56 +15,60 @@ using namespace cv;
*/
int main ( int argc, char** argv )
{
// Declare variables
Mat src, dst;

Mat kernel;
Point anchor;
double delta;
int ddepth;
int kernel_size;
const char* window_name = "filter2D Demo";

//![load]
const char* imageName = argc >= 2 ? argv[1] : "../data/lena.jpg";

// Loads an image
src = imread( imageName, IMREAD_COLOR ); // Load an image

if( src.empty() )
{
printf(" Error opening image\n");
printf(" Program Arguments: [image_name -- default ../data/lena.jpg] \n");
return -1;
}
//![load]

//![init_arguments]
// Initialize arguments for the filter
anchor = Point( -1, -1 );
delta = 0;
ddepth = -1;
//![init_arguments]

// Loop - Will filter the image with different kernel sizes each 0.5 seconds
int ind = 0;
for(;;)
{
//![update_kernel]
// Update kernel size for a normalized box filter
kernel_size = 3 + 2*( ind%5 );
kernel = Mat::ones( kernel_size, kernel_size, CV_32F )/ (float)(kernel_size*kernel_size);
//![update_kernel]

//![apply_filter]
// Apply filter
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
//![apply_filter]
imshow( window_name, dst );

char c = (char)waitKey(500);
// Press 'ESC' to exit the program
if( c == 27 )
{ break; }

ind++;
}

return 0;
}
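The kernel built each iteration is a normalized box filter (k x k ones divided by k*k), so every output pixel is simply the mean of its neighborhood and overall brightness is preserved; the loop cycles kernel_size through 3, 5, 7, 9, 11 via `3 + 2*(ind%5)`. The 1-D analogue, as a dependency-free sketch (the helper name is mine):

```cpp
#include <cassert>
#include <vector>

// 1-D normalized box filter at one position: sum a window of width k centered
// on `center` and divide by k. The caller must keep the window inside the
// vector (filter2D handles the edges with BORDER_DEFAULT instead).
double boxFilterAt(const std::vector<double>& v, int center, int k)
{
    double sum = 0.0;
    for (int j = -(k / 2); j <= k / 2; j++)
        sum += v[center + j];
    return sum / k;
}
```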
/**
 * @file houghcircles.cpp
 * @brief This program demonstrates circle finding with the Hough transform
 */
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
//![load]
const char* filename = argc >= 2 ? argv[1] : "../../../data/smarties.png";

// Loads an image
Mat src = imread( filename, IMREAD_COLOR );

// Check if image is loaded fine
if(src.empty()){
printf(" Error opening image\n");
printf(" Program Arguments: [image_name -- default %s] \n", filename);
return -1;
}
//![load]

//![convert_to_gray]
Mat gray;
cvtColor(src, gray, COLOR_BGR2GRAY);
//![convert_to_gray]

//![reduce_noise]
@@ -47,23 +37,27 @@ int main(int argc, char** argv)
//![houghcircles]
vector<Vec3f> circles;
HoughCircles(gray, circles, HOUGH_GRADIENT, 1,
             gray.rows/16,  // change this value to detect circles with different distances to each other
             100, 30, 1, 30 // change the last two parameters
                            // (min_radius & max_radius) to detect larger circles
);
//![houghcircles]

//![draw]
for( size_t i = 0; i < circles.size(); i++ )
{
Vec3i c = circles[i];
Point center = Point(c[0], c[1]);
// circle center
circle( src, center, 1, Scalar(0,100,100), 3, LINE_AA);
// circle outline
int radius = c[2];
circle( src, center, radius, Scalar(255,0,255), 3, LINE_AA);
}
//![draw]

//![display]
imshow("detected circles", src);
waitKey();
//![display]
/**
 * @file houghlines.cpp
* @brief This program demonstrates line finding with the Hough transform
*/
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
// Declare the output variables
Mat dst, cdst, cdstP;
//![load]
const char* default_file = "../../../data/sudoku.png";
const char* filename = argc >=2 ? argv[1] : default_file;
// Loads an image
Mat src = imread( filename, IMREAD_GRAYSCALE );
// Check if image is loaded fine
if(src.empty()){
printf(" Error opening image\n");
printf(" Program Arguments: [image_name -- default %s] \n", default_file);
return -1;
}
//![load]
//![edge_detection]
// Edge detection
Canny(src, dst, 50, 200, 3);
//![edge_detection]
// Copy edges to the images that will display the results in BGR
cvtColor(dst, cdst, COLOR_GRAY2BGR);
cdstP = cdst.clone();
//![hough_lines]
// Standard Hough Line Transform
vector<Vec2f> lines; // will hold the results of the detection
HoughLines(dst, lines, 1, CV_PI/180, 150, 0, 0 ); // runs the actual detection
//![hough_lines]
//![draw_lines]
// Draw the lines
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
line( cdst, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
}
//![draw_lines]
//![hough_lines_p]
// Probabilistic Line Transform
vector<Vec4i> linesP; // will hold the results of the detection
HoughLinesP(dst, linesP, 1, CV_PI/180, 50, 50, 10 ); // runs the actual detection
//![hough_lines_p]
//![draw_lines_p]
// Draw the lines
for( size_t i = 0; i < linesP.size(); i++ )
{
Vec4i l = linesP[i];
line( cdstP, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, LINE_AA);
}
//![draw_lines_p]
//![imshow]
// Show results
imshow("Source", src);
imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst);
imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP);
//![imshow]
//![exit]
// Wait and Exit
waitKey();
return 0;
//![exit]
}
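The `draw_lines` arithmetic converts each (rho, theta) pair from HoughLines into two distant points on that line: (x0, y0) = rho * (cos theta, sin theta) is the point closest to the origin, and (-sin theta, cos theta) is the line's direction. That computation can be isolated into a plain function (illustrative only, no OpenCV types):

```cpp
#include <cassert>
#include <cmath>

// Convert one HoughLines result (rho, theta) into two endpoints 1000 pixels
// away from the closest point to the origin, exactly as the drawing loop does.
// Results land in pts as {x1, y1, x2, y2}.
void lineEndpoints(float rho, float theta, double pts[4])
{
    double a = std::cos(theta), b = std::sin(theta);
    double x0 = a * rho, y0 = b * rho; // closest point to the origin
    pts[0] = x0 + 1000 * (-b); pts[1] = y0 + 1000 * (a); // pt1
    pts[2] = x0 - 1000 * (-b); pts[3] = y0 - 1000 * (a); // pt2
}
```

For theta = 0 the direction is (0, 1), so both endpoints share x = rho: a vertical line, as expected for a normal pointing along the x-axis.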
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgproc.Imgproc;
class HitMissRun{
public void run() {
Mat input_image = new Mat( 8, 8, CvType.CV_8UC1 );
int row = 0, col = 0;
input_image.put(row ,col,
0, 0, 0, 0, 0, 0, 0, 0,
0, 255, 255, 255, 0, 0, 0, 255,
0, 255, 255, 255, 0, 0, 0, 0,
0, 255, 255, 255, 0, 255, 0, 0,
0, 0, 255, 0, 0, 0, 0, 0,
0, 0, 255, 0, 0, 255, 255, 0,
0, 255, 0, 255, 0, 0, 255, 0,
0, 255, 255, 255, 0, 0, 0, 0);
Mat kernel = new Mat( 3, 3, CvType.CV_16S );
kernel.put(row ,col,
0, 1, 0,
1, -1, 1,
0, 1, 0 );
Mat output_image = new Mat();
Imgproc.morphologyEx(input_image, output_image, Imgproc.MORPH_HITMISS, kernel);
int rate = 50;
Core.add(kernel, new Scalar(1), kernel);
Core.multiply(kernel, new Scalar(127), kernel);
kernel.convertTo(kernel, CvType.CV_8U);
Imgproc.resize(kernel, kernel, new Size(), rate, rate, Imgproc.INTER_NEAREST);
HighGui.imshow("kernel", kernel);
HighGui.moveWindow("kernel", 0, 0);
Imgproc.resize(input_image, input_image, new Size(), rate, rate, Imgproc.INTER_NEAREST);
HighGui.imshow("Original", input_image);
HighGui.moveWindow("Original", 0, 200);
Imgproc.resize(output_image, output_image, new Size(), rate, rate, Imgproc.INTER_NEAREST);
HighGui.imshow("Hit or Miss", output_image);
HighGui.moveWindow("Hit or Miss", 500, 200);
HighGui.waitKey(0);
System.exit(0);
}
}
public class HitMiss
{
public static void main(String[] args) {
// load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new HitMissRun().run();
}
}
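The hit-or-miss kernel above encodes three states: 1 means the pixel must be foreground, -1 means it must be background, and 0 is "don't care". A sketch of that test for a single pixel in plain C++ (the helper `hitMissAt` is mine, for illustration; the real work is done by `morphologyEx` with `MORPH_HITMISS`):

```cpp
#include <cassert>

// One pixel of the hit-or-miss transform: every kernel entry of 1 must match
// foreground (255), every -1 must match background (0), and 0 entries are
// ignored. The cross kernel with -1 in the middle therefore fires exactly on
// isolated background pixels surrounded by foreground on all four sides.
bool hitMissAt(const int patch[3][3], const int kernel[3][3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++) {
            if (kernel[r][c] == 1 && patch[r][c] != 255) return false;
            if (kernel[r][c] == -1 && patch[r][c] != 0) return false;
        }
    return true;
}
```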
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class PyramidsRun {
String window_name = "Pyramids Demo";
public void run(String[] args) {
/// General instructions
System.out.println("\n" +
" Zoom In-Out demo \n" +
"------------------ \n" +
" * [i] -> Zoom [i]n \n" +
" * [o] -> Zoom [o]ut \n" +
" * [ESC] -> Close program \n");
//! [load]
String filename = ((args.length > 0) ? args[0] : "../data/chicky_512.png");
// Load the image
Mat src = Imgcodecs.imread(filename);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image!");
System.out.println("Program Arguments: [image_name -- default ../data/chicky_512.png] \n");
System.exit(-1);
}
//! [load]
//! [loop]
while (true){
//! [show_image]
HighGui.imshow( window_name, src );
//! [show_image]
char c = (char) HighGui.waitKey(0);
c = Character.toLowerCase(c);
if( c == 27 ){
break;
//![pyrup]
}else if( c == 'i'){
Imgproc.pyrUp( src, src, new Size( src.cols()*2, src.rows()*2 ) );
System.out.println( "** Zoom In: Image x 2" );
//![pyrup]
//![pyrdown]
}else if( c == 'o'){
Imgproc.pyrDown( src, src, new Size( src.cols()/2, src.rows()/2 ) );
System.out.println( "** Zoom Out: Image / 2" );
//![pyrdown]
}
}
//! [loop]
System.exit(0);
}
}
public class Pyramids {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new PyramidsRun().run(args);
}
}
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class SmoothingRun {
/// Global Variables
int DELAY_CAPTION = 1500;
int DELAY_BLUR = 100;
int MAX_KERNEL_LENGTH = 31;
Mat src = new Mat(), dst = new Mat();
String windowName = "Filter Demo 1";
public void run(String[] args) {
String filename = ((args.length > 0) ? args[0] : "../data/lena.jpg");
src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_COLOR);
if( src.empty() ) {
System.out.println("Error opening image");
System.out.println("Usage: ./Smoothing [image_name -- default ../data/lena.jpg] \n");
System.exit(-1);
}
if( displayCaption( "Original Image" ) != 0 ) { System.exit(0); }
dst = src.clone();
if( displayDst( DELAY_CAPTION ) != 0 ) { System.exit(0); }
/// Applying Homogeneous blur
if( displayCaption( "Homogeneous Blur" ) != 0 ) { System.exit(0); }
//! [blur]
for (int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2) {
Imgproc.blur(src, dst, new Size(i, i), new Point(-1, -1));
displayDst(DELAY_BLUR);
}
//! [blur]
/// Applying Gaussian blur
if( displayCaption( "Gaussian Blur" ) != 0 ) { System.exit(0); }
//! [gaussianblur]
for (int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2) {
Imgproc.GaussianBlur(src, dst, new Size(i, i), 0, 0);
displayDst(DELAY_BLUR);
}
//! [gaussianblur]
/// Applying Median blur
if( displayCaption( "Median Blur" ) != 0 ) { System.exit(0); }
//! [medianblur]
for (int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2) {
Imgproc.medianBlur(src, dst, i);
displayDst(DELAY_BLUR);
}
//! [medianblur]
/// Applying Bilateral Filter
if( displayCaption( "Bilateral Blur" ) != 0 ) { System.exit(0); }
//![bilateralfilter]
for (int i = 1; i < MAX_KERNEL_LENGTH; i = i + 2) {
Imgproc.bilateralFilter(src, dst, i, i * 2, i / 2);
displayDst(DELAY_BLUR);
}
//![bilateralfilter]
/// Done
displayCaption( "Done!" );
System.exit(0);
}
int displayCaption(String caption) {
dst = Mat.zeros(src.size(), src.type());
Imgproc.putText(dst, caption,
new Point(src.cols() / 4, src.rows() / 2),
Core.FONT_HERSHEY_COMPLEX, 1, new Scalar(255, 255, 255));
return displayDst(DELAY_CAPTION);
}
int displayDst(int delay) {
HighGui.imshow( windowName, dst );
int c = HighGui.waitKey( delay );
if (c >= 0) { return -1; }
return 0;
}
}
public class Smoothing {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new SmoothingRun().run(args);
}
}
/**
* @file Morphology_3.java
* @brief Use morphology transformations for extracting horizontal and vertical lines sample code
*/
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class Morphology_3Run {
public void run(String[] args) {
//! [load_image]
// Check number of arguments
if (args.length == 0){
System.out.println("Not enough parameters!");
System.out.println("Program Arguments: [image_path]");
System.exit(-1);
}
// Load the image
Mat src = Imgcodecs.imread(args[0]);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image: " + args[0]);
System.exit(-1);
}
// Show source image
HighGui.imshow("src", src);
//! [load_image]
//! [gray]
// Transform source image to gray if it is not already
Mat gray = new Mat();
if (src.channels() == 3)
{
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
}
else
{
gray = src;
}
// Show gray image
showWaitDestroy("gray" , gray);
//! [gray]
//! [bin]
// Apply adaptiveThreshold at the bitwise_not of gray
Mat bw = new Mat();
Core.bitwise_not(gray, gray);
Imgproc.adaptiveThreshold(gray, bw, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, -2);
// Show binary image
showWaitDestroy("binary" , bw);
//! [bin]
//! [init]
// Create the images that will be used to extract the horizontal and vertical lines
Mat horizontal = bw.clone();
Mat vertical = bw.clone();
//! [init]
//! [horiz]
// Specify size on horizontal axis
int horizontal_size = horizontal.cols() / 30;
// Create structure element for extracting horizontal lines through morphology operations
Mat horizontalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(horizontal_size,1));
// Apply morphology operations
Imgproc.erode(horizontal, horizontal, horizontalStructure);
Imgproc.dilate(horizontal, horizontal, horizontalStructure);
// Show extracted horizontal lines
showWaitDestroy("horizontal" , horizontal);
//! [horiz]
//! [vert]
// Specify size on vertical axis
int vertical_size = vertical.rows() / 30;
// Create structure element for extracting vertical lines through morphology operations
Mat verticalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size( 1,vertical_size));
// Apply morphology operations
Imgproc.erode(vertical, vertical, verticalStructure);
Imgproc.dilate(vertical, vertical, verticalStructure);
// Show extracted vertical lines
showWaitDestroy("vertical", vertical);
//! [vert]
//! [smooth]
// Inverse vertical image
Core.bitwise_not(vertical, vertical);
showWaitDestroy("vertical_bit" , vertical);
// Extract edges and smooth image according to the logic
// 1. extract edges
// 2. dilate(edges)
// 3. src.copyTo(smooth)
// 4. blur smooth img
// 5. smooth.copyTo(src, edges)
// Step 1
Mat edges = new Mat();
Imgproc.adaptiveThreshold(vertical, edges, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 3, -2);
showWaitDestroy("edges", edges);
// Step 2
Mat kernel = Mat.ones(2, 2, CvType.CV_8UC1);
Imgproc.dilate(edges, edges, kernel);
showWaitDestroy("dilate", edges);
// Step 3
Mat smooth = new Mat();
vertical.copyTo(smooth);
// Step 4
Imgproc.blur(smooth, smooth, new Size(2, 2));
// Step 5
smooth.copyTo(vertical, edges);
// Show final result
showWaitDestroy("smooth - final", vertical);
//! [smooth]
System.exit(0);
}
private void showWaitDestroy(String winname, Mat img) {
HighGui.imshow(winname, img);
HighGui.moveWindow(winname, 500, 0);
HighGui.waitKey(0);
HighGui.destroyWindow(winname);
}
}
public class Morphology_3 {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new Morphology_3Run().run(args);
}
}
/**
* @file Filter2D_demo.java
* @brief Sample code that shows how to implement your own linear filters by using filter2D function
*/
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class Filter2D_DemoRun {
public void run(String[] args) {
// Declare variables
Mat src, dst = new Mat();
Mat kernel = new Mat();
Point anchor;
double delta;
int ddepth;
int kernel_size;
String window_name = "filter2D Demo";
//! [load]
String imageName = ((args.length > 0) ? args[0] : "../data/lena.jpg");
// Load an image
src = Imgcodecs.imread(imageName, Imgcodecs.IMREAD_COLOR);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image!");
System.out.println("Program Arguments: [image_name -- default ../data/lena.jpg] \n");
System.exit(-1);
}
//! [load]
//! [init_arguments]
// Initialize arguments for the filter
anchor = new Point( -1, -1);
delta = 0.0;
ddepth = -1;
//! [init_arguments]
// Loop - Will filter the image with different kernel sizes each 0.5 seconds
int ind = 0;
while( true )
{
//! [update_kernel]
// Update kernel size for a normalized box filter
kernel_size = 3 + 2*( ind%5 );
Mat ones = Mat.ones( kernel_size, kernel_size, CvType.CV_32F );
Core.multiply(ones, new Scalar(1/(double)(kernel_size*kernel_size)), kernel);
//! [update_kernel]
//! [apply_filter]
// Apply filter
Imgproc.filter2D(src, dst, ddepth , kernel, anchor, delta, Core.BORDER_DEFAULT );
//! [apply_filter]
HighGui.imshow( window_name, dst );
int c = HighGui.waitKey(500);
// Press 'ESC' to exit the program
if( c == 27 )
{ break; }
ind++;
}
System.exit(0);
}
}
public class Filter2D_Demo {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new Filter2D_DemoRun().run(args);
}
}
/**
* @file HoughCircles.java
* @brief This program demonstrates circle finding with the Hough transform
*/
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class HoughCirclesRun {
public void run(String[] args) {
//! [load]
String default_file = "../../../../data/smarties.png";
String filename = ((args.length > 0) ? args[0] : default_file);
// Load an image
Mat src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_COLOR);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image!");
System.out.println("Program Arguments: [image_name -- default "
+ default_file +"] \n");
System.exit(-1);
}
//! [load]
//! [convert_to_gray]
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
//! [convert_to_gray]
//![reduce_noise]
Imgproc.medianBlur(gray, gray, 5);
//![reduce_noise]
//! [houghcircles]
Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT, 1.0,
(double)gray.rows()/16, // change this value to detect circles with different distances to each other
100.0, 30.0, 1, 30); // change the last two parameters
// (min_radius & max_radius) to detect larger circles
//! [houghcircles]
//! [draw]
for (int x = 0; x < circles.cols(); x++) {
double[] c = circles.get(0, x);
Point center = new Point(Math.round(c[0]), Math.round(c[1]));
// circle center
Imgproc.circle(src, center, 1, new Scalar(0,100,100), 3, 8, 0 );
// circle outline
int radius = (int) Math.round(c[2]);
Imgproc.circle(src, center, radius, new Scalar(255,0,255), 3, 8, 0 );
}
//! [draw]
//! [display]
HighGui.imshow("detected circles", src);
HighGui.waitKey();
//! [display]
System.exit(0);
}
}
public class HoughCircles {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new HoughCirclesRun().run(args);
}
}
/**
* @file HoughLines.java
* @brief This program demonstrates line finding with the Hough transform
*/
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class HoughLinesRun {
public void run(String[] args) {
// Declare the output variables
Mat dst = new Mat(), cdst = new Mat(), cdstP;
//! [load]
String default_file = "../../../../data/sudoku.png";
String filename = ((args.length > 0) ? args[0] : default_file);
// Load an image
Mat src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image!");
System.out.println("Program Arguments: [image_name -- default "
+ default_file +"] \n");
System.exit(-1);
}
//! [load]
//! [edge_detection]
// Edge detection
Imgproc.Canny(src, dst, 50, 200, 3, false);
//! [edge_detection]
// Copy edges to the images that will display the results in BGR
Imgproc.cvtColor(dst, cdst, Imgproc.COLOR_GRAY2BGR);
cdstP = cdst.clone();
//! [hough_lines]
// Standard Hough Line Transform
Mat lines = new Mat(); // will hold the results of the detection
Imgproc.HoughLines(dst, lines, 1, Math.PI/180, 150); // runs the actual detection
//! [hough_lines]
//! [draw_lines]
// Draw the lines
for (int x = 0; x < lines.rows(); x++) {
double rho = lines.get(x, 0)[0],
theta = lines.get(x, 0)[1];
double a = Math.cos(theta), b = Math.sin(theta);
double x0 = a*rho, y0 = b*rho;
Point pt1 = new Point(Math.round(x0 + 1000*(-b)), Math.round(y0 + 1000*(a)));
Point pt2 = new Point(Math.round(x0 - 1000*(-b)), Math.round(y0 - 1000*(a)));
Imgproc.line(cdst, pt1, pt2, new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
}
//! [draw_lines]
//! [hough_lines_p]
// Probabilistic Line Transform
Mat linesP = new Mat(); // will hold the results of the detection
Imgproc.HoughLinesP(dst, linesP, 1, Math.PI/180, 50, 50, 10); // runs the actual detection
//! [hough_lines_p]
//! [draw_lines_p]
// Draw the lines
for (int x = 0; x < linesP.rows(); x++) {
double[] l = linesP.get(x, 0);
Imgproc.line(cdstP, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
}
//! [draw_lines_p]
//! [imshow]
// Show results
HighGui.imshow("Source", src);
HighGui.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst);
HighGui.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP);
//! [imshow]
//! [exit]
// Wait and Exit
HighGui.waitKey();
System.exit(0);
//! [exit]
}
}
public class HoughLines {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new HoughLinesRun().run(args);
}
}
/**
* @file LaplaceDemo.java
* @brief Sample code showing how to detect edges using the Laplace operator
*/
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class LaplaceDemoRun {
public void run(String[] args) {
//! [variables]
// Declare the variables we are going to use
Mat src, src_gray = new Mat(), dst = new Mat();
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CvType.CV_16S;
String window_name = "Laplace Demo";
//! [variables]
//! [load]
String imageName = ((args.length > 0) ? args[0] : "../data/lena.jpg");
src = Imgcodecs.imread(imageName, Imgcodecs.IMREAD_COLOR); // Load an image
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image");
System.out.println("Program Arguments: [image_name -- default ../data/lena.jpg] \n");
System.exit(-1);
}
//! [load]
//! [reduce_noise]
// Reduce noise by blurring with a Gaussian filter ( kernel size = 3 )
Imgproc.GaussianBlur( src, src, new Size(3, 3), 0, 0, Core.BORDER_DEFAULT );
//! [reduce_noise]
//! [convert_to_gray]
// Convert the image to grayscale
Imgproc.cvtColor( src, src_gray, Imgproc.COLOR_BGR2GRAY );
//! [convert_to_gray]
/// Apply Laplace function
Mat abs_dst = new Mat();
//! [laplacian]
Imgproc.Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, Core.BORDER_DEFAULT );
//! [laplacian]
//! [convert]
// converting back to CV_8U
Core.convertScaleAbs( dst, abs_dst );
//! [convert]
//! [display]
HighGui.imshow( window_name, abs_dst );
HighGui.waitKey(0);
//! [display]
System.exit(0);
}
}
public class LaplaceDemo {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new LaplaceDemoRun().run(args);
}
}
/**
* @file CopyMakeBorder.java
* @brief Sample code that shows the functionality of copyMakeBorder
*/
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import java.util.Random;
class CopyMakeBorderRun {
public void run(String[] args) {
//! [variables]
// Declare the variables
Mat src, dst = new Mat();
int top, bottom, left, right;
int borderType = Core.BORDER_CONSTANT;
String window_name = "copyMakeBorder Demo";
Random rng;
//! [variables]
//! [load]
String imageName = ((args.length > 0) ? args[0] : "../data/lena.jpg");
// Load an image
src = Imgcodecs.imread(imageName, Imgcodecs.IMREAD_COLOR);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image!");
System.out.println("Program Arguments: [image_name -- default ../data/lena.jpg] \n");
System.exit(-1);
}
//! [load]
// Brief how-to for this program
System.out.println("\n" +
"\t copyMakeBorder Demo: \n" +
"\t -------------------- \n" +
" ** Press 'c' to set the border to a random constant value \n" +
" ** Press 'r' to set the border to be replicated \n" +
" ** Press 'ESC' to exit the program \n");
//![create_window]
HighGui.namedWindow( window_name, HighGui.WINDOW_AUTOSIZE );
//![create_window]
//! [init_arguments]
// Initialize arguments for the filter
top = (int) (0.05*src.rows()); bottom = top;
left = (int) (0.05*src.cols()); right = left;
//! [init_arguments]
while( true ) {
//! [update_value]
rng = new Random();
Scalar value = new Scalar( rng.nextInt(256),
rng.nextInt(256), rng.nextInt(256) );
//! [update_value]
//! [copymakeborder]
Core.copyMakeBorder( src, dst, top, bottom, left, right, borderType, value);
//! [copymakeborder]
//! [display]
HighGui.imshow( window_name, dst );
//! [display]
//![check_keypress]
char c = (char) HighGui.waitKey(500);
c = Character.toLowerCase(c);
if( c == 27 )
{ break; }
else if( c == 'c' )
{ borderType = Core.BORDER_CONSTANT;}
else if( c == 'r' )
{ borderType = Core.BORDER_REPLICATE;}
//![check_keypress]
}
System.exit(0);
}
}
public class CopyMakeBorder {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new CopyMakeBorderRun().run(args);
}
}
/**
* @file SobelDemo.java
* @brief Sample code using Sobel and/or Scharr OpenCV functions to make a simple Edge Detector
*/
import org.opencv.core.*;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
class SobelDemoRun {
public void run(String[] args) {
//! [declare_variables]
// First we declare the variables we are going to use
Mat src, src_gray = new Mat();
Mat grad = new Mat();
String window_name = "Sobel Demo - Simple Edge Detector";
int scale = 1;
int delta = 0;
int ddepth = CvType.CV_16S;
//! [declare_variables]
//! [load]
// As usual we load our source image (src)
// Check number of arguments
if (args.length == 0){
System.out.println("Not enough parameters!");
System.out.println("Program Arguments: [image_path]");
System.exit(-1);
}
// Load the image
src = Imgcodecs.imread(args[0]);
// Check if image is loaded fine
if( src.empty() ) {
System.out.println("Error opening image: " + args[0]);
System.exit(-1);
}
//! [load]
//! [reduce_noise]
// Remove noise by blurring with a Gaussian filter ( kernel size = 3 )
Imgproc.GaussianBlur( src, src, new Size(3, 3), 0, 0, Core.BORDER_DEFAULT );
//! [reduce_noise]
//! [convert_to_gray]
// Convert the image to grayscale
Imgproc.cvtColor( src, src_gray, Imgproc.COLOR_BGR2GRAY );
//! [convert_to_gray]
//! [sobel]
/// Generate grad_x and grad_y
Mat grad_x = new Mat(), grad_y = new Mat();
Mat abs_grad_x = new Mat(), abs_grad_y = new Mat();
/// Gradient X
//Imgproc.Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, Core.BORDER_DEFAULT );
Imgproc.Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, Core.BORDER_DEFAULT );
/// Gradient Y
//Imgproc.Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, Core.BORDER_DEFAULT );
Imgproc.Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, Core.BORDER_DEFAULT );
//! [sobel]
//![convert]
// converting back to CV_8U
Core.convertScaleAbs( grad_x, abs_grad_x );
Core.convertScaleAbs( grad_y, abs_grad_y );
//![convert]
//! [add_weighted]
/// Total Gradient (approximate)
Core.addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
//! [add_weighted]
//! [display]
HighGui.imshow( window_name, grad );
HighGui.waitKey(0);
//! [display]
System.exit(0);
}
}
public class SobelDemo {
public static void main(String[] args) {
// Load the native library.
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new SobelDemoRun().run(args);
}
}
"""
@file filter2D.py
@brief Sample code that shows how to implement your own linear filters by using filter2D function
"""
import sys
import cv2
import numpy as np
def main(argv):
window_name = 'filter2D Demo'
## [load]
imageName = argv[0] if len(argv) > 0 else "../data/lena.jpg"
# Loads an image
src = cv2.imread(imageName, cv2.IMREAD_COLOR)
# Check if image is loaded fine
if src is None:
print ('Error opening image!')
print ('Usage: filter2D.py [image_name -- default ../data/lena.jpg] \n')
return -1
## [load]
## [init_arguments]
# Initialize ddepth argument for the filter
ddepth = -1
## [init_arguments]
# Loop - Will filter the image with different kernel sizes each 0.5 seconds
ind = 0
while True:
## [update_kernel]
# Update kernel size for a normalized box filter
kernel_size = 3 + 2 * (ind % 5)
kernel = np.ones((kernel_size, kernel_size), dtype=np.float32)
kernel /= (kernel_size * kernel_size)
## [update_kernel]
## [apply_filter]
# Apply filter
dst = cv2.filter2D(src, ddepth, kernel)
## [apply_filter]
cv2.imshow(window_name, dst)
c = cv2.waitKey(500)
if c == 27:
break
ind += 1
return 0
if __name__ == "__main__":
main(sys.argv[1:])
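The filter2D demo above builds a normalized box kernel. As a numpy-only sketch (no OpenCV calls; the constant image and hand-rolled correlation are purely illustrative), this checks that the kernel weights sum to 1, so the filter computes a neighbourhood mean and leaves a constant image unchanged:

```python
import numpy as np

# Build the same normalized box kernel as the demo (kernel_size = 3)
kernel_size = 3
kernel = np.ones((kernel_size, kernel_size), dtype=np.float32)
kernel /= kernel_size * kernel_size
assert abs(kernel.sum() - 1.0) < 1e-6  # weights sum to 1

# "valid" correlation by hand on a constant image
img = np.full((5, 5), 7.0, dtype=np.float32)
out = np.array([[(img[y:y + 3, x:x + 3] * kernel).sum()
                 for x in range(3)] for y in range(3)])
assert np.allclose(out, 7.0)  # a mean filter preserves constant regions
```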
import sys
import cv2
import numpy as np
def main(argv):
## [load]
default_file = "../../../../data/smarties.png"
filename = argv[0] if len(argv) > 0 else default_file
# Loads an image
src = cv2.imread(filename, cv2.IMREAD_COLOR)
# Check if image is loaded fine
if src is None:
print ('Error opening image!')
print ('Usage: hough_circle.py [image_name -- default ' + default_file + '] \n')
return -1
## [load]
## [convert_to_gray]
# Convert it to gray
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
## [convert_to_gray]
## [reduce_noise]
# Reduce the noise to avoid false circle detection
gray = cv2.medianBlur(gray, 5)
## [reduce_noise]
## [houghcircles]
rows = gray.shape[0]
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
param1=100, param2=30,
minRadius=1, maxRadius=30)
## [houghcircles]
## [draw]
if circles is not None:
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
center = (i[0], i[1])
# circle center
cv2.circle(src, center, 1, (0, 100, 100), 3)
# circle outline
radius = i[2]
cv2.circle(src, center, radius, (255, 0, 255), 3)
## [draw]
## [display]
cv2.imshow("detected circles", src)
cv2.waitKey(0)
## [display]
return 0
if __name__ == "__main__":
main(sys.argv[1:])
"""
@file hough_lines.py
@brief This program demonstrates line finding with the Hough transform
"""
import sys
import math
import cv2
import numpy as np
def main(argv):
## [load]
default_file = "../../../../data/sudoku.png"
filename = argv[0] if len(argv) > 0 else default_file
# Loads an image
src = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
# Check if image is loaded fine
if src is None:
print ('Error opening image!')
print ('Usage: hough_lines.py [image_name -- default ' + default_file + '] \n')
return -1
## [load]
## [edge_detection]
# Edge detection
dst = cv2.Canny(src, 50, 200, None, 3)
## [edge_detection]
# Copy edges to the images that will display the results in BGR
cdst = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR)
cdstP = np.copy(cdst)
## [hough_lines]
# Standard Hough Line Transform
lines = cv2.HoughLines(dst, 1, np.pi / 180, 150, None, 0, 0)
## [hough_lines]
## [draw_lines]
# Draw the lines
if lines is not None:
for i in range(0, len(lines)):
rho = lines[i][0][0]
theta = lines[i][0][1]
a = math.cos(theta)
b = math.sin(theta)
x0 = a * rho
y0 = b * rho
pt1 = (int(x0 + 1000*(-b)), int(y0 + 1000*(a)))
pt2 = (int(x0 - 1000*(-b)), int(y0 - 1000*(a)))
cv2.line(cdst, pt1, pt2, (0,0,255), 3, cv2.LINE_AA)
## [draw_lines]
## [hough_lines_p]
# Probabilistic Line Transform
linesP = cv2.HoughLinesP(dst, 1, np.pi / 180, 50, None, 50, 10)
## [hough_lines_p]
## [draw_lines_p]
# Draw the lines
if linesP is not None:
for i in range(0, len(linesP)):
l = linesP[i][0]
cv2.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0,0,255), 3, cv2.LINE_AA)
## [draw_lines_p]
## [imshow]
# Show results
cv2.imshow("Source", src)
cv2.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst)
cv2.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP)
## [imshow]
## [exit]
# Wait and Exit
cv2.waitKey()
return 0
## [exit]
if __name__ == "__main__":
main(sys.argv[1:])
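The draw_lines snippet above converts each Hough (rho, theta) pair into two far-apart endpoints. A small stdlib-only sanity check (the rho and theta values are illustrative, not taken from the sample) confirms that both constructed endpoints satisfy the line equation x*cos(theta) + y*sin(theta) = rho:

```python
import math

rho, theta = 100.0, math.pi / 4  # illustrative values
a, b = math.cos(theta), math.sin(theta)
x0, y0 = a * rho, b * rho
# Same construction as the draw_lines snippet (without int rounding)
pt1 = (x0 + 1000 * (-b), y0 + 1000 * a)
pt2 = (x0 - 1000 * (-b), y0 - 1000 * a)
for x, y in (pt1, pt2):
    # Both points lie on the line parameterized by (rho, theta)
    assert abs(x * a + y * b - rho) < 1e-9
```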
"""
@file laplace_demo.py
@brief Sample code showing how to detect edges using the Laplace operator
"""
import sys
import cv2
def main(argv):
# [variables]
# Declare the variables we are going to use
ddepth = cv2.CV_16S
kernel_size = 3
window_name = "Laplace Demo"
# [variables]
# [load]
imageName = argv[0] if len(argv) > 0 else "../data/lena.jpg"
src = cv2.imread(imageName, cv2.IMREAD_COLOR) # Load an image
# Check if image is loaded fine
if src is None:
print ('Error opening image')
print ('Program Arguments: [image_name -- default ../data/lena.jpg]')
return -1
# [load]
# [reduce_noise]
# Remove noise by blurring with a Gaussian filter
src = cv2.GaussianBlur(src, (3, 3), 0)
# [reduce_noise]
# [convert_to_gray]
# Convert the image to grayscale
src_gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
# [convert_to_gray]
# Create Window
cv2.namedWindow(window_name, cv2.WINDOW_AUTOSIZE)
# [laplacian]
# Apply Laplace function
dst = cv2.Laplacian(src_gray, ddepth, kernel_size)
# [laplacian]
# [convert]
# converting back to uint8
abs_dst = cv2.convertScaleAbs(dst)
# [convert]
# [display]
cv2.imshow(window_name, abs_dst)
cv2.waitKey(0)
# [display]
return 0
if __name__ == "__main__":
main(sys.argv[1:])
"""
@file copy_make_border.py
@brief Sample code that shows the functionality of copyMakeBorder
"""
import sys
from random import randint
import cv2
def main(argv):
## [variables]
# First we declare the variables we are going to use
borderType = cv2.BORDER_CONSTANT
window_name = "copyMakeBorder Demo"
## [variables]
## [load]
imageName = argv[0] if len(argv) > 0 else "../data/lena.jpg"
# Loads an image
src = cv2.imread(imageName, cv2.IMREAD_COLOR)
# Check if image is loaded fine
if src is None:
print ('Error opening image!')
print ('Usage: copy_make_border.py [image_name -- default ../data/lena.jpg] \n')
return -1
## [load]
# Brief how-to for this program
print ('\n'
'\t copyMakeBorder Demo: \n'
' -------------------- \n'
' ** Press \'c\' to set the border to a random constant value \n'
' ** Press \'r\' to set the border to be replicated \n'
' ** Press \'ESC\' to exit the program ')
## [create_window]
cv2.namedWindow(window_name, cv2.WINDOW_AUTOSIZE)
## [create_window]
## [init_arguments]
# Initialize arguments for the filter
top = int(0.05 * src.shape[0]) # shape[0] = rows
bottom = top
left = int(0.05 * src.shape[1]) # shape[1] = cols
right = left
## [init_arguments]
while True:
## [update_value]
value = [randint(0, 255), randint(0, 255), randint(0, 255)]
## [update_value]
## [copymakeborder]
dst = cv2.copyMakeBorder(src, top, bottom, left, right, borderType, None, value)
## [copymakeborder]
## [display]
cv2.imshow(window_name, dst)
## [display]
## [check_keypress]
c = cv2.waitKey(500)
if c == 27:
break
elif c == 99: # 99 = ord('c')
borderType = cv2.BORDER_CONSTANT
elif c == 114: # 114 = ord('r')
borderType = cv2.BORDER_REPLICATE
## [check_keypress]
return 0
if __name__ == "__main__":
main(sys.argv[1:])
"""
@file sobel_demo.py
@brief Sample code using Sobel and/or Scharr OpenCV functions to make a simple Edge Detector
"""
import sys
import cv2
def main(argv):
## [variables]
# First we declare the variables we are going to use
window_name = 'Sobel Demo - Simple Edge Detector'
scale = 1
delta = 0
ddepth = cv2.CV_16S
## [variables]
## [load]
# As usual we load our source image (src)
# Check number of arguments
if len(argv) < 1:
print ('Not enough parameters')
print ('Usage:\nsobel_demo.py < path_to_image >')
return -1
# Load the image
src = cv2.imread(argv[0], cv2.IMREAD_COLOR)
# Check if image is loaded fine
if src is None:
print ('Error opening image: ' + argv[0])
return -1
## [load]
## [reduce_noise]
# Remove noise by blurring with a Gaussian filter ( kernel size = 3 )
src = cv2.GaussianBlur(src, (3, 3), 0)
## [reduce_noise]
## [convert_to_gray]
# Convert the image to grayscale
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
## [convert_to_gray]
## [sobel]
# Gradient-X
# grad_x = cv2.Scharr(gray,ddepth,1,0)
grad_x = cv2.Sobel(gray, ddepth, 1, 0, ksize=3, scale=scale, delta=delta, borderType=cv2.BORDER_DEFAULT)
# Gradient-Y
# grad_y = cv2.Scharr(gray,ddepth,0,1)
grad_y = cv2.Sobel(gray, ddepth, 0, 1, ksize=3, scale=scale, delta=delta, borderType=cv2.BORDER_DEFAULT)
## [sobel]
## [convert]
# converting back to uint8
abs_grad_x = cv2.convertScaleAbs(grad_x)
abs_grad_y = cv2.convertScaleAbs(grad_y)
## [convert]
## [blend]
## Total Gradient (approximate)
grad = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)
## [blend]
## [display]
cv2.imshow(window_name, grad)
cv2.waitKey(0)
## [display]
return 0
if __name__ == "__main__":
main(sys.argv[1:])
import cv2
import numpy as np
input_image = np.array((
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 255, 255, 255, 0, 0, 0, 255],
[0, 255, 255, 255, 0, 0, 0, 0],
[0, 255, 255, 255, 0, 255, 0, 0],
[0, 0, 255, 0, 0, 0, 0, 0],
[0, 0, 255, 0, 0, 255, 255, 0],
[0,255, 0, 255, 0, 0, 255, 0],
[0, 255, 255, 255, 0, 0, 0, 0]), dtype="uint8")
# Hit-or-miss kernel: 1 = pixel must be foreground,
# -1 = pixel must be background, 0 = don't care
kernel = np.array((
[0, 1, 0],
[1, -1, 1],
[0, 1, 0]), dtype="int")
output_image = cv2.morphologyEx(input_image, cv2.MORPH_HITMISS, kernel)
rate = 50
kernel = (kernel + 1) * 127
kernel = np.uint8(kernel)
kernel = cv2.resize(kernel, None, fx = rate, fy = rate, interpolation = cv2.INTER_NEAREST)
cv2.imshow("kernel", kernel)
cv2.moveWindow("kernel", 0, 0)
input_image = cv2.resize(input_image, None, fx = rate, fy = rate, interpolation = cv2.INTER_NEAREST)
cv2.imshow("Original", input_image)
cv2.moveWindow("Original", 0, 200)
output_image = cv2.resize(output_image, None , fx = rate, fy = rate, interpolation = cv2.INTER_NEAREST)
cv2.imshow("Hit or Miss", output_image)
cv2.moveWindow("Hit or Miss", 500, 200)
cv2.waitKey(0)
cv2.destroyAllWindows()
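The hit-or-miss kernel above mixes 1 (pixel must be foreground), -1 (pixel must be background) and 0 (don't care). As a minimal numpy-only sketch of those semantics (a naive reimplementation for illustration, not OpenCV's actual algorithm), this detects a single isolated foreground pixel:

```python
import numpy as np

def hitmiss(img, kernel):
    # Naive hit-or-miss: 1 = must be foreground (255), -1 = must be
    # background (0), 0 = don't care; the border is padded with background.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            win = padded[y:y + kh, x:x + kw]
            if (np.all(win[kernel == 1] == 255)
                    and np.all(win[kernel == -1] == 0)):
                out[y, x] = 255
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255  # one isolated foreground pixel
k = np.array([[0, -1, 0],
              [-1, 1, -1],
              [0, -1, 0]], dtype=int)
res = hitmiss(img, k)
# Only the isolated pixel matches "foreground surrounded by background"
assert res[2, 2] == 255 and res.sum() == 255
```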
import sys
import cv2
def main(argv):
print("""
Zoom In-Out demo
------------------
* [i] -> Zoom [i]n
* [o] -> Zoom [o]ut
* [ESC] -> Close program
""")
## [load]
filename = argv[0] if len(argv) > 0 else "../data/chicky_512.png"
# Load the image
src = cv2.imread(filename)
# Check if image is loaded fine
if src is None:
print ('Error opening image!')
print ('Usage: pyramids.py [image_name -- default ../data/chicky_512.png] \n')
return -1
## [load]
## [loop]
while True:
rows, cols, _channels = map(int, src.shape)
## [show_image]
cv2.imshow('Pyramids Demo', src)
## [show_image]
k = cv2.waitKey(0)
if k == 27:
break
## [pyrup]
elif chr(k) == 'i':
src = cv2.pyrUp(src, dstsize=(2 * cols, 2 * rows))
print ('** Zoom In: Image x 2')
## [pyrup]
## [pyrdown]
elif chr(k) == 'o':
src = cv2.pyrDown(src, dstsize=(cols // 2, rows // 2))
print ('** Zoom Out: Image / 2')
## [pyrdown]
## [loop]
cv2.destroyAllWindows()
return 0
if __name__ == "__main__":
main(sys.argv[1:])
import sys
import cv2
import numpy as np
# Global Variables
DELAY_CAPTION = 1500
DELAY_BLUR = 100
MAX_KERNEL_LENGTH = 31
src = None
dst = None
window_name = 'Smoothing Demo'
def main(argv):
cv2.namedWindow(window_name, cv2.WINDOW_AUTOSIZE)
# Load the source image
imageName = argv[0] if len(argv) > 0 else "../data/lena.jpg"
global src
src = cv2.imread(imageName, 1)
if src is None:
print ('Error opening image')
print ('Usage: smoothing.py [image_name -- default ../data/lena.jpg] \n')
return -1
if display_caption('Original Image') != 0:
return 0
global dst
dst = np.copy(src)
if display_dst(DELAY_CAPTION) != 0:
return 0
# Applying Homogeneous blur
if display_caption('Homogeneous Blur') != 0:
return 0
## [blur]
for i in range(1, MAX_KERNEL_LENGTH, 2):
dst = cv2.blur(src, (i, i))
if display_dst(DELAY_BLUR) != 0:
return 0
## [blur]
# Applying Gaussian blur
if display_caption('Gaussian Blur') != 0:
return 0
## [gaussianblur]
for i in range(1, MAX_KERNEL_LENGTH, 2):
dst = cv2.GaussianBlur(src, (i, i), 0)
if display_dst(DELAY_BLUR) != 0:
return 0
## [gaussianblur]
# Applying Median blur
if display_caption('Median Blur') != 0:
return 0
## [medianblur]
for i in range(1, MAX_KERNEL_LENGTH, 2):
dst = cv2.medianBlur(src, i)
if display_dst(DELAY_BLUR) != 0:
return 0
## [medianblur]
# Applying Bilateral Filter
if display_caption('Bilateral Blur') != 0:
return 0
## [bilateralfilter]
# Remember, bilateral filtering is slow, so larger kernel values take longer
for i in range(1, MAX_KERNEL_LENGTH, 2):
dst = cv2.bilateralFilter(src, i, i * 2, i / 2)
if display_dst(DELAY_BLUR) != 0:
return 0
## [bilateralfilter]
# Done
display_caption('Done!')
return 0
def display_caption(caption):
global dst
dst = np.zeros(src.shape, src.dtype)
rows, cols, ch = src.shape
cv2.putText(dst, caption,
(int(cols / 4), int(rows / 2)),
cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 255))
return display_dst(DELAY_CAPTION)
def display_dst(delay):
cv2.imshow(window_name, dst)
c = cv2.waitKey(delay)
if c >= 0: return -1
return 0
if __name__ == "__main__":
main(sys.argv[1:])
"""
@file morph_lines_detection.py
* @brief Sample code showing how to use morphology transformations to extract horizontal and vertical lines
"""
import numpy as np
import sys
import cv2
def show_wait_destroy(winname, img):
cv2.imshow(winname, img)
cv2.moveWindow(winname, 500, 0)
cv2.waitKey(0)
cv2.destroyWindow(winname)
def main(argv):
# [load_image]
# Check number of arguments
if len(argv) < 1:
print ('Not enough parameters')
print ('Usage:\nmorph_lines_detection.py < path_to_image >')
return -1
# Load the image
src = cv2.imread(argv[0], cv2.IMREAD_COLOR)
# Check if image is loaded fine
if src is None:
print ('Error opening image: ' + argv[0])
return -1
# Show source image
cv2.imshow("src", src)
# [load_image]
# [gray]
# Transform source image to gray if it is not already
if len(src.shape) != 2:
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
else:
gray = src
# Show gray image
show_wait_destroy("gray", gray)
# [gray]
# [bin]
# Apply adaptiveThreshold to the bitwise_not of gray
gray = cv2.bitwise_not(gray)
bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, \
cv2.THRESH_BINARY, 15, -2)
# Show binary image
show_wait_destroy("binary", bw)
# [bin]
# [init]
# Create the images that will be used to extract the horizontal and vertical lines
horizontal = np.copy(bw)
vertical = np.copy(bw)
# [init]
# [horiz]
# Specify size on horizontal axis
cols = horizontal.shape[1]
horizontal_size = cols // 30  # must be an int for getStructuringElement
# Create structure element for extracting horizontal lines through morphology operations
horizontalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (horizontal_size, 1))
# Apply morphology operations
horizontal = cv2.erode(horizontal, horizontalStructure)
horizontal = cv2.dilate(horizontal, horizontalStructure)
# Show extracted horizontal lines
show_wait_destroy("horizontal", horizontal)
# [horiz]
# [vert]
# Specify size on vertical axis
rows = vertical.shape[0]
verticalsize = rows // 30  # must be an int for getStructuringElement
# Create structure element for extracting vertical lines through morphology operations
verticalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (1, verticalsize))
# Apply morphology operations
vertical = cv2.erode(vertical, verticalStructure)
vertical = cv2.dilate(vertical, verticalStructure)
# Show extracted vertical lines
show_wait_destroy("vertical", vertical)
# [vert]
# [smooth]
# Inverse vertical image
vertical = cv2.bitwise_not(vertical)
show_wait_destroy("vertical_bit", vertical)
'''
Extract edges and smooth image according to the logic
1. extract edges
2. dilate(edges)
3. src.copyTo(smooth)
4. blur smooth img
5. smooth.copyTo(src, edges)
'''
# Step 1
edges = cv2.adaptiveThreshold(vertical, 255, cv2.ADAPTIVE_THRESH_MEAN_C, \
cv2.THRESH_BINARY, 3, -2)
show_wait_destroy("edges", edges)
# Step 2
kernel = np.ones((2, 2), np.uint8)
edges = cv2.dilate(edges, kernel)
show_wait_destroy("dilate", edges)
# Step 3
smooth = np.copy(vertical)
# Step 4
smooth = cv2.blur(smooth, (2, 2))
# Step 5
(rows, cols) = np.where(edges != 0)
vertical[rows, cols] = smooth[rows, cols]
# Show final result
show_wait_destroy("smooth - final", vertical)
# [smooth]
return 0
if __name__ == "__main__":
main(sys.argv[1:])