Commit bc7f6fc4 authored by Adeel Ahmad, committed by Alexander Alekhin

Merge pull request #8253 from adl1995:master

* Update linux_install.markdown

Grammar improvements, fixed typos.

* Update tutorials.markdown

Improvements in grammar.

* Update table_of_content_calib3d.markdown

* Update camera_calibration_square_chess.markdown

Improvements in grammar. Added answer.

* Update tutorials.markdown

* Update erosion_dilatation.markdown

* Update table_of_content_imgproc.markdown

* Update warp_affine.markdown

* Update camera_calibration_square_chess.markdown

Removed extra space.

* Update gpu_basics_similarity.markdown

Grammatical improvements, fixed typos.

* Update trackbar.markdown

Improvement for better understanding.
parent da0b1d88
@@ -5,7 +5,7 @@ The goal of this tutorial is to learn how to calibrate a camera given a set of c
*Test data*: use images in your data/chess folder.
- Compile OpenCV with samples by setting BUILD_EXAMPLES to ON in the CMake configuration.
- Go to the bin folder and use imagelist_creator to create an XML/YAML list of your images.
@@ -14,32 +14,32 @@ The goal of this tutorial is to learn how to calibrate a camera given a set of c
Pose estimation
---------------
Now, let us write code that detects a chessboard in an image and finds its distance from the
camera. You can apply this method to any object with known 3D geometry that you can detect in an
image.
*Test data*: use chess_test\*.jpg images from your data folder.
- Create an empty console project. Load a test image:

        Mat img = imread(argv[1], IMREAD_GRAYSCALE);

- Detect a chessboard in this image using the findChessboardCorners function:

        bool found = findChessboardCorners( img, boardSize, ptvec, CALIB_CB_ADAPTIVE_THRESH );

- Now, write a function that generates a vector\<Point3f\> array of 3D coordinates of the chessboard
  in any coordinate system. For simplicity, let us choose a system such that one of the chessboard
  corners is at the origin and the board is in the plane *z = 0*.
- Read camera parameters from the XML/YAML file:

        FileStorage fs(filename, FileStorage::READ);
        Mat intrinsics, distortion;
        fs["camera_matrix"] >> intrinsics;
        fs["distortion_coefficients"] >> distortion;

- Now we are ready to find the chessboard pose by running `solvePnP`:

        vector<Point3f> boardPoints;
        // fill the array
@@ -51,4 +51,5 @@ image.
- Calculate the reprojection error as it is done in the calibration sample (see
  opencv/samples/cpp/calibration.cpp, function computeReprojectionErrors).
Question: how would you calculate the distance from the camera origin to any one of the corners?
Answer: `solvePnP` recovers the pose of the board relative to the camera. Transform a corner from the board's coordinate system into camera coordinates using that pose, and the distance is simply the L2 norm of the resulting vector.
Camera calibration and 3D reconstruction (calib3d module) {#tutorial_table_of_content_calib3d}
==========================================================
Although most of our images come in a 2D format, they originate from a 3D world. Here you will learn
how to extract 3D world information from 2D images.
- @subpage tutorial_camera_calibration_square_chess
...
@@ -6,14 +6,14 @@ Goal
----
In the @ref tutorial_video_input_psnr_ssim tutorial I already presented the PSNR and SSIM methods for checking
the similarity between two images. As you could see, the execution takes quite some
time, especially in the case of SSIM. However, if the performance numbers of an OpenCV
implementation for the CPU do not satisfy you and you happen to have an NVidia CUDA GPU device in
your system, all is not lost. You may try to port or write your own algorithm for the video card.
This tutorial will give a good grasp on how to approach coding by using the GPU module of OpenCV. As
a prerequisite you should already know how to handle the core, highgui and imgproc modules. So, our
main goals are:
- What's different compared to the CPU?
- Create the GPU code for the PSNR and SSIM
@@ -22,8 +22,8 @@ goals are:
The source code
---------------
You may also find the source code and the video file in the
`samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity` directory of the OpenCV
source library or download it from [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp).
The full source code is quite long (due to the controlling of the application via the command line
arguments and performance measurement). Therefore, to avoid cluttering up these sections with those
@@ -37,7 +37,7 @@ better).
@snippet samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp psnr
@snippet samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity.cpp getpsnropt
The SSIM returns the MSSIM of the images. This is also a floating point number between zero and one (higher is
better), however we have one for each channel. Therefore, we return a *Scalar* OpenCV data
structure:
@@ -49,13 +49,13 @@ structure:
How to do it? - The GPU
-----------------------
As seen above, we have three types of functions for each operation: one for the CPU and two for
the GPU. The reason I made two for the GPU is to illustrate that simply porting your CPU code to the
GPU will often actually make it slower. If you want some performance gain you will need to remember a
few rules, which I will detail later on.
The development of the GPU module was made so that it resembles its CPU counterpart as much as
possible. This makes the porting process easier. The first thing you need to do before writing any code is
to link the GPU module to your project, and include the header file for the module. All the
functions and data structures of the GPU are in a *gpu* sub namespace of the *cv* namespace. You may
add this to the default one via the *using namespace* keyword, or mark it everywhere explicitly via
@@ -64,25 +64,25 @@ the cv:: to avoid confusion. I'll do the latter.
#include <opencv2/gpu.hpp>        // GPU structures and methods
@endcode
GPU stands for "graphics processing unit". It was originally built to render graphical
scenes. These scenes somehow build on a lot of data. Nevertheless, these are not all dependent on
one another in a sequential way, so parallel processing of them is possible. Due to this a
GPU will contain multiple smaller processing units. These aren't state of the art processors, and
in a one on one test with a CPU they will fall behind. However, their strength lies in their numbers. In
recent years there has been an increasing trend to harvest these massive parallel powers of the
GPU in non-graphical scene rendering too. This gave birth to general-purpose computation on
graphics processing units (GPGPU).
The GPU has its own memory. When you read data from the hard drive with OpenCV into a *Mat* object,
that takes place in your system's memory. The CPU works somehow directly on this (via its cache),
however the GPU cannot. It has to transfer the information required for calculations from the
system memory to its own. This is done via an upload process and takes time. In the end the result
will have to be downloaded back to your system memory for your CPU to see and use it. Porting
small functions to the GPU is not recommended, as the upload/download time will be larger than the amount
you gain by parallel execution.
Mat objects are stored only in the system memory (or the CPU cache). For getting an OpenCV matrix to
the GPU you'll need to use its GPU counterpart, @ref cv::cuda::GpuMat. It works similarly to the Mat, with a
2D only limitation and no reference returning for its functions (you cannot mix GPU references with CPU
ones). To upload a Mat object to the GPU you need to call the upload function after creating an
instance of the class. To download you may use simple assignment to a Mat object or use the download
@@ -103,17 +103,17 @@ with the source code.
Another thing to keep in mind is that not for all channel numbers can you make efficient algorithms
on the GPU. Generally, I found that the input images for the GPU need to be either one or
four channel ones, and one of the char or float type for the item sizes. No double support on the
GPU, sorry. Passing other types of objects to some functions will result in an exception being thrown,
and an error message on the error output. The documentation details in most places the types
accepted for the inputs. If you have three channel images as an input you can do two things: either
add a new channel (and use char elements) or split up the image and call the function for each
image. The first one isn't really recommended, as this wastes memory.
For some functions, where the position of the elements (neighbor items) doesn't matter, the quick
solution is to reshape the image into a single channel one. This is the case for the PSNR
implementation, where for the *absdiff* method the value of the neighbors is not important. However,
for the *GaussianBlur* this isn't an option, so we need to use the split method for the SSIM. With
this knowledge you can already make GPU viable code (like my GPU version) and run it. You'll be
surprised to see that it might turn out slower than your CPU implementation.
Optimization
------------
@@ -147,15 +147,15 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
Now you access these local parameters as: *b.gI1*, *b.buf* and so on. The GpuMat will only
reallocate itself on a new call if the new matrix size is different from the previous one.
-# Avoid unnecessary function data transfers. Any small data transfer will be significant once
   you go to the GPU. Therefore, if possible, make all calculations in-place (in other words, do not
   create new memory objects, for reasons explained at the previous point). For example, although
   arithmetical operations may be easier to express in one line formulas, it will be
   slower. In the case of the SSIM at one point I need to calculate:
@code{.cpp}
b.t1 = 2 * b.mu1_mu2 + C1;
@endcode
Although the upper call will succeed, observe that there is a hidden data transfer present.
Before it makes the addition it needs to store the multiplication result somewhere. Therefore, it will
create a local matrix in the background, add the *C1* value to that and finally assign it to
*t1*. To avoid this we use the gpu functions, instead of the arithmetic operators:
@@ -163,17 +163,17 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
gpu::multiply(b.mu1_mu2, 2, b.t1); //b.t1 = 2 * b.mu1_mu2 + C1;
gpu::add(b.t1, C1, b.t1);
@endcode
-# Use asynchronous calls (the @ref cv::cuda::Stream ). By default whenever you call a GPU function
   it will wait for the call to finish and return with the result afterwards. However, it is
   possible to make asynchronous calls, meaning it will start the operation execution, make the
   costly data allocations for the algorithm and return right away. Now you can call another
   function, if you wish. For the MSSIM this is a small optimization point. In our default
   implementation we split up the image into channels and then call the GPU functions for each
   channel. A small degree of parallelization is possible with the stream. By using a stream we
   can make the data allocations and upload operations while the GPU is already executing a given
   method. For example, we need to upload two images. We queue these one after another and immediately
   call the function that processes them. The functions will wait for the upload to finish;
   however, while that happens the output buffer allocations for the function to be executed
   next are made.
@code{.cpp}
gpu::Stream stream;
@@ -187,7 +187,7 @@ introduce asynchronous OpenCV GPU calls too with the help of the @ref cv::cuda::
Result and conclusion
---------------------
On an Intel P8700 laptop CPU paired with a low end NVidia GT220M, here are the performance numbers:
@code
Time of PSNR CPU (averaged for 10 runs): 41.4122 milliseconds. With result of: 19.2506
Time of PSNR GPU (averaged for 10 runs): 158.977 milliseconds. With result of: 19.2506
...
@@ -68,7 +68,7 @@ Result
![](images/Adding_Trackbars_Tutorial_Result_0.jpg)
- As a manner of practice, you can also add two trackbars for the program made in
  @ref tutorial_basic_linear_transform: one trackbar to set \f$\alpha\f$ and another to set \f$\beta\f$. The output might
  look like:
![](images/Adding_Trackbars_Tutorial_Result_1.jpg)
@@ -6,12 +6,12 @@ Goal
In this tutorial you will learn how to:
- Apply two very common morphological operators: Erosion and Dilation. For this purpose, you will use
  the following OpenCV functions:
  - @ref cv::erode
  - @ref cv::dilate
Interesting fact
----------------
@note The explanation below belongs to the book **Learning OpenCV** by Bradski and Kaehler.
@@ -21,7 +21,7 @@ Morphological Operations
- In short: a set of operations that process images based on shapes. Morphological operations
  apply a *structuring element* to an input image and generate an output image.
- The most basic morphological operations are Erosion and Dilation. They have a wide array of
  uses, e.g.:
  - Removing noise
  - Isolation of individual elements and joining disparate elements in an image.
@@ -32,19 +32,19 @@ Morphological Operations
### Dilation
- This operation consists of convolving an image \f$A\f$ with some kernel (\f$B\f$), which can have any
  shape or size, usually a square or circle.
- The kernel \f$B\f$ has a defined *anchor point*, usually being the center of the kernel.
- As the kernel \f$B\f$ is scanned over the image, we compute the maximal pixel value overlapped by
  \f$B\f$ and replace the image pixel at the anchor point position with that maximal value. As you can
  deduce, this maximizing operation causes bright regions within an image to "grow" (hence the
  name *dilation*). Take the above image as an example. Applying dilation we can get:
![](images/Morphology_1_Tutorial_Theory_Dilation.png)
The background (bright) dilates around the black regions of the letter.
To better grasp the idea and avoid possible confusion, in this other example we have inverted the original
image such that the object in white is now the letter. We have performed two dilations with a rectangular
structuring element of size `3x3`.
@@ -54,8 +54,8 @@ The dilatation makes the object in white bigger.
### Erosion
- This operation is the sister of dilation. It computes a local minimum over the
  area of the kernel.
- As the kernel \f$B\f$ is scanned over the image, we compute the minimal pixel value overlapped by
  \f$B\f$ and replace the image pixel under the anchor point with that minimal value.
- Analogously to the example for dilation, we can apply the erosion operator to the original image
@@ -64,7 +64,7 @@ The dilatation makes the object in white bigger.
![](images/Morphology_1_Tutorial_Theory_Erosion.png)
In a similar manner, the corresponding image resulting from the erosion operation on the inverted original image (two erosions
with a rectangular structuring element of size `3x3`):
![Left image: original image inverted, right image: resulting erosion](images/Morphology_1_Tutorial_Theory_Erosion_2.png)
@@ -74,14 +74,14 @@ The erosion makes the object in white smaller.
Code
----
This tutorial's code is shown below. You can also download it from
[here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/ImgProc/Morphology_1.cpp)
@include samples/cpp/tutorial_code/ImgProc/Morphology_1.cpp
Explanation
-----------
-# Most of the material shown here should already be familiar to you (if you have any doubt, please refer to the tutorials in
   previous sections). Let's check the general structure of the program:
   - Load an image (can be BGR or grayscale)
@@ -118,8 +118,8 @@ Explanation
- That is all. We are ready to perform the erosion of our image.
@note Additionally, there is another parameter that allows you to perform multiple erosions
(iterations) at once. However, we haven't used it in this simple tutorial. You can check out the
reference for more details.
-# **dilation:**
...
@@ -14,9 +14,9 @@ Theory
### What is an Affine Transformation?
-# A transformation that can be expressed in the form of a *matrix multiplication* (linear
   transformation) followed by a *vector addition* (translation).
-# From the above, we can use an Affine Transformation to express:
   -# Rotations (linear transformation)
   -# Translations (vector addition)
@@ -25,7 +25,7 @@ Theory
you can see that, in essence, an Affine Transformation represents a **relation** between two
images.
-# The usual way to represent an Affine Transformation is by using a \f$2 \times 3\f$ matrix.
\f[
A = \begin{bmatrix}
@@ -49,7 +49,7 @@ Theory
\f]
Considering that we want to transform a 2D vector \f$X = \begin{bmatrix}x \\ y\end{bmatrix}\f$ by
using \f$A\f$ and \f$B\f$, we can do the same with:
\f$T = A \cdot \begin{bmatrix}x \\ y\end{bmatrix} + B\f$ or \f$T = M \cdot [x, y, 1]^{T}\f$
@@ -60,35 +60,35 @@ Theory
### How do we get an Affine Transformation?
-# We mentioned that an Affine Transformation is basically a **relation**
   between two images. The information about this relation can come, roughly, in two ways:
-# We know both \f$X\f$ and \f$T\f$ and we also know that they are related. Then our task is to find \f$M\f$
-# We know \f$M\f$ and \f$X\f$. To obtain \f$T\f$ we only need to apply \f$T = M \cdot X\f$. Our information
   for \f$M\f$ may be explicit (i.e. we have the 2-by-3 matrix) or it can come as a geometric relation
   between points.
-# Let's explain case (b) a little better. Since \f$M\f$ relates two images, we can analyze the simplest
   case in which it relates three points in both images. Look at the figure below:
![](images/Warp_Affine_Tutorial_Theory_0.jpg)
the points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a
triangle, but now their positions have changed noticeably. If we find the Affine Transformation with these
3 points (you can choose them as you like), then we can apply this found relation to all the
pixels in the image.
Code
----
-# **What does this program do?**
    - Loads an image
    - Applies an Affine Transform to the image. This transform is obtained from the relation
      between three points. We use the function @ref cv::warpAffine for that purpose.
    - Applies a Rotation to the image after being transformed. This rotation is with respect to
      the image center
    - Waits until the user exits the program
-# The tutorial's code is shown below. You can also download it from
   [here](https://github.com/opencv/opencv/tree/master/samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp)
@include samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp
@@ -113,10 +113,10 @@ Explanation
@code{.cpp}
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
@endcode
-# **Affine Transform:** As explained above, we need two sets of 3 points to derive the
   affine transform relation. Take a look:
@code{.cpp}
srcTri[0] = Point2f( 0, 0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
@@ -124,7 +124,7 @@ Explanation
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
@endcode
You may want to draw these points to get a better idea of how they change. Their locations are
approximately the same as the ones depicted in the example figure (in the Theory section). You
may note that the size and orientation of the triangle defined by the 3 points change.
@@ -133,9 +133,9 @@ Explanation
@code{.cpp}
warp_mat = getAffineTransform( srcTri, dstTri );
@endcode
We get a \f$2 \times 3\f$ matrix as an output (in this case **warp_mat**)
-# We then apply the Affine Transform just found to the src image
@code{.cpp}
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
@endcode
...
@@ -41,7 +41,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Theodore Tsesmelis
Here we will show how we can use different morphological operators to extract horizontal and vertical lines
- @subpage tutorial_pyramids
@@ -57,7 +57,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
After so much processing, it is time to decide which pixels stay
- @subpage tutorial_threshold_inRange
@@ -81,7 +81,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn how to pad our images
- @subpage tutorial_sobel_derivatives
@@ -89,7 +89,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn how to calculate gradients and use them to detect edges
- @subpage tutorial_laplace_operator
@@ -97,7 +97,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn about the *Laplace* operator and how to detect edges with it
- @subpage tutorial_canny_detector
@@ -105,7 +105,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn a sophisticated alternative to detect edges
- @subpage tutorial_hough_lines
@@ -193,7 +193,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn how to get hull contours and draw them
- @subpage tutorial_bounding_rects_circles
@@ -201,7 +201,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn how to obtain bounding boxes and circles for our contours
- @subpage tutorial_bounding_rotated_ellipses
@@ -209,7 +209,7 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Ana Huamán
Where we learn how to obtain rotated bounding boxes and ellipses for our contours
- @subpage tutorial_moments
@@ -233,4 +233,4 @@ In this section you will learn about the image processing (manipulation) functio
*Author:* Theodore Tsesmelis
Where we learn to segment objects using Laplacian filtering, the Distance Transformation and the Watershed algorithm.
\ No newline at end of file
Installation in Linux {#tutorial_linux_install}
=====================
The following steps have been tested for Ubuntu 10.04 but should work with other distros as well.
Required Packages
-----------------
@@ -39,7 +39,7 @@ repository](https://github.com/opencv/opencv.git).
### Getting the Cutting-edge OpenCV from the Git Repository
Launch the Git client and clone the [OpenCV repository](http://github.com/opencv/opencv). If you need
modules from the [OpenCV contrib repository](http://github.com/opencv/opencv_contrib) then clone it as well.
For example
@code{.bash}
@@ -97,7 +97,7 @@ Building OpenCV from Source Using CMake
- It is also useful to unset BUILD_EXAMPLES, BUILD_TESTS, BUILD_PERF_TESTS - as they all
  will be statically linked with OpenCV and can take a lot of memory.
-# Build. From the build directory execute *make*; it is recommended to do this in several threads.
For example
@code{.bash}
@@ -111,7 +111,7 @@ Building OpenCV from Source Using CMake
cd ~/opencv/build/doc/
make -j7 html_docs
@endcode
-# To install the libraries, execute the following command from the build directory
@code{.bash}
sudo make install
@endcode
@@ -134,6 +134,6 @@ Building OpenCV from Source Using CMake
@note
If the size of the created library is a critical issue (as in the case of an Android build) you
can use the install/strip command to get the smallest size possible. The *stripped* version
appears to be roughly half the size. However, we do not recommend using this unless those extra
megabytes really do matter.
@@ -2,7 +2,7 @@ OpenCV Tutorials {#tutorial_root}
================
The following links describe a set of basic OpenCV tutorials. All the source code mentioned here is
provided as part of the regular OpenCV releases, so check before you start copying & pasting the code.
The list of tutorials below is automatically generated from reST files located in our GIT
repository.
@@ -10,12 +10,12 @@ As always, we would be happy to hear your comments and receive your contribution
- @subpage tutorial_table_of_content_introduction
You will learn how to set up OpenCV on your computer
- @subpage tutorial_table_of_content_core
Here you will learn
about the basic building blocks of this library. A must read for understanding how
to manipulate images on a pixel level.
- @subpage tutorial_table_of_content_imgproc
@@ -25,7 +25,7 @@ As always, we would be happy to hear your comments and receive your contribution
- @subpage tutorial_table_of_content_highgui
This section contains valuable tutorials on how to use the
built-in graphical user interface of the library.
- @subpage tutorial_table_of_content_imgcodecs
@@ -38,9 +38,9 @@ As always, we would be happy to hear your comments and receive your contribution
- @subpage tutorial_table_of_content_calib3d
Although
most of our images are in a 2D format, they do come from a 3D world. Here you will learn how to
extract 3D world information from 2D images.
- @subpage tutorial_table_of_content_features2d
@@ -49,14 +49,14 @@ As always, we would be happy to hear your comments and receive your contribution
- @subpage tutorial_table_of_content_video
Here you will find
algorithms usable on your video streams, such as motion extraction, feature tracking and
foreground extraction.
- @subpage tutorial_table_of_content_objdetect
Ever wondered
how your digital camera detects people's faces? Look here to find out!
- @subpage tutorial_table_of_content_ml
@@ -75,7 +75,7 @@ As always, we would be happy to hear your comments and receive your contribution
- @subpage tutorial_table_of_content_gpu
Squeeze every last
bit of computational power out of your system by utilizing your video card to run the
OpenCV algorithms.
- @subpage tutorial_table_of_content_ios
...