Commit 17dfae77 authored Nov 26, 2019 by Alexander Alekhin
Merge pull request #15991 from collinbrake:feature_grammar_fixes_8
parents a093a0e0 6276b86a
Showing 4 changed files with 72 additions and 72 deletions

...torials/py_imgproc/py_colorspaces/py_colorspaces.markdown: +18 -18
...y_tutorials/py_imgproc/py_filtering/py_filtering.markdown: +34 -34
...ric_transformations/py_geometric_transformations.markdown: +14 -14
...rials/py_imgproc/py_thresholding/py_thresholding.markdown: +6 -6
doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.markdown
...
...
@@ -5,44 +5,44 @@ Goal
----

-   In this tutorial, you will learn how to convert images from one color-space to another, like
-    BGR \f$\leftrightarrow\f$ Gray, BGR \f$\leftrightarrow\f$ HSV etc.
--   In addition to that, we will create an application which extracts a colored object in a video
--   You will learn following functions : **cv.cvtColor()**, **cv.inRange()** etc.
+    BGR \f$\leftrightarrow\f$ Gray, BGR \f$\leftrightarrow\f$ HSV, etc.
+-   In addition to that, we will create an application to extract a colored object in a video
+-   You will learn the following functions: **cv.cvtColor()**, **cv.inRange()**, etc.
Changing Color-space
--------------------
There are more than 150 color-space conversion methods available in OpenCV. But we will look into
-only two which are most widely used ones, BGR \f$\leftrightarrow\f$ Gray and BGR \f$\leftrightarrow\f$ HSV.
+only two, which are most widely used ones: BGR \f$\leftrightarrow\f$ Gray and BGR \f$\leftrightarrow\f$ HSV.
For color conversion, we use the function cv.cvtColor(input_image, flag) where flag determines the
type of conversion.
-For BGR \f$\rightarrow\f$ Gray conversion we use the flags cv.COLOR_BGR2GRAY. Similarly for BGR
+For BGR \f$\rightarrow\f$ Gray conversion, we use the flag cv.COLOR_BGR2GRAY. Similarly for BGR
\f$\rightarrow\f$ HSV, we use the flag cv.COLOR_BGR2HSV. To get other flags, just run following
-commands in your Python terminal :
+commands in your Python terminal:
@code{.py}
>>> import cv2 as cv
>>> flags = [i for i in dir(cv) if i.startswith('COLOR_')]
>>> print( flags )
@endcode
-@note For HSV, Hue range is [0,179], Saturation range is [0,255] and Value range is [0,255].
+@note For HSV, hue range is [0,179], saturation range is [0,255], and value range is [0,255].
Different software use different scales. So if you are comparing OpenCV values with them, you need
to normalize these ranges.
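That normalization can be sketched in a few lines of plain Python. This is an illustrative assumption: it treats "other software" as a GIMP-style scale (H in [0,360], S and V in [0,100]); check your tool's actual ranges before relying on it.

```python
def gimp_hsv_to_opencv(h, s, v):
    # Hypothetical converter: GIMP-style HSV (H: 0-360, S/V: 0-100)
    # rescaled to OpenCV's 8-bit HSV (H: 0-179, S/V: 0-255).
    return (round(h / 2), round(s * 255 / 100), round(v * 255 / 100))

print(gimp_hsv_to_opencv(120, 100, 100))  # pure green -> (60, 255, 255)
```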
Object Tracking
---------------
-Now we know how to convert BGR image to HSV, we can use this to extract a colored object. In HSV, it
-is more easier to represent a color than in BGR color-space. In our application, we will try to extract
+Now that we know how to convert a BGR image to HSV, we can use this to extract a colored object. In HSV, it
+is easier to represent a color than in BGR color-space. In our application, we will try to extract
a blue colored object. So here is the method:
-   Take each frame of the video
-   Convert from BGR to HSV color-space
-   We threshold the HSV image for a range of blue color
--   Now extract the blue object alone, we can do whatever on that image we want.
+-   Now extract the blue object alone, we can do whatever we want on that image.
-Below is the code which are commented in detail:
+Below is the code which is commented in detail:
@code{.py}
import cv2 as cv
import numpy as np
...
...
@@ -80,18 +80,18 @@ Below image shows tracking of the blue object:
![image](images/frame.jpg)
-@note There are some noises in the image. We will see how to remove them in later chapters.
+@note There is some noise in the image. We will see how to remove it in later chapters.
@note This is the simplest method in object tracking. Once you learn functions of contours, you can
-do plenty of things like find centroid of this object and use it to track the object, draw diagrams
-just by moving your hand in front of camera and many other funny stuffs.
+do plenty of things like find the centroid of an object and use it to track the object, draw diagrams
+just by moving your hand in front of a camera, and other fun stuff.
How to find HSV values to track?
--------------------------------
This is a common question found in [stackoverflow.com](http://www.stackoverflow.com). It is very simple and
you can use the same function, cv.cvtColor(). Instead of passing an image, you just pass the BGR
-values you want. For example, to find the HSV value of Green, try following commands in Python
+values you want. For example, to find the HSV value of Green, try the following commands in a Python
terminal:
@code{.py}
>>> green = np.uint8([[[0,255,0 ]]])
...
...
@@ -99,7 +99,7 @@ terminal:
>>> print( hsv_green )
[[[ 60 255 255]]]
@endcode
-Now you take [H-10, 100,100] and [H+10, 255, 255] as lower bound and upper bound respectively. Apart
+Now you take [H-10, 100,100] and [H+10, 255, 255] as the lower bound and upper bound respectively. Apart
from this method, you can use any image editing tools like GIMP or any online converters to find
these values, but don't forget to adjust the HSV ranges.
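If cv.cvtColor is not at hand, the same H value can be sanity-checked with Python's standard colorsys module. Note the assumptions: colorsys works in RGB order with unit ranges, so the result must be rescaled to OpenCV's 8-bit conventions.

```python
import colorsys

# colorsys takes RGB in [0, 1]; pure green is (0, 1, 0).
h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)

# Rescale to OpenCV's 8-bit HSV: H in [0,179], S and V in [0,255].
hsv_green = (int(h * 180), int(s * 255), int(v * 255))
print(hsv_green)  # (60, 255, 255), matching the cv.cvtColor result above

# Bounds as suggested in the text: [H-10, 100, 100] .. [H+10, 255, 255]
lower = (hsv_green[0] - 10, 100, 100)
upper = (hsv_green[0] + 10, 255, 255)
```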
...
...
@@ -109,5 +109,5 @@ Additional Resources
Exercises
---------
--# Try to find a way to extract more than one colored objects, for eg, extract red, blue, green objects simultaneously.
+-# Try to find a way to extract more than one colored object, for example, extract red, blue, and green objects simultaneously.
doc/py_tutorials/py_imgproc/py_filtering/py_filtering.markdown
...
...
@@ -5,24 +5,24 @@ Goals
-----
Learn to:
--   Blur the images with various low pass filters
+-   Blur images with various low pass filters
-   Apply custom-made filters to images (2D convolution)
2D Convolution ( Image Filtering )
----------------------------------
-As in one-dimensional signals, images also can be filtered with various low-pass filters(LPF),
-high-pass filters (HPF) etc. LPF helps in removing noises, blurring the images etc. HPF filters helps
-in finding edges in the images.
+As in one-dimensional signals, images also can be filtered with various low-pass filters (LPF),
+high-pass filters (HPF), etc. LPF helps in removing noise, blurring images, etc. HPF filters help
+in finding edges in images.
OpenCV provides a function **cv.filter2D()** to convolve a kernel with an image. As an example, we
-will try an averaging filter on an image. A 5x5 averaging filter kernel will look like below:
+will try an averaging filter on an image. A 5x5 averaging filter kernel will look like the below:
\f[K = \frac{1}{25} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}\f]
-Operation is like this: keep this kernel above a pixel, add all the 25 pixels below this kernel,
-take its average and replace the central pixel with the new average value. It continues this operation
-for all the pixels in the image. Try this code and check the result:
+The operation works like this: keep this kernel above a pixel, add all the 25 pixels below this kernel,
+take the average, and replace the central pixel with the new average value. This operation is continued
+for all the pixels in the image. Try this code and check the result:
@code{.py}
import numpy as np
import cv2 as cv
...
...
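The averaging operation just described can be sketched in plain NumPy. This is a naive, loop-based illustration (borders left unchanged, for simplicity), not OpenCV's optimized cv.filter2D:

```python
import numpy as np

def mean_filter_5x5(img):
    # Replace each interior pixel by the mean of the 25 pixels under
    # the 5x5 kernel; border pixels are left unchanged for simplicity.
    out = img.astype(np.float64).copy()
    for y in range(2, img.shape[0] - 2):
        for x in range(2, img.shape[1] - 2):
            out[y, x] = img[y-2:y+3, x-2:x+3].mean()
    return out

flat = np.full((9, 9), 10.0)
print(mean_filter_5x5(flat)[4, 4])  # a constant image stays constant: 10.0
```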
@@ -47,20 +47,20 @@ Image Blurring (Image Smoothing)
--------------------------------
Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for
-removing noises. It actually removes high frequency content (eg: noise, edges) from the image. So
-edges are blurred a little bit in this operation. (Well, there are blurring techniques which doesn't
-blur the edges too). OpenCV provides mainly four types of blurring techniques.
+removing noise. It actually removes high frequency content (eg: noise, edges) from the image. So
+edges are blurred a little bit in this operation (there are also blurring techniques which don't
+blur the edges). OpenCV provides four main types of blurring techniques.
### 1. Averaging
-This is done by convolving image with a normalized box filter. It simply takes the average of all
-the pixels under kernel area and replace the central element. This is done by the function
+This is done by convolving an image with a normalized box filter. It simply takes the average of all
+the pixels under the kernel area and replaces the central element. This is done by the function
**cv.blur()** or **cv.boxFilter()**. Check the docs for more details about the kernel. We should
-specify the width and height of kernel. A 3x3 normalized box filter would look like below:
+specify the width and height of the kernel. A 3x3 normalized box filter would look like the below:
\f[K = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}\f]
-@note If you don't want to use normalized box filter, use **cv.boxFilter()**. Pass an argument
+@note If you don't want to use a normalized box filter, use **cv.boxFilter()**. Pass an argument
normalize=False to the function.
Check a sample demo below with a kernel of 5x5 size:
...
...
@@ -85,12 +85,12 @@ Result:
### 2. Gaussian Blurring
-In this, instead of box filter, gaussian kernel is used. It is done with the function,
-**cv.GaussianBlur()**. We should specify the width and height of kernel which should be positive
-and odd. We also should specify the standard deviation in X and Y direction, sigmaX and sigmaY
-respectively. If only sigmaX is specified, sigmaY is taken as same as sigmaX. If both are given as
-zeros, they are calculated from kernel size. Gaussian blurring is highly effective in removing
-gaussian noise from the image.
+In this method, instead of a box filter, a Gaussian kernel is used. It is done with the function,
+**cv.GaussianBlur()**. We should specify the width and height of the kernel which should be positive
+and odd. We also should specify the standard deviation in the X and Y directions, sigmaX and sigmaY
+respectively. If only sigmaX is specified, sigmaY is taken as the same as sigmaX. If both are given as
+zeros, they are calculated from the kernel size. Gaussian blurring is highly effective in removing
+Gaussian noise from an image.
If you want, you can create a Gaussian kernel with the function, **cv.getGaussianKernel()**.
...
...
@@ -104,14 +104,14 @@ Result:
### 3. Median Blurring
-Here, the function **cv.medianBlur()** takes median of all the pixels under kernel area and central
+Here, the function **cv.medianBlur()** takes the median of all the pixels under the kernel area and the central
element is replaced with this median value. This is highly effective against salt-and-pepper noise
-in the images. Interesting thing is that, in the above filters, central element is a newly
+in an image. Interestingly, in the above filters, the central element is a newly
calculated value which may be a pixel value in the image or a new value. But in median blurring,
-central element is always replaced by some pixel value in the image. It reduces the noise
+the central element is always replaced by some pixel value in the image. It reduces the noise
effectively. Its kernel size should be a positive odd integer.
-In this demo, I added a 50% noise to our original image and applied median blur. Check the result:
+In this demo, I added a 50% noise to our original image and applied median blurring. Check the result:
@code{.py}
median = cv.medianBlur(img,5)
@endcode
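The salt-and-pepper behaviour described above is easy to see in a naive NumPy sketch (a loop-based 3x3 median, borders left unchanged; cv.medianBlur is the real API):

```python
import numpy as np

def median_filter_3x3(img):
    # Replace each interior pixel with the median of its 3x3
    # neighbourhood; border pixels are left unchanged for simplicity.
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255  # a single "salt" pixel
print(median_filter_3x3(noisy)[2, 2])  # the outlier is replaced: 100
```

Because the median is always one of the neighbourhood's own pixel values, the lone 255 cannot survive, which is exactly why this filter beats averaging on impulse noise.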
...
...
@@ -122,19 +122,19 @@ Result:
### 4. Bilateral Filtering
**cv.bilateralFilter()** is highly effective in noise removal while keeping edges sharp. But the
-operation is slower compared to other filters. We already saw that gaussian filter takes the a
-neighbourhood around the pixel and find its gaussian weighted average. This gaussian filter is a
+operation is slower compared to other filters. We already saw that a Gaussian filter takes the
+neighbourhood around the pixel and finds its Gaussian weighted average. This Gaussian filter is a
function of space alone, that is, nearby pixels are considered while filtering. It doesn't consider
-whether pixels have almost same intensity. It doesn't consider whether pixel is an edge pixel or
+whether pixels have almost the same intensity. It doesn't consider whether a pixel is an edge pixel or
not. So it blurs the edges also, which we don't want to do.
-Bilateral filter also takes a gaussian filter in space, but one more gaussian filter which is a
-function of pixel difference. Gaussian function of space make sure only nearby pixels are considered
-for blurring while gaussian function of intensity difference make sure only those pixels with
-similar intensity to central pixel is considered for blurring. So it preserves the edges since
+Bilateral filtering also takes a Gaussian filter in space, but one more Gaussian filter which is a
+function of pixel difference. The Gaussian function of space makes sure that only nearby pixels are considered
+for blurring, while the Gaussian function of intensity difference makes sure that only those pixels with
+similar intensities to the central pixel are considered for blurring. So it preserves the edges since
pixels at edges will have large intensity variation.
-Below samples shows use bilateral filter (For details on arguments, visit docs).
+The below sample shows use of a bilateral filter (For details on arguments, visit docs).
@code{.py}
blur = cv.bilateralFilter(img,9,75,75)
@endcode
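To make the two Gaussians concrete, here is a NumPy sketch of the bilateral-weighted value of a single pixel. This is an illustrative assumption, not OpenCV's implementation; the function name and parameter values are hypothetical.

```python
import numpy as np

def bilateral_pixel(img, y, x, radius=2, sigma_space=2.0, sigma_color=25.0):
    # Weight each neighbour by a spatial Gaussian (distance to the centre)
    # times a range Gaussian (intensity difference to the centre).
    patch = img[y-radius:y+radius+1, x-radius:x+radius+1].astype(np.float64)
    dy, dx = np.mgrid[-radius:radius+1, -radius:radius+1]
    w_space = np.exp(-(dx**2 + dy**2) / (2 * sigma_space**2))
    w_color = np.exp(-(patch - img[y, x])**2 / (2 * sigma_color**2))
    w = w_space * w_color
    return (w * patch).sum() / w.sum()

edge = np.zeros((8, 8))
edge[:, 4:] = 200.0  # a sharp vertical edge
print(round(bilateral_pixel(edge, 3, 2), 2))  # the dark side stays dark: 0.0
```

Pixels across the edge differ by 200 intensity levels, so their range weight is essentially zero and they do not bleed into the result, which is the edge-preserving behaviour described above.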
...
...
@@ -142,7 +142,7 @@ Result:
![image](images/bilateral.jpg)
-See, the texture on the surface is gone, but edges are still preserved.
+See, the texture on the surface is gone, but the edges are still preserved.
Additional Resources
--------------------
...
...
doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.markdown
...
...
@@ -4,7 +4,7 @@ Geometric Transformations of Images {#tutorial_py_geometric_transformations}
Goals
-----
--   Learn to apply different geometric transformation to images like translation, rotation, affine
+-   Learn to apply different geometric transformations to images, like translation, rotation, affine
    transformation etc.
-   You will see these functions: **cv.getPerspectiveTransform**
...
...
@@ -12,7 +12,7 @@ Transformations
---------------
OpenCV provides two transformation functions, **cv.warpAffine** and **cv.warpPerspective**, with
-which you can have all kinds of transformations. **cv.warpAffine** takes a 2x3 transformation
+which you can perform all kinds of transformations. **cv.warpAffine** takes a 2x3 transformation
matrix while **cv.warpPerspective** takes a 3x3 transformation matrix as input.
### Scaling
...
...
@@ -21,8 +21,8 @@ Scaling is just resizing of the image. OpenCV comes with a function **cv.resize(
purpose. The size of the image can be specified manually, or you can specify the scaling factor.
Different interpolation methods are used. Preferable interpolation methods are **cv.INTER_AREA**
for shrinking and **cv.INTER_CUBIC** (slow) & **cv.INTER_LINEAR** for zooming. By default,
-interpolation method used is **cv.INTER_LINEAR** for all resizing purposes. You can resize an
-input image either of following methods:
+the interpolation method **cv.INTER_LINEAR** is used for all resizing purposes. You can resize an
+input image with either of following methods:
@code{.py}
import numpy as np
import cv2 as cv
...
...
@@ -38,13 +38,13 @@ res = cv.resize(img,(2*width, 2*height), interpolation = cv.INTER_CUBIC)
@endcode
### Translation
-Translation is the shifting of object's location. If you know the shift in (x,y) direction, let it
+Translation is the shifting of an object's location. If you know the shift in the (x,y) direction and let it
be \f$(t_x,t_y)\f$, you can create the transformation matrix \f$\textbf{M}\f$ as follows:

\f[M = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \end{bmatrix}\f]
-You can take make it into a Numpy array of type np.float32 and pass it into **cv.warpAffine()**
-function. See below example for a shift of (100,50):
+You can take make it into a Numpy array of type np.float32 and pass it into the **cv.warpAffine()**
+function. See the below example for a shift of (100,50):
@code{.py}
import numpy as np
import cv2 as cv
...
...
@@ -61,7 +61,7 @@ cv.destroyAllWindows()
@endcode
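The matrix M above can be built and checked in plain NumPy before handing it to cv.warpAffine. A minimal sketch; writing the point in homogeneous form [x, y, 1] makes the shift explicit:

```python
import numpy as np

tx, ty = 100, 50
M = np.float32([[1, 0, tx],
                [0, 1, ty]])

# Applying M to a point [x, y, 1] performs the same shift that
# cv.warpAffine applies to every pixel of the image.
p = np.float32([10, 20, 1])
print((M @ p).tolist())  # [110.0, 70.0]
```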
**warning**

-Third argument of the **cv.warpAffine()** function is the size of the output image, which should
+The third argument of the **cv.warpAffine()** function is the size of the output image, which should
be in the form of **(width, height)**. Remember width = number of columns, and height = number of
rows.
...
...
@@ -76,7 +76,7 @@ Rotation of an image for an angle \f$\theta\f$ is achieved by the transformation
\f[M = \begin{bmatrix} cos\theta & -sin\theta \\ sin\theta & cos\theta \end{bmatrix}\f]
But OpenCV provides scaled rotation with adjustable center of rotation so that you can rotate at any
-location you prefer. Modified transformation matrix is given by
+location you prefer. The modified transformation matrix is given by
\f[\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot center.x - \beta \cdot center.y \\ - \beta & \alpha & \beta \cdot center.x + (1- \alpha ) \cdot center.y \end{bmatrix}\f]
...
...
@@ -84,7 +84,7 @@ where:
\f[\begin{array}{l} \alpha = scale \cdot \cos \theta , \\ \beta = scale \cdot \sin \theta \end{array}\f]
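The scaled-rotation matrix above can be assembled directly from α and β. This is a sketch of what cv.getRotationMatrix2D computes under the stated formulas, not the OpenCV implementation itself:

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    # alpha = scale*cos(theta), beta = scale*sin(theta), as in the text.
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))
    b = scale * np.sin(np.radians(angle_deg))
    return np.array([[a,  b, (1 - a) * cx - b * cy],
                     [-b, a, b * cx + (1 - a) * cy]])

# Rotating the centre of rotation itself leaves it in place.
M = rotation_matrix_2d((3.0, 3.0), 90)
print(np.round(M @ np.array([3.0, 3.0, 1.0]), 6))
```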
-To find this transformation matrix, OpenCV provides a function, **cv.getRotationMatrix2D**. Check
+To find this transformation matrix, OpenCV provides a function, **cv.getRotationMatrix2D**. Check out the
below example which rotates the image by 90 degree with respect to center without any scaling.
@code{.py}
img = cv.imread('messi5.jpg',0)
...
...
@@ -101,11 +101,11 @@ See the result:
### Affine Transformation
In affine transformation, all parallel lines in the original image will still be parallel in the
-output image. To find the transformation matrix, we need three points from input image and their
-corresponding locations in output image. Then **cv.getAffineTransform** will create a 2x3 matrix
+output image. To find the transformation matrix, we need three points from the input image and their
+corresponding locations in the output image. Then **cv.getAffineTransform** will create a 2x3 matrix
which is to be passed to **cv.warpAffine**.
-Check below example, and also look at the points I selected (which are marked in Green color):
+Check the below example, and also look at the points I selected (which are marked in green color):
@code{.py}
img = cv.imread('drawing.png')
rows,cols,ch = img.shape
...
...
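What cv.getAffineTransform solves can be reproduced with a small linear system: three source points and their targets fully determine the 2x3 matrix. The point coordinates below are hypothetical, chosen only for illustration:

```python
import numpy as np

src = np.float32([[50, 50], [200, 50], [50, 200]])
dst = np.float32([[10, 100], [200, 50], [100, 250]])

# Each row [x, y, 1] maps to (x', y'); solving two 3x3 systems
# gives the two rows of the affine matrix.
A = np.hstack([src, np.ones((3, 1), dtype=np.float32)])
M = np.vstack([np.linalg.solve(A, dst[:, 0]),
               np.linalg.solve(A, dst[:, 1])])

# Sanity check: the matrix maps each source point onto its target.
print(np.allclose(M @ np.append(src[0], 1.0), dst[0], atol=1e-3))  # True
```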
@@ -130,7 +130,7 @@ See the result:
For perspective transformation, you need a 3x3 transformation matrix. Straight lines will remain
straight even after the transformation. To find this transformation matrix, you need 4 points on the
input image and corresponding points on the output image. Among these 4 points, 3 of them should not
-be collinear. Then transformation matrix can be found by the function
+be collinear. Then the transformation matrix can be found by the function
**cv.getPerspectiveTransform**. Then apply **cv.warpPerspective** with this 3x3 transformation
matrix.
...
...
doc/py_tutorials/py_imgproc/py_thresholding/py_thresholding.markdown
...
...
@@ -4,13 +4,13 @@ Image Thresholding {#tutorial_py_thresholding}
Goal
----
--   In this tutorial, you will learn Simple thresholding, Adaptive thresholding and Otsu's thresholding.
+-   In this tutorial, you will learn simple thresholding, adaptive thresholding and Otsu's thresholding.
-   You will learn the functions **cv.threshold** and **cv.adaptiveThreshold**.
Simple Thresholding
-------------------
-Here, the matter is straight forward. For every pixel, the same threshold value is applied.
+Here, the matter is straight-forward. For every pixel, the same threshold value is applied.
If the pixel value is smaller than the threshold, it is set to 0, otherwise it is set to a maximum value.
The function **cv.threshold** is used to apply the thresholding.
The first argument is the source image, which **should be a grayscale image**.
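The rule just stated mirrors directly into NumPy. A sketch of what cv.threshold does with THRESH_BINARY; the threshold 127 and maxval 255 are just the customary example values:

```python
import numpy as np

def threshold_binary(img, thresh=127, maxval=255):
    # Pixels above the threshold become maxval, the rest become 0,
    # exactly as described for simple thresholding.
    return np.where(img > thresh, maxval, 0).astype(np.uint8)

gradient = np.array([0, 64, 127, 128, 200, 255], dtype=np.uint8)
print(threshold_binary(gradient).tolist())  # [0, 0, 0, 255, 255, 255]
```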
...
...
@@ -65,11 +65,11 @@ Adaptive Thresholding
In the previous section, we used one global value as a threshold.
But this might not be good in all cases, e.g. if an image has different lighting conditions in different areas.
-In that case, adaptive thresholding thresholding can help.
+In that case, adaptive thresholding can help.
Here, the algorithm determines the threshold for a pixel based on a small region around it.
So we get different thresholds for different regions of the same image which gives better results for images with varying illumination.
-Additionally to the parameters described above, the method cv.adaptiveThreshold
-three input parameters:
+In addition to the parameters described above, the method cv.adaptiveThreshold takes
+three input parameters:
The **adaptiveMethod** decides how the threshold value is calculated:
-   cv.ADAPTIVE_THRESH_MEAN_C: The threshold value is the mean of the neighbourhood area minus the constant **C**.
...
...
@@ -168,8 +168,8 @@ Result:
### How does Otsu's Binarization work?
-This section demonstrates a Python implementation of Otsu's binarization to show how it works
-actually. If you are not interested, you can skip this.
+This section demonstrates a Python implementation of Otsu's binarization to show how it actually
+works. If you are not interested, you can skip this.
Since we are working with bimodal images, Otsu's algorithm tries to find a threshold value (t) which
minimizes the **weighted within-class variance** given by the relation:
...
...
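That search can be written compactly in NumPy. This is a sketch of the idea, not the tutorial's elided implementation: try every t, compute the weighted sum of the two class variances, and keep the minimiser.

```python
import numpy as np

def otsu_threshold(img):
    # Exhaustive search for the t minimising the weighted within-class
    # variance  w0(t)*var0(t) + w1(t)*var1(t)  over the 256 gray levels.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, np.inf
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; variance undefined
        m0 = (levels[:t] * prob[:t]).sum() / w0
        m1 = (levels[t:] * prob[t:]).sum() / w1
        v0 = ((levels[:t] - m0) ** 2 * prob[:t]).sum() / w0
        v1 = ((levels[t:] - m1) ** 2 * prob[t:]).sum() / w1
        if w0 * v0 + w1 * v1 < best_var:
            best_t, best_var = t, w0 * v0 + w1 * v1
    return best_t

# A clearly bimodal image: two flat modes at 50 and 200.
bimodal = np.array([50] * 100 + [200] * 100, dtype=np.uint8)
print(otsu_threshold(bimodal))  # any t separating the modes gives zero variance
```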