opencv / Commits / 6b346926

Commit 6b346926, authored Oct 22, 2018 by Marco A. Gutierrez

minor typo corrections to python tutorials

parent df6728e6
Showing 3 changed files with 8 additions and 8 deletions (+8 -8):

doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.markdown  +3 -3
doc/py_tutorials/py_video/py_bg_subtraction/py_bg_subtraction.markdown  +2 -2
doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.markdown  +3 -3
doc/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.markdown
@@ -81,8 +81,8 @@ points.
 Now an orientation is assigned to each keypoint to achieve invariance to image rotation. A
 neighbourhood is taken around the keypoint location depending on the scale, and the gradient
 magnitude and direction is calculated in that region. An orientation histogram with 36 bins covering
-360 degrees is created. (It is weighted by gradient magnitude and gaussian-weighted circular window
-with \f$\sigma\f$ equal to 1.5 times the scale of keypoint. The highest peak in the histogram is taken
+360 degrees is created (It is weighted by gradient magnitude and gaussian-weighted circular window
+with \f$\sigma\f$ equal to 1.5 times the scale of keypoint). The highest peak in the histogram is taken
 and any peak above 80% of it is also considered to calculate the orientation. It creates keypoints
 with same location and scale, but different directions. It contribute to stability of matching.
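As a minimal illustration of the 36-bin orientation histogram this hunk documents (weighted by gradient magnitude and a Gaussian window with sigma = 1.5 times the keypoint scale), here is a sketch in Python/NumPy; the neighbourhood radius and the assumption that the keypoint lies away from the image border are illustrative choices, not part of the tutorial:

```python
import cv2
import numpy as np

def orientation_histogram(gray, x, y, scale):
    """36-bin (10 degrees/bin) gradient orientation histogram around
    (x, y), Gaussian-weighted with sigma = 1.5 * scale, as in the text."""
    sigma = 1.5 * scale
    radius = int(round(3 * sigma))  # illustrative neighbourhood size (assumption)
    # Assumes the keypoint is at least `radius` pixels from the border.
    patch = gray[y - radius:y + radius + 1,
                 x - radius:x + radius + 1].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)   # horizontal gradient
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)   # vertical gradient
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    # Gaussian-weighted circular window centred on the keypoint
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gauss = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    hist, _ = np.histogram(ang, bins=36, range=(0, 360), weights=mag * gauss)
    # Highest peak gives the dominant orientation; peaks above 80% of it
    # also yield keypoints with the same location/scale.
    return hist
```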
@@ -99,7 +99,7 @@ illumination changes, rotation etc.
 Keypoints between two images are matched by identifying their nearest neighbours. But in some cases,
 the second closest-match may be very near to the first. It may happen due to noise or some other
 reasons. In that case, ratio of closest-distance to second-closest distance is taken. If it is
-greater than 0.8, they are rejected. It eliminaters around 90% of false matches while discards only
+greater than 0.8, they are rejected. It eliminates around 90% of false matches while discards only
 5% correct matches, as per the paper.

 So this is a summary of SIFT algorithm. For more details and understanding, reading the original
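The ratio test this hunk corrects can be sketched as follows, assuming opencv-python 4.4+ (where SIFT_create lives in the main cv2 module); the image file names are placeholders:

```python
import cv2

img1 = cv2.imread('query.png', cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread('train.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher()                      # L2 distance by default
matches = bf.knnMatch(des1, des2, k=2)    # two nearest neighbours per keypoint

# Reject matches whose closest/second-closest distance ratio exceeds 0.8
good = [m for m, n in matches if m.distance < 0.8 * n.distance]
```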
doc/py_tutorials/py_video/py_bg_subtraction/py_bg_subtraction.markdown
@@ -20,7 +20,7 @@ extract the moving foreground from static background.
 If you have an image of background alone, like an image of the room without visitors, image of the road
 without vehicles etc, it is an easy job. Just subtract the new image from the background. You get
 the foreground objects alone. But in most of the cases, you may not have such an image, so we need
-to extract the background from whatever images we have. It become more complicated when there are
+to extract the background from whatever images we have. It becomes more complicated when there are
 shadows of the vehicles. Since shadows also move, simple subtraction will mark that also as
 foreground. It complicates things.
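The "easy job" case from this hunk, subtracting a known clean background frame, might look like this minimal sketch; the file names and the threshold value 30 are assumptions:

```python
import cv2

background = cv2.imread('empty_room.png', cv2.IMREAD_GRAYSCALE)       # assumed file
frame = cv2.imread('room_with_visitors.png', cv2.IMREAD_GRAYSCALE)    # assumed file

diff = cv2.absdiff(frame, background)                       # per-pixel difference
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # foreground mask
foreground = cv2.bitwise_and(frame, frame, mask=mask)       # foreground alone
cv2.imwrite('foreground.png', foreground)
```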
@@ -72,7 +72,7 @@ papers by Z.Zivkovic, "Improved adaptive Gaussian mixture model for background s
 and "Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction"
 in 2006. One important feature of this algorithm is that it selects the appropriate number of
 gaussian distribution for each pixel. (Remember, in last case, we took a K gaussian distributions
-throughout the algorithm). It provides better adaptibility to varying scenes due illumination
+throughout the algorithm). It provides better adaptability to varying scenes due illumination
 changes etc.

 As in previous case, we have to create a background subtractor object. Here, you have an option of
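Creating and applying the MOG2 subtractor the hunk refers to looks roughly like this sketch; the video file name is a placeholder, and detectShadows=True marks shadow pixels in gray rather than as foreground:

```python
import cv2

cap = cv2.VideoCapture('traffic.avi')  # placeholder video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg.apply(frame)   # updates the per-pixel mixture model and
                                 # returns the foreground mask for this frame
    cv2.imshow('foreground mask', fgmask)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```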
doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.markdown
@@ -75,10 +75,10 @@ solution.
 ( Check similarity of inverse matrix with Harris corner detector. It denotes that corners are better
 points to be tracked.)

-So from user point of view, idea is simple, we give some points to track, we receive the optical
+So from the user point of view, the idea is simple, we give some points to track, we receive the optical
 flow vectors of those points. But again there are some problems. Until now, we were dealing with
-small motions. So it fails when there is large motion. So again we go for pyramids. When we go up in
-the pyramid, small motions are removed and large motions becomes small motions. So applying
+small motions, so it fails when there is a large motion. To deal with this we use pyramids. When we go up in
+the pyramid, small motions are removed and large motions become small motions. So by applying
 Lucas-Kanade there, we get optical flow along with the scale.

 Lucas-Kanade Optical Flow in OpenCV
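The give-points-in, get-flow-out workflow with pyramids that the corrected text describes, as a minimal sketch; the video name and tracking parameters are illustrative assumptions, and maxLevel sets how many pyramid levels are available to absorb large motions:

```python
import cv2

cap = cv2.VideoCapture('walking.avi')  # placeholder video file
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

# Points to track: Shi-Tomasi corners (parameter values are illustrative)
p0 = cv2.goodFeaturesToTrack(old_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7)

lk_params = dict(winSize=(15, 15),
                 maxLevel=2,   # pyramid levels; large motions shrink as we go up
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           10, 0.03))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # p1: new positions of the tracked points; st == 1 where flow was found
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None, **lk_params)
    good_new = p1[st == 1]
    old_gray, p0 = gray, good_new.reshape(-1, 1, 2)

cap.release()
```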