Commit 102c80a2 authored by berak

remove some non-ascii symbols

parent 60a510c2
 Improved Background-Foreground Segmentation Methods
 ===================================================
-This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It [1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, and Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called “Are We There Yet?” from March 31 - July 31 2011 at the Contemporary Jewish Museum in San Francisco, California.
+This algorithm combines statistical background image estimation and per-pixel Bayesian segmentation. It [1] was introduced by Andrew B. Godbehere, Akihiro Matsukawa, and Ken Goldberg in 2012. As per the paper, the system ran a successful interactive audio art installation called "Are We There Yet?" from March 31 - July 31 2011 at the Contemporary Jewish Museum in San Francisco, California.
 It uses the first few (120 by default) frames for background modelling. It employs a probabilistic foreground segmentation algorithm that identifies possible foreground objects using Bayesian inference. The estimates are adaptive; newer observations are weighted more heavily than old ones to accommodate variable illumination. Several morphological filtering operations such as closing and opening are applied to remove unwanted noise. You will get a black window during the first few frames.
 References
 ----------
-[1]: A.B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference. (2012), pp. 4305–4312
\ No newline at end of file
+[1]: A.B. Godbehere, A. Matsukawa, K. Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. American Control Conference. (2012), pp. 4305–4312
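
For context, a minimal sketch of driving the algorithm this tutorial hunk describes via the opencv_contrib bgsegm module, including the morphological opening the text mentions (the video path and kernel size are placeholders, not from the tutorial):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/bgsegm.hpp>

int main()
{
    cv::VideoCapture cap("video.avi");  // placeholder video path

    // The first 120 frames (the default) build the background model,
    // which is why the foreground mask stays black at the start.
    cv::Ptr<cv::bgsegm::BackgroundSubtractorGMG> gmg =
        cv::bgsegm::createBackgroundSubtractorGMG();

    cv::Mat frame, fgMask;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    while (cap.read(frame))
    {
        gmg->apply(frame, fgMask);
        // Morphological opening removes small speckle noise from the mask.
        cv::morphologyEx(fgMask, fgMask, cv::MORPH_OPEN, kernel);
        cv::imshow("foreground", fgMask);
        if (cv::waitKey(30) == 27) break;  // Esc quits
    }
    return 0;
}
```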
@@ -485,7 +485,7 @@ Implements loading dataset:
 "VOT 2015 dataset comprises 60 short sequences showing various objects in challenging backgrounds.
 The sequences were chosen from a large pool of sequences including the ALOV dataset, OTB2 dataset,
-non-tracking datasets, Computer Vision Online, Professor Bob Fishers Image Database, Videezy,
+non-tracking datasets, Computer Vision Online, Professor Bob Fisher's Image Database, Videezy,
 Center for Research in Computer Vision, University of Central Florida, USA, NYU Center for Genomics
 and Systems Biology, Data Wrangling, Open Access Directory and Learning and Recognition in Vision
 Group, INRIA, France. The VOT sequence selection protocol was applied to obtain a representative
@@ -70,7 +70,7 @@ which is available since the 2.4 release. I suggest you take a look at its descr
 Algorithm provides the following features for all derived classes:
-- So called “virtual constructor”. That is, each Algorithm derivative is registered at program
+- So called "virtual constructor". That is, each Algorithm derivative is registered at program
   start and you can get the list of registered algorithms and create an instance of a particular
   algorithm by its name (see Algorithm::create). If you plan to add your own algorithms, it is
   good practice to add a unique prefix to your algorithms to distinguish them from other
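
For reference, a rough sketch of the 2.4-era factory this hunk documents; it assumes an OpenCV 2.4.x build and that "Feature2D.ORB" is among the registered names (Algorithm::getList prints what is actually available):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <vector>
#include <string>

int main()
{
    // Enumerate every Algorithm derivative registered at program start.
    std::vector<std::string> names;
    cv::Algorithm::getList(names);
    for (size_t i = 0; i < names.size(); ++i)
        std::cout << names[i] << std::endl;

    // The "virtual constructor": create an instance by its registered name.
    cv::Ptr<cv::Feature2D> orb = cv::Algorithm::create<cv::Feature2D>("Feature2D.ORB");
    return 0;
}
```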
@@ -52,7 +52,7 @@
 }
 @incollection{IPMU2012,
-  title={$F^1$-transform edge detector inspired by cannys algorithm},
+  title={$F^1$-transform edge detector inspired by canny's algorithm},
   author={Perfilieva, Irina and Hod'{\'a}kov{\'a}, Petra and Hurtík, Petr},
   booktitle={Advances on Computational Intelligence},
   pages={230--239},
@@ -75,4 +75,4 @@
   pages={235--240},
   year={2015},
   organization={IEEE}
-}
\ No newline at end of file
+}
@@ -93,7 +93,7 @@ class CV_EXPORTS_W StaticSaliency : public virtual Saliency
 targets, a segmentation by clustering is performed, using the *K-means algorithm*. Then, to gain a
 binary representation of the clustered saliency map, since values of the map can vary according to
 the characteristics of the frame under analysis, it is not convenient to use a fixed threshold. So,
-*Otsus algorithm* is used, which assumes that the image to be thresholded contains two classes
+*Otsu's algorithm* is used, which assumes that the image to be thresholded contains two classes
 of pixels or bi-modal histograms (e.g. foreground and background pixels); later on, the
 algorithm calculates the optimal threshold separating those two classes, so that their
 intra-class variance is minimal.
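
The clustering and Otsu binarization described in this doc comment are exposed through StaticSaliency::computeBinaryMap; a minimal usage sketch, with StaticSaliencySpectralResidual as one concrete subclass (the input file name is a placeholder):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/saliency.hpp>

int main()
{
    cv::Mat image = cv::imread("input.jpg");  // placeholder input image

    // Any StaticSaliency subclass works; spectral residual is a simple one.
    cv::Ptr<cv::saliency::StaticSaliencySpectralResidual> sal =
        cv::saliency::StaticSaliencySpectralResidual::create();

    cv::Mat saliencyMap, binaryMap;
    if (sal->computeSaliency(image, saliencyMap))
        // Runs the K-means clustering and Otsu thresholding described above.
        sal->computeBinaryMap(saliencyMap, binaryMap);
    return 0;
}
```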
@@ -77,7 +77,7 @@ void FindCandidateMatches(const FeatureSet &left,
 // method.
 // I.e., a match is considered strong if the following test is true:
 //   distances[0] < fRatio * distances[1].
-// From David Lowe “Distinctive Image Features from Scale-Invariant Keypoints”.
+// From David Lowe "Distinctive Image Features from Scale-Invariant Keypoints".
 // You can use David Lowe's magic ratio (0.6 or 0.8).
 // 0.8 allows removing 90% of the false matches while discarding less than 5%
 // of the correct matches.
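
The same ratio test is easy to reproduce with OpenCV's generic matcher API; a minimal sketch, not the FindCandidateMatches implementation itself (the Hamming norm assumes binary descriptors such as ORB, and the 0.8 default is Lowe's looser ratio):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Keep a match only if the best distance is clearly smaller than the
// second-best distance (Lowe's ratio test).
std::vector<cv::DMatch> ratioTestMatches(const cv::Mat& desc1,
                                         const cv::Mat& desc2,
                                         float fRatio = 0.8f)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);  // suits ORB/binary descriptors
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(desc1, desc2, knn, 2);   // two nearest neighbours per query

    std::vector<cv::DMatch> strong;
    for (size_t i = 0; i < knn.size(); ++i)
        if (knn[i].size() == 2 &&
            knn[i][0].distance < fRatio * knn[i][1].distance)
            strong.push_back(knn[i][0]);
    return strong;
}
```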
@@ -137,7 +137,7 @@ class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
 * @param patternImages The pattern images acquired by the camera, stored in a grayscale vector < Mat >.
 * @param x x coordinate of the image pixel.
 * @param y y coordinate of the image pixel.
-* @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projectors pixel corresponding to the pixel being decoded in a camera.
+* @param projPix Projector's pixel corresponding to the camera's pixel: projPix.x and projPix.y are the image coordinates of the projector's pixel corresponding to the pixel being decoded in a camera.
 */
 CV_WRAP
 virtual bool getProjPixel( InputArrayOfArrays patternImages, int x, int y, Point &projPix ) const = 0;
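
A rough usage sketch for this decoding call (the projector resolution and the camera pixel (100, 100) are placeholders, and capturedPatterns stands in for real camera captures of the projected patterns):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/structured_light.hpp>
#include <vector>

int main()
{
    // The projector resolution determines how many Gray-code patterns are generated.
    cv::structured_light::GrayCodePattern::Params params;
    params.width  = 1024;  // placeholder projector width
    params.height = 768;   // placeholder projector height
    cv::Ptr<cv::structured_light::GrayCodePattern> pattern =
        cv::structured_light::GrayCodePattern::create(params);

    std::vector<cv::Mat> patterns;
    pattern->generate(patterns);  // patterns to project and photograph

    // capturedPatterns would hold the camera images of the projected patterns.
    std::vector<cv::Mat> capturedPatterns;  // placeholder: fill from the camera
    cv::Point projPix;
    if (!capturedPatterns.empty() &&
        pattern->getProjPixel(capturedPatterns, 100, 100, projPix))
    {
        // projPix now holds the projector pixel seen by camera pixel (100, 100).
    }
    return 0;
}
```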
@@ -146,4 +146,4 @@ class CV_EXPORTS_W GrayCodePattern : public StructuredLightPattern
 //! @}
 }
 }
-#endif
\ No newline at end of file
+#endif
@@ -53,7 +53,7 @@ namespace structured_light {
 // other algorithms can be implemented
 enum
 {
-    DECODE_3D_UNDERWORLD = 0  //!< Kyriakos Herakleous, Charalambos Poullis. “3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition”, arXiv preprint arXiv:1406.6595 (2014).
+    DECODE_3D_UNDERWORLD = 0  //!< Kyriakos Herakleous, Charalambos Poullis. "3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition", arXiv preprint arXiv:1406.6595 (2014).
 };
 /** @brief Abstract base class for generating and decoding structured light patterns.
@@ -88,4 +88,4 @@ class CV_EXPORTS_W StructuredLightPattern : public virtual Algorithm
 }
 }
-#endif
\ No newline at end of file
+#endif
@@ -5,7 +5,7 @@
 #ifndef __OPENCV_TEXT_TEXTDETECTOR_HPP__
 #define __OPENCV_TEXT_TEXTDETECTOR_HPP__
-#include"ocr.hpp"
+#include "ocr.hpp"
 namespace cv
 {
@@ -113,4 +113,4 @@ CMAKE_OPTIONS='-DBUILD_PERF_TESTS:BOOL=OFF -DBUILD_TESTS:BOOL=OFF -DBUILD_DOCS:B
 @endcode
 -# now we need the language files from tesseract. Either clone https://github.com/tesseract-ocr/tessdata, or copy only those language files you need to a folder (example c:\\lib\\install\\tesseract\\tessdata). If you don't want to add a new folder, you must copy the language files to the same folder as your executable
 -# if you created a new folder, then you must add a new environment variable, TESSDATA_PREFIX, with the value c:\\lib\\install\\tessdata to your system's environment
--# add c:\\Lib\\install\\leptonica\\bin and c:\\Lib\\install\\tesseract\\bin to your PATH environment. If you don't want to modify the PATH, then copy tesseract400.dll and leptonica-1.74.4.dll to the same folder as your exe file.
\ No newline at end of file
+-# add c:\\Lib\\install\\leptonica\\bin and c:\\Lib\\install\\tesseract\\bin to your PATH environment. If you don't want to modify the PATH, then copy tesseract400.dll and leptonica-1.74.4.dll to the same folder as your exe file.
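
Once the language files and DLLs are in place, a minimal sketch to verify the setup (the image name is a placeholder; with no datapath argument, tesseract is expected to fall back to TESSDATA_PREFIX):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/text.hpp>
#include <iostream>

int main()
{
    // No datapath given: tesseract locates its language files via TESSDATA_PREFIX.
    cv::Ptr<cv::text::OCRTesseract> ocr = cv::text::OCRTesseract::create();

    cv::Mat image = cv::imread("sample_text.png");  // placeholder image
    std::string output;
    ocr->run(image, output);  // recognized text ends up in output
    std::cout << output << std::endl;
    return 0;
}
```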
@@ -1171,7 +1171,7 @@ class CV_EXPORTS_W TrackerMedianFlow : public Tracker
 tracking, learning and detection.
 The tracker follows the object from frame to frame. The detector localizes all appearances that
-have been observed so far and corrects the tracker if necessary. The learning estimates detectors
+have been observed so far and corrects the tracker if necessary. The learning estimates detector's
 errors and updates it to avoid these errors in the future. The implementation is based on @cite TLD .
 The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as a tracking component in this
@@ -1435,7 +1435,7 @@ public:
 the long-term tracking task into tracking, learning and detection.
 The tracker follows the object from frame to frame. The detector localizes all appearances that
-have been observed so far and corrects the tracker if necessary. The learning estimates detectors
+have been observed so far and corrects the tracker if necessary. The learning estimates detector's
 errors and updates it to avoid these errors in the future. The implementation is based on @cite TLD .
 The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as a tracking component in this
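
Both hunks describe the same TLD loop, which is driven like any other cv::Tracker; a minimal sketch, assuming a contrib build where TrackerTLD::create is available (older 3.x releases used Tracker::create("TLD") instead; the video path and initial box are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/tracking.hpp>

int main()
{
    cv::VideoCapture cap("video.avi");  // placeholder video path
    cv::Mat frame;
    cap >> frame;

    cv::Ptr<cv::Tracker> tracker = cv::TrackerTLD::create();
    cv::Rect2d box(100, 100, 80, 80);   // placeholder initial object box
    tracker->init(frame, box);

    while (cap.read(frame))
    {
        // update() runs the tracking/detection/learning cycle on each frame.
        if (tracker->update(frame, box))
        {
            // box now holds the tracked object's location in this frame.
        }
    }
    return 0;
}
```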