Commit f55740da authored by Roman Donchenko's avatar Roman Donchenko

Deleted all trailing whitespace.

parent 0d8cb2e3
......@@ -16,7 +16,7 @@ How to update opencv_ffmpeg.dll and opencv_ffmpeg_64.dll when a new version of F
2. Install 64-bit MinGW. http://mingw-w64.sourceforge.net/
Let's assume it's installed in C:\MSYS64
3. Copy C:\MSYS32\msys to C:\MSYS64\msys. Edit C:\MSYS64\msys\etc\fstab, change C:\MSYS32 to C:\MSYS64.
4. Now you have working MSYS32 and MSYS64 environments.
Launch, one by one, C:\MSYS32\msys\msys.bat and C:\MSYS64\msys\msys.bat to create your home directories.
......
......@@ -45,13 +45,13 @@ jasper-1.900.1 - JasPer is a collection of software
and manipulation of images. This software can handle image data in a
variety of formats. One such format supported by JasPer is the JPEG-2000
format defined in ISO/IEC 15444-1.
Copyright (c) 1999-2000 Image Power, Inc.
Copyright (c) 1999-2000 The University of British Columbia
Copyright (c) 2001-2003 Michael David Adams
The JasPer license can be found in src/libjasper.
OpenCV on Windows uses pre-built libjasper library
(lib/libjasper*). To get the latest source code,
please visit the project homepage:
......
......@@ -140,7 +140,7 @@ endfunction()
# ------------------------------------------------------------------------
function(set_ipp_new_libraries _LATEST_VERSION)
set(IPP_PREFIX "ipp")
if(${_LATEST_VERSION} VERSION_LESS "8.0")
set(IPP_SUFFIX "_l") # static not threaded libs suffix IPP 7.x
else()
......
......@@ -19,7 +19,7 @@ set(XIMEA_LIBRARY_DIR)
if(WIN32)
# Try to find the XIMEA API path in registry.
GET_FILENAME_COMPONENT(XIMEA_PATH "[HKEY_CURRENT_USER\\Software\\XIMEA\\CamSupport\\API;Path]" ABSOLUTE)
if(EXISTS ${XIMEA_PATH})
set(XIMEA_FOUND 1)
# set LIB folders
......
function insertIframe (elementId, iframeSrc)
{
var iframe;
if (document.createElement && (iframe = document.createElement('iframe')))
......
......@@ -4,14 +4,14 @@ INSTRUCTIONS TO BUILD WIN32 PACKAGES WITH CMAKE+CPACK
- Install NSIS.
- Generate OpenCV solutions for MSVC using CMake as usual.
- In cmake-gui:
- Mark BUILD_PACKAGE
- Mark BUILD_EXAMPLES (If examples are desired to be shipped as binaries...)
- Unmark ENABLE_OPENMP, since this feature still seems to have some issues...
- Mark INSTALL_*_EXAMPLES
- Open the OpenCV solution and build ALL in Debug and Release.
- Build PACKAGE, from the Release configuration. An NSIS installer package will be
created with both release and debug LIBs and DLLs.
Jose Luis Blanco, 2009/JUL/29
......@@ -7,16 +7,16 @@ Camera calibration with square chessboard
The goal of this tutorial is to learn how to calibrate a camera given a set of chessboard images.
*Test data*: use images in your data/chess folder.
#.
Compile opencv with samples by setting ``BUILD_EXAMPLES`` to ``ON`` in cmake configuration.
#.
Go to ``bin`` folder and use ``imagelist_creator`` to create an ``XML/YAML`` list of your images.
#.
Then, run ``calibration`` sample to get camera parameters. Use square size equal to 3cm.
Pose estimation
===============
......@@ -57,6 +57,6 @@ Now, let us write a code that detects a chessboard in a new image and finds its
distCoeffs, rvec, tvec, false);
#.
Calculate reprojection error like it is done in ``calibration`` sample (see ``opencv/samples/cpp/calibration.cpp``, function ``computeReprojectionErrors``).
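A hedged sketch of that computation for a single view (names such as ``boardPoints`` and ``foundPoints`` stand in for the object and image point variables used above):

.. code-block:: cpp

   // Re-project the 3D corners with the estimated pose and compare them
   // against the detected 2D corners.
   vector<Point2f> projected;
   projectPoints(Mat(boardPoints), rvec, tvec, cameraMatrix, distCoeffs, projected);
   double err = norm(Mat(foundPoints), Mat(projected), NORM_L2);
   double rms = std::sqrt(err * err / projected.size());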
Question: how to calculate the distance from the camera origin to any of the corners?
\ No newline at end of file
......@@ -3,11 +3,11 @@
*calib3d* module. Camera calibration and 3D reconstruction
-----------------------------------------------------------
Although we got most of our images in a 2D format, they do come from a 3D world. Here you will learn how to find out from the 2D images information about the 3D world.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......@@ -26,7 +26,7 @@ Although we got most of our images in a 2D format they do come from a 3D world.
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......
......@@ -18,7 +18,7 @@ Theory
.. note::
The explanation below belongs to the book `Computer Vision: Algorithms and Applications <http://szeliski.org/Book/>`_ by Richard Szeliski
From our previous tutorial, we know already a bit of *Pixel operators*. An interesting dyadic (two-input) operator is the *linear blend operator*:
......@@ -43,7 +43,7 @@ As usual, after the not-so-lengthy explanation, let's go to the code:
int main( int argc, char** argv )
{
double alpha = 0.5; double beta; double input;
Mat src1, src2, dst;
......@@ -69,7 +69,7 @@ As usual, after the not-so-lengthy explanation, let's go to the code:
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
waitKey(0);
......@@ -99,10 +99,10 @@ Explanation
#. Now we need to generate the :math:`g(x)` image. For this, the function :add_weighted:`addWeighted <>` comes in quite handy:
.. code-block:: cpp
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
since :add_weighted:`addWeighted <>` produces:
.. math::
......@@ -110,12 +110,12 @@ Explanation
dst = \alpha \cdot src1 + \beta \cdot src2 + \gamma
In this case, :math:`\gamma` is the argument :math:`0.0` in the code above.
#. Create windows, show the images and wait for the user to end the program.
Result
=======
.. image:: images/Adding_Images_Tutorial_Result_0.jpg
:alt: Blending Images Tutorial - Final Result
:align: center
......@@ -10,7 +10,7 @@ In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
+ Access pixel values
+ Initialize a matrix with zeros
......@@ -20,16 +20,16 @@ In this tutorial you will learn how to:
Theory
=======
.. note::
The explanation below belongs to the book `Computer Vision: Algorithms and Applications <http://szeliski.org/Book/>`_ by Richard Szeliski
Image Processing
--------------------
.. container:: enumeratevisibleitemswithsquare
* A general image processing operator is a function that takes one or more input images and produces an output image.
* Image transforms can be seen as:
......@@ -54,18 +54,18 @@ Brightness and contrast adjustments
* Two commonly used point processes are *multiplication* and *addition* with a constant:
.. math::
g(x) = \alpha f(x) + \beta
* The parameters :math:`\alpha > 0` and :math:`\beta` are often called the *gain* and *bias* parameters; sometimes these parameters are said to control *contrast* and *brightness* respectively.
* You can think of :math:`f(x)` as the source image pixels and :math:`g(x)` as the output image pixels. Then, more conveniently we can write the expression as:
.. math::
g(i,j) = \alpha \cdot f(i,j) + \beta
where :math:`i` and :math:`j` indicate that the pixel is located in the *i-th* row and *j-th* column.
Code
=====
......@@ -91,7 +91,7 @@ Code
Mat image = imread( argv[1] );
Mat new_image = Mat::zeros( image.size(), image.type() );
/// Initialize values
std::cout<<" Basic Linear Transforms "<<std::endl;
std::cout<<"-------------------------"<<std::endl;
std::cout<<"* Enter the alpha value [1.0-3.0]: ";std::cin>>alpha;
......@@ -102,7 +102,7 @@ Code
{ for( int x = 0; x < image.cols; x++ )
{ for( int c = 0; c < 3; c++ )
{
new_image.at<Vec3b>(y,x)[c] =
saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );
}
}
......@@ -133,41 +133,41 @@ Explanation
#. We load an image using :imread:`imread <>` and save it in a Mat object:
.. code-block:: cpp
Mat image = imread( argv[1] );
#. Now, since we will make some transformations to this image, we need a new Mat object to store it. Also, we want this to have the following features:
.. container:: enumeratevisibleitemswithsquare
* Initial pixel values equal to zero
* Same size and type as the original image
.. code-block:: cpp
Mat new_image = Mat::zeros( image.size(), image.type() );
We observe that :mat_zeros:`Mat::zeros <>` returns a Matlab-style zero initializer based on *image.size()* and *image.type()*
#. Now, to perform the operation :math:`g(i,j) = \alpha \cdot f(i,j) + \beta` we will access each pixel in the image. Since we are operating with RGB images, we will have three values per pixel (R, G and B), so we will also access them separately. Here is the piece of code:
.. code-block:: cpp
for( int y = 0; y < image.rows; y++ )
{ for( int x = 0; x < image.cols; x++ )
{ for( int c = 0; c < 3; c++ )
{ new_image.at<Vec3b>(y,x)[c] =
saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta ); }
}
}
Notice the following:
.. container:: enumeratevisibleitemswithsquare
* To access each pixel in the images we are using this syntax: *image.at<Vec3b>(y,x)[c]* where *y* is the row, *x* is the column and *c* is R, G or B (0, 1 or 2).
* Since the operation :math:`\alpha \cdot p(i,j) + \beta` can give values out of range or not integers (if :math:`\alpha` is float), we use :saturate_cast:`saturate_cast <>` to make sure the values are valid.
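As a small self-contained illustration of that clamping (a hypothetical sketch, not part of the tutorial sample):

.. code-block:: cpp

   #include <opencv2/core/core.hpp>
   #include <cstdio>

   int main()
   {
       double v = 3.0 * 200 + 50;                 // 650, outside [0, 255]
       uchar u = cv::saturate_cast<uchar>(v);     // rounds, then clamps to 255
       std::printf("%d\n", (int)u);               // prints 255
       return 0;
   }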
......@@ -175,7 +175,7 @@ Explanation
#. Finally, we create windows and show the images, the usual way.
.. code-block:: cpp
namedWindow("Original Image", 1);
namedWindow("New Image", 1);
......@@ -185,9 +185,9 @@ Explanation
waitKey(0);
.. note::
Instead of using the **for** loops to access each pixel, we could have simply used this command:
.. code-block:: cpp
image.convertTo(new_image, -1, alpha, beta);
......@@ -211,4 +211,4 @@ Result
.. image:: images/Basic_Linear_Transform_Tutorial_Result_0.jpg
:alt: Basic Linear Transform - Final Result
:align: center
......@@ -4,9 +4,9 @@ File Input and Output using XML and YAML files
**********************************************
Goal
====
You'll find answers to the following questions:
.. container:: enumeratevisibleitemswithsquare
......@@ -18,7 +18,7 @@ You'll find answers for the following questions:
Source code
===========
You can :download:`download this from here <../../../../samples/cpp/tutorial_code/core/file_input_output/file_input_output.cpp>` or find it in the :file:`samples/cpp/tutorial_code/core/file_input_output/file_input_output.cpp` of the OpenCV source code library.
Here's sample code showing how to achieve everything enumerated in the goal list.
......@@ -31,9 +31,9 @@ Here's a sample code of how to achieve all the stuff enumerated at the goal list
Explanation
===========
Here we talk only about XML and YAML file inputs. Your output (and its respective input) file may have only one of these extensions and the structure that follows from it. There are two kinds of data structures you may serialize: *mappings* (like the STL map) and *element sequences* (like the STL vector). The difference between these is that in a map every element has a unique name by which you may access it. For sequences you need to go through them to query a specific item.
1. **XML\\YAML File Open and Close.** Before you write any content to such a file you need to open it, and at the end close it. The XML\\YAML data structure in OpenCV is :xmlymlpers:`FileStorage <filestorage>`. To specify the file to which this structure binds on your hard drive, you can use either its constructor or the *open()* function:
.. code-block:: cpp
......@@ -42,29 +42,29 @@ Here we talk only about XML and YAML file inputs. Your output (and its respectiv
\\...
fs.open(filename, FileStorage::READ);
Whichever one you use, the second argument is a constant specifying the type of operations you'll be able to perform on them: WRITE, READ or APPEND. The extension specified in the file name also determines the output format that will be used. The output may even be compressed if you specify an extension such as *.xml.gz*.
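For instance, a minimal sketch of requesting compressed YAML output (hypothetical file name):

.. code-block:: cpp

   FileStorage fs("outputfile.yml.gz", FileStorage::WRITE); // .gz asks for compression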
The file automatically closes when the :xmlymlpers:`FileStorage <filestorage>` object is destroyed. However, you may explicitly close it by using the *release* function:
.. code-block:: cpp
fs.release(); // explicit close
#. **Input and Output of text and numbers.** The data structure uses the same << output operator as the STL library. For outputting any type of data structure we first need to specify its name; we do this by simply printing out its name. For basic types you may follow this with the print of the value:
.. code-block:: cpp
fs << "iterationNr" << 100;
Reading in is a simple addressing (via the [] operator) and casting operation, or a read via the >> operator:
.. code-block:: cpp
int itNr;
fs["iterationNr"] >> itNr;
itNr = (int) fs["iterationNr"];
#. **Input\\Output of OpenCV Data structures.** These behave exactly like the basic C++ types:
.. code-block:: cpp
......@@ -77,7 +77,7 @@ Here we talk only about XML and YAML file inputs. Your output (and its respectiv
fs["R"] >> R; // Read cv::Mat
fs["T"] >> T;
#. **Input\\Output of vectors (arrays) and associative maps.** As I mentioned beforehand, we can output maps and sequences (array, vector) too. Again, we first print the name of the variable and then we have to specify if our output is either a sequence or map.
For sequences, print the "[" character before the first element and the "]" character after the last one:
......@@ -95,7 +95,7 @@ Here we talk only about XML and YAML file inputs. Your output (and its respectiv
fs << "{" << "One" << 1;
fs << "Two" << 2 << "}";
To read from these we use the :xmlymlpers:`FileNode <filenode>` and the :xmlymlpers:`FileNodeIterator <filenodeiterator>` data structures. The [] operator of the :xmlymlpers:`FileStorage <filestorage>` class returns a :xmlymlpers:`FileNode <filenode>` data type. If the node is sequential we can use the :xmlymlpers:`FileNodeIterator <filenodeiterator>` to iterate through the items:
.. code-block:: cpp
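// A hedged sketch, not the tutorial's verbatim listing: iterate a sequence
// node, assuming a "strings" sequence was written earlier in the file.
FileNode n = fs["strings"];                      // read a node by name
if (n.type() != FileNode::SEQ)
{ cerr << "strings is not a sequence! FAIL" << endl; return 1; }

for (FileNodeIterator it = n.begin(); it != n.end(); ++it)
    cout << (string)*it << endl;                 // cast each element and print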
......@@ -115,8 +115,8 @@ Here we talk only about XML and YAML file inputs. Your output (and its respectiv
.. code-block:: cpp
n = fs["Mapping"]; // Read mappings from a sequence
cout << "Two " << (int)(n["Two"]) << "; ";
cout << "One " << (int)(n["One"]) << endl << endl;
cout << "Two " << (int)(n["Two"]) << "; ";
cout << "One " << (int)(n["One"]) << endl << endl;
#. **Read and write your own data structures.** Suppose you have a data structure such as:
......@@ -148,7 +148,7 @@ Here we talk only about XML and YAML file inputs. Your output (and its respectiv
id = (string)node["id"];
}
Then you need to add the following function definitions outside the class:
.. code-block:: cpp
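// A hedged sketch of the two free functions OpenCV's serialization looks
// for (assuming the MyData class above defines matching write/read members):
static void write(FileStorage& fs, const string&, const MyData& x)
{
    x.write(fs);                                 // delegate to the member
}
static void read(const FileNode& node, MyData& x,
                 const MyData& default_value = MyData())
{
    if (node.empty())                            // node absent in the file
        x = default_value;
    else
        x.read(node);                            // delegate to the member
}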
......@@ -175,17 +175,17 @@ Here we talk only about XML and YAML file inputs. Your output (and its respectiv
fs << "MyData" << m; // your own data structures
fs["MyData"] >> m; // Read your own structure_
Or to try out reading a non-existing node:
.. code-block:: cpp
fs["NonExisting"] >> m; // Do not add a fs << "NonExisting" << m command for this to work
fs["NonExisting"] >> m; // Do not add a fs << "NonExisting" << m command for this to work
cout << endl << "NonExisting = " << endl << m << endl;
Result
======
Well, mostly we just print out the defined numbers. On the screen of your console you could see:
.. code-block:: bash
......@@ -212,7 +212,7 @@ Well mostly we just print out the defined numbers. On the screen of your console
Tip: Open up output.xml with a text editor to see the serialized data.
Nevertheless, it's much more interesting what you may see in the output xml file:
.. code-block:: xml
......@@ -242,7 +242,7 @@ Nevertheless, it's much more interesting what you may see in the output xml file
<id>mydata1234</id></MyData>
</opencv_storage>
Or the YAML file:
.. code-block:: yaml
......
......@@ -8,11 +8,11 @@ Mask operations on matrices are quite simple. The idea is that we recalculate ea
Our test case
=============
Let us consider the issue of an image contrast enhancement method. Basically, we want to apply the following formula to every pixel of the image:
.. math::
I(i,j) = 5*I(i,j) - [ I(i-1,j) + I(i+1,j) + I(i,j-1) + I(i,j+1)]
\iff I(i,j)*M, \text{where }
M = \bordermatrix{ _i\backslash ^j & -1 & 0 & +1 \cr
......@@ -23,12 +23,12 @@ Let us consider the issue of an image contrast enhancement method. Basically we
The first notation is by using a formula, while the second is a compacted version of the first by using a mask. You use the mask by putting the center of the mask matrix (in the case above, denoted by the zero-zero index) on the pixel you want to calculate, and sum up the pixel values multiplied with the overlapped matrix values. It's the same thing, however in case of large matrices the latter notation is a lot easier to read.
Now let us see how we can make this happen by using the basic pixel access method or by using the :filtering:`filter2D <filter2d>` function.
The Basic Method
================
Here's a function that will do this:
.. code-block:: cpp
......@@ -49,7 +49,7 @@ Here's a function that will do this:
for(int i= nChannels;i < nChannels*(myImage.cols-1); ++i)
{
*output++ = saturate_cast<uchar>(5*current[i]
-current[i-nChannels] - current[i+nChannels] - previous[i] - next[i]);
}
}
......@@ -87,7 +87,7 @@ We'll use the plain C [] operator to access pixels. Because we need to access mu
for(int i= nChannels;i < nChannels*(myImage.cols-1); ++i)
{
*output++ = saturate_cast<uchar>(5*current[i]
-current[i-nChannels] - current[i+nChannels] - previous[i] - next[i]);
}
}
......@@ -96,7 +96,7 @@ On the borders of the image the upper notation results inexistent pixel location
.. code-block:: cpp
Result.row(0).setTo(Scalar(0)); // The top row
Result.row(Result.rows-1).setTo(Scalar(0)); // The bottom row
Result.col(0).setTo(Scalar(0)); // The left column
Result.col(Result.cols-1).setTo(Scalar(0)); // The right column
......@@ -108,19 +108,19 @@ Applying such filters are so common in image processing that in OpenCV there exi
.. code-block:: cpp
Mat kern = (Mat_<char>(3,3) << 0, -1, 0,
-1, 5, -1,
0, -1, 0);
Then call the :filtering:`filter2D <filter2d>` function specifying the input, the output image and the kernel to use:
.. code-block:: cpp
filter2D(I, K, I.depth(), kern );
The function even has a fifth optional argument to specify the center of the kernel, and a sixth one for determining what to do in the regions where the operation is undefined (borders). Using this function has the advantage that it's shorter, less verbose and because there are some optimization techniques implemented it is usually faster than the *hand-coded method*. For example in my test while the second one took only 13 milliseconds the first took around 31 milliseconds. Quite some difference.
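A hedged sketch of spelling those optional arguments out explicitly (the values shown are simply the defaults):

.. code-block:: cpp

   // Point(-1,-1) keeps the anchor at the kernel center; the 0 is an optional
   // offset added to each result; BORDER_DEFAULT selects the border handling.
   filter2D(I, K, I.depth(), kern, Point(-1,-1), 0, BORDER_DEFAULT);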
For example:
.. image:: images/resultMatMaskFilter2D.png
:alt: A sample output of the program
......@@ -128,7 +128,7 @@ For example:
You can download this source code from :download:`here <../../../../samples/cpp/tutorial_code/core/mat_mask_operations/mat_mask_operations.cpp>` or look in the OpenCV source code libraries sample directory at :file:`samples/cpp/tutorial_code/core/mat_mask_operations/mat_mask_operations.cpp`.
Check out an instance of running the program on our `YouTube channel <http://www.youtube.com/watch?v=7PF1tAU9se4>`_ .
.. raw:: html
......
......@@ -44,7 +44,7 @@ Here you will learn the about the basic building blocks of the library. A must r
.. |HowScanImag| image:: images/howToScanImages.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
......@@ -193,7 +193,7 @@ Here you will learn the about the basic building blocks of the library. A must r
*Author:* |Author_BernatG|
Did you use OpenCV before version 2.0? Do you want to know what happened to your library with 2.0? Don't know how to convert your old OpenCV programs to the new C++ interface? Look here to shed light on all these questions.
=============== ======================================================
.. |InterOOpenCV1| image:: images/interopOpenCV1.png
......@@ -208,7 +208,7 @@ Here you will learn the about the basic building blocks of the library. A must r
.. toctree::
:hidden:
../mat_the_basic_image_container/mat_the_basic_image_container
../how_to_scan_images/how_to_scan_images
../mat-mask-operations/mat-mask-operations
......
......@@ -3,7 +3,7 @@
.. |Author_AndreyK| unicode:: Andrey U+0020 Kamaev
.. |Author_LeonidBLB| unicode:: Leonid U+0020 Beynenson
.. |Author_VsevolodG| unicode:: Vsevolod U+0020 Glumov
.. |Author_VictorE| unicode:: Victor U+0020 Eruhimov
.. |Author_ArtemM| unicode:: Artem U+0020 Myagkov
.. |Author_FernandoI| unicode:: Fernando U+0020 Iglesias U+0020 Garc U+00ED a
.. |Author_EduardF| unicode:: Eduard U+0020 Feicho
......
......@@ -5,9 +5,9 @@ Detection of planar objects
.. highlight:: cpp
The goal of this tutorial is to learn how to use *features2d* and *calib3d* modules for detecting known planar objects in scenes.
*Test data*: use images in your data folder, for instance, ``box.png`` and ``box_in_scene.png``.
#.
Create a new console project. Read two input images. ::
......@@ -22,7 +22,7 @@ The goal of this tutorial is to learn how to use *features2d* and *calib3d* modu
FastFeatureDetector detector(15);
vector<KeyPoint> keypoints1;
detector.detect(img1, keypoints1);
... // do the same for the second image
#.
......@@ -32,7 +32,7 @@ The goal of this tutorial is to learn how to use *features2d* and *calib3d* modu
SurfDescriptorExtractor extractor;
Mat descriptors1;
extractor.compute(img1, keypoints1, descriptors1);
... // process keypoints from the second image as well
#.
......@@ -69,4 +69,4 @@ The goal of this tutorial is to learn how to use *features2d* and *calib3d* modu
perspectiveTransform(Mat(points1), points1Projected, H);
#.
Use ``drawMatches`` for drawing inliers.
......@@ -5,166 +5,166 @@
Learn how to use the feature point detectors, descriptors and matching framework found inside OpenCV.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|Harris| **Title:** :ref:`harris_detector`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Why is it a good idea to track corners? We learn to use the Harris method to detect corners
===================== ==============================================
.. |Harris| image:: images/trackingmotion/Harris_Detector_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|ShiTomasi| **Title:** :ref:`good_features_to_track`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Where we use an improved method to detect corners more accurately.
===================== ==============================================
.. |ShiTomasi| image:: images/trackingmotion/Shi_Tomasi_Detector_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|GenericCorner| **Title:** :ref:`generic_corner_detector`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Here you will learn how to use OpenCV functions to make your personalized corner detector!
===================== ==============================================
.. |GenericCorner| image:: images/trackingmotion/Generic_Corner_Detector_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|Subpixel| **Title:** :ref:`corner_subpixeles`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Is pixel resolution enough? Here we learn a simple method to improve our accuracy.
===================== ==============================================
.. |Subpixel| image:: images/trackingmotion/Corner_Subpixeles_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureDetect| **Title:** :ref:`feature_detection`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use *features2d* to detect interest points.
===================== ==============================================
.. |FeatureDetect| image:: images/Feature_Detection_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureDescript| **Title:** :ref:`feature_description`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use *features2d* to calculate feature vectors.
===================== ==============================================
.. |FeatureDescript| image:: images/Feature_Description_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureFlann| **Title:** :ref:`feature_flann_matcher`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use the FLANN library to make a fast matching.
===================== ==============================================
.. |FeatureFlann| image:: images/Feature_Flann_Matcher_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|FeatureHomo| **Title:** :ref:`feature_homography`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
In this tutorial, you will use *features2d* and *calib3d* to detect an object in a scene.
===================== ==============================================
.. |FeatureHomo| image:: images/Feature_Homography_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......@@ -175,7 +175,7 @@ Learn about how to use the feature points detectors, descriptors and matching f
*Author:* |Author_VictorE|
You will use *features2d* and *calib3d* modules for detecting known planar objects in scenes.
===================== ==============================================
......
......@@ -7,7 +7,7 @@ Squeeze out every little computation power from your system by using the power o
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......@@ -18,7 +18,7 @@ Squeeze out every little computation power from your system by using the power o
*Author:* |Author_BernatG|
This will give a good grasp on how to approach coding on the GPU module, once you already know how to handle the other modules. As a test case it will port the similarity methods from the tutorial :ref:`videoInputPSNRMSSIM` to the GPU.
=============== ======================================================
......
......@@ -3,30 +3,30 @@
*highgui* module. High Level GUI and Media
------------------------------------------
This section contains valuable tutorials about how to read/save your image/video files and how to use the built-in graphical user interface of the library.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=============== ======================================================
|Beginners_5| *Title:* :ref:`Adding_Trackbars`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
We will learn how to add a Trackbar to our applications
=============== ======================================================
.. |Beginners_5| image:: images/Adding_Trackbars_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......@@ -34,7 +34,7 @@ This section contains valuable tutorials about how to read/save your image/video
|hVideoInput| *Title:* :ref:`videoInputPSNRMSSIM`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_BernatG|
You will learn how to read video streams, and how to calculate similarity values such as PSNR or SSIM.
......@@ -45,7 +45,7 @@ This section contains valuable tutorials about how to read/save your image/video
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......
......@@ -5,11 +5,11 @@ Adding a Trackbar to our applications!
* In the previous tutorials (about *linear blending* and the *brightness and contrast adjustments*) you might have noted that we needed to give some **input** to our programs, such as :math:`\alpha` and :math:`\beta`. We accomplished that by entering this data using the Terminal
* Well, it is time to use some fancy GUI tools. OpenCV provides some GUI utilities (*highgui.h*) for you. An example of this is a **Trackbar**
.. image:: images/Adding_Trackbars_Tutorial_Trackbar.png
:alt: Trackbar example
:align: center
* In this tutorial we will just modify our two previous programs so that they get the input information from the trackbar.
......@@ -19,7 +19,7 @@ Goals
In this tutorial you will learn how to:
* Add a Trackbar in an OpenCV window by using :create_trackbar:`createTrackbar <>`
Code
=====
......@@ -33,13 +33,13 @@ Let's modify the program made in the tutorial :ref:`Adding_Images`. We will let
using namespace cv;
/// Global Variables
const int alpha_slider_max = 100;
int alpha_slider;
double alpha;
double beta;
/// Matrices to store images
Mat src1;
Mat src2;
Mat dst;
......@@ -49,12 +49,12 @@ Let's modify the program made in the tutorial :ref:`Adding_Images`. We will let
* @brief Callback for trackbar
*/
void on_trackbar( int, void* )
{
alpha = (double) alpha_slider/alpha_slider_max ;
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
}
......@@ -67,7 +67,7 @@ Let's modify the program made in the tutorial :ref:`Adding_Images`. We will let
if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
/// Initialize values
alpha_slider = 0;
/// Create Windows
......@@ -75,13 +75,13 @@ Let's modify the program made in the tutorial :ref:`Adding_Images`. We will let
/// Create Trackbars
char TrackbarName[50];
sprintf( TrackbarName, "Alpha x %d", alpha_slider_max );
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
/// Show some stuff
on_trackbar( alpha_slider, 0 );
/// Wait until user press some key
waitKey(0);
return 0;
......@@ -113,7 +113,7 @@ We only analyze the code that is related to Trackbar:
createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
Note the following:
* Our Trackbar has a label **TrackbarName**
* The Trackbar is located in the window named **"Linear Blend"**
* The Trackbar values will be in the range from :math:`0` to **alpha_slider_max** (the minimum limit is always **zero**).
......@@ -125,21 +125,21 @@ We only analyze the code that is related to Trackbar:
.. code-block:: cpp
void on_trackbar( int, void* )
{
alpha = (double) alpha_slider/alpha_slider_max ;
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
}
Note that:
* We use the value of **alpha_slider** (integer) to get a double value for **alpha**.
* **alpha_slider** is updated each time the trackbar is displaced by the user.
* We define *src1*, *src2*, *dst*, *alpha*, *alpha_slider* and *beta* as global variables, so they can be used everywhere.
Result
=======
......@@ -147,13 +147,13 @@ Result
.. image:: images/Adding_Trackbars_Tutorial_Result_0.jpg
:alt: Adding Trackbars - Windows Linux
:align: center
* As a matter of practice, you can also add two trackbars to the program made in :ref:`Basic_Linear_Transform`: one trackbar to set :math:`\alpha` and another for :math:`\beta`. The output might look like:
.. image:: images/Adding_Trackbars_Tutorial_Result_1.jpg
:alt: Adding Trackbars - Lena
:align: center
......
......@@ -64,7 +64,7 @@ Closing the video is automatic when the objects destructor is called. However, i
captRefrnc >> frameReference;
captUndTst.open(frameUnderTest);
The read operations above will leave the *Mat* objects empty if no frame could be acquired (either because the video stream was closed or because you reached the end of the video file). We can check this with a simple if:
.. code-block:: cpp
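// A hedged sketch, assuming the frameReference/frameUnderTest Mat objects
// read above: an empty Mat signals that no frame could be acquired.
if (frameReference.empty() || frameUnderTest.empty())
{
    cout << "No more frames to read." << endl;
    // e.g. break out of the per-frame processing loop here
}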
......@@ -111,7 +111,7 @@ Then the PSNR is expressed as:
PSNR = 10 \cdot \log_{10} \left( \frac{MAX_I^2}{MSE} \right)
Here :math:`MAX_I^2` is the maximum valid value for a pixel. In case of the simple single byte image per pixel per channel this is 255. When two images are the same, the MSE will give zero, resulting in an invalid divide-by-zero operation in the PSNR formula. In this case the PSNR is undefined, so we need to handle this case separately. The transition to a logarithmic scale is made because the pixel values have a very wide dynamic range. All this translated to OpenCV and a C++ function looks like:
.. code-block:: cpp
......@@ -136,13 +136,13 @@ Here the :math:`MAX_I^2` is the maximum valid value for a pixel. In case of the
}
}
Typical result values are anywhere between 30 and 50 for video compression, where higher is better. If the images differ significantly you'll get much lower values, around 15 or so. This similarity check is easy and fast to calculate; however, in practice it may turn out somewhat inconsistent with human eye perception. The **structural similarity** algorithm aims to correct this.
Describing the method goes well beyond the purpose of this tutorial. For that I invite you to read the article introducing it. Nevertheless, you can get a good image of it by looking at the OpenCV implementation below.
.. seealso::
SSIM is described in more depth in the article: Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
.. code-block:: cpp
......@@ -162,7 +162,7 @@ Describing the methods goes well beyond the purpose of this tutorial. For that I
/***********************PRELIMINARY COMPUTING ******************************/
Mat mu1, mu2; //
GaussianBlur(I1, mu1, Size(11, 11), 1.5);
GaussianBlur(I2, mu2, Size(11, 11), 1.5);
......@@ -199,7 +199,7 @@ Describing the methods goes well beyond the purpose of this tutorial. For that I
return mssim;
}
This will return a similarity index for each channel of the image. This value is between zero and one, where one corresponds to a perfect fit. Unfortunately, the many Gaussian blurs are quite costly, so while the PSNR may work in a real-time-like environment (24 frames per second), achieving similar performance with this will take significantly longer.
Therefore, the source code presented at the start of the tutorial will perform the PSNR measurement for each frame, and the SSIM only for the frames where the PSNR falls below an input value. For visualization purpose we show both images in an OpenCV window and print the PSNR and MSSIM values to the console. Expect to see something like:
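A minimal sketch of that gating logic (hypothetical variable names, assuming the ``getPSNR`` and ``getMSSIM`` functions shown above):

.. code-block:: cpp

   double psnrTriggerValue = 35.0;              // threshold passed in as input
   double psnr = getPSNR(frameReference, frameUnderTest);
   cout << "PSNR: " << psnr << " dB";
   if (psnr > 0 && psnr < psnrTriggerValue)     // 0 assumed to mark identical frames
   {
       Scalar mssim = getMSSIM(frameReference, frameUnderTest);
       cout << "  MSSIM: R " << mssim.val[2] << " G " << mssim.val[1]
            << " B " << mssim.val[0];
   }
   cout << endl;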
......@@ -207,7 +207,7 @@ Therefore, the source code presented at the start of the tutorial will perform t
:alt: A sample output
:align: center
You may observe a runtime instance of this on `YouTube here <https://www.youtube.com/watch?v=iOcNljutOgg>`_.
.. raw:: html
......
......@@ -20,7 +20,7 @@ In MacOS it can be done using the following command in Terminal:
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
Building OpenCV from Source, using CMake and Command Line
=========================================================
......@@ -28,10 +28,10 @@ Building OpenCV from Source, using CMake and Command Line
#. Make symbolic link for Xcode to let OpenCV build scripts find the compiler, header files etc.
.. code-block:: bash
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
#. Build OpenCV framework:
.. code-block:: bash
......
......@@ -11,7 +11,7 @@ Prerequisites
1. Having installed `Eclipse <http://www.eclipse.org/>`_ in your workstation (only the CDT plugin for C/C++ is needed). You can follow the following steps:
* Go to the Eclipse site
* Download `Eclipse IDE for C/C++ Developers <http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/heliossr2>`_ . Choose the link according to your workstation.
......@@ -20,7 +20,7 @@ Prerequisites
Making a project
=================
1. Start Eclipse. Just run the executable that comes in the folder.
#. Go to **File -> New -> C/C++ Project**
......@@ -28,13 +28,13 @@ Making a project
:alt: Eclipse Tutorial Screenshot 0
:align: center
#. Choose a name for your project (e.g. DisplayImage). An **Empty Project** should be okay for this example.
.. image:: images/a1.png
:alt: Eclipse Tutorial Screenshot 1
:align: center
#. Leave everything else by default. Press **Finish**.
#. Your project (in this case DisplayImage) should appear in the **Project Navigator** (usually at the left side of your window).
......@@ -45,7 +45,7 @@ Making a project
#. Now, let's add a source file using OpenCV:
* Right click on **DisplayImage** (in the Navigator). **New -> Folder**.
.. image:: images/a4.png
:alt: Eclipse Tutorial Screenshot 4
......@@ -76,9 +76,9 @@ Making a project
image = imread( argv[1], 1 );
if( argc != 2 || !image.data )
{
printf( "No image data \n" );
return -1;
}
namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
......@@ -102,7 +102,7 @@ Making a project
:align: center
.. note::
If you do not know where your opencv files are, open the **Terminal** and type:
.. code-block:: bash
......@@ -112,56 +112,56 @@ Making a project
.. code-block:: bash
-I/usr/local/include/opencv -I/usr/local/include
b. Now go to **GCC C++ Linker**; there you have to fill in two spaces:
First in **Library search path (-L)** you have to write the path to where the opencv libraries reside, in my case the path is:
::
/usr/local/lib
Then in **Libraries(-l)** add the OpenCV libraries that you may need. Usually just the first 3 on the list below are enough (for simple applications). In my case, I am putting all of them since I plan to use the whole bunch:
opencv_core
opencv_imgproc
opencv_highgui
opencv_ml
opencv_video
opencv_features2d
opencv_calib3d
opencv_objdetect
opencv_contrib
opencv_legacy
opencv_flann
.. image:: images/a10.png
:alt: Eclipse Tutorial Screenshot 10
:align: center
If you don't know where your libraries are (or you are just psychotic and want to make sure the path is fine), type in **Terminal**:
.. code-block:: bash
pkg-config --libs opencv
My output (in case you want to check) was:
.. code-block:: bash
-L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
-L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
* Your project should be ready to be built. For this, go to **Project->Build all**
* Your project should be ready to be built. For this, go to **Project->Build all**
In the Console you should get something like
:alt: Eclipse Tutorial Screenshot 12
:align: center
If you check in your folder, there should be an executable there.
......@@ -179,21 +179,21 @@ So, now we have an executable ready to run. If we were to use the Terminal, we w
Assume that the image to use as the argument is located in <DisplayImage_directory>/images/HappyLittleFish.png. We can still do this, but let's do it from Eclipse:
#. Go to **Run->Run Configurations**
#. Go to **Run->Run Configurations**
#. Under C/C++ Application you will see the name of your executable + Debug (if not, click over C/C++ Application a couple of times). Select the name (in this case **DisplayImage Debug**).
.. image:: images/a14.png
:alt: Eclipse Tutorial Screenshot 14
:align: center
#. Click on the **Apply** button and then in Run. An OpenCV window should pop up with the fish image (or whatever you used).
.. image:: images/a15.jpg
:alt: Eclipse Tutorial Screenshot 15
:align: center
#. Congratulations! You are ready to have fun with OpenCV using Eclipse.
......@@ -236,7 +236,7 @@ Say you have or create a new file, *helloworld.cpp* in a directory called *foo*:
ADD_EXECUTABLE( helloworld helloworld.cxx )
TARGET_LINK_LIBRARIES( helloworld ${OpenCV_LIBS} )
#. Run: ``cmake-gui ..`` and make sure you fill in where opencv was built.
#. Then click ``configure`` and then ``generate``. If it's OK, **quit cmake-gui**
......
......@@ -11,7 +11,7 @@ Using OpenCV with gcc and CMake
* The easiest way of using OpenCV in your code is to use `CMake <http://www.cmake.org/>`_. A few advantages (taken from the Wiki):
#. No need to change anything when porting between Linux and Windows
#. Can easily be combined with other tools by CMake (i.e. Qt, ITK and VTK)
* If you are not familiar with CMake, checkout the `tutorial <http://www.cmake.org/cmake/help/cmake_tutorial.html>`_ on its website.
......@@ -21,7 +21,7 @@ Steps
Create a program using OpenCV
-------------------------------
Let's use a simple program such as DisplayImage.cpp shown below.
.. code-block:: cpp
......@@ -36,9 +36,9 @@ Let's use a simple program such as DisplayImage.cpp shown below.
image = imread( argv[1], 1 );
if( argc != 2 || !image.data )
{
printf( "No image data \n" );
return -1;
}
namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
......
......@@ -11,8 +11,8 @@ Required Packages
.. code-block:: bash
sudo apt-get install build-essential
* CMake 2.6 or higher;
* Git;
* GTK+2.x or higher, including headers (libgtk2.0-dev);
......@@ -48,7 +48,7 @@ In Linux it can be achieved with the following command in Terminal:
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
Building OpenCV from Source Using CMake, Using the Command Line
===============================================================
......@@ -58,26 +58,26 @@ Building OpenCV from Source Using CMake, Using the Command Line
#. Enter the <cmake_binary_dir> and type
.. code-block:: bash
cmake [<some optional parameters>] <path to the OpenCV source directory>
For example
.. code-block:: bash
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
#. Enter the created temporary directory (<cmake_binary_dir>) and proceed with:
.. code-block:: bash
make
sudo make install
.. note::
If the size of the created library is a critical issue (like in case of an Android build) you can use the ``install/strip`` command to get the smallest size possible. The *stripped* version appears to be twice as small. However, we do not recommend using this unless those extra megabytes do really matter.
......@@ -5,8 +5,8 @@ Load, Modify, and Save an Image
.. note::
We assume that by now you know how to load an image using :imread:`imread <>` and to display it in a window (using :imshow:`imshow <>`). Read the :ref:`Display_Image` tutorial otherwise.
Goals
======
......@@ -35,9 +35,9 @@ Here it is:
{
char* imageName = argv[1];
Mat image;
image = imread( imageName, 1 );
if( argc != 2 || !image.data )
{
printf( " No image data \n " );
......@@ -53,7 +53,7 @@ Here it is:
namedWindow( "Gray image", CV_WINDOW_AUTOSIZE );
imshow( imageName, image );
imshow( "Gray image", gray_image );
imshow( "Gray image", gray_image );
waitKey(0);
......@@ -67,18 +67,18 @@ Explanation
* Creating a Mat object to store the image information
* Load an image using :imread:`imread <>`, located in the path given by *imageName*. For this example, assume you are loading an RGB image.
#. Now we are going to convert our image from BGR to Grayscale format. OpenCV has a really nice function for this kind of transformation:
.. code-block:: cpp
cvtColor( image, gray_image, CV_BGR2GRAY );
As you can see, :cvt_color:`cvtColor <>` takes as arguments:
.. container:: enumeratevisibleitemswithsquare
* a source image (*image*)
* a destination image (*gray_image*), in which we will save the converted image.
* an additional parameter that indicates what kind of transformation will be performed. In this case we use **CV_BGR2GRAY** (because :imread:`imread <>` has BGR as the default channel order for color images).
......@@ -86,7 +86,7 @@ Explanation
.. code-block:: cpp
imwrite( "../../images/Gray_Image.jpg", gray_image );
Which will save our *gray_image* as *Gray_Image.jpg* in the folder *images* located two levels up from my current location.
......
......@@ -126,7 +126,7 @@ Building the library
#. Install |TortoiseGit|_. Choose the 32 or 64 bit version according to the type of OS you work in. While installing, locate your msysgit (if it doesn't do that automatically). Follow the wizard -- the default options are OK for the most part.
#. Choose a directory in your file system where you will download the OpenCV libraries to. I recommend creating a new one that has a short path and no special characters in it, for example :file:`D:/OpenCV`. For this tutorial I'll suggest you do so. If you use your own path and know what you're doing -- it's OK.
a) Clone the repository to the selected directory. After clicking the *Clone* button, a window will appear where you can select from what repository you want to download source files (https://github.com/Itseez/opencv.git) and to what directory (:file:`D:/OpenCV`).
......@@ -314,10 +314,10 @@ First we set an enviroment variable to make easier our work. This will hold the
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc10 (suggested for Visual Studio 2010 - 32 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc10 (suggested for Visual Studio 2010 - 64 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc11 (suggested for Visual Studio 2012 - 32 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc11 (suggested for Visual Studio 2012 - 64 bit Windows)
Here the directory is where you have your OpenCV binaries (*extracted* or *built*). Your platform (e.g. x64 instead of x86) or compiler type may differ, so substitute the appropriate value. Inside this directory you should have two folders called *lib* and *bin*. Add the -m flag if you wish to apply the setting system-wide instead of per-user.
If you built static libraries then you are done. Otherwise, you need to add the *bin* folder's path to the system path. This is because you will use the OpenCV library in the form of *\"Dynamic-link libraries\"* (also known as **DLL**). Inside these are stored all the algorithms and information the OpenCV library contains. The operating system will load them only on demand, during runtime. However, to do this it needs to know where they are. The system **PATH** contains a list of folders where DLLs can be found. Add the OpenCV library path to this and the OS will know where to look if it ever needs the OpenCV binaries. Otherwise, you will need to copy the used DLLs right beside the application's executable file (*exe*) for the OS to find them, which is highly unpleasant if you work on many projects. To do this start up again the |PathEditor|_ and add the following new entry (right click in the application to bring up the menu):
......
......@@ -114,7 +114,7 @@ Now assume you want to do a visual sanity check of the *cv::Canny()* implementat
.. image:: images/edges_zoom.png
:height: 160pt
Right-click on the *Image Viewer* to bring up the view context menu and enable :menuselection:`Link Views` (a check box next to the menu item indicates whether the option is enabled).
.. image:: images/viewer_context_menu.png
......@@ -124,7 +124,7 @@ The :menuselection:`Link Views` feature keeps the view region fixed when flippin
.. image:: images/input_zoom.png
:height: 160pt
You may also switch back and forth between viewing input and edges with your up/down cursor keys. That way you can easily verify that the detected edges line up nicely with the data in the input image.
More ...
......
......@@ -19,7 +19,7 @@ Follow this step by step guide to link OpenCV to iOS.
1. Create a new XCode project.
2. Now we need to link *opencv2.framework* with Xcode. Select the project Navigator in the left hand panel and click on project name.
3. Under the TARGETS click on Build Phases. Expand Link Binary With Libraries option.
......@@ -29,10 +29,10 @@ Follow this step by step guide to link OpenCV to iOS.
.. image:: images/linking_opencv_ios.png
:alt: OpenCV iOS in Xcode
:align: center
*Hello OpenCV iOS Application*
===============================
Now we will learn how to write a simple Hello World Application in Xcode using OpenCV.
......@@ -49,7 +49,7 @@ Now we will learn how to write a simple Hello World Application in Xcode using O
.. image:: images/header_directive.png
:alt: header
:align: center
.. container:: enumeratevisibleitemswithsquare
......@@ -61,7 +61,7 @@ Now we will learn how to write a simple Hello World Application in Xcode using O
.. image:: images/view_did_load.png
:alt: view did load
:align: center
.. container:: enumeratevisibleitemswithsquare
......@@ -73,4 +73,4 @@ Now we will learn how to write a simple Hello World Application in Xcode using O
.. image:: images/output.png
:alt: output
:align: center
......@@ -21,9 +21,9 @@ In *OpenCV* all the image processing operations are done on *Mat*. iOS uses UIIm
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
......@@ -32,11 +32,11 @@ In *OpenCV* all the image processing operations are done on *Mat*. iOS uses UIIm
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return cvMat;
}
......@@ -47,9 +47,9 @@ In *OpenCV* all the image processing operations are done on *Mat*. iOS uses UIIm
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
......@@ -58,11 +58,11 @@ In *OpenCV* all the image processing operations are done on *Mat*. iOS uses UIIm
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return cvMat;
}
......@@ -81,15 +81,15 @@ After the processing we need to convert it back to UIImage.
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(cvMat.cols, //width
cvMat.rows, //height
......@@ -103,15 +103,15 @@ After the processing we need to convert it back to UIImage.
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
*Output*
......@@ -119,9 +119,9 @@ After the processing we need to convert it back to UIImage.
.. image:: images/output.jpg
:alt: header
:align: center
Check out an instance of running code with more Image Effects on `YouTube <http://www.youtube.com/watch?v=Ko3K_xdhJ1I>`_ .
.. raw:: html
......
......@@ -69,7 +69,7 @@
.. toctree::
:hidden:
../hello/hello
../image_manipulation/image_manipulation
../video_processing/video_processing
......@@ -18,34 +18,34 @@ Including OpenCV library in your iOS project
The OpenCV library comes as a so-called framework, which you can directly drag-and-drop into your XCode project. Download the latest binary from <http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/>. Alternatively follow this guide :ref:`iOS-Installation` to compile the framework manually. Once you have the framework, just drag-and-drop into XCode:
.. image:: images/xcode_hello_ios_framework_drag_and_drop.png
Also you have to locate the prefix header that is used for all header files in the project. The file is typically located at "ProjectName/Supporting Files/ProjectName-Prefix.pch". There, you have to add an include statement to import the OpenCV library. Make sure you include opencv before you include UIKit and Foundation, because otherwise you will get some weird compile errors saying that macros like min and max are defined multiple times. For example, the prefix header could look like the following:
.. code-block:: objc
:linenos:
//
// Prefix header for all source files of the 'VideoFilters' target in the 'VideoFilters' project
//
#import <Availability.h>
#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
Example video frame processing project
--------------------------------------
User Interface
......@@ -60,18 +60,18 @@ Make sure to add and connect the IBOutlets and IBActions to the corresponding Vi
.. code-block:: objc
:linenos:
@interface ViewController : UIViewController
{
IBOutlet UIImageView* imageView;
IBOutlet UIButton* button;
}
- (IBAction)actionStart:(id)sender;
@end
Adding the Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......@@ -79,21 +79,21 @@ We add a camera controller to the view controller and initialize it when the vie
.. code-block:: objc
:linenos:
#import <opencv2/highgui/cap_ios.h>
using namespace cv;
@interface ViewController : UIViewController
{
...
CvVideoCamera* videoCamera;
}
...
@property (nonatomic, retain) CvVideoCamera* videoCamera;
@end
.. code-block:: objc
:linenos:
......@@ -101,7 +101,7 @@ We add a camera controller to the view controller and initialize it when the vie
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
......@@ -109,7 +109,7 @@ We add a camera controller to the view controller and initialize it when the vie
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscale = NO;
}
In this case, we initialize the camera and provide the imageView as a target for rendering each frame. CvVideoCamera is basically a wrapper around AVFoundation, so we provide some of the AVFoundation camera options as properties. For example, we want to use the front camera, set the video size to 352x288 and a video orientation (the video camera normally outputs in landscape mode, which results in transposed data when you design a portrait application).
The property defaultFPS sets the FPS of the camera. If the processing is slower than the desired FPS, frames are automatically dropped.
......@@ -153,14 +153,14 @@ We follow the delegation pattern, which is very common in iOS, to provide access
.. code-block:: objc
:linenos:
@interface ViewController : UIViewController<CvVideoCameraDelegate>
.. code-block:: objc
:linenos:
- (void)viewDidLoad
{
...
......@@ -194,13 +194,13 @@ From here you can start processing video frames. For example the following snipp
.. code-block:: objc
:linenos:
- (void)processImage:(Mat&)image;
{
// Do some OpenCV stuff with the image
Mat image_copy;
cvtColor(image, image_copy, CV_BGRA2BGR);
// invert image
bitwise_not(image_copy, image_copy);
cvtColor(image_copy, image, CV_BGR2BGRA);
......@@ -214,9 +214,9 @@ Finally, we have to tell the camera to actually start/stop working. The followin
.. code-block:: objc
:linenos:
#pragma mark - UI Actions
- (IBAction)actionStart:(id)sender;
{
[self.videoCamera start];
......
......@@ -10,7 +10,7 @@ In this tutorial you will learn how to:
.. container:: enumeratevisibleitemswithsquare
+ Use the OpenCV functions :svms:`CvSVM::train <cvsvm-train>` to build a classifier based on SVMs and :svms:`CvSVM::predict <cvsvm-predict>` to test its performance.
What is a SVM?
==============
......@@ -36,14 +36,14 @@ Then, the operation of the SVM algorithm is based on finding the hyperplane that
.. image:: images/optimal-hyperplane.png
:alt: The Optimal hyperplane
:align: center
How is the optimal hyperplane computed?
=======================================
Let's introduce the notation used to define formally a hyperplane:
.. math::
f(x) = \beta_{0} + \beta^{T} x,
where :math:`\beta` is known as the *weight vector* and :math:`\beta_{0}` as the *bias*.
......@@ -106,7 +106,7 @@ Explanation
.. code-block:: cpp
Mat trainingDataMat(3, 2, CV_32FC1, trainingData);
Mat labelsMat (3, 1, CV_32FC1, labels);
2. **Set up SVM's parameters**
......@@ -143,7 +143,7 @@ Explanation
.. code-block:: cpp
Vec3b green(0,255,0), blue (255,0,0);
for (int i = 0; i < image.rows; ++i)
for (int j = 0; j < image.cols; ++j)
{
......@@ -152,8 +152,8 @@ Explanation
if (response == 1)
image.at<Vec3b>(j, i) = green;
else
if (response == -1)
image.at<Vec3b>(j, i) = blue;
}
......@@ -184,5 +184,5 @@ Results
.. image:: images/result.png
:alt: The separated planes
:align: center
......@@ -5,9 +5,9 @@
Use the powerful machine learning classes for statistical classification, regression and clustering of data.
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......@@ -18,7 +18,7 @@ Use the powerfull machine learning classes for statistical classification, regre
*Author:* |Author_FernandoI|
Learn what a Support Vector Machine is.
============ ==============================================
......@@ -26,7 +26,7 @@ Use the powerfull machine learning classes for statistical classification, regre
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
......@@ -51,6 +51,6 @@ Use the powerfull machine learning classes for statistical classification, regre
.. toctree::
:hidden:
../introduction_to_svm/introduction_to_svm
../non_linear_svms/non_linear_svms
......@@ -5,23 +5,23 @@
Ever wondered how your digital camera detects people and faces? Look here to find out!
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
===================== ==============================================
|CascadeClassif| **Title:** :ref:`cascade_classifier`
*Compatibility:* > OpenCV 2.0
*Author:* |Author_AnaH|
Here we learn how to use *objdetect* to find objects in our images or videos
===================== ==============================================
.. |CascadeClassif| image:: images/Cascade_Classifier_Tutorial_Cover.jpg
:height: 90pt
:width: 90pt
......
......@@ -3,7 +3,7 @@
*video* module. Video analysis
-----------------------------------------------------------
Look here to find algorithms to use on your video stream, such as motion extraction, feature tracking and foreground extraction.
.. include:: ../../definitions/noContent.rst
......
......@@ -78,7 +78,7 @@ First, we create an instance of a keypoint detector. All detectors inherit the a
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);
We create an instance of descriptor extractor. Most OpenCV descriptors inherit the ``DescriptorExtractor`` abstract interface. Then we compute descriptors for each of the keypoints. The output ``Mat`` of the ``DescriptorExtractor::compute`` method contains a descriptor in row *i* for the *i*-th keypoint. Note that the method can modify the keypoints vector by removing keypoints for which a descriptor is not defined (usually these are the keypoints near the image border). The method makes sure that the output keypoints and descriptors are consistent with each other (so that the number of keypoints is equal to the descriptors row count). ::
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
......
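For orientation, here is a condensed sketch of the whole detect/compute/match pipeline, using the factory methods with ORB as an illustrative algorithm choice (``img1`` and ``img2`` are assumed to be loaded grayscale images): ::

    Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");

    vector<KeyPoint> keypoints1, keypoints2;
    detector->detect(img1, keypoints1);
    detector->detect(img2, keypoints2);

    Mat descriptors1, descriptors2;
    extractor->compute(img1, keypoints1, descriptors1);
    extractor->compute(img2, keypoints2, descriptors2);

    // the Hamming norm suits binary descriptors such as ORB
    BFMatcher matcher(NORM_HAMMING);
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);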
......@@ -13,7 +13,7 @@ Images
Load an image from a file: ::
Mat img = imread(filename)
If you read a jpg file, a 3 channel image is created by default. If you need a grayscale image, use: ::
Mat img = imread(filename, 0);
......@@ -23,14 +23,14 @@ If you read a jpg file, a 3 channel image is created by default. If you need a g
Save an image to a file: ::
imwrite(filename, img);
.. note:: The format of the file is determined by its extension.
.. note:: Use ``imdecode`` and ``imencode`` to read and write an image from/to memory rather than a file.
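A minimal sketch of the in-memory round trip, assuming ``img`` is an already loaded image: ::

    // encode an image into a JPEG byte buffer, then decode it back
    vector<uchar> buf;
    imencode(".jpg", img, buf);
    Mat decoded = imdecode(buf, 1);   // 1 forces a 3-channel color result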
XML/YAML
--------
TBD
Basic operations with images
......@@ -71,7 +71,7 @@ There are functions in OpenCV, especially from calib3d module, such as ``project
//... fill the array
Mat pointsMat = Mat(points);
One can access a point in this matrix using the same method ``Mat::at`` :
::
......@@ -87,7 +87,7 @@ Memory management and reference counting
// .. fill the array
Mat pointsMat = Mat(points).reshape(1);
As a result we get a 32FC1 matrix with 3 columns instead of a 32FC3 matrix with 1 column. ``pointsMat`` uses data from ``points`` and will not deallocate the memory when destroyed. In this particular instance, however, the developer has to make sure that the lifetime of ``points`` is longer than that of ``pointsMat``.
If we need to copy the data, this is done using, for example, ``Mat::copyTo`` or ``Mat::clone``: ::
Mat img = imread("image.jpg");
......@@ -117,7 +117,7 @@ A convertion from ``Mat`` to C API data structures: ::
IplImage img1 = img;
CvMat m = img;
Note that there is no data copying here.
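For example, since all three headers share one data buffer, an in-place change through the C API is visible through the ``Mat`` header too (a minimal sketch): ::

    Mat img = imread("image.jpg");
    IplImage img1 = img;                         // header only, no data copied
    cvSmooth(&img1, &img1, CV_GAUSSIAN, 5, 5);   // img now holds the smoothed pixels as well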
Conversion from color to grayscale: ::
......
......@@ -1481,7 +1481,7 @@ Reconstructs points by triangulation.
:param points4D: 4xN array of reconstructed points in homogeneous coordinates.
The function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera. Projection matrices can be obtained from :ocv:func:`stereoRectify`.
.. seealso::
......
......@@ -4,19 +4,19 @@ Changelog
Release 0.05
------------
This library is now included in the official OpenCV distribution (from 2.4 on).
The :ocv:class:`FaceRecognizer` is now an :ocv:class:`Algorithm`, which better fits into the overall
OpenCV API.
To reduce the confusion on the user side and minimize my work, libfacerec and OpenCV
have been synchronized and are now based on the same interfaces and implementation.
The library now has extensive documentation:
* The API is explained in detail and with a lot of code examples.
* The face recognition guide I had written for Python and GNU Octave/MATLAB has been adapted to the new OpenCV C++ ``cv::FaceRecognizer``.
* A tutorial for gender classification with Fisherfaces.
* A tutorial for face recognition in videos (e.g. webcam).
Release highlights
......@@ -27,8 +27,8 @@ Release highlights
Release 0.04
------------
This version is fully Windows-compatible and works with OpenCV 2.3.1. Several
bugfixes, but none influenced the recognition rate.
Release highlights
++++++++++++++++++
......@@ -40,9 +40,9 @@ Release highlights
Release 0.03
------------
Reworked the library to provide separate implementations in cpp files, because
it's the preferred way of contributing OpenCV libraries. This means the library
is not header-only anymore. Slight API changes were done, please see the
documentation for details.
Release highlights
......@@ -55,9 +55,9 @@ Release highlights
Release 0.02
------------
Reworked the library to provide separate implementations in cpp files, because
it's the preferred way of contributing OpenCV libraries. This means the library
is not header-only anymore. Slight API changes were done, please see the
documentation for details.
Release highlights
......@@ -80,7 +80,7 @@ Release highlights
* Eigenfaces [TP91]_
* Fisherfaces [BHK97]_
* Local Binary Patterns Histograms [AHP04]_
* Added persistence facilities to store the models with a common API.
* Unit Tests (using `gtest <http://code.google.com/p/googletest/>`_).
* Providing a CMakeLists.txt to enable easy cross-platform building.
......@@ -201,7 +201,7 @@ For the first source code example, I'll go through it with you. I am first givin
.. literalinclude:: src/facerec_eigenfaces.cpp
:language: cpp
:linenos:
The source code for this demo application is also available in the ``src`` folder coming with this documentation:
* :download:`src/facerec_eigenfaces.cpp <src/facerec_eigenfaces.cpp>`
......
......@@ -6,7 +6,7 @@ Introduction
Saving and loading a :ocv:class:`FaceRecognizer` is very important. Training a FaceRecognizer can be a very time-intensive task, plus it's often impossible to ship the whole face database to the user of your product. The task of saving and loading a FaceRecognizer is easy with :ocv:class:`FaceRecognizer`. You only have to call :ocv:func:`FaceRecognizer::load` for loading and :ocv:func:`FaceRecognizer::save` for saving a :ocv:class:`FaceRecognizer`.
I'll adapt the Eigenfaces example from the :doc:`../facerec_tutorial`: Imagine we want to learn the Eigenfaces of the `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_, store the model to a YAML file and then load it again.
From the loaded model, we'll get a prediction, show the mean, Eigenfaces and the image reconstruction.
......
......@@ -111,7 +111,7 @@ An example. If the haar-cascade is at ``C:/opencv/data/haarcascades/haarcascade_
facerec_video.exe C:/opencv/data/haarcascades/haarcascade_frontalface_default.xml C:/facerec/data/celebrities.txt 1
That's it.
Results
-------
......
......@@ -306,7 +306,7 @@ void FaceRecognizer::update(InputArrayOfArrays src, InputArray labels ) {
dynamic_cast<LBPH*>(this)->update( src, labels );
return;
}
string error_msg = format("This FaceRecognizer (%s) does not support updating, you have to use FaceRecognizer::train to update it.", this->name().c_str());
CV_Error(CV_StsNotImplemented, error_msg);
}
......
......@@ -553,7 +553,7 @@ Range
-----
.. ocv:class:: Range
Template class specifying a continuous subsequence (slice) of a sequence.
::
......@@ -773,7 +773,7 @@ Mat
---
.. ocv:class:: Mat
OpenCV C++ n-dimensional dense array class
::
class CV_EXPORTS Mat
......
......@@ -80,8 +80,8 @@ Splits an element set into equivalency classes.
:param vec: Set of elements stored as a vector.
:param labels: Output vector of labels. It contains as many elements as ``vec``. Each label ``labels[i]`` is a 0-based cluster index of ``vec[i]`` .
:param predicate: Equivalence predicate (pointer to a boolean function of two arguments or an instance of the class that has the method ``bool operator()(const _Tp& a, const _Tp& b)`` ). The predicate returns ``true`` when the elements are certainly in the same class, and returns ``false`` if they may or may not be in the same class.
The generic function ``partition`` implements an
......
......@@ -416,8 +416,8 @@ The number of pixels along the line is stored in ``LineIterator::count`` . The m
for(int i = 0; i < it.count; i++, ++it)
buf[i] = *(const Vec3b*)*it;
// alternative way of iterating through the line
for(int i = 0; i < it2.count; i++, ++it2)
{
Vec3b val = img.at<Vec3b>(it2.pos());
......
......@@ -91,8 +91,8 @@ you can use::
Ptr<T> ptr = new T(...);
That is, ``Ptr<T> ptr`` encapsulates a pointer to a ``T`` instance and a reference counter associated with the pointer. See the
:ocv:class:`Ptr`
description for details.
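A short sketch of the shared-ownership semantics: ::

    Ptr<vector<int> > p = new vector<int>(10);   // reference counter is 1
    {
        Ptr<vector<int> > q = p;                 // counter is 2; same object
        q->push_back(42);
    }                                            // q destroyed; counter back to 1
    // *p is still alive here and is freed when p goes out of scope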
.. _AutomaticAllocation:
......
......@@ -3002,55 +3002,55 @@ static inline void read(const FileNode& node, string& value, const string& defau
}
template<typename _Tp> static inline void read(const FileNode& node, Point_<_Tp>& value, const Point_<_Tp>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != 2 ? default_value : Point_<_Tp>(saturate_cast<_Tp>(temp[0]), saturate_cast<_Tp>(temp[1]));
}
template<typename _Tp> static inline void read(const FileNode& node, Point3_<_Tp>& value, const Point3_<_Tp>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != 3 ? default_value : Point3_<_Tp>(saturate_cast<_Tp>(temp[0]), saturate_cast<_Tp>(temp[1]),
saturate_cast<_Tp>(temp[2]));
}
template<typename _Tp> static inline void read(const FileNode& node, Size_<_Tp>& value, const Size_<_Tp>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != 2 ? default_value : Size_<_Tp>(saturate_cast<_Tp>(temp[0]), saturate_cast<_Tp>(temp[1]));
}
template<typename _Tp> static inline void read(const FileNode& node, Complex<_Tp>& value, const Complex<_Tp>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != 2 ? default_value : Complex<_Tp>(saturate_cast<_Tp>(temp[0]), saturate_cast<_Tp>(temp[1]));
}
template<typename _Tp> static inline void read(const FileNode& node, Rect_<_Tp>& value, const Rect_<_Tp>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != 4 ? default_value : Rect_<_Tp>(saturate_cast<_Tp>(temp[0]), saturate_cast<_Tp>(temp[1]),
saturate_cast<_Tp>(temp[2]), saturate_cast<_Tp>(temp[3]));
}
template<typename _Tp, int cn> static inline void read(const FileNode& node, Vec<_Tp, cn>& value, const Vec<_Tp, cn>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != cn ? default_value : Vec<_Tp, cn>(&temp[0]);
}
template<typename _Tp> static inline void read(const FileNode& node, Scalar_<_Tp>& value, const Scalar_<_Tp>& default_value)
{
vector<_Tp> temp; FileNodeIterator it = node.begin(); it >> temp;
value = temp.size() != 4 ? default_value : Scalar_<_Tp>(saturate_cast<_Tp>(temp[0]), saturate_cast<_Tp>(temp[1]),
saturate_cast<_Tp>(temp[2]), saturate_cast<_Tp>(temp[3]));
}
static inline void read(const FileNode& node, Range& value, const Range& default_value)
{
Point2i temp(value.start, value.end); const Point2i default_temp = Point2i(default_value.start, default_value.end);
read(node, temp, default_temp);
value.start = temp.x; value.end = temp.y;
}
CV_EXPORTS_W void read(const FileNode& node, Mat& mat, const Mat& default_mat=Mat() );
......
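These overloads are what make reading a value with a fallback a one-liner; a usage sketch (file and node names are illustrative):

    // read a point from YAML, falling back to (0,0) if the node is missing
    cv::FileStorage fs("params.yml", cv::FileStorage::READ);
    cv::Point2i pt;
    cv::read(fs["center"], pt, cv::Point2i(0, 0));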
......@@ -1252,14 +1252,14 @@ static void arithm_op(InputArray _src1, InputArray _src2, OutputArray _dst,
Mat src1 = _src1.getMat(), src2 = _src2.getMat();
bool haveMask = !_mask.empty();
bool reallocate = false;
bool src1Scalar = checkScalar(src1, src2.type(), kind1, kind2);
bool src2Scalar = checkScalar(src2, src1.type(), kind2, kind1);
if( (kind1 == kind2 || src1.channels() == 1) && src1.dims <= 2 && src2.dims <= 2 &&
src1.size() == src2.size() && src1.type() == src2.type() &&
!haveMask && ((!_dst.fixedType() && (dtype < 0 || CV_MAT_DEPTH(dtype) == src1.depth())) ||
(_dst.fixedType() && _dst.type() == _src1.type())) &&
((src1Scalar && src2Scalar) || (!src1Scalar && !src2Scalar)) )
{
_dst.create(src1.size(), src1.type());
......
......@@ -453,7 +453,7 @@ cv::Scalar cv::sum( InputArray _src )
{
Mat src = _src.getMat();
int k, cn = src.channels(), depth = src.depth();
#if defined (HAVE_IPP) && (IPP_VERSION_MAJOR >= 7)
size_t total_size = src.total();
int rows = src.size[0], cols = (int)(total_size/rows);
......@@ -462,7 +462,7 @@ cv::Scalar cv::sum( InputArray _src )
IppiSize sz = { cols, rows };
int type = src.type();
typedef IppStatus (CV_STDCALL* ippiSumFunc)(const void*, int, IppiSize, double *, int);
ippiSumFunc ippFunc =
type == CV_8UC1 ? (ippiSumFunc)ippiSum_8u_C1R :
type == CV_8UC3 ? (ippiSumFunc)ippiSum_8u_C3R :
type == CV_8UC4 ? (ippiSumFunc)ippiSum_8u_C4R :
......@@ -490,8 +490,8 @@ cv::Scalar cv::sum( InputArray _src )
}
}
}
#endif
SumFunc func = getSumFunc(depth);
CV_Assert( cn <= 4 && func != 0 );
......@@ -565,7 +565,7 @@ cv::Scalar cv::mean( InputArray _src, InputArray _mask )
CV_Assert( mask.empty() || mask.type() == CV_8U );
int k, cn = src.channels(), depth = src.depth();
#if defined (HAVE_IPP) && (IPP_VERSION_MAJOR >= 7)
size_t total_size = src.total();
int rows = src.size[0], cols = (int)(total_size/rows);
......@@ -576,7 +576,7 @@ cv::Scalar cv::mean( InputArray _src, InputArray _mask )
if( !mask.empty() )
{
typedef IppStatus (CV_STDCALL* ippiMaskMeanFuncC1)(const void *, int, void *, int, IppiSize, Ipp64f *);
ippiMaskMeanFuncC1 ippFuncC1 =
type == CV_8UC1 ? (ippiMaskMeanFuncC1)ippiMean_8u_C1MR :
type == CV_16UC1 ? (ippiMaskMeanFuncC1)ippiMean_16u_C1MR :
type == CV_32FC1 ? (ippiMaskMeanFuncC1)ippiMean_32f_C1MR :
......@@ -590,7 +590,7 @@ cv::Scalar cv::mean( InputArray _src, InputArray _mask )
}
}
typedef IppStatus (CV_STDCALL* ippiMaskMeanFuncC3)(const void *, int, void *, int, IppiSize, int, Ipp64f *);
ippiMaskMeanFuncC3 ippFuncC3 =
type == CV_8UC3 ? (ippiMaskMeanFuncC3)ippiMean_8u_C3CMR :
type == CV_16UC3 ? (ippiMaskMeanFuncC3)ippiMean_16u_C3CMR :
type == CV_32FC3 ? (ippiMaskMeanFuncC3)ippiMean_32f_C3CMR :
......@@ -609,7 +609,7 @@ cv::Scalar cv::mean( InputArray _src, InputArray _mask )
else
{
typedef IppStatus (CV_STDCALL* ippiMeanFunc)(const void*, int, IppiSize, double *, int);
ippiMeanFunc ippFunc =
type == CV_8UC1 ? (ippiMeanFunc)ippiMean_8u_C1R :
type == CV_8UC3 ? (ippiMeanFunc)ippiMean_8u_C3R :
type == CV_8UC4 ? (ippiMeanFunc)ippiMean_8u_C4R :
......@@ -639,7 +639,7 @@ cv::Scalar cv::mean( InputArray _src, InputArray _mask )
}
}
#endif
SumFunc func = getSumFunc(depth);
CV_Assert( cn <= 4 && func != 0 );
......
......@@ -405,7 +405,7 @@ protected:
Vec<int, 5> v1(15, 16, 17, 18, 19), ov1;
Scalar sc1(20.0, 21.1, 22.2, 23.3), osc1;
Range g1(7, 8), og1;
FileStorage fs(fname, FileStorage::WRITE);
fs << "mi" << mi;
fs << "mv" << mv;
......
......@@ -2457,7 +2457,7 @@ TEST(Core_Invert, small)
{
cv::Mat a = (cv::Mat_<float>(3,3) << 2.42104644730331, 1.81444796521479, -3.98072565304758, 0, 7.08389214348967e-3, 5.55326770986007e-3, 0,0, 7.44556154284261e-3);
//cv::randu(a, -1, 1);
cv::Mat b = a.t()*a;
cv::Mat c, i = Mat_<float>::eye(3, 3);
cv::invert(b, c, cv::DECOMP_LU); //std::cout << b*c << std::endl;
......
......@@ -40,7 +40,7 @@ Lixin Fan, Jutta Willamowski, Cedric Bray, 2004. ::
BOWTrainer::add
-------------------
Adds descriptors to a training set.
.. ocv:function:: void BOWTrainer::add( const Mat& descriptors )
......@@ -66,7 +66,7 @@ Returns the count of all descriptors stored in the training set.
BOWTrainer::cluster
-----------------------
Clusters train descriptors.
.. ocv:function:: Mat BOWTrainer::cluster() const
......@@ -116,7 +116,7 @@ Class to compute an image descriptor using the *bag of visual words*. Such a com
#. Compute descriptors for a given image and its keypoints set.
#. Find the nearest visual words from the vocabulary for each keypoint descriptor.
#. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The ``i``-th bin of the histogram is the frequency of the ``i``-th word of the vocabulary in the given image.
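A minimal sketch of these three steps with the ``BOWKMeansTrainer`` and ``BOWImgDescriptorExtractor`` classes (``trainDescriptors``, ``image`` and ``keypoints`` are assumed inputs; the algorithm names are illustrative, and SIFT lives in the nonfree module): ::

    // 1. build a vocabulary by clustering training descriptors into visual words
    BOWKMeansTrainer bowTrainer(100);            // 100 visual words
    bowTrainer.add(trainDescriptors);            // CV_32F descriptors from training images
    Mat vocabulary = bowTrainer.cluster();

    // 2.-3. compute the normalized word histogram for a new image
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
    BOWImgDescriptorExtractor bowDE(extractor, matcher);
    bowDE.setVocabulary(vocabulary);

    Mat bowDescriptor;
    bowDE.compute(image, keypoints, bowDescriptor);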
The class declaration is the following: ::
class BOWImgDescriptorExtractor
......
......@@ -178,10 +178,10 @@ void BOWImgDescriptorExtractor::compute( const Mat& image, vector<KeyPoint>& key
// Normalize image descriptor.
imgDescriptor /= descriptors.rows;
// Add the descriptors of image keypoints
if (_descriptors) {
*_descriptors = descriptors.clone();
}
}
......
......@@ -258,7 +258,7 @@ struct IntersectAreaCounter
{
CV_Assert( miny < maxy );
CV_Assert( dr > FLT_EPSILON );
int temp_bua = bua, temp_bna = bna;
for( int i = range.begin(); i != range.end(); i++ )
{
......
......@@ -68,11 +68,11 @@ The method constructs a fast search structure from a set of features using the s
* **branching** The branching factor to use for the hierarchical k-means tree
* **iterations** The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence
* **centers_init** The algorithm to use for selecting the initial centers when performing a k-means clustering step. The possible values are ``CENTERS_RANDOM`` (picks the initial cluster centers randomly), ``CENTERS_GONZALES`` (picks the initial centers using Gonzales' algorithm) and ``CENTERS_KMEANSPP`` (picks the initial centers using the algorithm suggested in arthur_kmeanspp_2007 )
* **cb_index** This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical kmeans tree. When ``cb_index`` is zero the next kmeans domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain.
*
**CompositeIndexParams** When using a parameters object of this type the index created combines the randomized kd-trees and the hierarchical k-means tree. ::
......@@ -122,16 +122,16 @@ The method constructs a fast search structure from a set of features using the s
..
* **target_precision** Is a number between 0 and 1 specifying the percentage of the approximate nearest-neighbor searches that return the exact nearest-neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application.
* **build_weight** Specifies the importance of the index build time relative to the nearest-neighbor search time. In some applications it's acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it's required that the index be built as fast as possible even if that leads to slightly longer search times.
* **memory_weight** Is used to specify the tradeoff between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent and a value greater than 1 gives more importance to the memory usage.
* **sample_fraction** Is a number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter configuration algorithm. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets it can take longer than desired. In such cases, using just a fraction of the data helps speed up this algorithm while still giving good approximations of the optimum parameters.
*
**SavedIndexParams** This object type is used for loading a previously saved index from the disk. ::
......
......@@ -43,7 +43,7 @@ if(HAVE_CUDA)
ocv_cuda_compile(cuda_objs ${lib_cuda} ${ncv_cuda})
set(cuda_link_libs ${CUDA_LIBRARIES} ${CUDA_npp_LIBRARY})
if(HAVE_CUFFT)
set(cuda_link_libs ${cuda_link_libs} ${CUDA_cufft_LIBRARY})
endif()
......
......@@ -49,7 +49,7 @@ This means that the input left image is low textured.
* A basic stereo matching example can be found at opencv_source_code/samples/gpu/stereo_match.cpp
* A stereo matching example using several GPUs can be found at opencv_source_code/samples/gpu/stereo_multi.cpp
* A stereo matching example using several GPUs and driver API can be found at opencv_source_code/samples/gpu/driver_api_stereo_multi.cpp
gpu::StereoBM_GPU::StereoBM_GPU
-----------------------------------
Enables :ocv:class:`gpu::StereoBM_GPU` constructors.
......
......@@ -7,7 +7,7 @@ Video Analysis
* A general optical flow example can be found at opencv_source_code/samples/gpu/optical_flow.cpp
* A general optical flow example using the Nvidia API can be found at opencv_source_code/samples/gpu/opticalflow_nvidia_api.cpp
gpu::BroxOpticalFlow
--------------------
.. ocv:class:: gpu::BroxOpticalFlow
......
......@@ -20,7 +20,7 @@ Reads an image from a buffer in memory.
:param buf: Input array or vector of bytes.
:param flags: The same flags as in :ocv:func:`imread` .
:param dst: The optional output placeholder for the decoded matrix. It can save the image reallocations when the function is called repeatedly for images of the same size.
The function reads an image from the specified buffer in the memory.
......@@ -74,9 +74,9 @@ Loads an image from a file.
:param filename: Name of file to be loaded.
:param flags: Flags specifying the color type of a loaded image:
* CV_LOAD_IMAGE_ANYDEPTH - If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
* CV_LOAD_IMAGE_COLOR - If set, always convert the image to a color one
* CV_LOAD_IMAGE_GRAYSCALE - If set, always convert the image to a grayscale one
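For instance, a short sketch of forcing the two most common modes (the file name is illustrative): ::

    Mat color = imread("photo.jpg", CV_LOAD_IMAGE_COLOR);     // always 3-channel BGR
    Mat gray  = imread("photo.jpg", CV_LOAD_IMAGE_GRAYSCALE); // always single-channel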
......
......@@ -160,7 +160,7 @@ private:
};
class CvCapture_FFMPEG_proxy :
public CvCapture
{
public:
......@@ -224,7 +224,7 @@ CvCapture* cvCreateFileCapture_FFMPEG_proxy(const char * filename)
return 0;
}
class CvVideoWriter_FFMPEG_proxy :
public CvVideoWriter
{
public:
......
......@@ -345,7 +345,7 @@ class ImplMutex
public:
ImplMutex() { init(); }
~ImplMutex() { destroy(); }
void init();
void destroy();
......@@ -450,7 +450,7 @@ void ImplMutex::init()
impl = (Impl*)malloc(sizeof(Impl));
impl->init();
}
void ImplMutex::destroy()
{
impl->destroy();
free(impl);
......
......@@ -388,7 +388,7 @@ static CGFloat DegreesToRadians(CGFloat degrees) {return degrees * M_PI / 180;};
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kCVPixelBufferCGImageCompatibilityKey,
......@@ -399,23 +399,23 @@ static CGFloat DegreesToRadians(CGFloat degrees) {return degrees * M_PI / 180;};
frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) CFBridgingRetain(options),
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
......
......@@ -14,7 +14,7 @@ It has been tested with the motempl sample program
First Patch: August 24, 2004 Travis Wood TravisOCV@tkwood.com
For Release: OpenCV-Linux Beta4 opencv-0.9.6
Tested On: LMLBT44 with 8 video inputs
Problems? Post your questions at answers.opencv.org,
Report bugs at code.opencv.org,
Submit your fixes at https://github.com/Itseez/opencv/
Patched Comments:
......
......@@ -3026,7 +3026,7 @@ double CvCaptureFile_MSMF::getProperty(int property_id)
return ((double)captureFormats[captureFormatIndex].MF_MT_FRAME_RATE_NUMERATOR) /
((double)captureFormats[captureFormatIndex].MF_MT_FRAME_RATE_DENOMINATOR);
}
return -1;
}
......@@ -3062,7 +3062,7 @@ IplImage* CvCaptureFile_MSMF::retrieveFrame(int)
if(RIOut && size == RIOut->getSize())
{
videoInput::processPixels(RIOut->getpPixels(), (unsigned char*)frame->imageData, width,
height, bytes, false, verticalFlip);
}
......
......@@ -413,9 +413,9 @@ int CvCaptureCAM::startCaptureDevice(int cameraNum) {
void CvCaptureCAM::setWidthHeight() {
NSAutoreleasePool* localpool = [[NSAutoreleasePool alloc] init];
[mCaptureSession stopRunning];
NSDictionary* pixelBufferOptions = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithDouble:1.0*width], (id)kCVPixelBufferWidthKey,
[NSNumber numberWithDouble:1.0*height], (id)kCVPixelBufferHeightKey,
......@@ -424,9 +424,9 @@ void CvCaptureCAM::setWidthHeight() {
nil];
[mCaptureDecompressedVideoOutput setPixelBufferAttributes:pixelBufferOptions];
[mCaptureSession startRunning];
grabFrame(60);
[localpool drain];
}
......
......@@ -14,7 +14,7 @@ It has been tested with the motempl sample program
First Patch: August 24, 2004 Travis Wood TravisOCV@tkwood.com
For Release: OpenCV-Linux Beta4 opencv-0.9.6
Tested On: LMLBT44 with 8 video inputs
Problems? Post your questions at answers.opencv.org,
Report bugs at code.opencv.org,
Submit your fixes at https://github.com/Itseez/opencv/
Patched Comments:
......@@ -157,7 +157,7 @@ the symptoms were damaged image and 'Corrupt JPEG data: premature end of data se
prevents bad images in the first place
11th patch: April 2, 2013, Forrest Reiling forrest.reiling@gmail.com
Added v4l2 support for getting capture property CV_CAP_PROP_POS_MSEC.
Returns the millisecond timestamp of the last frame grabbed or 0 if no frames have been grabbed
Used to successfully synchronize 2 Logitech C310 USB webcams to within 16 ms of one another
......@@ -1233,8 +1233,8 @@ static int read_frame_v4l2(CvCaptureCAM_V4L* capture) {
if (-1 == ioctl (capture->deviceHandle, VIDIOC_QBUF, &buf))
perror ("VIDIOC_QBUF");
//set timestamp in capture struct to be timestamp of most recent frame
capture->timestamp = buf.timestamp;
return 1;
}
......@@ -2327,7 +2327,7 @@ static double icvGetPropertyCAM_V4L (CvCaptureCAM_V4L* capture,
if (capture->FirstCapture) {
return 0;
} else {
return 1000 * capture->timestamp.tv_sec + ((double) capture->timestamp.tv_usec) / 1000;
}
break;
case CV_CAP_PROP_BRIGHTNESS:
......
......@@ -138,7 +138,7 @@ void CvCaptureCAM_XIMEA::close()
{
if(frame)
cvReleaseImage(&frame);
if(hmv)
{
xiStopAcquisition(hmv);
......@@ -176,11 +176,11 @@ IplImage* CvCaptureCAM_XIMEA::retrieveFrame(int)
{
// update cvImage after format has changed
resetCvImage();
// copy pixel data
switch( image.frm)
{
case XI_MONO8 :
case XI_RAW8 : memcpy( frame->imageData, image.bp, image.width*image.height); break;
case XI_MONO16 :
case XI_RAW16 : memcpy( frame->imageData, image.bp, image.width*image.height*sizeof(WORD)); break;
......@@ -210,15 +210,15 @@ void CvCaptureCAM_XIMEA::resetCvImage()
{
case XI_MONO8 :
case XI_RAW8 : frame = cvCreateImage(cvSize( image.width, image.height), IPL_DEPTH_8U, 1); break;
case XI_MONO16 :
case XI_RAW16 : frame = cvCreateImage(cvSize( image.width, image.height), IPL_DEPTH_16U, 1); break;
case XI_RGB24 :
case XI_RGB_PLANAR : frame = cvCreateImage(cvSize( image.width, image.height), IPL_DEPTH_8U, 3); break;
case XI_RGB32 : frame = cvCreateImage(cvSize( image.width, image.height), IPL_DEPTH_8U, 4); break;
default :
return;
}
}
cvZero(frame);
}
/**********************************************************************************/
......@@ -338,9 +338,9 @@ int CvCaptureCAM_XIMEA::getBpp()
{
case XI_MONO8 :
case XI_RAW8 : return 1;
case XI_MONO16 :
case XI_RAW16 : return 2;
case XI_RGB24 :
case XI_RGB_PLANAR : return 3;
case XI_RGB32 : return 4;
default :
......@@ -348,4 +348,4 @@ int CvCaptureCAM_XIMEA::getBpp()
}
}
/**********************************************************************************/
......@@ -16,4 +16,4 @@ The license does not permit the following uses:
You may not use, or allow anyone else to use the icons to create pornographic, libelous, obscene, or defamatory material.
All icon files are provided "as is". You agree not to hold IconEden.com liable for any damages that may occur due to use, or inability to use, icons or image data from IconEden.com.
......@@ -44,21 +44,21 @@
#include "precomp.hpp"
UIImage* MatToUIImage(const cv::Mat& image) {
NSData *data = [NSData dataWithBytes:image.data
length:image.elemSize()*image.total()];
CGColorSpaceRef colorSpace;
if (image.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider =
CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(image.cols,
image.rows,
......@@ -73,14 +73,14 @@ UIImage* MatToUIImage(const cv::Mat& image) {
false,
kCGRenderingIntentDefault
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
......
......@@ -2474,7 +2474,7 @@ void DefaultViewPort::saveView()
if (!fileName.isEmpty()) //save the picture
{
QString extension = fileName.right(3);
// Create a new pixmap to render the viewport into
QPixmap viewportPixmap(viewport()->size());
viewport()->render(&viewportPixmap);
......
......@@ -179,7 +179,7 @@ Compares two histograms.
* **CV_COMP_INTERSECT** Intersection
* **CV_COMP_BHATTACHARYYA** Bhattacharyya distance
* **CV_COMP_HELLINGER** Synonym for ``CV_COMP_BHATTACHARYYA``
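A usage sketch, assuming ``H1`` and ``H2`` are two normalized histograms of identical size and type: ::

    double d = compareHist(H1, H2, CV_COMP_BHATTACHARYYA);
    // for normalized histograms, 0 means a perfect match and 1 a total mismatch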
The functions ``compareHist`` compare two dense or two sparse histograms using the specified method:
......
......@@ -309,7 +309,7 @@ enum
// alpha premultiplication
CV_RGBA2mRGBA = 125,
CV_mRGBA2RGBA = 126,
CV_RGB2YUV_I420 = 127,
CV_BGR2YUV_I420 = 128,
CV_RGB2YUV_IYUV = CV_RGB2YUV_I420,
......
......@@ -3896,7 +3896,7 @@ void cv::cvtColor( InputArray _src, OutputArray _dst, int code, int dcn )
CV_Error( CV_StsBadArg, "Unsupported image depth" );
}
}
break;
default:
CV_Error( CV_StsBadFlag, "Unknown/unsupported color conversion code" );
}
......
......@@ -1149,11 +1149,11 @@ static bool IPPMorphReplicate(int op, const Mat &src, Mat &dst, const Mat &kerne
}
//DEPRECATED. Allocates and initializes morphology state structure for erosion or dilation operation.
typedef IppStatus (CV_STDCALL* ippiMorphologyInitAllocFunc)(int, const void*, IppiSize, IppiPoint, IppiMorphState **);
ippiMorphologyInitAllocFunc ippInitAllocFunc =
type == CV_8UC1 ? (ippiMorphologyInitAllocFunc)ippiMorphologyInitAlloc_8u_C1R :
type == CV_8UC3 ? (ippiMorphologyInitAllocFunc)ippiMorphologyInitAlloc_8u_C3R :
type == CV_8UC4 ? (ippiMorphologyInitAllocFunc)ippiMorphologyInitAlloc_8u_C4R :
type == CV_32FC1 ? (ippiMorphologyInitAllocFunc)ippiMorphologyInitAlloc_32f_C1R :
type == CV_32FC3 ? (ippiMorphologyInitAllocFunc)ippiMorphologyInitAlloc_32f_C3R :
type == CV_32FC4 ? (ippiMorphologyInitAllocFunc)ippiMorphologyInitAlloc_32f_C4R :
0;
......@@ -1163,25 +1163,25 @@ static bool IPPMorphReplicate(int op, const Mat &src, Mat &dst, const Mat &kerne
{
case MORPH_DILATE:
{
ippFunc =
type == CV_8UC1 ? (ippiMorphologyBorderReplicateFunc)ippiDilateBorderReplicate_8u_C1R :
type == CV_8UC3 ? (ippiMorphologyBorderReplicateFunc)ippiDilateBorderReplicate_8u_C3R :
type == CV_8UC4 ? (ippiMorphologyBorderReplicateFunc)ippiDilateBorderReplicate_8u_C4R :
type == CV_32FC1 ? (ippiMorphologyBorderReplicateFunc)ippiDilateBorderReplicate_32f_C1R :
type == CV_32FC3 ? (ippiMorphologyBorderReplicateFunc)ippiDilateBorderReplicate_32f_C3R :
type == CV_32FC4 ? (ippiMorphologyBorderReplicateFunc)ippiDilateBorderReplicate_32f_C4R :
0;
break;
}
case MORPH_ERODE:
{
ippFunc =
type == CV_8UC1 ? (ippiMorphologyBorderReplicateFunc)ippiErodeBorderReplicate_8u_C1R :
type == CV_8UC3 ? (ippiMorphologyBorderReplicateFunc)ippiErodeBorderReplicate_8u_C3R :
type == CV_8UC4 ? (ippiMorphologyBorderReplicateFunc)ippiErodeBorderReplicate_8u_C4R :
type == CV_32FC1 ? (ippiMorphologyBorderReplicateFunc)ippiErodeBorderReplicate_32f_C1R :
type == CV_32FC3 ? (ippiMorphologyBorderReplicateFunc)ippiErodeBorderReplicate_32f_C3R :
type == CV_32FC4 ? (ippiMorphologyBorderReplicateFunc)ippiErodeBorderReplicate_32f_C4R :
0;
break;
}
......@@ -1207,8 +1207,8 @@ static bool IPPMorphOp(int op, InputArray _src, OutputArray _dst,
int borderType, const Scalar &borderValue)
{
Mat src = _src.getMat(), kernel = _kernel.getMat();
if( !( src.depth() == CV_8U || src.depth() == CV_32F ) || ( iterations > 1 ) ||
!( borderType == cv::BORDER_REPLICATE || (borderType == cv::BORDER_CONSTANT && borderValue == morphologyDefaultBorderValue()) )
|| !( op == MORPH_DILATE || op == MORPH_ERODE) )
return false;
if( borderType == cv::BORDER_CONSTANT )
......
......@@ -349,7 +349,7 @@ public:
int r = rgb[0];
int g = rgb[1];
int b = rgb[2];
uchar y = saturate_cast<uchar>((int)( 0.257f*r + 0.504f*g + 0.098f*b + 0.5f) + 16);
uchar u = saturate_cast<uchar>((int)(-0.148f*r - 0.291f*g + 0.439f*b + 0.5f) + 128);
uchar v = saturate_cast<uchar>((int)( 0.439f*r - 0.368f*g - 0.071f*b + 0.5f) + 128);
......
......@@ -67,7 +67,7 @@ The following loss functions are implemented for regression problems:
:math:`L(y,f(x)) = \left\{ \begin{array}{lr}
\delta\cdot\left(|y-f(x)|-\dfrac{\delta}{2}\right) & : |y-f(x)|>\delta\\
\dfrac{1}{2}\cdot(y-f(x))^2 & : |y-f(x)|\leq\delta \end{array} \right.`,
where :math:`\delta` is the :math:`\alpha`-quantile estimation of the
:math:`|y-f(x)|`. In the current implementation :math:`\alpha=0.2`.
......@@ -129,9 +129,9 @@ CvGBTreesParams::CvGBTreesParams
:param weak_count: Count of boosting algorithm iterations. ``weak_count*K`` is the total
count of trees in the GBT model, where ``K`` is the output classes count
(equal to one in case of a regression).
:param shrinkage: Regularization parameter (see :ref:`Training GBT`).
:param subsample_portion: Portion of the whole training set used for each algorithm iteration.
Subset is generated randomly. For more information see
http://www.salfordsystems.com/doc/StochasticBoostingSS.pdf.
......@@ -139,7 +139,7 @@ CvGBTreesParams::CvGBTreesParams
:param max_depth: Maximal depth of each decision tree in the ensemble (see :ocv:class:`CvDTree`).
:param use_surrogates: If ``true``, surrogate splits are built (see :ocv:class:`CvDTree`).
By default the following constructor is used:
.. code-block:: cpp
......@@ -178,7 +178,7 @@ Trains a Gradient boosted tree model.
.. ocv:function:: bool CvGBTrees::train(CvMLData* data, CvGBTreesParams params=CvGBTreesParams(), bool update=false)
.. ocv:pyfunction:: cv2.GBTrees.train(trainData, tflag, responses[, varIdx[, sampleIdx[, varType[, missingDataMask[, params[, update]]]]]]) -> retval
The first train method follows the common template (see :ocv:func:`CvStatModel::train`).
Both ``tflag`` values (``CV_ROW_SAMPLE``, ``CV_COL_SAMPLE``) are supported.
``trainData`` must be of the ``CV_32F`` type. ``responses`` must be a matrix of type
......@@ -188,7 +188,7 @@ list of indices (``CV_32S``) or a mask (``CV_8U`` or ``CV_8S``). ``update`` is
a dummy parameter.
The second form of :ocv:func:`CvGBTrees::train` function uses :ocv:class:`CvMLData` as a
data set container. ``update`` is still a dummy parameter.
All parameters specific to the GBT model are passed into the training function
as a :ocv:class:`CvGBTreesParams` structure.
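A minimal usage sketch of this training path under the 2.4-era C++ API (``trainData`` and ``responses`` are placeholder matrices; the parameter values are illustrative only):

.. code-block:: cpp

    // trainData: CV_32F, one sample per row; responses: one value per sample.
    CvGBTreesParams params(CvGBTrees::SQUARED_LOSS, /*weak_count=*/100,
                           /*shrinkage=*/0.1f, /*subsample_portion=*/0.8f,
                           /*max_depth=*/3, /*use_surrogates=*/false);
    CvGBTrees gbt;
    gbt.train(trainData, CV_ROW_SAMPLE, responses,
              cv::Mat(), cv::Mat(), cv::Mat(), cv::Mat(), params);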
......@@ -207,42 +207,42 @@ Predicts a response for an input sample.
:param sample: Input feature vector that has the same format as every training set
element. If not all the variables were actually used during training,
``sample`` contains placeholder values at the corresponding positions.
:param missing: Missing values mask: a matrix of the same size as ``sample``
of the ``CV_8U`` type. A value of ``1`` marks a missing value at the same
position in the ``sample`` vector. If there are no missing values in the
feature vector, an empty matrix can be passed instead of the missing mask.
:param weakResponses: Matrix used to obtain predictions of all the trees.
The matrix has :math:`K` rows,
where :math:`K` is the count of output classes (1 for the regression case).
The matrix has as many columns as the ``slice`` length.
:param slice: Parameter defining the part of the ensemble used for prediction.
If ``slice = Range::all()``, all trees are used. Use this parameter to
get predictions from GBT models with different ensemble sizes while
training only one model.
:param k: Number of tree ensembles built in case of the classification problem
(see :ref:`Training GBT`). Use this parameter to change the output to the sum
of the trees' predictions in the ``k``-th ensemble only. To get the total GBT
model prediction, ``k`` must be -1. For regression problems, ``k`` is also
equal to -1.
The method predicts the response corresponding to the given sample
(see :ref:`Predicting with GBT`).
The result is either the class label or the estimated function value. The
:ocv:func:`CvGBTrees::predict` method enables using the parallel version of the GBT model
prediction if OpenCV is built with the TBB library. In this case, predictions
of single trees are computed in a parallel fashion.
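For illustration, a sketch of querying one trained model with different ensemble prefixes (``sample`` is a placeholder ``CV_32F`` row vector):

.. code-block:: cpp

    // Full-ensemble prediction: empty missing mask, all trees, k = -1.
    float full = gbt.predict(sample, cv::Mat(), cv::Range::all(), -1);

    // Prediction from only the first 50 trees of the same model.
    float partial = gbt.predict(sample, cv::Mat(), cv::Range(0, 50), -1);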
CvGBTrees::clear
----------------
Clears the model.
.. ocv:function:: void CvGBTrees::clear()
.. ocv:pyfunction:: cv2.GBTrees.clear() -> None
The function deletes the data set information and all the weak models and sets all internal
......@@ -257,7 +257,7 @@ Calculates a training or testing error.
.. ocv:function:: float CvGBTrees::calc_error( CvMLData* _data, int type, std::vector<float> *resp = 0 )
:param _data: Data set.
:param type: Parameter defining the error that should be computed: train (``CV_TRAIN_ERROR``) or test
(``CV_TEST_ERROR``).
......
......@@ -45,7 +45,7 @@ Trains the model.
:param updateBase: Specifies whether the model is trained from scratch (``updateBase=false``), or it is updated using the new training data (``updateBase=true``). In the latter case, the parameter ``maxK`` must not be larger than the original value.
The method trains the K-Nearest model. It follows the conventions of the generic :ocv:func:`CvStatModel::train` approach with the following limitations:
* Only ``CV_ROW_SAMPLE`` data layout is supported.
* Input variables are all ordered.
......
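A short sketch of the row-sample-only training path described above (matrix names are placeholders):

.. code-block:: cpp

    // trainData: CV_32F, one sample per row (the only supported layout).
    CvKNearest knn;
    knn.train(trainData, responses, cv::Mat(), /*isRegression=*/false, /*maxK=*/32);

    // Classify a single CV_32F row vector by its 5 nearest neighbours.
    float label = knn.find_nearest(sample, 5);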
......@@ -116,7 +116,7 @@ bool CvKNearest::train( const CvMat* _train_data, const CvMat* _responses,
if( !responses )
CV_ERROR( CV_StsNoMem, "Could not allocate memory for responses" );
if( _update_base && _dims != var_count )
CV_ERROR( CV_StsBadArg, "The newly added data have different dimensionality" );
......
......@@ -53,7 +53,7 @@
#if defined(HAVE_OPENCV_GPU)
#include "opencv2/nonfree/gpu.hpp"
#if defined(HAVE_CUDA)
#include "opencv2/gpu/stream_accessor.hpp"
#include "opencv2/gpu/device/common.hpp"
......
......@@ -1964,10 +1964,10 @@ cvLoadHaarClassifierCascade( const char* directory, CvSize orig_window_size )
size += (n+1)*sizeof(char*);
const char** input_cascade = (const char**)cvAlloc( size );
if( !input_cascade )
CV_Error( CV_StsNoMem, "Could not allocate memory for input_cascade" );
char* ptr = (char*)(input_cascade + n + 1);
for( int i = 0; i < n; i++ )
......@@ -1988,7 +1988,7 @@ cvLoadHaarClassifierCascade( const char* directory, CvSize orig_window_size )
}
input_cascade[n] = 0;
CvHaarClassifierCascade* cascade = icvLoadCascadeCART( input_cascade, n, orig_window_size );
if( input_cascade )
......
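The code above assembles a cascade from per-stage text files; for comparison, the usual alternative in the same C API is loading a trained cascade from XML (the file name is a placeholder):

.. code-block:: cpp

    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)
        cvLoad("haarcascade_frontalface_alt.xml");
    if (!cascade)
        CV_Error(CV_StsError, "Could not load the cascade");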
......@@ -439,7 +439,7 @@ int CV_CascadeDetectorTest::detectMultiScale_C( const string& filename,
CvMat c_gray = grayImg;
CvSeq* rs = cvHaarDetectObjects(&c_gray, c_cascade, storage, 1.1, 3, flags[di] );
objects.clear();
for( int i = 0; i < rs->total; i++ )
{
......
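For illustration (not the elided loop body), the common idiom for copying the returned ``CvSeq`` of rectangles into ``objects``:

.. code-block:: cpp

    for (int i = 0; i < rs->total; i++)
    {
        CvRect* r = (CvRect*)cvGetSeqElem(rs, i);  // i-th detected rectangle
        objects.push_back(cv::Rect(*r));
    }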
......@@ -130,7 +130,7 @@ namespace cv
{
openCLFree(tex_);
}
operator cl_mem()
{
return tex_;
}
......
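The fragment above is the tail of an RAII wrapper: the destructor frees the buffer and the conversion operator lets the object stand in for a raw ``cl_mem``. A generic sketch of the same pattern (the class and member names are hypothetical):

.. code-block:: cpp

    #include <CL/cl.h>

    // Owns a cl_mem, releases it on destruction, and converts implicitly
    // for use in raw OpenCL calls, as in the fragment above.
    class TextureCL
    {
    public:
        explicit TextureCL(cl_mem tex) : tex_(tex) {}
        ~TextureCL() { clReleaseMemObject(tex_); }
        operator cl_mem() { return tex_; }
    private:
        TextureCL(const TextureCL&);             // non-copyable: single owner
        TextureCL& operator=(const TextureCL&);
        cl_mem tex_;
    };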