<iframe title="Camera calibration with OpenCV - Chessboard or asymmetrical circle pattern" width="560" height="349" src="http://www.youtube.com/embed/ViPN810E0SU?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
<iframe title="Pose estimation of textured object using OpenCV in cluttered background" width="560" height="349" src="http://www.youtube.com/embed/YLS9bWek78k?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
Note that the order of the channels is reversed: BGR instead of RGB. In many cases the memory is
large enough to store the rows successively, one after another, creating a single long row.
Because everything is then in a single continuous place, this
may help to speed up the scanning process. We can use the @ref cv::Mat::isContinuous() function to *ask*
the matrix if this is the case. Continue on to the next section to find an example.
The efficient way
...
...
@@ -227,12 +229,12 @@ differences I've used a quite large (2560 X 1600) image. The performance present
color images. For a more accurate value I've averaged the value I got from calling the function
a hundred times.
Method | Time
--------------- | ----------------------
Efficient Way | 79.4717 milliseconds
Iterator | 83.7201 milliseconds
On-The-Fly RA | 93.7878 milliseconds
LUT function | 32.5759 milliseconds
We can conclude a couple of things. If possible, use the already made functions of OpenCV (instead
of reinventing them). The fastest method turns out to be the LUT function. This is because the OpenCV
...
...
@@ -242,12 +244,10 @@ Using the on-the-fly reference access method for full image scan is the most cos
In the release mode it may or may not beat the iterator approach; either way, it surely sacrifices
the safety trait of iterators for this.
Finally, you may watch a sample run of the program on the [video posted](https://www.youtube.com/watch?v=fB3AN5fjgwc) on our YouTube channel.
\htmlonly
<div align="center">
<iframe title="How to scan images in OpenCV?" width="560" height="349" src="http://www.youtube.com/embed/fB3AN5fjgwc?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
Here you can observe that we may go through all the pixels of an image in three fashions: an
iterator, a C pointer and an individual element access style. You can read a more in-depth
description of these in the @ref tutorial_how_to_scan_images tutorial. Converting from the old function
names is easy. Just remove the cv prefix and use the new *Mat* data structure. Here's an example of
this using the weighted addition function:
...
...
@@ -161,4 +161,3 @@ of the OpenCV source code library.
<iframe title="Interoperability with OpenCV 1" width="560" height="349" src="http://www.youtube.com/embed/qckm-zvo31w?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
You can also find a quick video demonstration of this on
...
...
@@ -355,4 +352,3 @@ You can also find a quick video demonstration of this on
<iframe title="Install OpenCV by using its source files - Part 1" width="560" height="349" src="http://www.youtube.com/embed/1tibU7vGWpk?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
We get the results below. Varying the indices in the Trackbars gives different output images, naturally. Try them out! You can even try to add a third Trackbar to control the number of iterations.
SourceForge](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/) and download
the latest available version. Currently it's [OpenCV-2.4.9-android-sdk.zip](http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.4.9/OpenCV-2.4.9-android-sdk.zip/download).
2. Create a new folder for Android with OpenCV development. For this tutorial we have unpacked
OpenCV SDK to the `C:\Work\OpenCV4Android\` directory.
@note Better to use a path without spaces in it. Otherwise you may have problems with ndk-build.
3. Unpack the SDK archive into the chosen directory.
    You can unpack it using any popular archiver (e.g. with [7-Zip](http://www.7-zip.org/)):


On Unix you can use the following command:
@code{.bash}
unzip ~/Downloads/OpenCV-2.4.9-android-sdk.zip
@endcode
### Import OpenCV library and samples to the Eclipse
1. Start Eclipse and choose your workspace location.
- In this folder create the `build.xml` file with the following content using any text editor:
@include samples/java/ant/build.xml
@note This XML file can be reused for building other Java applications. It describes a common folder structure on lines 3 - 12 and common targets for compiling and running the application.
When reusing this XML don't forget to modify the project name on line 1, which is also the
name of the main class (line 14). The paths to the OpenCV jar and jni lib are expected as parameters
("\\f${ocvJarDir}" in line 5 and "\\f${ocvLibDir}" in line 37), but you can hardcode these paths for
("${ocvJarDir}" in line 5 and "${ocvLibDir}" in line 37), but you can hardcode these paths for
your convenience. See [Ant documentation](http://ant.apache.org/manual/) for detailed
description of its build file format.
- Create a `src` folder next to the `build.xml` file and a `SimpleSample.java` file in it.
- Put the following Java code into the `SimpleSample.java` file:
@code{.java}
import org.opencv.core.Core;
import org.opencv.core.Mat;
...
...
@@ -175,9 +130,7 @@ folder. \* Create a folder where you'll develop this sample application.
}
@endcode
- Run the following command in console in the folder containing `build.xml`:
@code{.bash}
ant -DocvJarDir=path/to/dir/containing/opencv-244.jar -DocvLibDir=path/to/dir/containing/opencv_java244/native/library
@endcode
...
...
@@ -370,4 +323,3 @@ It should also write the following image to `faceDetection.png`:
You're done! Now you have a sample Java application working with OpenCV, so you can start the work
on your own. We wish you good luck and many years of joyful life!
The first line is a reference to the section title in the reST system. The section title will be a
link and you may refer to it via the @ref directive. The *include* directive imports the template
text from the *noContent.rst* file in the definitions directory. *Sphinx* does not create the PDF
from scratch: it first creates a LATEX file, then creates the PDF from it. With the *raw* directive
you can add commands directly to this output. Its unique argument states what kind of output the
content of the directive is meant for. For the PDFs it may happen that multiple sections overlap on
a single page. To avoid this, at the end of the TOC we add a *pagebreak* LATEX command, which hints
to the LATEX system that the next line should start on a new page. If you have one of these, try to
transform it to the following form:

@code{.rst}
.. _Table-Of-Content-Section:

Section title
-----------------------------------------------------------

.. include:: ../../definitions/tocDefinitions.rst

+
  .. tabularcolumns:: m{100pt} m{300pt}
  .. cssclass:: toctableopencv

  =============== ======================================================
  |MatBasicIma|   **Title:** @ref matTheBasicImageContainer

                  *Compatibility:* > OpenCV 2.0

                  *Author:* Bernát Gábor

                  You will learn how to store images in the memory and
                  how to print out their content to the console.
  =============== ======================================================

  .. |MatBasicIma| image:: images/matTheBasicImageStructure.jpg

.. raw:: latex

   \pagebreak

.. toctree::
   :hidden:

   ../mat - the basic image container/mat - the basic image container
@endcode

If this is already present, just add a new section of the content between the include and the raw
directives (excluding those lines). Here you'll see a new include directive. This should be present
only once in a TOC tree; the referenced reST file contains the definitions of all the authors
contributing to the OpenCV tutorials. We are a multicultural community and some of our names may
contain some funky characters. However, reST **only supports** ANSI characters. Luckily we can
specify Unicode characters with the *unicode* directive.
Doing this for all of your tutorials is a troublesome procedure. Therefore, the tocDefinitions file
contains the definition of your author name. Add it there once and afterwards just use the replace
construction. For example, here's the definition for my name:

@code{.rst}
.. |Bernát Gábor| unicode:: Bern U+00E1 t U+0020 G U+00E1 bor
@endcode

The `Bernát Gábor` part is the alias of this text definition. I can use it later to add the
definition, like I've done in the TOC's *Author* part. After the `::` and a space you start the
definition. If you want to add a Unicode (non-ASCII) character, leave an empty space and specify it
in the format U+(Unicode code). To find the Unicode code of a character I recommend using the
[FileFormat](http://www.fileformat.info) website's service. Spaces are trimmed from the definition,
therefore we add a space by its Unicode code (U+0020).

Until the *raw* directive, what you can see is a TOC tree entry. Here's how a TOC entry looks:

@code{.rst}
+
  .. tabularcolumns:: m{100pt} m{300pt}
  .. cssclass:: toctableopencv

  =============== ======================================================
  |MatBasicIma|   **Title:** @ref matTheBasicImageContainer

                  *Compatibility:* > OpenCV 2.0

                  *Author:* Bernát Gábor

                  You will learn how to store images in the memory and
                  how to print out their content to the console.
  =============== ======================================================

  .. |MatBasicIma| image:: images/matTheBasicImageStructure.jpg
@endcode

As you can see, we have an image to the left and a description box to the right. To create the two
boxes we use a table with two columns and a single row. In the left column is the image and in the
right one the description. However, the image directive is way too long to fit in a column.
Therefore, we need to use the substitution definition system. We add this definition after the TOC
tree. All images for the TOC tree are to be put in the images folder near its reST file. We use the
point measurement system because we are also creating PDFs. PDFs are printable documents, where
there is no such thing as pixels (px), just points (pt).
And while generally space is no problem for web pages (we have monitors with **huge** resolutions),
the size of the paper (A4 or letter) is constant and will be for a long time in the future.
Therefore, size constraints come into play more for the PDF than for the generated HTML code. Your
images should be as small as possible, while still offering the intended information to the user.
Remember that the tutorial will become part of the OpenCV source code. If you add large images
(that manifest in the form of large file size) it will just increase the size of the repository
pointlessly. If someone wants to download it later, its download time will be that much longer.
Not to mention the larger PDF size for the tutorials and the longer load time for the web pages.
In terms of pixels a TOC image should not be larger than 120 X 120 pixels. Resize your images if
they are larger!

@note If you add a larger image and specify a smaller image size, *Sphinx* will not resize it. At
build time it will add the full size image and the resize will be done by your browser after the
image is loaded. A 120 X 120 image is somewhere below 10KB. If you add a 110KB image, you have just
pointlessly added 100KB of extra data to transfer over the internet for every user!

Generally speaking you shouldn't need to specify your image size (excluding the TOC entries). If no
size is given, *Sphinx* will use the size of the image itself (so no resize occurs). Then again, if
for some reason you decide to specify a size, it should be the **width** of the image rather than
its height. The reason for this again goes back to the PDFs. On a PDF page the height is larger
than the width. In the PDF the images will not be resized. If you specify a size that does not fit
on the page, then what does not fit **will be cut off**. When creating the images for your tutorial
you should try to keep the image widths below 500 pixels, and calculate with around 400 points of
page width when specifying image widths.
The image format depends on the content of the image. If you have some complex scene (many
random-like colors) then use *jpg*. Otherwise, prefer using *png*. There are even some tools out
there that optimize the size of *PNG* images, such as [PNGGauntlet](http://pnggauntlet.com/). Use
them to make your images as small as possible in size.

Now on the right side column of the table we add the information about the tutorial:

+ In the first line is the title of the tutorial. However, there is no need to specify it
  explicitly. We use the reference system. We'll start up our tutorial with a reference
  specification, just like in the case of this TOC entry with its `.. _Table-Of-Content-Section:`
  line. If after this you have a title (pointed out by the following line of - characters), then
  Sphinx will replace the `@ref Table-Of-Content-Section` directive with the title of the section
  in reference form (it creates a link on the web page). Here's how the definition looks in my
  case:

  @code{.rst}
  .. _matTheBasicImageContainer:

  Mat - The Basic Image Container
  *******************************
  @endcode

  Note that according to the reST rules the line of * characters should be as long as your title.

+ Compatibility. What version of OpenCV is required to run your sample code.

+ Author. Use the substitution markup of reST.

+ A short sentence describing the essence of your tutorial.

Now before each TOC entry you need to add the three lines of:

@code{.rst}
+
  .. tabularcolumns:: m{100pt} m{300pt}
  .. cssclass:: toctableopencv
@endcode

The plus sign (+) is to enumerate tutorials by using bullet points. So for every TOC entry we have
a corresponding bullet point represented by the +. Sphinx is highly indentation sensitive.
Indentation is used to express from which point until which point a construction lasts.
Un-indentation means the end of that construction.
So to keep all the bullet points in the same group, the following TOC entries (until the next +)
should be indented by two spaces. Here, I should also mention that you should **always** prefer
using spaces instead of tabs. Working with only spaces makes it possible that if we both use
monotype fonts we will see the same thing. Tab size is text editor dependent and as such should be
avoided. *Sphinx* translates all tabs into 8 spaces before interpreting them.

It turns out that the automatic formatting of both the HTML and PDF (LATEX) systems messes up our
tables. Therefore, we need to help them out a little. For the PDF generation we add the
`.. tabularcolumns:: m{100pt} m{300pt}` directive. This means that the first column should be 100
points wide and middle aligned. For the HTML look we simply mark the following table as one of the
*toctableopencv* class type. Then, we can modify the look of the table by modifying the CSS of our
web page. The CSS definitions go into the `opencv/doc/_themes/blue/static/default.css_t` file.

@code{.css}
.toctableopencv
{
  width: 100% ;
  table-layout: fixed;
}

.toctableopencv colgroup col:first-child
{
  width: 100pt !important;
  max-width: 100pt !important;
  min-width: 100pt !important;
}

.toctableopencv colgroup col:nth-child(2)
{
  width: 100% !important;
}
@endcode

However, you should not need to modify this. Just add these three lines (plus keep the two space
indentation) for all TOC entries you add. At the end of the TOC file you'll find:

@code{.rst}
.. raw:: latex

   \pagebreak

.. toctree::
   :hidden:

   ../mat - the basic image container/mat - the basic image container
@endcode

The page break entry comes for separating sections and there should be only one per TOC tree reST
file. Finally, at the end of the TOC tree we need to add our tutorial to the *Sphinx* TOC tree
system. *Sphinx* will generate from this the previous-next-up information for the HTML file and
add items to the PDF according to the order here. By default this TOC tree directive generates a
simple table of contents.
However, we already created a fancy looking one so we no longer need this basic one. Therefore, we
add the *hidden* option so it is not shown. The path is a relative one. We step back in the file
system and then go into the `mat - the basic image container` directory for the
`mat - the basic image container.rst` file. Leaving out the *rst* extension of the file is
optional.

Write the tutorial
==================

Create a folder with the name of your tutorial. Preferably, use small letters only. Then create a
text file in this folder with the *rst* extension and the same name. If you have images for the
tutorial, create an `images` folder and add your images there. When creating your images follow
the guidelines described in the previous part!

Now here's our recommendation for the structure of the tutorial (although, remember that this is
not carved in stone; if you have a better idea, use it!):

+ Create the reference point and the title.

  @code{.rst}
  .. _matTheBasicImageContainer:

  Mat - The Basic Image Container
  *******************************
  @endcode

  You start the tutorial by specifying a reference point with the `.. _matTheBasicImageContainer:`
  line and then its title. The name of the reference point should be unique over the whole
  documentation. Therefore, do not use general names like *tutorial1*. Use the * character to
  underline the title for its full width. The subtitles of the tutorial should be underlined with
  the = character.

+ Goals. You start your tutorial by specifying what you will present. You can also enumerate the
  sub jobs to be done. For this you can use a bullet point construction. There is a single
  configuration file for both the reference manual and the tutorial documentation. In the
  reference manual's argument enumeration we do not want any kind of bullet point style
  enumeration. Therefore, by default all the bullet points at this level are set to not show the
  dot before the entries in the HTML. You can override this by putting the bullet points in a
  container. I've defined a square type bullet point view under the name
  *enumeratevisibleitemswithsquare*. The CSS style definition for this is again in the
  `opencv/doc/_themes/blue/static/default.css_t` file. Here's a quick example of using it:
  @code{.rst}
  .. container:: enumeratevisibleitemswithsquare

     + Create the reference point and the title.
     + Second entry
     + Third entry
  @endcode

  Note that you need to keep the indentation of the container directive. Directive indentations
  are always three (3) spaces. Here you may even give usage tips for your sample code.

+ Source code. Present your sample code to the user. It's a good idea to offer a quick download
  link for the HTML page by using the *download* directive, and to point out where the user may
  find your source code in the file system by using the *file* directive:

  @code{.rst}
  Text in the :file:`samples/cpp/tutorial_code/highgui/video-write/` folder of the OpenCV source
  library or :download:`text to appear in the webpage
  <../../../../samples/cpp/tutorial_code/HighGUI/video-write/video-write.cpp>`.
  @endcode

  For the download link the path is a relative one, hence the multiple back stepping operations
  (..). Then you can add the source code either by using the *code block* directive or the
  *literal include* one. In case of the code block you will need to actually add all the source
  code text into your reST text and also apply the required indentation:

  @code{.rst}
  .. code-block:: cpp

     int i = 0;
     l = ++j;
  @endcode

  The only argument of the directive is the language used (here CPP). Then you add the source code
  into its content (meaning one empty line after the directive) by keeping the indentation of the
  directive (3 spaces). With the *literal include* directive you do not need to add the source
  code of the sample. You just specify the sample and *Sphinx* will load it for you at build time.
  Here's an example usage:

  @code{.rst}
  .. literalinclude:: cpp/tutorial_code/HighGUI/video-write/video-write.cpp
     :lines: 1-8, 21-23, 25-
  @endcode

  After the directive you specify a relative path to the file from which to import. It has four
  options, the first being the language to use.

+ The tutorial. Well, here goes the explanation of why and what you have used.
Try to be short, clear, concise and yet thorough. There's no magic formula. Look into a few already
made tutorials and start out from there. Try to mix sample OpenCV code with your explanations. If
something is hard to describe with words, do not hesitate to add a reasonably sized image to
overcome the issue.

When you present OpenCV functionality it's a good idea to give a link to the used OpenCV data
structure or function. Because the OpenCV tutorials and reference manual are in separate PDF files
it is not possible to make this link work for the PDF format. Therefore, we use here only web page
links to the http://docs.opencv.org website. The OpenCV functions and data structures may be used
for multiple tasks. Nevertheless, we want to avoid that every user creates their own reference to a
commonly used function. So for this we use the global link collection of *Sphinx*. This is defined
in the `opencv/doc/conf.py` configuration file. Open it and go all the way down to the last entry:

@code{.py}
# ---- External links for tutorials -----------------
extlinks = {
    'rwimg' : ('http://docs.opencv.org/modules/imgcodecs/doc/reading_and_writing_images.html#%s', None)
}
@endcode

In short, here we defined a new **rwimg** directive that refers to an external web page link. Which
turns into: a sample function of the highgui modules image write and read page is the
@ref cv::imread() function. The argument you give between the <> will be put in place of the `%s`
in the definition above, and the link will anchor to the correct function. To find out the anchor
of a given function, just open up a web page, search for the function and click on it. In the
address bar it should appear like:
`http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images.html#imread` .
Look here for the name of the directives for each page of the OpenCV reference manual. If none is
present for one of them, feel free to add one. For formulas you can add LATEX code that will be
translated into images on the web pages. You do this by using the *math* directive. A usage tip:

@code{.latex}
.. math::

   MSE = \frac{1}{c*i*j} \sum{(I_1-I_2)^2}
@endcode

That after build turns into:

\f[MSE = \frac{1}{c*i*j} \sum{(I_1-I_2)^2}\f]

You can even use it inline as `` :math:`MSE = \frac{1}{c*i*j} \sum{(I_1-I_2)^2}` ``
that turns into \f$MSE = \frac{1}{c*i*j} \sum{(I_1-I_2)^2}\f$.
If you use some crazy LATEX library extension you need to add it to the ones used at build time.
Look into the `opencv/doc/conf.py` configuration file for more information on this.
- Results. Well, here, depending on your program, show one or more of the following:
- Console outputs by using the code block directive.
- Output images.
- Runtime videos, visualization. For this use your favorite screen capture software.
[Camtasia Studio](http://www.techsmith.com/camtasia/) certainly is one of the better
choices, however their prices are out of this world. [CamStudio](http://camstudio.org/) is
a free alternative, but less powerful. If you do a video you can upload it to YouTube and
then use the raw directive with HTML option to embed it into the generated web page:
@code{.rst}
You may observe a runtime instance of this on the `YouTube here <https://www.youtube.com/watch?v=jpBwHxsl1_0>`_.

.. raw:: html

   <div align="center">
   <iframe title="Creating a video with OpenCV" width="560" height="349" src="http://www.youtube.com/embed/jpBwHxsl1_0?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
   </div>
@endcode
This results in the text and video: You may observe a runtime instance of this on the
[YouTube here](https://www.youtube.com/watch?v=jpBwHxsl1_0).
\htmlonly
<div align="center">
<iframe title="Creating a video with OpenCV" width="560" height="349" src="http://www.youtube.com/embed/jpBwHxsl1_0?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly
When these aren't self-explanatory, make sure to throw in a few guiding lines about what the
viewer sees and why.
- Build the documentation and check for errors or warnings. In the CMake make sure you check or
pass the option for building documentation. Then simply build the **docs** project for the PDF
file and the **docs_html** project for the web page. Read the output of the build and check
for errors/warnings in what you have added. This is also the time to observe and correct any
kind of *not so good looking* parts. Remember to keep our build logs clean.
- Read your tutorial again and check for both programming and spelling errors. If you find any,
please correct them.
Take home the pride and joy of a job well done!
-----------------------------------------------
Once you are done please make a GitHub pull request with the tutorial. Now, to see your work
**live** you may need to wait some time. The PDFs are updated usually at the launch of a new OpenCV
version. The web pages are a little more diverse. They are automatically rebuilt nightly. Currently
we use the 2.4 and master branches for daily builds. So, if your pull request was merged to any of these
branches, your material will be published at [docs.opencv.org/2.4](http://docs.opencv.org/2.4) or
[docs.opencv.org/master](http://docs.opencv.org/master) correspondingly. Everything that was added to
2.4 is merged to the master branch every week. Although we try to make a build every night,
occasionally we might freeze any of the branches to fix upcoming issues. During this it may take a
little longer to see your work online, however if you submitted it, be sure that eventually it will
show up.
If you have any questions or advice relating to this tutorial you can contact us at
<-delete-admin@-delete-opencv.org> (delete the -delete- parts of that email address).
You may find the content of this tutorial also inside the following videos:
[Part 1](https://www.youtube.com/watch?v=NnovZ1cTlMs) and [Part 2](https://www.youtube.com/watch?v=qGNWMcfWwPU), hosted on YouTube.
\htmlonly
<div align="center">
...
...
@@ -37,6 +36,7 @@ You may find the content of this tutorial also inside the following videos: [Par
<iframe title="Install OpenCV by using its source files - Part 2" width="560" height="349" src="http://www.youtube.com/embed/qGNWMcfWwPU?rel=0&loop=1" frameborder="0" allowfullscreen align="middle"></iframe>
</div>
\endhtmlonly
@warning
The videos above are long-obsolete and contain inaccurate information. Be careful, since
...
...
@@ -50,10 +50,10 @@ Building the OpenCV library from scratch requires a couple of tools installed be
- An IDE of choice (preferably), or just a C/C++ compiler that will actually make the binary files.
Here we will use the [Microsoft Visual Studio](https://www.microsoft.com/visualstudio/en-us).
However, you can use any other IDE that has a valid C/C++ compiler.
- [CMake](http://www.cmake.org/cmake/resources/software.html), which is a neat tool to make the project files (for your chosen IDE) from the OpenCV
source files. It will also allow an easy configuration of the OpenCV build files, in order to
make binary files that fit exactly your needs.
- Git to acquire the OpenCV source files. A good tool for this is [TortoiseGit](http://code.google.com/p/tortoisegit/wiki/Download). Alternatively,
you can just download an archived version of the source files from our [page on
First we set an environment variable to make our work easier. This will hold the build directory of
our OpenCV library that we use in our projects. Start up a command window and enter:
@code
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc10 (suggested for Visual Studio 2010 - 32 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc10 (suggested for Visual Studio 2010 - 64 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc11 (suggested for Visual Studio 2012 - 32 bit Windows)
setx -m OPENCV_DIR D:\OpenCV\Build\x64\vc11 (suggested for Visual Studio 2012 - 64 bit Windows)
@endcode
Here the directory is where you have your OpenCV binaries (*extracted* or *built*). You can have a
different platform (e.g. x64 instead of x86) or compiler type, so substitute the appropriate value.
Inside this directory you should have two folders called *lib* and *bin*. The -m should be added if you wish
...
...
@@ -344,10 +347,11 @@ However, to do this the operating system needs to know where they are. The syste
a list of folders where DLLs can be found. Add the OpenCV library path to this and the OS will know
where to look if it ever needs the OpenCV binaries. Otherwise, you will need to copy the used DLLs
right beside the application's executable file (*exe*) for the OS to find them, which is highly
unpleasant if you work on many projects. To do this start up again the [PathEditor](http://www.redfernplace.com/software-projects/patheditor/) and add the
following new entry (right click in the application to bring up the menu):
@code
%OPENCV_DIR%\bin
@endcode

...
...
@@ -357,7 +361,6 @@ Save it to the registry and you are done. If you ever change the location of you
or want to try out your application with a different build all you will need to do is to update the
OPENCV_DIR variable via the *setx* command inside a command window.
Now you can continue reading the tutorials with the @ref tutorial_windows_visual_studio_Opencv section.
There you will find out how to use the OpenCV library in your own projects with the help of the
The function @ref cv::ml::SVM::train that will be used afterwards requires the training data to be
stored as @ref cv::Mat objects of floats. Therefore, we create these objects from the arrays
defined above:
@code{.cpp}
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
Mat labelsMat (4, 1, CV_32FC1, labels);
@endcode
2. **Set up SVM's parameters**
In this tutorial we have introduced the theory of SVMs in the simplest case, when the
...
...
@@ -121,7 +122,7 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
used in a wide variety of problems (e.g. problems with non-linearly separable data, an SVM using
a kernel function to raise the dimensionality of the examples, etc). As a consequence of this,
we have to define some parameters before training the SVM. These parameters are stored in an
object of the class @ref cv::ml::SVM::Params .
@code{.cpp}
ml::SVM::Params params;
params.svmType = ml::SVM::C_SVC;
...
...
@@ -132,7 +133,8 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
classification (n \f$\geq\f$ 2). This parameter is defined in the attribute
*ml::SVM::Params.svmType*.
@note The important feature of the type of SVM **CvSVM::C_SVC** deals with imperfect separation of classes (i.e. when the training data is non-linearly separable). This feature is not important here since the data is linearly separable and we chose this SVM type only for being the most commonly used.
- *Type of SVM kernel*. We have not talked about kernel functions since they are not
interesting for the training data we are dealing with. Nevertheless, let's explain briefly
now the main idea behind a kernel function. It is a mapping done to the training data to
...
...
@@ -140,6 +142,7 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
increasing the dimensionality of the data and is done efficiently using a kernel function.
We choose here the type **ml::SVM::LINEAR** which means that no mapping is done. This
parameter is defined in the attribute *ml::SVM::Params.kernelType*.
- *Termination criteria of the algorithm*. The SVM training procedure is implemented solving a
constrained quadratic optimization problem in an **iterative** fashion. Here we specify a
maximum number of iterations and a tolerance error so we allow the algorithm to finish in
...
...
@@ -155,17 +158,18 @@ Mat labelsMat (4, 1, CV_32FC1, labels);
There are just two differences between the configuration we do here and the one that was done in
the previous tutorial (@ref tutorial_introduction_to_svm) that we use as reference.
- *CvSVM::C_SVC*. We chose here a small value of this parameter in order not to punish too much
the misclassification errors in the optimization. The idea of doing this stems from the will
of obtaining a solution close to the one intuitively expected. However, we recommend getting a
better insight into the problem by making adjustments to this parameter.
@note Here there are just a few points in the overlapping region between classes. By giving a smaller value to **FRAC_LINEAR_SEP** the density of points can be incremented and the impact of the parameter **CvSVM::C_SVC** explored more deeply.
- *Termination Criteria of the algorithm*. The maximum number of iterations has to be
increased considerably in order to solve correctly a problem with non-linearly separable
training data. In particular, we have increased this value by five orders of magnitude.
3. **Train the SVM**
We call the method @ref cv::ml::SVM::train to build the SVM model. Watch out that the training
process may take quite a long time. Have patience when you run the program.