datasetstools. Tools for working with different datasets.
**********************************************************
The datasetstools module includes classes for working with different datasets.
First version of this module was implemented for **Fall2014 OpenCV Challenge**.
Action Recognition
------------------
...
...
ar_hmdb
=======
.. ocv:class:: ar_hmdb
Implements loading dataset:
_`"HMDB: A Large Human Motion Database"`: http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/
.. note:: Usage
1. From link above download dataset files: hmdb51_org.rar & test_train_splits.rar.
2. Unpack them.
3. To load data run: ./ar_hmdb -p=/home/user/path_to_unpacked_folders/
ar_sports
=========
.. ocv:class:: ar_sports
Implements loading dataset:
_`"Sports-1M Dataset"`: http://cs.stanford.edu/people/karpathy/deepvideo/
.. note:: Usage
1. From link above download dataset files (git clone https://code.google.com/p/sports-1m-dataset/).
2. To load data run: ./ar_sports -p=/home/user/path_to_downloaded_folders/
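For example, the two steps above reduce to the following shell sketch (run from the folder that contains the built ar_sports sample)::

    # download the dataset description files
    git clone https://code.google.com/p/sports-1m-dataset/
    # point the sample at the folder containing the downloaded data
    ./ar_sports -p=/home/user/path_to_downloaded_folders/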
Face Recognition
----------------
...
...
fr_lfw
======
.. ocv:class:: fr_lfw
Implements loading dataset:
_`"Labeled Faces in the Wild-a"`: http://www.openu.ac.il/home/hassner/data/lfwa/
.. note:: Usage
1. From link above download dataset file: lfwa.tar.gz.
2. Unpack it.
3. To load data run: ./fr_lfw -p=/home/user/path_to_unpacked_folder/lfw2/
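In shell form, assuming lfwa.tar.gz unpacks into lfw2/::

    tar -xzf lfwa.tar.gz
    ./fr_lfw -p=/home/user/path_to_unpacked_folder/lfw2/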
Gesture Recognition
-------------------
...
...
gr_chalearn
===========
.. ocv:class:: gr_chalearn
Implements loading dataset:
_`"ChaLearn Looking at People"`: http://gesture.chalearn.org/
.. note:: Usage
1. Follow the instructions on the site above to download the files for the "Track 3: Gesture Recognition" dataset: Train1.zip-Train5.zip and Validation1.zip-Validation3.zip. (Registration at www.codalab.org is required, together with accepting the terms and conditions of the competition: https://www.codalab.org/competitions/991#learn_the_details. There are three mirrors for the dataset files; at the time of writing, only the "Universitat Oberta de Catalunya" mirror worked.)
2. Unpack the train archives Train1.zip-Train5.zip into one folder, as in the sketch after this list (loading the validation files is not implemented yet).
3. To load data run: ./gr_chalearn -p=/home/user/path_to_unpacked_folder/
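For example, the train archives can be unpacked into one folder like this (the folder name chalearn/ is arbitrary)::

    mkdir -p chalearn
    for f in Train1.zip Train2.zip Train3.zip Train4.zip Train5.zip; do
        unzip "$f" -d chalearn/
    done
    ./gr_chalearn -p=/home/user/chalearn/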
...
...
Human Pose Estimation
---------------------
...
...
hpe_parse
=========
.. ocv:class:: hpe_parse
Implements loading dataset:
_`"PARSE Dataset"`: http://www.ics.uci.edu/~dramanan/papers/parse/
.. note:: Usage
1. From link above download dataset file: people.zip.
2. Unpack it.
3. To load data run: ./hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/
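In shell form, assuming people.zip unpacks into people_all/::

    unzip people.zip
    ./hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/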
Image Registration
------------------
...
...
ir_affine
=========
.. ocv:class:: ir_affine
Implements loading dataset:
_`"Affine Covariant Regions Datasets"`: http://www.robots.ox.ac.uk/~vgg/data/data-aff.html
.. note:: Usage
1. From link above download dataset files: bark.tar.gz, bikes.tar.gz, boat.tar.gz, graf.tar.gz, leuven.tar.gz, trees.tar.gz, ubc.tar.gz and wall.tar.gz.
2. Unpack them.
3. To load data, for example, for "bark", run: ./ir_affine -p=/home/user/path_to_unpacked_folder/bark/
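A possible unpacking loop, assuming each archive contains its files directly rather than a top-level folder::

    for obj in bark bikes boat graf leuven trees ubc wall; do
        mkdir -p "$obj"
        tar -xzf "$obj.tar.gz" -C "$obj"
    done
    ./ir_affine -p=/home/user/path_to_unpacked_folder/bark/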
ir_robot
========
.. ocv:class:: ir_robot
Implements loading dataset:
_`"Robot Data Set"`: http://roboimagedata.compute.dtu.dk/?page_id=24
.. note:: Usage
1. From link above download the files for the "Point Feature Data Set – 2010" dataset: SET001_6.tar.gz-SET055_60.tar.gz. (There are two variants: full resolution images (1200×1600), ~500 Gb, and half size images (600×800), ~115 Gb.)
2. Unpack them to one folder.
3. To load data run: ./ir_robot -p=/home/user/path_to_unpacked_folder/
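For example, all downloaded archives can be unpacked into one folder like this (the folder name robot/ is arbitrary)::

    mkdir -p robot
    for f in SET*.tar.gz; do
        tar -xzf "$f" -C robot/
    done
    ./ir_robot -p=/home/user/robot/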
Image Segmentation
------------------
...
...
is_bsds
=======
.. ocv:class:: is_bsds
Implements loading dataset:
_`"The Berkeley Segmentation Dataset and Benchmark"`: https://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
.. note:: Usage
1. From link above download dataset files: BSDS300-human.tgz & BSDS300-images.tgz.
2. Unpack them.
3. To load data run: ./is_bsds -p=/home/user/path_to_unpacked_folder/BSDS300/
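In shell form, assuming both archives unpack into a common BSDS300/ folder::

    tar -xzf BSDS300-images.tgz
    tar -xzf BSDS300-human.tgz
    ./is_bsds -p=/home/user/path_to_unpacked_folder/BSDS300/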
...
...
Multiview Stereo Matching
-------------------------
...
...
msm_epfl
========
.. ocv:class:: msm_epfl
Implements loading dataset:
_`"EPFL Multi-View Stereo"`: http://cvlabwww.epfl.ch/~strecha/multiview/denseMVS.html
.. note:: Usage
1. From link above download dataset files for each object (castle_dense, castle_dense_large, castle_entry, fountain, herzjesu_dense, herzjesu_dense_large): the bounding, cameras, images and p archives (for "fountain", for example: fountain_dense_bounding.tar.gz, fountain_dense_cameras.tar.gz, fountain_dense_images.tar.gz, fountain_dense_p.tar.gz).
2. Unpack them into a separate folder for each object, as in the sketch after this list. For example, for "fountain", in folder fountain/: fountain_dense_bounding.tar.gz -> bounding/, fountain_dense_cameras.tar.gz -> camera/, fountain_dense_images.tar.gz -> png/, fountain_dense_p.tar.gz -> P/.
3. To load data, for example, for "fountain", run: ./msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/
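For "fountain", step 2 might look like this, assuming each archive extracts its files directly (no top-level folder)::

    cd fountain/
    mkdir -p bounding camera png P
    tar -xzf fountain_dense_bounding.tar.gz -C bounding/
    tar -xzf fountain_dense_cameras.tar.gz  -C camera/
    tar -xzf fountain_dense_images.tar.gz   -C png/
    tar -xzf fountain_dense_p.tar.gz        -C P/
    cd ..
    ./msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/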
1. From link above download "Odometry" dataset files: data_odometry_gray\\data_odometry_color\\data_odometry_velodyne\\data_odometry_poses\\data_odometry_calib.zip.
2. Unpack data_odometry_poses.zip, it creates folder dataset/poses/. After that unpack data_odometry_gray.zip, data_odometry_color.zip, data_odometry_velodyne.zip. Folder dataset/sequences/ will be created with folders 00/..21/. Each of these folders will contain: image_0/, image_1/, image_2/, image_3/, velodyne/ and files calib.txt & times.txt. These two last files will be replaced after unpacking data_odometry_calib.zip at the end.
3. To load data run: ./slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/
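The order matters only for the calibration archive, which must overwrite calib.txt and times.txt at the end::

    unzip data_odometry_poses.zip         # creates dataset/poses/
    unzip data_odometry_gray.zip          # creates dataset/sequences/00/..21/
    unzip data_odometry_color.zip
    unzip data_odometry_velodyne.zip
    unzip -o data_odometry_calib.zip      # replaces calib.txt & times.txt
    ./slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/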
slam_tumindoor
==============
.. ocv:class:: slam_tumindoor
Implements loading dataset:
_`"TUMindoor Dataset"`: http://www.navvis.lmt.ei.tum.de/dataset/
.. note:: Usage
1. From link above download the files dslr.tar.bz2, info.tar.bz2, ladybug.tar.bz2 and pointcloud.tar.bz2 for each dataset: 11-11-28 (1st floor), 11-12-13 (1st floor N1), 11-12-17a (4th floor), 11-12-17b (3rd floor), 11-12-17c (Ground I), 11-12-18a (Ground II), 11-12-18b (2nd floor).
2. Unpack them into a separate folder for each dataset, as in the sketch after this list: dslr.tar.bz2 -> dslr/, info.tar.bz2 -> info/, ladybug.tar.bz2 -> ladybug/, pointcloud.tar.bz2 -> pointcloud/.
3. To load each dataset run: ./slam_tumindoor -p=/home/user/path_to_unpacked_folders/
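For each downloaded dataset the unpacking can be scripted like this, assuming the archives extract their files directly::

    for part in dslr info ladybug pointcloud; do
        mkdir -p "$part"
        tar -xjf "$part.tar.bz2" -C "$part"
    done
    ./slam_tumindoor -p=/home/user/path_to_unpacked_folders/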
<divclass="section"id="text-recognition">
<h2>Text Recognition<aclass="headerlink"href="#text-recognition"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="tr-chars">
<h3>tr_chars<aclass="headerlink"href="#tr-chars"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset files: EnglishFnt\EnglishHnd\EnglishImg\KannadaHnd\KannadaImg.tgz, ListsTXT.tgz.</li>
<li>Unpack them.</li>
<li>Move <ahref="#id1"><spanclass="problematic"id="id2">*</span></a>.m files from folder ListsTXT/ to appropriate folder. For example, English/list_English_Img.m for EnglishImg.tgz.</li>
<li>To load data, for example “EnglishImg”, run: ./tr_chars -p=/home/user/path_to_unpacked_folder/English/</li>
</ol>
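For "EnglishImg", the steps might look like this; the archive layouts (English/ and a flat ListsTXT/) are assumptions here::

    tar -xzf EnglishImg.tgz
    tar -xzf ListsTXT.tgz
    mv ListsTXT/list_English_Img.m English/
    ./tr_chars -p=/home/user/path_to_unpacked_folder/English/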
<divclass="section"id="tr-svt">
<h3>tr_svt<aclass="headerlink"href="#tr-svt"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="the-street-view-text-dataset">“The Street View Text Dataset”</span>: <aclass="reference external"href="http://vision.ucsd.edu/~kai/svt/">http://vision.ucsd.edu/~kai/svt/</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download dataset file: svt.zip.</li>
<li>Unpack it.</li>
<li>To load data run: ./tr_svt -p=/home/user/path_to_unpacked_folder/svt/svt1/</li>
</ol>
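In shell form, assuming svt.zip unpacks into svt/svt1/::

    unzip svt.zip
    ./tr_svt -p=/home/user/path_to_unpacked_folder/svt/svt1/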