<h1>datasetstools. Tools for working with different datasets.</h1>
<p>The datasetstools module includes classes for working with different datasets.</p>
<p>The first version of this module was implemented for the <strong>Fall 2014 OpenCV Challenge</strong>.</p>
<divclass="section"id="action-recognition">
<h2>Action Recognition<aclass="headerlink"href="#action-recognition"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="ar-hmdb">
<h3>ar_hmdb<aclass="headerlink"href="#ar-hmdb"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="hmdb-a-large-human-motion-database">“HMDB: A Large Human Motion Database”</span>: <aclass="reference external"href="http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/">http://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download dataset files: hmdb51_org.rar & test_train_splits.rar.</li>
<li>Unpack them.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_ar_hmdb -p=/home/user/path_to_unpacked_folders/</li>
</ol>
</div>
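<p>For orientation, the sketch below shows how the unpacked data could then be loaded programmatically. This is a minimal sketch, not the module’s shipped sample: the header path, the AR_hmdb class name and the create()/load()/getTrain()/getTest() calls are assumptions, mirroring the pattern used by this module’s successor, the OpenCV datasets module.</p>
<pre>
// Hedged sketch (assumed API, see lead-in): load HMDB from the unpacked
// folders and report the sizes of the first train/test split.
#include <opencv2/datasetstools/ar_hmdb.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folders/\n", argv[0]);
        return -1;
    }
    cv::Ptr<AR_hmdb> dataset = AR_hmdb::create(); // assumed factory method
    dataset->load(argv[1]); // parses the action folders and split files
    printf("train: %u, test: %u\n",
           (unsigned)dataset->getTrain().size(),
           (unsigned)dataset->getTest().size());
    return 0;
}
</pre>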
</div>
<divclass="section"id="ar-sports">
<h3>ar_sports<aclass="headerlink"href="#ar-sports"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset files (git clone <aclass="reference external"href="https://code.google.com/p/sports-1m-dataset/">https://code.google.com/p/sports-1m-dataset/</a>).</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_ar_sports -p=/home/user/path_to_downloaded_folders/</li>
</ol>
</div>
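<p>A hedged sketch of programmatic loading, under the same API assumptions as the ar_hmdb example (the AR_sports class name and header path are likewise assumptions):</p>
<pre>
// Assumed API: read the downloaded Sports-1M annotation lists.
#include <opencv2/datasetstools/ar_sports.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_downloaded_folders/\n", argv[0]);
        return -1;
    }
    cv::Ptr<AR_sports> dataset = AR_sports::create();
    dataset->load(argv[1]);
    printf("train size: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>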
</div>
</div>
<divclass="section"id="face-recognition">
<h2>Face Recognition<aclass="headerlink"href="#face-recognition"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="fr-lfw">
<h3>fr_lfw<aclass="headerlink"href="#fr-lfw"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="labeled-faces-in-the-wild-a">“Labeled Faces in the Wild-a”</span>: <aclass="reference external"href="http://www.openu.ac.il/home/hassner/data/lfwa/">http://www.openu.ac.il/home/hassner/data/lfwa/</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download dataset file: lfwa.tar.gz.</li>
<li>Unpack it.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_fr_lfw -p=/home/user/path_to_unpacked_folder/lfw2/</li>
</ol>
</div>
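<p>As above, a hedged sketch (the FR_lfw class name, header path and calls are assumptions):</p>
<pre>
// Assumed API: load the unpacked LFWa data from lfw2/.
#include <opencv2/datasetstools/fr_lfw.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/lfw2/\n", argv[0]);
        return -1;
    }
    cv::Ptr<FR_lfw> dataset = FR_lfw::create();
    dataset->load(argv[1]); // walks the lfw2/ person folders
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>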
</div>
</div>
<divclass="section"id="gesture-recognition">
<h2>Gesture Recognition<aclass="headerlink"href="#gesture-recognition"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="gr-chalearn">
<h3>gr_chalearn<aclass="headerlink"href="#gr-chalearn"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="chalearn-looking-at-people">“ChaLearn Looking at People”</span>: <aclass="reference external"href="http://gesture.chalearn.org/">http://gesture.chalearn.org/</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>Follow instruction from site above, download files for dataset “Track 3: Gesture Recognition”: Train1.zip-Train5.zip, Validation1.zip-Validation3.zip (Register on site: www.codalab.org and accept the terms and conditions of competition: <aclass="reference external"href="https://www.codalab.org/competitions/991#learn_the_details">https://www.codalab.org/competitions/991#learn_the_details</a> There are three mirrors for downloading dataset files. When I downloaded data only mirror: “Universitat Oberta de Catalunya” works).</li>
<li>Unpack train archives Train1.zip-Train5.zip to one folder (currently loading validation files wasn’t implemented)</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_gr_chalearn -p=/home/user/path_to_unpacked_folder/</li>
</ol>
</div>
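<p>Again a hedged sketch under the same assumed API (GR_chalearn class name and header path are assumptions):</p>
<pre>
// Assumed API: reads only the unpacked Train1-Train5 data
// (validation loading is not implemented, see the note above).
#include <opencv2/datasetstools/gr_chalearn.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/\n", argv[0]);
        return -1;
    }
    cv::Ptr<GR_chalearn> dataset = GR_chalearn::create();
    dataset->load(argv[1]);
    printf("train size: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>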
</div>
<divclass="section"id="gr-skig">
<h3>gr_skig<aclass="headerlink"href="#gr-skig"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset file: people.zip.</li>
<li>Unpack it.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/</li>
</ol>
</div>
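<p>The same assumed-API pattern, sketched for hpe_parse (class name and header path are assumptions):</p>
<pre>
// Assumed API: load the unpacked PARSE data from people_all/.
#include <opencv2/datasetstools/hpe_parse.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/people_all/\n", argv[0]);
        return -1;
    }
    cv::Ptr<HPE_parse> dataset = HPE_parse::create();
    dataset->load(argv[1]);
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>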
</div>
</div>
<divclass="section"id="image-registration">
<h2>Image Registration<aclass="headerlink"href="#image-registration"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="ir-affine">
<h3>ir_affine<aclass="headerlink"href="#ir-affine"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="affine-covariant-regions-datasets">“Affine Covariant Regions Datasets”</span>: <aclass="reference external"href="http://www.robots.ox.ac.uk/~vgg/data/data-aff.html">http://www.robots.ox.ac.uk/~vgg/data/data-aff.html</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download dataset files: bark\bikes\boat\graf\leuven\trees\ubc\wall.tar.gz.</li>
<li>Unpack them.</li>
<li>To load data, for example, for “bark”, run: ./opencv/build/bin/example_datasetstools_ir_affine -p=/home/user/path_to_unpacked_folder/bark/</li>
</ol>
</div>
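<p>A hedged loading sketch, one scene at a time (IR_affine class name and header path are assumptions):</p>
<pre>
// Assumed API: load one scene of the affine covariant regions data.
#include <opencv2/datasetstools/ir_affine.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/bark/\n", argv[0]);
        return -1;
    }
    cv::Ptr<IR_affine> dataset = IR_affine::create();
    dataset->load(argv[1]); // one scene folder, e.g. bark/
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>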
</div>
<divclass="section"id="ir-robot">
<h3>ir_robot<aclass="headerlink"href="#ir-robot"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="robot-data-set">“Robot Data Set”</span>: <aclass="reference external"href="http://roboimagedata.compute.dtu.dk/?page_id=24">http://roboimagedata.compute.dtu.dk/?page_id=24</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download files for dataset “Point Feature Data Set – 2010”: SET001_6.tar.gz-SET055_60.tar.gz (there are two data sets: - Full resolution images (1200×1600), ~500 Gb and - Half size image (600×800), ~115 Gb.)</li>
<li>Unpack them to one folder.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_ir_robot -p=/home/user/path_to_unpacked_folder/</li>
</ol>
</div>
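<p>And the corresponding hedged sketch (IR_robot class name and header path are assumptions):</p>
<pre>
// Assumed API: load the unpacked "Point Feature Data Set - 2010" sets.
#include <opencv2/datasetstools/ir_robot.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/\n", argv[0]);
        return -1;
    }
    cv::Ptr<IR_robot> dataset = IR_robot::create();
    dataset->load(argv[1]);
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>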
</div>
</div>
<divclass="section"id="image-segmentation">
<h2>Image Segmentation<aclass="headerlink"href="#image-segmentation"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="is-bsds">
<h3>is_bsds<aclass="headerlink"href="#is-bsds"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="the-berkeley-segmentation-dataset-and-benchmark">“The Berkeley Segmentation Dataset and Benchmark”</span>: <aclass="reference external"href="https://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/">https://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download dataset files: BSDS300-human.tgz & BSDS300-images.tgz.</li>
<li>Unpack them.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_is_bsds -p=/home/user/path_to_unpacked_folder/BSDS300/</li>
</ol>
</div>
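<p>A hedged sketch for programmatic loading (IS_bsds class name and header path are assumptions):</p>
<pre>
// Assumed API: load the unpacked BSDS300 data.
#include <opencv2/datasetstools/is_bsds.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/BSDS300/\n", argv[0]);
        return -1;
    }
    cv::Ptr<IS_bsds> dataset = IS_bsds::create();
    dataset->load(argv[1]);
    printf("train size: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>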
</div>
<divclass="section"id="is-weizmann">
<h3>is_weizmann<aclass="headerlink"href="#is-weizmann"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset files: Weizmann_Seg_DB_1obj.ZIP & Weizmann_Seg_DB_2obj.ZIP.</li>
<li>Unpack them.</li>
<li>To load data, for example, for 1 object dataset, run: ./opencv/build/bin/example_datasetstools_is_weizmann -p=/home/user/path_to_unpacked_folder/1obj/</li>
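<p>A hedged sketch, as before (IS_weizmann class name and header path are assumptions):</p>
<pre>
// Assumed API: load one of the unpacked Weizmann databases (1obj/ or 2obj/).
#include <opencv2/datasetstools/is_weizmann.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/1obj/\n", argv[0]);
        return -1;
    }
    cv::Ptr<IS_weizmann> dataset = IS_weizmann::create();
    dataset->load(argv[1]);
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>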
</div>
</div>
<div class="section" id="multiview-stereo-matching">
<h2>Multiview Stereo Matching</h2>
<div class="section" id="msm-epfl">
<h3>msm_epfl</h3>
<p>Implements loading of the dataset:</p>
<p><span class="target" id="epfl-multi-view-stereo-image-sequences">“EPFL Multi-View Stereo Image Sequences”</span>: <a class="reference external" href="http://cvlabwww.epfl.ch/~strecha/multiview/denseMVS.html">http://cvlabwww.epfl.ch/~strecha/multiview/denseMVS.html</a></p>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p>Usage</p>
<ol class="last arabic simple">
<li>From the link above, download the dataset files for each object (castle_dense, castle_dense_large, castle_entry, fountain, herzjesu_dense, herzjesu_dense_large): the _bounding, _cameras, _images and _p tar.gz archives.</li>
<li>Unpack them into a separate folder for each object. For example, for “fountain”, in the folder fountain/: fountain_dense_bounding.tar.gz -> bounding/, fountain_dense_cameras.tar.gz -> camera/, fountain_dense_images.tar.gz -> png/, fountain_dense_p.tar.gz -> P/</li>
<li>To load the data for, e.g., “fountain”, run: <code>./opencv/build/bin/example_datasetstools_msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/</code></li>
</ol>
</div>
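<p>A hedged sketch for one object folder (MSM_epfl class name and header path are assumptions):</p>
<pre>
// Assumed API: load one object, e.g. the fountain/ folder assembled above.
#include <opencv2/datasetstools/msm_epfl.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/fountain/\n", argv[0]);
        return -1;
    }
    cv::Ptr<MSM_epfl> dataset = MSM_epfl::create();
    dataset->load(argv[1]); // expects bounding/, camera/, png/, P/ inside
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>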
</div>
<divclass="section"id="msm-middlebury">
<h3>msm_middlebury<aclass="headerlink"href="#msm-middlebury"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset files: dino\dinoRing\dinoSparseRing\temple\templeRing\templeSparseRing.zip</li>
<li>Unpack them.</li>
<li>To load data, for example “temple” dataset, run: ./opencv/build/bin/example_datasetstools_msm_middlebury -p=/home/user/path_to_unpacked_folder/temple/</li>
</ol>
</div>
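<p>The matching hedged sketch (MSM_middlebury class name and header path are assumptions):</p>
<pre>
// Assumed API: load one unpacked dataset, e.g. temple/.
#include <opencv2/datasetstools/msm_middlebury.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/temple/\n", argv[0]);
        return -1;
    }
    cv::Ptr<MSM_middlebury> dataset = MSM_middlebury::create();
    dataset->load(argv[1]);
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>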
</div>
</div>
<divclass="section"id="object-recognition">
<h2>Object Recognition<aclass="headerlink"href="#object-recognition"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="or-imagenet">
<h3>or_imagenet<aclass="headerlink"href="#or-imagenet"title="Permalink to this headline">¶</a></h3>
<li>From link above download “Odometry” dataset files: data_odometry_gray\data_odometry_color\data_odometry_velodyne\data_odometry_poses\data_odometry_calib.zip.</li>
<li>Unpack data_odometry_poses.zip, it creates folder dataset/poses/. After that unpack data_odometry_gray.zip, data_odometry_color.zip, data_odometry_velodyne.zip. Folder dataset/sequences/ will be created with folders 00/..21/. Each of these folders will contain: image_0/, image_1/, image_2/, image_3/, velodyne/ and files calib.txt & times.txt. These two last files will be replaced after unpacking data_odometry_calib.zip at the end.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/</li>
</ol>
</div>
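<p>A hedged sketch for the assembled tree (SLAM_kitti class name and header path are assumptions):</p>
<pre>
// Assumed API: load the dataset/ tree assembled in the steps above.
#include <opencv2/datasetstools/slam_kitti.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/dataset/\n", argv[0]);
        return -1;
    }
    cv::Ptr<SLAM_kitti> dataset = SLAM_kitti::create();
    dataset->load(argv[1]); // expects poses/ and sequences/ inside
    printf("loaded sequences (train): %u\n",
           (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>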
</div>
<divclass="section"id="slam-tumindoor">
<h3>slam_tumindoor<aclass="headerlink"href="#slam-tumindoor"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset files: dslr\info\ladybug\pointcloud.tar.bz2 for each dataset: 11-11-28 (1st floor)\11-12-13 (1st floor N1)\11-12-17a (4th floor)\11-12-17b (3rd floor)\11-12-17c (Ground I)\11-12-18a (Ground II)\11-12-18b (2nd floor)</li>
<li>Unpack them in separate folder for each dataset. dslr.tar.bz2 -> dslr/, info.tar.bz2 -> info/, ladybug.tar.bz2 -> ladybug/, pointcloud.tar.bz2 -> pointcloud/.</li>
<li>To load each dataset run: ./opencv/build/bin/example_datasetstools_slam_tumindoor -p=/home/user/path_to_unpacked_folders/</li>
</ol>
</div>
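<p>And a hedged sketch for one dataset folder (SLAM_tumindoor class name and header path are assumptions):</p>
<pre>
// Assumed API: load one unpacked TUMindoor dataset folder.
#include <opencv2/datasetstools/slam_tumindoor.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folders/\n", argv[0]);
        return -1;
    }
    cv::Ptr<SLAM_tumindoor> dataset = SLAM_tumindoor::create();
    dataset->load(argv[1]); // expects dslr/, info/, ladybug/, pointcloud/
    printf("loaded objects: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>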
</div>
</div>
<divclass="section"id="text-recognition">
<h2>Text Recognition<aclass="headerlink"href="#text-recognition"title="Permalink to this headline">¶</a></h2>
<divclass="section"id="tr-chars">
<h3>tr_chars<aclass="headerlink"href="#tr-chars"title="Permalink to this headline">¶</a></h3>
<li>From link above download dataset files: EnglishFnt\EnglishHnd\EnglishImg\KannadaHnd\KannadaImg.tgz, ListsTXT.tgz.</li>
<li>Unpack them.</li>
<li>Move <ahref="#id1"><spanclass="problematic"id="id2">*</span></a>.m files from folder ListsTXT/ to appropriate folder. For example, English/list_English_Img.m for EnglishImg.tgz.</li>
<li>To load data, for example “EnglishImg”, run: ./opencv/build/bin/example_datasetstools_tr_chars -p=/home/user/path_to_unpacked_folder/English/</li>
</ol>
</div>
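<p>A hedged sketch for one unpacked part (TR_chars class name and header path are assumptions):</p>
<pre>
// Assumed API: load one unpacked Chars74K part, e.g. English/.
#include <opencv2/datasetstools/tr_chars.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/English/\n", argv[0]);
        return -1;
    }
    cv::Ptr<TR_chars> dataset = TR_chars::create();
    dataset->load(argv[1]); // needs the moved list_*.m file in place
    printf("train size: %u\n", (unsigned)dataset->getTrain().size());
    return 0;
}
</pre>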
</div>
<divclass="section"id="tr-svt">
<h3>tr_svt<aclass="headerlink"href="#tr-svt"title="Permalink to this headline">¶</a></h3>
<p>Implements loading dataset:</p>
<p><spanclass="target"id="the-street-view-text-dataset">“The Street View Text Dataset”</span>: <aclass="reference external"href="http://vision.ucsd.edu/~kai/svt/">http://vision.ucsd.edu/~kai/svt/</a></p>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<p>Usage</p>
<olclass="last arabic simple">
<li>From link above download dataset file: svt.zip.</li>
<li>Unpack it.</li>
<li>To load data run: ./opencv/build/bin/example_datasetstools_tr_svt -p=/home/user/path_to_unpacked_folder/svt/svt1/</li>
</ol>
</div>
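<p>Finally, a hedged sketch for SVT (TR_svt class name and header path are assumptions):</p>
<pre>
// Assumed API: load the unpacked SVT data from svt/svt1/.
#include <opencv2/datasetstools/tr_svt.hpp> // assumed header location
#include <cstdio>

using namespace cv::datasetstools;

int main(int argc, char *argv[])
{
    if (argc != 2)
    {
        printf("usage: %s /home/user/path_to_unpacked_folder/svt/svt1/\n", argv[0]);
        return -1;
    }
    cv::Ptr<TR_svt> dataset = TR_svt::create();
    dataset->load(argv[1]);
    printf("train: %u, test: %u\n",
           (unsigned)dataset->getTrain().size(),
           (unsigned)dataset->getTest().size());
    return 0;
}
</pre>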
</div>
</div>
</div>
<divclass="sphinxsidebar">
<divclass="sphinxsidebarwrapper">
<h3><ahref="index.html">Table Of Contents</a></h3>
<ul>
<li><aclass="reference internal"href="#">datasetstools. Tools for working with different datasets.</a><ul>