-3. To load data run: ./hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/
+3. To load data run: ./opencv/build/bin/example_datasetstools_hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/
 Image Registration
 ------------------
...
@@ -124,7 +124,7 @@ _`"Affine Covariant Regions Datasets"`: http://www.robots.ox.ac.uk/~vgg/data/dat
 2. Unpack them.
-3. To load data, for example, for "bark", run: ./ir_affine -p=/home/user/path_to_unpacked_folder/bark/
+3. To load data, for example, for "bark", run: ./opencv/build/bin/example_datasetstools_ir_affine -p=/home/user/path_to_unpacked_folder/bark/
 ir_robot
 ========
...
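All of the loaders changed in this patch follow the same download–unpack–run pattern. A minimal sketch of that pattern, using "bark" as the example set — the downloaded tarball is simulated here so the snippet is self-contained; with the real bark.tar.gz the unpack and run steps are identical, and the sample binary path assumes a built OpenCV tree:

```shell
# Sketch of the common "download, unpack, run" pattern, using "bark" as the
# example set. The downloaded tarball is simulated; with the real
# bark.tar.gz the tar and run steps are the same.
set -e
root=$(mktemp -d)
mkdir -p "$root/stage/bark"            # stand-in for the downloaded content
: > "$root/stage/bark/img1.ppm"
tar -czf "$root/bark.tar.gz" -C "$root/stage" bark

mkdir -p "$root/datasets"
tar -xzf "$root/bark.tar.gz" -C "$root/datasets"   # step 2: unpack
ls "$root/datasets/bark"
# Step 3 (requires a built OpenCV with the datasetstools samples):
# ./opencv/build/bin/example_datasetstools_ir_affine -p="$root/datasets/bark/"
```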
@@ -138,7 +138,7 @@ _`"Robot Data Set"`: http://roboimagedata.compute.dtu.dk/?page_id=24
 1. From link above download files for dataset "Point Feature Data Set – 2010": SET001_6.tar.gz-SET055_60.tar.gz (there are two data sets: - Full resolution images (1200×1600), ~500 Gb and - Half size image (600×800), ~115 Gb.)
 2. Unpack them to one folder.
-3. To load data run: ./ir_robot -p=/home/user/path_to_unpacked_folder/
+3. To load data run: ./opencv/build/bin/example_datasetstools_ir_robot -p=/home/user/path_to_unpacked_folder/
-3. To load data, for example, for 1 object dataset, run: ./is_weizmann -p=/home/user/path_to_unpacked_folder/1obj/
+3. To load data, for example, for 1 object dataset, run: ./opencv/build/bin/example_datasetstools_is_weizmann -p=/home/user/path_to_unpacked_folder/1obj/
 2. Unpack them in separate folder for each object. For example, for "fountain", in folder fountain/ : fountain_dense_bounding.tar.gz -> bounding/, fountain_dense_cameras.tar.gz -> camera/, fountain_dense_images.tar.gz -> png/, fountain_dense_p.tar.gz -> P/
-3. To load data, for example, for "fountain", run: ./msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/
+3. To load data, for example, for "fountain", run: ./opencv/build/bin/example_datasetstools_msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/
-3. To load data, for example "temple" dataset, run: ./msm_middlebury -p=/home/user/path_to_unpacked_folder/temple/
+3. To load data, for example "temple" dataset, run: ./opencv/build/bin/example_datasetstools_msm_middlebury -p=/home/user/path_to_unpacked_folder/temple/
 Object Recognition
 ------------------
...
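The per-archive folder mapping for the EPFL "fountain" set above can be sketched as follows. Extraction is simulated with mkdir so the snippet runs standalone; the commented tar lines show how the real fountain_dense_*.tar.gz archives would map onto the folders:

```shell
# Sketch of the EPFL "fountain" folder mapping. mkdir simulates the unpack;
# the commented tar lines show the real archive -> folder mapping.
set -e
epfl=$(mktemp -d)/fountain
for sub in bounding camera png P; do
  mkdir -p "$epfl/$sub"
done
# With the real archives, the steps would look like:
#   tar -xzf fountain_dense_bounding.tar.gz -C fountain/bounding/
#   tar -xzf fountain_dense_cameras.tar.gz  -C fountain/camera/
#   tar -xzf fountain_dense_images.tar.gz   -C fountain/png/
#   tar -xzf fountain_dense_p.tar.gz        -C fountain/P/
ls "$epfl"
# ./opencv/build/bin/example_datasetstools_msm_epfl -p="$epfl/"
```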
@@ -224,7 +224,7 @@ Currently implemented loading full list with urls. Planned to implement dataset
 2. Unpack it.
-3. To load data run: ./or_imagenet -p=/home/user/path_to_unpacked_file/
+3. To load data run: ./opencv/build/bin/example_datasetstools_or_imagenet -p=/home/user/path_to_unpacked_file/
 or_sun
 ======
...
@@ -241,7 +241,7 @@ Currently implemented loading "Scene Recognition Benchmark. SUN397". Planned to
 2. Unpack it.
-3. To load data run: ./or_sun -p=/home/user/path_to_unpacked_folder/SUN397/
+3. To load data run: ./opencv/build/bin/example_datasetstools_or_sun -p=/home/user/path_to_unpacked_folder/SUN397/
 2. Unpack data_odometry_poses.zip, it creates folder dataset/poses/. After that unpack data_odometry_gray.zip, data_odometry_color.zip, data_odometry_velodyne.zip. Folder dataset/sequences/ will be created with folders 00/..21/. Each of these folders will contain: image_0/, image_1/, image_2/, image_3/, velodyne/ and files calib.txt & times.txt. These two last files will be replaced after unpacking data_odometry_calib.zip at the end.
-3. To load data run: ./slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/
+3. To load data run: ./opencv/build/bin/example_datasetstools_slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/
 2. Unpack them in separate folder for each dataset. dslr.tar.bz2 -> dslr/, info.tar.bz2 -> info/, ladybug.tar.bz2 -> ladybug/, pointcloud.tar.bz2 -> pointcloud/.
-3. To load each dataset run: ./slam_tumindoor -p=/home/user/path_to_unpacked_folders/
+3. To load each dataset run: ./opencv/build/bin/example_datasetstools_slam_tumindoor -p=/home/user/path_to_unpacked_folders/
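The KITTI odometry unpack sequence above produces a fixed directory tree. As a hedged sketch, one could verify that layout before running the sample; the tree is simulated here (three sequences instead of 00..21) so the check itself is runnable:

```shell
# Sketch: check the KITTI odometry tree described above before running the
# sample. The tree is simulated here so the check is self-contained.
set -e
kitti=$(mktemp -d)/dataset
mkdir -p "$kitti/poses"
for seq in 00 01 21; do                  # real data has sequences 00..21
  for sub in image_0 image_1 image_2 image_3 velodyne; do
    mkdir -p "$kitti/sequences/$seq/$sub"
  done
  : > "$kitti/sequences/$seq/calib.txt"
  : > "$kitti/sequences/$seq/times.txt"
done
missing=0
for seq in "$kitti"/sequences/*/; do
  { [ -f "${seq}calib.txt" ] && [ -f "${seq}times.txt" ]; } || missing=1
done
echo "missing=$missing"
# ./opencv/build/bin/example_datasetstools_slam_kitti -p="$kitti/"
```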
 <li>From link above download dataset files: hmdb51_org.rar & test_train_splits.rar.</li>
 <li>Unpack them.</li>
-<li>To load data run: ./ar_hmdb -p=/home/user/path_to_unpacked_folders/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_ar_hmdb -p=/home/user/path_to_unpacked_folders/</li>
 </ol>
 </div>
 </div>
...
@@ -70,7 +70,7 @@
 <p>Usage</p>
 <ol class="last arabic simple">
 <li>From link above download dataset files (git clone <a class="reference external" href="https://code.google.com/p/sports-1m-dataset/">https://code.google.com/p/sports-1m-dataset/</a>).</li>
-<li>To load data run: ./ar_sports -p=/home/user/path_to_downloaded_folders/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_ar_sports -p=/home/user/path_to_downloaded_folders/</li>
 </ol>
 </div>
 </div>
...
@@ -87,7 +87,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset file: lfwa.tar.gz.</li>
 <li>Unpack it.</li>
-<li>To load data run: ./fr_lfw -p=/home/user/path_to_unpacked_folder/lfw2/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_fr_lfw -p=/home/user/path_to_unpacked_folder/lfw2/</li>
 </ol>
 </div>
 </div>
...
@@ -104,7 +104,7 @@
 <ol class="last arabic simple">
 <li>Follow instruction from site above, download files for dataset “Track 3: Gesture Recognition”: Train1.zip-Train5.zip, Validation1.zip-Validation3.zip (Register on site: www.codalab.org and accept the terms and conditions of competition: <a class="reference external" href="https://www.codalab.org/competitions/991#learn_the_details">https://www.codalab.org/competitions/991#learn_the_details</a> There are three mirrors for downloading dataset files. When I downloaded data only mirror: “Universitat Oberta de Catalunya” works).</li>
 <li>Unpack train archives Train1.zip-Train5.zip to one folder (currently loading validation files wasn’t implemented)</li>
-<li>To load data run: ./gr_chalearn -p=/home/user/path_to_unpacked_folder/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_gr_chalearn -p=/home/user/path_to_unpacked_folder/</li>
 </ol>
 </div>
 </div>
...
@@ -118,7 +118,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: subject1_dep.7z-subject6_dep.7z, subject1_rgb.7z-subject6_rgb.7z.</li>
 <li>Unpack them.</li>
-<li>To load data run: ./gr_skig -p=/home/user/path_to_unpacked_folders/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_gr_skig -p=/home/user/path_to_unpacked_folders/</li>
 </ol>
 </div>
 </div>
...
@@ -135,7 +135,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset file: people.zip.</li>
 <li>Unpack it.</li>
-<li>To load data run: ./hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_hpe_parse -p=/home/user/path_to_unpacked_folder/people_all/</li>
 </ol>
 </div>
 </div>
...
@@ -152,7 +152,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: bark\bikes\boat\graf\leuven\trees\ubc\wall.tar.gz.</li>
 <li>Unpack them.</li>
-<li>To load data, for example, for “bark”, run: ./ir_affine -p=/home/user/path_to_unpacked_folder/bark/</li>
+<li>To load data, for example, for “bark”, run: ./opencv/build/bin/example_datasetstools_ir_affine -p=/home/user/path_to_unpacked_folder/bark/</li>
 </ol>
 </div>
 </div>
...
@@ -166,7 +166,7 @@
 <ol class="last arabic simple">
 <li>From link above download files for dataset “Point Feature Data Set – 2010”: SET001_6.tar.gz-SET055_60.tar.gz (there are two data sets: - Full resolution images (1200×1600), ~500 Gb and - Half size image (600×800), ~115 Gb.)</li>
 <li>Unpack them to one folder.</li>
-<li>To load data run: ./ir_robot -p=/home/user/path_to_unpacked_folder/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_ir_robot -p=/home/user/path_to_unpacked_folder/</li>
 </ol>
 </div>
 </div>
...
@@ -183,7 +183,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: BSDS300-human.tgz & BSDS300-images.tgz.</li>
 <li>Unpack them.</li>
-<li>To load data run: ./is_bsds -p=/home/user/path_to_unpacked_folder/BSDS300/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_is_bsds -p=/home/user/path_to_unpacked_folder/BSDS300/</li>
 </ol>
 </div>
 </div>
...
@@ -197,7 +197,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: Weizmann_Seg_DB_1obj.ZIP & Weizmann_Seg_DB_2obj.ZIP.</li>
 <li>Unpack them.</li>
-<li>To load data, for example, for 1 object dataset, run: ./is_weizmann -p=/home/user/path_to_unpacked_folder/1obj/</li>
+<li>To load data, for example, for 1 object dataset, run: ./opencv/build/bin/example_datasetstools_is_weizmann -p=/home/user/path_to_unpacked_folder/1obj/</li>
 </ol>
 </div>
 </div>
...
@@ -214,7 +214,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: castle_dense\castle_dense_large\castle_entry\fountain\herzjesu_dense\herzjesu_dense_large_bounding\cameras\images\p.tar.gz.</li>
 <li>Unpack them in separate folder for each object. For example, for “fountain”, in folder fountain/ : fountain_dense_bounding.tar.gz -> bounding/, fountain_dense_cameras.tar.gz -> camera/, fountain_dense_images.tar.gz -> png/, fountain_dense_p.tar.gz -> P/</li>
-<li>To load data, for example, for “fountain”, run: ./msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/</li>
+<li>To load data, for example, for “fountain”, run: ./opencv/build/bin/example_datasetstools_msm_epfl -p=/home/user/path_to_unpacked_folder/fountain/</li>
 </ol>
 </div>
 </div>
...
@@ -228,7 +228,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: dino\dinoRing\dinoSparseRing\temple\templeRing\templeSparseRing.zip</li>
 <li>Unpack them.</li>
-<li>To load data, for example “temple” dataset, run: ./msm_middlebury -p=/home/user/path_to_unpacked_folder/temple/</li>
+<li>To load data, for example “temple” dataset, run: ./opencv/build/bin/example_datasetstools_msm_middlebury -p=/home/user/path_to_unpacked_folder/temple/</li>
 </ol>
 </div>
 </div>
...
@@ -246,7 +246,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset file: imagenet_fall11_urls.tgz</li>
 <li>Unpack it.</li>
-<li>To load data run: ./or_imagenet -p=/home/user/path_to_unpacked_file/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_or_imagenet -p=/home/user/path_to_unpacked_file/</li>
 </ol>
 </div>
 </div>
...
@@ -261,7 +261,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset file: SUN397.tar</li>
 <li>Unpack it.</li>
-<li>To load data run: ./or_sun -p=/home/user/path_to_unpacked_folder/SUN397/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_or_sun -p=/home/user/path_to_unpacked_folder/SUN397/</li>
 </ol>
 </div>
 </div>
...
@@ -278,7 +278,7 @@
 <ol class="last arabic simple">
 <li>From link above download “Odometry” dataset files: data_odometry_gray\data_odometry_color\data_odometry_velodyne\data_odometry_poses\data_odometry_calib.zip.</li>
 <li>Unpack data_odometry_poses.zip, it creates folder dataset/poses/. After that unpack data_odometry_gray.zip, data_odometry_color.zip, data_odometry_velodyne.zip. Folder dataset/sequences/ will be created with folders 00/..21/. Each of these folders will contain: image_0/, image_1/, image_2/, image_3/, velodyne/ and files calib.txt & times.txt. These two last files will be replaced after unpacking data_odometry_calib.zip at the end.</li>
-<li>To load data run: ./slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_slam_kitti -p=/home/user/path_to_unpacked_folder/dataset/</li>
 </ol>
 </div>
 </div>
...
@@ -292,7 +292,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset files: dslr\info\ladybug\pointcloud.tar.bz2 for each dataset: 11-11-28 (1st floor)\11-12-13 (1st floor N1)\11-12-17a (4th floor)\11-12-17b (3rd floor)\11-12-17c (Ground I)\11-12-18a (Ground II)\11-12-18b (2nd floor)</li>
 <li>Unpack them in separate folder for each dataset. dslr.tar.bz2 -> dslr/, info.tar.bz2 -> info/, ladybug.tar.bz2 -> ladybug/, pointcloud.tar.bz2 -> pointcloud/.</li>
-<li>To load each dataset run: ./slam_tumindoor -p=/home/user/path_to_unpacked_folders/</li>
+<li>To load each dataset run: ./opencv/build/bin/example_datasetstools_slam_tumindoor -p=/home/user/path_to_unpacked_folders/</li>
 </ol>
 </div>
 </div>
...
@@ -310,7 +310,7 @@
 <li>From link above download dataset files: EnglishFnt\EnglishHnd\EnglishImg\KannadaHnd\KannadaImg.tgz, ListsTXT.tgz.</li>
 <li>Unpack them.</li>
 <li>Move <a href="#id1"><span class="problematic" id="id2">*</span></a>.m files from folder ListsTXT/ to appropriate folder. For example, English/list_English_Img.m for EnglishImg.tgz.</li>
-<li>To load data, for example “EnglishImg”, run: ./tr_chars -p=/home/user/path_to_unpacked_folder/English/</li>
+<li>To load data, for example “EnglishImg”, run: ./opencv/build/bin/example_datasetstools_tr_chars -p=/home/user/path_to_unpacked_folder/English/</li>
 </ol>
 </div>
 </div>
...
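Step 3 of the tr_chars instructions above (moving the .m list files out of ListsTXT/) can be sketched like this; the ListsTXT/ layout is simulated from the one example the text gives (English/list_English_Img.m for EnglishImg.tgz), so treat the exact paths as assumptions:

```shell
# Sketch of tr_chars step 3: move the .m list files from ListsTXT/ into the
# matching unpacked folder. The tree is simulated from the example in the text.
set -e
chars=$(mktemp -d)
mkdir -p "$chars/ListsTXT/English" "$chars/English"
: > "$chars/ListsTXT/English/list_English_Img.m"
mv "$chars/ListsTXT/English/list_English_Img.m" "$chars/English/"
ls "$chars/English"
# ./opencv/build/bin/example_datasetstools_tr_chars -p="$chars/English/"
```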
@@ -324,7 +324,7 @@
 <ol class="last arabic simple">
 <li>From link above download dataset file: svt.zip.</li>
 <li>Unpack it.</li>
-<li>To load data run: ./tr_svt -p=/home/user/path_to_unpacked_folder/svt/svt1/</li>
+<li>To load data run: ./opencv/build/bin/example_datasetstools_tr_svt -p=/home/user/path_to_unpacked_folder/svt/svt1/</li>