opencv_contrib, commit f03e415e
Authored Dec 28, 2017 by Alexander Alekhin

Merge pull request #1493 from csukuangfj:improve-hdf

Parents: 81ca8dab, 13508c76

Showing 16 changed files with 810 additions and 174 deletions (+810, -174):
- modules/fuzzy/samples/fuzzy_filtering.cpp (+2, -2)
- modules/hdf/doc/pics/create_groups.png (+0, -0)
- modules/hdf/doc/pics/root_group_single_channel.png (+0, -0)
- modules/hdf/doc/pics/single_channel.png (+0, -0)
- modules/hdf/doc/pics/two_channels.png (+0, -0)
- modules/hdf/include/opencv2/hdf.hpp (+3, -0)
- modules/hdf/include/opencv2/hdf/hdf5.hpp (+110, -105)
- modules/hdf/samples/create_groups.cpp (+60, -0)
- modules/hdf/samples/create_read_write_datasets.cpp (+143, -0)
- modules/hdf/src/hdf5.cpp (+70, -67)
- modules/hdf/test/test_hdf5.cpp (+212, -0)
- modules/hdf/test/test_main.cpp (+8, -0)
- modules/hdf/test/test_precomp.hpp (+20, -0)
- ...hdf/tutorials/create_groups/how_to_create_groups.markdown (+85, -0)
- ...ate_read_write_dataset/create_read_write_dataset.markdown (+73, -0)
- modules/hdf/tutorials/table_of_content_hdf.markdown (+24, -0)
modules/fuzzy/samples/fuzzy_filtering.cpp

@@ -11,7 +11,7 @@
  * using "kernel2" with radius 100.
  *
  * Both kernels are created from linear function, using
- * linear interpolation (parametr ft:LINEAR).
+ * linear interpolation (parameter ft:LINEAR).
  */

 #include "opencv2/core.hpp"
...
@@ -26,7 +26,7 @@ int main(void)
     // Input image
     Mat I = imread("input.png");

-    // Kernel cretion
+    // Kernel creation
     Mat kernel1, kernel2;

     ft::createKernel(ft::LINEAR, 3, kernel1, 3);
...
modules/hdf/doc/pics/create_groups.png (new file, 19 KB)

modules/hdf/doc/pics/root_group_single_channel.png (new file, 27.9 KB)

modules/hdf/doc/pics/single_channel.png (new file, 28 KB)

modules/hdf/doc/pics/two_channels.png (new file, 30.4 KB)
modules/hdf/include/opencv2/hdf.hpp

@@ -47,6 +47,9 @@ This module provides storage routines for Hierarchical Data Format objects.
 Hierarchical Data Format version 5
 --------------------------------------------------------
+
+In order to use it, the hdf5 library has to be installed, which
+means cmake should find it using `find_package(HDF5)`.
 @}
 */
...
modules/hdf/include/opencv2/hdf/hdf5.hpp

@@ -36,7 +36,7 @@
 #define __OPENCV_HDF5_HPP__

 #include <vector>
+#include <opencv2/core.hpp>

 namespace cv
 {
...
@@ -50,7 +50,7 @@ using namespace std;
 /** @brief Hierarchical Data Format version 5 interface.

-Notice that module is compiled only when hdf5 is correctly installed.
+Notice that this module is compiled only when hdf5 is correctly installed.
 */
 class CV_EXPORTS_W HDF5
...
@@ -59,7 +59,11 @@ public:
    CV_WRAP enum
    {
-      H5_UNLIMITED = -1, H5_NONE = -1, H5_GETDIMS = 100, H5_GETMAXDIMS = 101, H5_GETCHUNKDIMS = 102,
+      H5_UNLIMITED = -1,     //!< The dimension size is unlimited, @sa dscreate()
+      H5_NONE = -1,          //!< No compression, @sa dscreate()
+      H5_GETDIMS = 100,      //!< Get the dimension information of a dataset. @sa dsgetsize()
+      H5_GETMAXDIMS = 101,   //!< Get the maximum dimension information of a dataset. @sa dsgetsize()
+      H5_GETCHUNKDIMS = 102, //!< Get the chunk sizes of a dataset. @sa dsgetsize()
    };

    virtual ~HDF5() {}
...
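As a hedged illustration of how the flags above combine with dsgetsize() (the file name "mytest.h5" and the label "mydata" below are hypothetical, not taken from this commit):

    #include <iostream>
    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");
        if (h5io->hlexists("mydata"))  // "mydata" is a hypothetical label
        {
            // current extents of the dataset
            std::vector<int> dims = h5io->dsgetsize("mydata", cv::hdf::HDF5::H5_GETDIMS);
            // maximum extents (differs from dims for unlimited dimensions)
            std::vector<int> maxdims = h5io->dsgetsize("mydata", cv::hdf::HDF5::H5_GETMAXDIMS);
            // chunk sizes; empty vector if the dataset was created without chunking
            std::vector<int> chunks = h5io->dsgetsize("mydata", cv::hdf::HDF5::H5_GETCHUNKDIMS);
            std::cout << "ndims: " << dims.size() << ", chunked: " << !chunks.empty() << "\n";
        }
        h5io->close();
        return 0;
    }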
@@ -71,63 +75,57 @@ public:
    /** @brief Create a group.
    @param grlabel specify the hdf5 group label.

-    Create a hdf5 group.
-    @note Groups are useful for better organise multiple datasets. It is possible to create subgroups within any group.
-    Existence of a particular group can be checked using hlexists(). In case of subgroups label would be e.g: 'Group1/SubGroup1'
+    Create a hdf5 group with default properties. The group is closed automatically after creation.
+    @note Groups are useful for better organising multiple datasets. It is possible to create subgroups within any group.
+    Existence of a particular group can be checked using hlexists(). In case of subgroups, a label would be e.g: 'Group1/SubGroup1'
    where SubGroup1 is within the root group Group1.

-    - In this example Group1 will have one subgrup labeled SubGroup1:
-    @code{.cpp}
-      // open / autocreate hdf5 file
-      cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open( "mytest.h5" );
-      // create Group1 if does not exists
-      if ( ! h5io->hlexists( "Group1" ) )
-        h5io->grcreate( "Group1" );
-      else
-        printf("Group1 already created, skipping\n" );
-      // create SubGroup1 if does not exists
-      if ( ! h5io->hlexists( "Group1/SubGroup1" ) )
-        h5io->grcreate( "Group1/SubGroup1" );
-      else
-        printf("SubGroup1 already created, skipping\n" );
-      // release
-      h5io->close();
-    @endcode
-    @note When a dataset is created with dscreate() or kpcreate() it can be created right within a group by specifying
-    full path within the label, in our example would be: 'Group1/SubGroup1/MyDataSet'. It is not thread safe.
+    Before creating a subgroup, its parent group MUST be created.
+
+    - In this example, Group1 will have one subgroup called SubGroup1:
+
+    @snippet samples/create_groups.cpp create_group
+
+    The corresponding result visualized using the HDFView tool is
+    ![Visualization of groups using the HDFView tool](pics/create_groups.png)
+
+    @note When a dataset is created with dscreate() or kpcreate(), it can be created within a group by specifying the
+    full path within the label. In our example, it would be: 'Group1/SubGroup1/MyDataSet'. It is not thread safe.
    */
-    CV_WRAP virtual void grcreate( String grlabel ) = 0;
+    CV_WRAP virtual void grcreate( const String& grlabel ) = 0;

    /** @brief Check if label exists or not.
    @param label specify the hdf5 dataset label.

-    Returns **true** if dataset exists, and **false** if does not.
+    Returns **true** if dataset exists, and **false** otherwise.
    @note Checks if dataset, group or other object type (hdf5 link) exists under the label name. It is thread safe.
    */
-    CV_WRAP virtual bool hlexists( String label ) const = 0;
+    CV_WRAP virtual bool hlexists( const String& label ) const = 0;

    /* @overload */
    CV_WRAP virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel ) const = 0;
+            const String& dslabel ) const = 0;
    /* @overload */
    CV_WRAP virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel, const int compresslevel ) const = 0;
+            const String& dslabel, const int compresslevel ) const = 0;
    /* @overload */
    CV_WRAP virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel, const int compresslevel, const vector<int>& dims_chunks ) const = 0;
+            const String& dslabel, const int compresslevel, const vector<int>& dims_chunks ) const = 0;

    /** @brief Create and allocate storage for two dimensional single or multi channel dataset.
    @param rows declare amount of rows
-    @param cols declare amount of cols
-    @param type type to be used
-    @param dslabel specify the hdf5 dataset label, any existing dataset with the same label will be overwritten.
-    @param compresslevel specify the compression level 0-9 to be used, H5_NONE is default and means no compression.
-    @param dims_chunks each array member specify chunking sizes to be used for block i/o,
+    @param cols declare amount of columns
+    @param type type to be used, e.g., CV_8UC3, CV_32FC1, etc.
+    @param dslabel specify the hdf5 dataset label. Existing dataset label will cause an error.
+    @param compresslevel specify the compression level 0-9 to be used, H5_NONE is the default value and means no compression.
+    The value 0 also means no compression.
+    A value of 9 indicates the best compression ratio. Note
+    that a higher compression level indicates a higher computational cost. It relies
+    on GNU gzip for compression.
+    @param dims_chunks each array member specifies the chunking size to be used for block I/O,
    by default NULL means none at all.

-    @note If the dataset already exists an exception will be thrown.
+    @note If the dataset already exists, an exception will be thrown (CV_Error() is called).
+
    - Existence of the dataset can be checked using hlexists(), see in this example:
    @code{.cpp}
...
@@ -143,9 +141,9 @@ public:
    @endcode

    @note Activating compression requires internal chunking. Chunking can significantly improve access
-    speed booth at read or write time especially for windowed access logic that shifts offset inside dataset.
-    If no custom chunking is specified default one will be invoked by the size of **whole** dataset
-    as single big chunk of data.
+    speed both at read and write time, especially for windowed access logic that shifts offset inside dataset.
+    If no custom chunking is specified, the default one will be invoked by the size of the **whole** dataset
+    as a single big chunk of data.

    - See example of level 9 compression using internal default chunking:
    @code{.cpp}
...
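The level-9 example itself is elided in this capture; a minimal sketch of such a call, assuming a hypothetical 100x100 CV_64FC1 dataset labeled "hilbert":

    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");
        if (!h5io->hlexists("hilbert"))  // "hilbert" is a hypothetical label
        {
            // level-9 gzip compression; omitting dims_chunks invokes the
            // default chunking (the whole dataset as one big chunk)
            h5io->dscreate(100, 100, CV_64FC1, "hilbert", 9);
        }
        h5io->close();
        return 0;
    }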
@@ -160,11 +158,11 @@ public:
      h5io->close();
    @endcode

-    @note A value of H5_UNLIMITED for **rows** or **cols** or booth means **unlimited** data on the specified dimension,
-    thus is possible to expand anytime such dataset on row, col or booth directions. Presence of H5_UNLIMITED on any
-    dimension **require** to define custom chunking. No default chunking will be defined in unlimited scenario since
-    default size on that dimension will be zero, and will grow once dataset is written. Writing into dataset that have
-    H5_UNLIMITED on some of its dimension requires dsinsert() that allow growth on unlimited dimension instead of dswrite()
+    @note A value of H5_UNLIMITED for **rows** or **cols** or both means **unlimited** data on the specified dimension,
+    thus, it is possible to expand anytime such a dataset on row, col or on both directions. Presence of H5_UNLIMITED on any
+    dimension **requires** to define custom chunking. No default chunking will be defined in the unlimited scenario since
+    default size on that dimension will be zero, and will grow once dataset is written. Writing into a dataset that has
+    H5_UNLIMITED on some of its dimensions requires dsinsert() that allows growth on unlimited dimensions, instead of dswrite()
    that allows to write only in predefined data space.

    - Example below shows no compression but unlimited dimension on cols using 100x100 internal chunking:
...
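The example above is elided in this capture; a minimal sketch of pairing H5_UNLIMITED with explicit chunking under the same assumptions (the label "mydata" is hypothetical):

    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");
        if (!h5io->hlexists("mydata"))  // "mydata" is a hypothetical label
        {
            // unlimited cols force custom chunking; compression disabled
            int chunks[2] = { 100, 100 };
            h5io->dscreate(100, cv::hdf::HDF5::H5_UNLIMITED, CV_64FC1,
                           "mydata", cv::hdf::HDF5::H5_NONE, chunks);
        }
        h5io->close();
        return 0;
    }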
@@ -178,31 +176,35 @@ public:
      h5io->close();
    @endcode

-    @note It is **not** thread safe, it must be called only once at dataset creation otherwise exception will occur.
-    Multiple datasets inside single hdf5 file is allowed.
+    @note It is **not** thread safe, it must be called only once at dataset creation, otherwise an
+    exception will occur. Multiple datasets inside a single hdf5 file are allowed.
    */
    CV_WRAP virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel, const int compresslevel, const int* dims_chunks ) const = 0;
+            const String& dslabel, const int compresslevel, const int* dims_chunks ) const = 0;

    /* @overload */
    CV_WRAP virtual void dscreate( const int n_dims, const int* sizes, const int type,
-            String dslabel ) const = 0;
+            const String& dslabel ) const = 0;
    /* @overload */
    CV_WRAP virtual void dscreate( const int n_dims, const int* sizes, const int type,
-            String dslabel, const int compresslevel ) const = 0;
+            const String& dslabel, const int compresslevel ) const = 0;
    /* @overload */
    CV_WRAP virtual void dscreate( const vector<int>& sizes, const int type,
-            String dslabel, const int compresslevel = HDF5::H5_NONE,
+            const String& dslabel, const int compresslevel = HDF5::H5_NONE,
            const vector<int>& dims_chunks = vector<int>() ) const = 0;

-    /** @brief Create and allocate storage for n-dimensional dataset, single or mutichannel type.
+    /** @brief Create and allocate storage for n-dimensional dataset, single or multichannel type.
    @param n_dims declare number of dimensions
    @param sizes array containing sizes for each dimensions
-    @param type type to be used
-    @param dslabel specify the hdf5 dataset label, any existing dataset with the same label will be overwritten.
-    @param compresslevel specify the compression level 0-9 to be used, H5_NONE is default and means no compression.
-    @param dims_chunks each array member specify chunking sizes to be used for block i/o,
+    @param type type to be used, e.g., CV_8UC3, CV_32FC1, etc.
+    @param dslabel specify the hdf5 dataset label. Existing dataset label will cause an error.
+    @param compresslevel specify the compression level 0-9 to be used, H5_NONE is the default value and means no compression.
+    The value 0 also means no compression.
+    A value of 9 indicates the best compression ratio. Note
+    that a higher compression level indicates a higher computational cost. It relies
+    on GNU gzip for compression.
+    @param dims_chunks each array member specifies chunking sizes to be used for block I/O,
    by default NULL means none at all.

-    @note If the dataset already exists an exception will be thrown. Existence of the dataset can be checked
+    @note If the dataset already exists, an exception will be thrown. Existence of the dataset can be checked
    using hlexists().

    - See example below that creates a 6 dimensional storage space:
...
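The 6-dimensional example is elided here; a minimal sketch of the n-dimensional overload, with hypothetical sizes and label:

    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");
        if (!h5io->hlexists("nddata"))  // "nddata" is a hypothetical label
        {
            // six dimensions, single channel, no compression, no chunking
            int sizes[6] = { 2, 3, 4, 5, 6, 7 };
            h5io->dscreate(6, sizes, CV_64FC1, "nddata");
        }
        h5io->close();
        return 0;
    }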
@@ -221,12 +223,12 @@ public:
    @endcode

    @note Activating compression requires internal chunking. Chunking can significantly improve access
-    speed booth at read or write time especially for windowed access logic that shifts offset inside dataset.
-    If no custom chunking is specified default one will be invoked by the size of **whole** dataset
+    speed both at read and write time, especially for windowed access logic that shifts offset inside dataset.
+    If no custom chunking is specified, the default one will be invoked by the size of **whole** dataset
    as single big chunk of data.

-    - See example of level 0 compression (shallow) using chunking against first
-    dimension, thus storage will consists by 100 chunks of data:
+    - See example of level 0 compression (shallow) using chunking against the first
+    dimension, thus storage will consist of 100 chunks of data:
    @code{.cpp}
      // open / autocreate hdf5 file
      cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open( "mytest.h5" );
...
@@ -242,11 +244,11 @@ public:
      h5io->close();
    @endcode

-    @note A value of H5_UNLIMITED inside the **sizes** array means **unlimited** data on that dimension, thus is
+    @note A value of H5_UNLIMITED inside the **sizes** array means **unlimited** data on that dimension, thus it is
    possible to expand anytime such dataset on those unlimited directions. Presence of H5_UNLIMITED on any dimension
-    **require** to define custom chunking. No default chunking will be defined in unlimited scenario since default size
-    on that dimension will be zero, and will grow once dataset is written. Writing into dataset that have H5_UNLIMITED on
-    some of its dimension requires dsinsert() instead of dswrite() that allow growth on unlimited dimension instead of
+    **requires** to define custom chunking. No default chunking will be defined in unlimited scenario since the default size
+    on that dimension will be zero, and will grow once dataset is written. Writing into dataset that has H5_UNLIMITED on
+    some of its dimension requires dsinsert() instead of dswrite() that allows growth on unlimited dimension instead of
    dswrite() that allows to write only in predefined data space.

    - Example below shows a 3 dimensional dataset using no compression with all unlimited sizes and one unit chunking:
...
@@ -262,11 +264,12 @@ public:
    @endcode
    */
    CV_WRAP virtual void dscreate( const int n_dims, const int* sizes, const int type,
-            String dslabel, const int compresslevel, const int* dims_chunks ) const = 0;
+            const String& dslabel, const int compresslevel, const int* dims_chunks ) const = 0;

    /** @brief Fetch dataset sizes
    @param dslabel specify the hdf5 dataset label to be measured.
-    @param dims_flag will fetch dataset dimensions on H5_GETDIMS, and dataset maximum dimensions on H5_GETMAXDIMS.
+    @param dims_flag will fetch dataset dimensions on H5_GETDIMS, dataset maximum dimensions on H5_GETMAXDIMS,
+    and chunk sizes on H5_GETCHUNKDIMS.

    Returns vector object containing sizes of dataset on each dimensions.
...
@@ -278,7 +281,7 @@ public:
    return the dimension of chunk if dataset was created with chunking options otherwise returned vector size
    will be zero.
    */
-    CV_WRAP virtual vector<int> dsgetsize( String dslabel, int dims_flag = HDF5::H5_GETDIMS ) const = 0;
+    CV_WRAP virtual vector<int> dsgetsize( const String& dslabel, int dims_flag = HDF5::H5_GETDIMS ) const = 0;

    /** @brief Fetch dataset type
    @param dslabel specify the hdf5 dataset label to be checked.
...
@@ -289,15 +292,15 @@ public:
    @note Result can be parsed with CV_MAT_CN() to obtain amount of channels and CV_MAT_DEPTH() to obtain native cvdata type.
    It is thread safe.
    */
-    CV_WRAP virtual int dsgettype( String dslabel ) const = 0;
+    CV_WRAP virtual int dsgettype( const String& dslabel ) const = 0;

    /* @overload */
-    CV_WRAP virtual void dswrite( InputArray Array, String dslabel ) const = 0;
+    CV_WRAP virtual void dswrite( InputArray Array, const String& dslabel ) const = 0;
    /* @overload */
-    CV_WRAP virtual void dswrite( InputArray Array, String dslabel,
+    CV_WRAP virtual void dswrite( InputArray Array, const String& dslabel,
            const int* dims_offset ) const = 0;
    /* @overload */
-    CV_WRAP virtual void dswrite( InputArray Array, String dslabel,
+    CV_WRAP virtual void dswrite( InputArray Array, const String& dslabel,
            const vector<int>& dims_offset,
            const vector<int>& dims_counts = vector<int>() ) const = 0;

    /** @brief Write or overwrite a Mat object into specified dataset of hdf5 file.
...
@@ -305,16 +308,16 @@ public:
    @param dslabel specify the target hdf5 dataset label.
    @param dims_offset each array member specify the offset location
    over dataset's each dimensions from where InputArray will be (over)written into dataset.
-    @param dims_counts each array member specify the amount of data over dataset's
+    @param dims_counts each array member specifies the amount of data over dataset's
    each dimensions from InputArray that will be written into dataset.

    Writes Mat object into targeted dataset.

    @note If dataset is not created and does not exist it will be created **automatically**. Only Mat is supported and
-    it must to be **continuous**. It is thread safe but it is recommended that writes to happen over separate non overlapping
-    regions. Multiple datasets can be written inside single hdf5 file.
+    it must be **continuous**. It is thread safe but it is recommended that writes to happen over separate non-overlapping
+    regions. Multiple datasets can be written inside a single hdf5 file.

-    - Example below writes a 100x100 CV_64FC2 matrix into a dataset. No dataset precreation required. If routine
+    - Example below writes a 100x100 CV_64FC2 matrix into a dataset. No dataset pre-creation required. If routine
    is called multiple times dataset will be just overwritten:
    @code{.cpp}
      // dual channel hilbert matrix
...
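The hilbert-matrix example is elided in this capture; a minimal sketch of a full write followed by a windowed overwrite (the label "hilbert" is hypothetical):

    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");

        // full write: the dataset is created automatically on first write
        cv::Mat H(100, 100, CV_64FC2, cv::Scalar(0, 0));
        h5io->dswrite(H, "hilbert");  // "hilbert" is a hypothetical label

        // overwrite only a 10x10 window starting at row 50, col 50
        int offset[2] = { 50, 50 };
        int counts[2] = { 10, 10 };
        cv::Mat patch(10, 10, CV_64FC2, cv::Scalar(1, 1));
        h5io->dswrite(patch, "hilbert", offset, counts);

        h5io->close();
        return 0;
    }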
@@ -362,19 +365,19 @@ public:
      h5io->close();
    @endcode
    */
-    CV_WRAP virtual void dswrite( InputArray Array, String dslabel,
-            const int* dims_offset, const int* dims_counts ) const = 0;
+    CV_WRAP virtual void dswrite( InputArray Array, const String& dslabel,
+            const int* dims_offset, const int* dims_counts ) const = 0;

    /* @overload */
-    CV_WRAP virtual void dsinsert( InputArray Array, String dslabel ) const = 0;
+    CV_WRAP virtual void dsinsert( InputArray Array, const String& dslabel ) const = 0;
    /* @overload */
-    CV_WRAP virtual void dsinsert( InputArray Array,
-            String dslabel, const int* dims_offset ) const = 0;
+    CV_WRAP virtual void dsinsert( InputArray Array,
+            const String& dslabel, const int* dims_offset ) const = 0;
    /* @overload */
-    CV_WRAP virtual void dsinsert( InputArray Array,
-            String dslabel, const vector<int>& dims_offset,
-            const vector<int>& dims_counts = vector<int>() ) const = 0;
+    CV_WRAP virtual void dsinsert( InputArray Array,
+            const String& dslabel, const vector<int>& dims_offset,
+            const vector<int>& dims_counts = vector<int>() ) const = 0;

-    /** @brief Insert or overwrite a Mat object into specified dataset and autoexpand dataset size if **unlimited** property allows.
+    /** @brief Insert or overwrite a Mat object into specified dataset and auto expand dataset size if **unlimited** property allows.
    @param Array specify Mat data array to be written.
    @param dslabel specify the target hdf5 dataset label.
    @param dims_offset each array member specify the offset location
...
@@ -384,13 +387,13 @@ public:
    Writes Mat object into targeted dataset and **autoexpand** dataset dimension if allowed.

-    @note Unlike dswrite(), datasets are **not** created **automatically**. Only Mat is supported and it must to be **continuous**.
-    If dsinsert() happen over outer regions of dataset dimensions and on that dimension of dataset is in **unlimited** mode then
+    @note Unlike dswrite(), datasets are **not** created **automatically**. Only Mat is supported and it must be **continuous**.
+    If dsinsert() happens over outer regions of dataset dimensions and on that dimension of dataset is in **unlimited** mode then
    dataset is expanded, otherwise exception is thrown. To create datasets with **unlimited** property on specific or more
    dimensions see dscreate() and the optional H5_UNLIMITED flag at creation time. It is not thread safe over same dataset
-    but multiple datasets can be merged inside single hdf5 file.
+    but multiple datasets can be merged inside a single hdf5 file.

-    - Example below creates **unlimited** rows x 100 cols and expand rows 5 times with dsinsert() using single 100x100 CV_64FC2
+    - Example below creates **unlimited** rows x 100 cols and expands rows 5 times with dsinsert() using single 100x100 CV_64FC2
    over the dataset. Final size will have 5x100 rows and 100 cols, reflecting H matrix five times over row's span. Chunks size is
    100x100 just optimized against the H matrix size having compression disabled. If routine is called multiple times dataset will be
    just overwritten:
...
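The expansion example is elided here; a minimal sketch of dsinsert() growing an unlimited-row dataset, using hypothetical names:

    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");

        // unlimited rows require pre-creation with custom chunking
        if (!h5io->hlexists("mydata"))  // "mydata" is a hypothetical label
        {
            int chunks[2] = { 100, 100 };
            h5io->dscreate(cv::hdf::HDF5::H5_UNLIMITED, 100, CV_64FC2,
                           "mydata", cv::hdf::HDF5::H5_NONE, chunks);
        }

        // insert the same 100x100 block five times along the row span;
        // the dataset grows because its row dimension is unlimited
        cv::Mat H(100, 100, CV_64FC2, cv::Scalar(0, 0));
        for (int i = 0; i < 5; i++)
        {
            int offset[2] = { i * H.rows, 0 };
            h5io->dsinsert(H, "mydata", offset);
        }

        h5io->close();
        return 0;
    }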
@@ -421,17 +424,17 @@ public:
      h5io->close();
    @endcode
    */
-    CV_WRAP virtual void dsinsert( InputArray Array, String dslabel,
-            const int* dims_offset, const int* dims_counts ) const = 0;
+    CV_WRAP virtual void dsinsert( InputArray Array, const String& dslabel,
+            const int* dims_offset, const int* dims_counts ) const = 0;

    /* @overload */
-    CV_WRAP virtual void dsread( OutputArray Array, String dslabel ) const = 0;
+    CV_WRAP virtual void dsread( OutputArray Array, const String& dslabel ) const = 0;
    /* @overload */
-    CV_WRAP virtual void dsread( OutputArray Array,
-            String dslabel, const int* dims_offset ) const = 0;
+    CV_WRAP virtual void dsread( OutputArray Array,
+            const String& dslabel, const int* dims_offset ) const = 0;
    /* @overload */
-    CV_WRAP virtual void dsread( OutputArray Array, String dslabel,
-            const vector<int>& dims_offset,
-            const vector<int>& dims_counts = vector<int>() ) const = 0;
+    CV_WRAP virtual void dsread( OutputArray Array, const String& dslabel,
+            const vector<int>& dims_offset,
+            const vector<int>& dims_counts = vector<int>() ) const = 0;

    /** @brief Read specific dataset from hdf5 file into Mat object.
...
@@ -473,7 +476,7 @@ public:
      h5io->close();
    @endcode
    */
-    CV_WRAP virtual void dsread( OutputArray Array, String dslabel,
-            const int* dims_offset, const int* dims_counts ) const = 0;
+    CV_WRAP virtual void dsread( OutputArray Array, const String& dslabel,
+            const int* dims_offset, const int* dims_counts ) const = 0;
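A minimal sketch of whole-dataset and windowed reads, assuming the hypothetical label "mydata" exists:

    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");

        // read the whole dataset; the output Mat is allocated automatically
        cv::Mat full;
        h5io->dsread(full, "mydata");  // "mydata" is a hypothetical label

        // read only a 5x5 window starting at row 1, col 2
        int offset[2] = { 1, 2 };
        int counts[2] = { 5, 5 };
        cv::Mat window;
        h5io->dsread(window, "mydata", offset, counts);

        h5io->close();
        return 0;
    }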
    /** @brief Fetch keypoint dataset size
...
@@ -489,13 +492,13 @@ public:
    exception. The H5_GETCHUNKDIMS will return the dimension of chunk if dataset was created with chunking options otherwise
    returned vector size will be zero.
    */
-    CV_WRAP virtual int kpgetsize( String kplabel, int dims_flag = HDF5::H5_GETDIMS ) const = 0;
+    CV_WRAP virtual int kpgetsize( const String& kplabel, int dims_flag = HDF5::H5_GETDIMS ) const = 0;

    /** @brief Create and allocate special storage for cv::KeyPoint dataset.
    @param size declare fixed number of KeyPoints
    @param kplabel specify the hdf5 dataset label, any existing dataset with the same label will be overwritten.
    @param compresslevel specify the compression level 0-9 to be used, H5_NONE is default and means no compression.
-    @param chunks each array member specify chunking sizes to be used for block i/o,
+    @param chunks each array member specifies chunking sizes to be used for block I/O,
    H5_NONE is default and means no compression.

    @note If the dataset already exists an exception will be thrown. Existence of the dataset can be checked
    using hlexists().
...
@@ -526,7 +529,7 @@ public:
      printf("DS already created, skipping\n" );
    @endcode
    */
-    virtual void kpcreate( const int size, String kplabel,
-            const int compresslevel = H5_NONE, const int chunks = H5_NONE ) const = 0;
+    virtual void kpcreate( const int size, const String& kplabel,
+            const int compresslevel = H5_NONE, const int chunks = H5_NONE ) const = 0;

    /** @brief Write or overwrite list of KeyPoint into specified dataset of hdf5 file.
...
@@ -579,7 +582,7 @@ public:
      h5io->close();
    @endcode
    */
-    virtual void kpwrite( const vector<KeyPoint> keypoints, String kplabel,
-            const int offset = H5_NONE, const int counts = H5_NONE ) const = 0;
+    virtual void kpwrite( const vector<KeyPoint> keypoints, const String& kplabel,
+            const int offset = H5_NONE, const int counts = H5_NONE ) const = 0;

    /** @brief Insert or overwrite list of KeyPoint into specified dataset and autoexpand dataset size if **unlimited** property allows.
...
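For the keypoint API, a minimal round-trip sketch (the label "keypoints" and the placeholder values are hypothetical):

    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/hdf.hpp>

    int main()
    {
        cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("mytest.h5");

        // 100 identical keypoints, just as placeholder data
        std::vector<cv::KeyPoint> keypoints(100, cv::KeyPoint(1.0f, 2.0f, 3.0f));

        // fixed-size keypoint storage, then write the whole list
        if (!h5io->hlexists("keypoints"))  // "keypoints" is a hypothetical label
            h5io->kpcreate((int)keypoints.size(), "keypoints");
        h5io->kpwrite(keypoints, "keypoints");

        // read everything back
        std::vector<cv::KeyPoint> restored;
        h5io->kpread(restored, "keypoints");

        h5io->close();
        return 0;
    }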
@@ -614,7 +617,7 @@ public:
      h5io->close();
    @endcode
    */
-    virtual void kpinsert( const vector<KeyPoint> keypoints, String kplabel,
-            const int offset = H5_NONE, const int counts = H5_NONE ) const = 0;
+    virtual void kpinsert( const vector<KeyPoint> keypoints, const String& kplabel,
+            const int offset = H5_NONE, const int counts = H5_NONE ) const = 0;

    /** @brief Read specific keypoint dataset from hdf5 file into vector<KeyPoint> object.
...
@@ -652,7 +655,7 @@ public:
      h5io->close();
    @endcode
    */
-    virtual void kpread( vector<KeyPoint>& keypoints, String kplabel,
-            const int offset = H5_NONE, const int counts = H5_NONE ) const = 0;
+    virtual void kpread( vector<KeyPoint>& keypoints, const String& kplabel,
+            const int offset = H5_NONE, const int counts = H5_NONE ) const = 0;
};
...
@@ -660,18 +663,20 @@
/** @brief Open or create hdf5 file
@param HDF5Filename specify the HDF5 filename.

-Returns pointer to the hdf5 object class
+Returns a pointer to the hdf5 object class

-@note If hdf5 file does not exist it will be created. Any operations except dscreate() functions on object
-will be thread safe. Multiple datasets can be created inside single hdf5 file, and can be accessed
-from same hdf5 object from multiple instances as long read or write operations are done over
+@note If the specified file does not exist, it will be created using default properties.
+Otherwise, it is opened in read and write mode with default access properties.
+Any operations except dscreate() functions on object
+will be thread safe. Multiple datasets can be created inside a single hdf5 file, and can be accessed
+from the same hdf5 object from multiple instances as long read or write operations are done over
 non-overlapping regions of dataset. Single hdf5 file also can be opened by multiple instances,
-reads and writes can be instantiated at the same time as long non-overlapping regions are involved. Object
+reads and writes can be instantiated at the same time as long as non-overlapping regions are involved. Object
 is released using close().

-- Example below open and then release the file.
+- Example below opens and then releases the file.
 @code{.cpp}
-  // open / autocreate hdf5 file
+  // open / auto create hdf5 file
   cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open( "mytest.h5" );
   // ...
   // release
...
@@ -698,7 +703,7 @@
 }
 @endcode
*/
-CV_EXPORTS_W Ptr<HDF5> open( String HDF5Filename );
+CV_EXPORTS_W Ptr<HDF5> open( const String& HDF5Filename );

//! @}
...
modules/hdf/samples/create_groups.cpp (new file)

// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

/**
 * @file create_groups.cpp
 * @author Fangjun Kuang <csukuangfj dot at gmail dot com>
 * @date December 2017
 *
 * @brief It demonstrates how to create HDF5 groups and subgroups.
 *
 * Basic steps:
 *  1. Use hdf::open to create a HDF5 file
 *  2. Use HDF5::hlexists to check if a group exists or not
 *  3. Use HDF5::grcreate to create a group by specifying its name
 *  4. Use hdf::close to close a HDF5 file after modifying it
 *
 */

//! [tutorial]
#include <iostream>

#include <opencv2/core.hpp>
#include <opencv2/hdf.hpp>

using namespace cv;

int main()
{
    //! [create_group]

    //! [tutorial_create_file]
    Ptr<hdf::HDF5> h5io = hdf::open("mytest.h5");
    //! [tutorial_create_file]

    //! [tutorial_create_group]
    // "/" means the root group, which is always present
    if (!h5io->hlexists("/Group1"))
        h5io->grcreate("/Group1");
    else
        std::cout << "/Group1 has already been created, skip it.\n";
    //! [tutorial_create_group]

    //! [tutorial_create_subgroup]
    // Note that Group1 has been created above, otherwise exception will occur
    if (!h5io->hlexists("/Group1/SubGroup1"))
        h5io->grcreate("/Group1/SubGroup1");
    else
        std::cout << "/Group1/SubGroup1 has already been created, skip it.\n";
    //! [tutorial_create_subgroup]

    //! [tutorial_close_file]
    h5io->close();
    //! [tutorial_close_file]

    //! [create_group]

    return 0;
}
//! [tutorial]
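A note on the //! [tag] comment pairs above: they are Doxygen snippet markers, and the @snippet samples/create_groups.cpp create_group reference added to hdf5.hpp in this commit pulls the code between the [create_group] markers into the generated documentation.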
modules/hdf/samples/create_read_write_datasets.cpp (new file)

// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.

/**
 * @file create_read_write.cpp
 * @author Fangjun Kuang <csukuangfj dot at gmail dot com>
 * @date December 2017
 *
 * @brief It demonstrates how to create a dataset, how
 * to write a cv::Mat to the dataset and how to
 * read a cv::Mat from it.
 *
 */

#ifdef __GNUC__
#  pragma GCC diagnostic ignored "-Wmissing-declarations"
#  if defined __clang__ || defined __APPLE__
#    pragma GCC diagnostic ignored "-Wmissing-prototypes"
#    pragma GCC diagnostic ignored "-Wextra"
#  endif
#endif

//! [tutorial]
#include <iostream>

#include <opencv2/core.hpp>
#include <opencv2/hdf.hpp>

using namespace cv;

void write_root_group_single_channel()
{
    String filename = "root_group_single_channel.h5";
    String dataset_name = "/single"; // Note that it is a child of the root group /

    // prepare data
    Mat data;
    data = (cv::Mat_<float>(2, 3) << 0, 1, 2, 3, 4, 5, 6);

    //! [tutorial_open_file]
    Ptr<hdf::HDF5> h5io = hdf::open(filename);
    //! [tutorial_open_file]

    //! [tutorial_write_root_single_channel]
    // write data to the given dataset
    // the dataset "/single" is created automatically, since it is a child of the root
    h5io->dswrite(data, dataset_name);
    //! [tutorial_write_root_single_channel]

    //! [tutorial_read_dataset]
    Mat expected;
    h5io->dsread(expected, dataset_name);
    //! [tutorial_read_dataset]

    //! [tutorial_check_result]
    double diff = norm(data - expected);
    CV_Assert(abs(diff) < 1e-10);
    //! [tutorial_check_result]

    h5io->close();
}

void write_single_channel()
{
    String filename = "single_channel.h5";
    String parent_name = "/data";
    String dataset_name = parent_name + "/single";

    // prepare data
    Mat data;
    data = (cv::Mat_<float>(2, 3) << 0, 1, 2, 3, 4, 5);

    Ptr<hdf::HDF5> h5io = hdf::open(filename);

    //! [tutorial_create_dataset]
    // first we need to create the parent group
    if (!h5io->hlexists(parent_name))
        h5io->grcreate(parent_name);

    // create the dataset if it not exists
    if (!h5io->hlexists(dataset_name))
        h5io->dscreate(data.rows, data.cols, data.type(), dataset_name);
    //! [tutorial_create_dataset]

    // the following is the same with the above function write_root_group_single_channel()
    h5io->dswrite(data, dataset_name);

    Mat expected;
    h5io->dsread(expected, dataset_name);

    double diff = norm(data - expected);
    CV_Assert(abs(diff) < 1e-10);

    h5io->close();
}

/*
 * creating, reading and writing multiple-channel matrices
 * are the same with single channel matrices
 */
void write_multiple_channels()
{
    String filename = "two_channels.h5";
    String parent_name = "/data";
    String dataset_name = parent_name + "/two_channels";

    // prepare data
    Mat data(2, 3, CV_32SC2);
    for (size_t i = 0; i < data.total()*data.channels(); i++)
        ((int*) data.data)[i] = (int)i;

    Ptr<hdf::HDF5> h5io = hdf::open(filename);

    // first we need to create the parent group
    if (!h5io->hlexists(parent_name))
        h5io->grcreate(parent_name);

    // create the dataset if it not exists
    if (!h5io->hlexists(dataset_name))
        h5io->dscreate(data.rows, data.cols, data.type(), dataset_name);

    // the following is the same with the above function write_root_group_single_channel()
    h5io->dswrite(data, dataset_name);

    Mat expected;
    h5io->dsread(expected, dataset_name);

    double diff = norm(data - expected);
    CV_Assert(abs(diff) < 1e-10);

    h5io->close();
}

int main()
{
    write_root_group_single_channel();

    write_single_channel();

    write_multiple_channels();

    return 0;
}
//! [tutorial]
modules/hdf/src/hdf5.cpp

@@ -47,7 +47,7 @@ class HDF5Impl : public HDF5
 {
 public:

-    HDF5Impl( String HDF5Filename );
+    HDF5Impl( const String& HDF5Filename );

    virtual ~HDF5Impl() { close(); };
...
@@ -59,96 +59,96 @@ public:
    */

    // check if object / link exists
-    virtual bool hlexists( String label ) const;
+    virtual bool hlexists( const String& label ) const;

    /*
     * h5 group
     */

    // create a group
-    virtual void grcreate( String grlabel );
+    virtual void grcreate( const String& grlabel );

    /*
     * cv::Mat
     */

    // get sizes of dataset
-    virtual vector<int> dsgetsize( String dslabel, int dims_flag = H5_GETDIMS ) const;
+    virtual vector<int> dsgetsize( const String& dslabel, int dims_flag = H5_GETDIMS ) const;

    /* get data type of dataset */
-    virtual int dsgettype( String dslabel ) const;
+    virtual int dsgettype( const String& dslabel ) const;

    // overload dscreate() #1
    virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel ) const;
+            const String& dslabel ) const;
    // overload dscreate() #2
    virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel, const int compresslevel ) const;
+            const String& dslabel, const int compresslevel ) const;
    // overload dscreate() #3
    virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel, const int compresslevel, const vector<int>& dims_chunks ) const;
+            const String& dslabel, const int compresslevel, const vector<int>& dims_chunks ) const;

    /* create two dimensional single or mutichannel dataset */
    virtual void dscreate( const int rows, const int cols, const int type,
-            String dslabel, const int compresslevel, const int* dims_chunks ) const;
+            const String& dslabel, const int compresslevel, const int* dims_chunks ) const;

    // overload dscreate() #1
    virtual void dscreate( const int n_dims, const int* sizes, const int type,
-            String dslabel ) const;
+            const String& dslabel ) const;
    // overload dscreate() #2
    virtual void dscreate( const int n_dims, const int* sizes, const int type,
-            String dslabel, const int compresslevel ) const;
+            const String& dslabel, const int compresslevel ) const;
    // overload dscreate() #3
-    virtual void dscreate( const vector<int>& sizes, const int type, String dslabel,
+    virtual void dscreate( const vector<int>& sizes, const int type, const String& dslabel,
            const int compresslevel = H5_NONE,
            const vector<int>& dims_chunks = vector<int>() ) const;

    /* create n-dimensional single or mutichannel dataset */
    virtual void dscreate( const int n_dims, const int* sizes, const int type,
-            String dslabel, const int compresslevel, const int* dims_chunks ) const;
+            const String& dslabel, const int compresslevel, const int* dims_chunks ) const;

    // overload dswrite() #1
-    virtual void dswrite( InputArray Array, String dslabel ) const;
+    virtual void dswrite( InputArray Array, const String& dslabel ) const;
    // overload dswrite() #2
-    virtual void dswrite( InputArray Array, String dslabel, const int* dims_offset ) const;
+    virtual void dswrite( InputArray Array, const String& dslabel, const int* dims_offset ) const;
    // overload dswrite() #3
-    virtual void dswrite( InputArray Array, String dslabel, const vector<int>& dims_offset,
+    virtual void dswrite( InputArray Array, const String& dslabel, const vector<int>& dims_offset,
            const vector<int>& dims_counts = vector<int>() ) const;

    /* write into dataset */
-    virtual void dswrite( InputArray Array, String dslabel,
+    virtual void dswrite( InputArray Array, const String& dslabel,
            const int* dims_offset, const int* dims_counts ) const;

    // overload dsinsert() #1
-    virtual void dsinsert( InputArray Array, String dslabel ) const;
+    virtual void dsinsert( InputArray Array, const String& dslabel ) const;
    // overload dsinsert() #2
-    virtual void dsinsert( InputArray Array, String dslabel, const int* dims_offset ) const;
+    virtual void dsinsert( InputArray Array, const String& dslabel, const int* dims_offset ) const;
    // overload dsinsert() #3
-    virtual void dsinsert( InputArray Array, String dslabel,
+    virtual void dsinsert( InputArray Array, const String& dslabel,
            const vector<int>& dims_offset,
            const vector<int>& dims_counts = vector<int>() ) const;

    /* append / merge into dataset */
-    virtual void dsinsert( InputArray Array, String dslabel,
+    virtual void dsinsert( InputArray Array, const String& dslabel,
            const int* dims_offset = NULL, const int* dims_counts = NULL ) const;

    // overload dsread() #1
-    virtual void dsread( OutputArray Array, String dslabel ) const;
+    virtual void dsread( OutputArray Array, const String& dslabel ) const;
    // overload dsread() #2
-    virtual void dsread( OutputArray Array, String dslabel, const int* dims_offset ) const;
+    virtual void dsread( OutputArray Array, const String& dslabel, const int* dims_offset ) const;
    // overload dsread() #3
-    virtual void dsread( OutputArray Array, String dslabel,
+    virtual void dsread( OutputArray Array, const String& dslabel,
            const vector<int>& dims_offset,
            const vector<int>& dims_counts = vector<int>() ) const;

    // read from dataset
-    virtual void dsread( OutputArray Array, String dslabel,
+    virtual void dsread( OutputArray Array, const String& dslabel,
            const int* dims_offset, const int* dims_counts ) const;

    /*
...
@@ -156,36 +156,36 @@ public:
    */

    // get size of keypoints dataset
-    virtual int kpgetsize( String kplabel, int dims_flag = H5_GETDIMS ) const;
+    virtual int kpgetsize( const String& kplabel, int dims_flag = H5_GETDIMS ) const;

    // create KeyPoint structure
-    virtual void kpcreate( const int size, String kplabel,
+    virtual void kpcreate( const int size, const String& kplabel,
            const int compresslevel = H5_NONE, const int chunks = H5_NONE ) const;

    // write KeyPoint structures
-    virtual void kpwrite( const vector<KeyPoint> keypoints, String kplabel,
+    virtual void kpwrite( const vector<KeyPoint> keypoints, const String& kplabel,
            const int offset = H5_NONE, const int counts = H5_NONE ) const;

    // append / merge KeyPoint structures
-    virtual void kpinsert( const vector<KeyPoint> keypoints, String kplabel,
+    virtual void kpinsert( const vector<KeyPoint> keypoints, const String& kplabel,
            const int offset = H5_NONE, const int counts = H5_NONE ) const;

    // read KeyPoint structure
-    virtual void kpread( vector<KeyPoint>& keypoints, String kplabel,
+    virtual void kpread( vector<KeyPoint>& keypoints, const String& kplabel,
            const int offset = H5_NONE, const int counts = H5_NONE ) const;

private:

-    // store filename
+    //! store filename
    String m_hdf5_filename;

-    // hdf5 file handler
+    //! hdf5 file handler
    hid_t m_h5_file_id;

-    // translate cvType -> h5Type
+    //! translate cvType -> h5Type
    inline hid_t GetH5type( int cvType ) const;

-    // translate h5Type -> cvType
+    //! translate h5Type -> cvType
    inline int GetCVtype( hid_t h5Type ) const;
};
...
@@ -247,7 +247,7 @@ inline int HDF5Impl::GetCVtype( hid_t h5Type ) const
    return cvType;
}

-HDF5Impl::HDF5Impl( String _hdf5_filename )
+HDF5Impl::HDF5Impl( const String& _hdf5_filename )
                    : m_hdf5_filename( _hdf5_filename )
{
    // save old
...
@@ -260,7 +260,7 @@ HDF5Impl::HDF5Impl( String _hdf5_filename )
    // turn off error handling
    H5Eset_auto( stackid, NULL, NULL );

-    // check HDF5 file presence (err supressed)
+    // check HDF5 file presence (err suppressed)
    htri_t check = H5Fis_hdf5( m_hdf5_filename.c_str() );

    // restore previous error handler
...
@@ -290,7 +290,7 @@ void HDF5Impl::close()
 * h5 generic
 */

-bool HDF5Impl::hlexists( String label ) const
+bool HDF5Impl::hlexists( const String& label ) const
{
    bool exists = false;
...
@@ -306,7 +306,7 @@ bool HDF5Impl::hlexists( String label ) const
 * h5 group
 */

-void HDF5Impl::grcreate( String grlabel )
+void HDF5Impl::grcreate( const String& grlabel )
{
    hid_t gid = H5Gcreate( m_h5_file_id, grlabel.c_str(),
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT );
...
@@ -317,7 +317,7 @@ void HDF5Impl::grcreate( String grlabel )
 * cv:Mat
 */

-vector<int> HDF5Impl::dsgetsize( String dslabel, int dims_flag ) const
+vector<int> HDF5Impl::dsgetsize( const String& dslabel, int dims_flag ) const
{
    // open dataset
    hid_t dsdata = H5Dopen( m_h5_file_id, dslabel.c_str(), H5P_DEFAULT );
...
@@ -372,7 +372,7 @@ vector<int> HDF5Impl::dsgetsize( String dslabel, int dims_flag ) const
    return SizeVect;
}

-int HDF5Impl::dsgettype( String dslabel ) const
+int HDF5Impl::dsgettype( const String& dslabel ) const
{
    hid_t h5type;
...
@@ -408,7 +408,7 @@ int HDF5Impl::dsgettype( String dslabel ) const
// overload
void HDF5Impl::dscreate( const int rows, const int cols, const int type,
-                         String dslabel ) const
+                         const String& dslabel ) const
{
    // dataset dims
    int dsizes[2] = { rows, cols };
...
@@ -419,7 +419,7 @@ void HDF5Impl::dscreate( const int rows, const int cols, const int type,
// overload
void HDF5Impl::dscreate( const int rows, const int cols, const int type,
-                         String dslabel, const int compresslevel ) const
+                         const String& dslabel, const int compresslevel ) const
{
    // dataset dims
    int dsizes[2] = { rows, cols };
...
@@ -430,7 +430,7 @@ void HDF5Impl::dscreate( const int rows, const int cols, const int type,
// overload
void HDF5Impl::dscreate( const int rows, const int cols, const int type,
-                         String dslabel, const int compresslevel,
+                         const String& dslabel, const int compresslevel,
                         const vector<int>& dims_chunks ) const
{
    CV_Assert( dims_chunks.empty() || dims_chunks.size() == 2 );
...
@@ -438,7 +438,7 @@ void HDF5Impl::dscreate( const int rows, const int cols, const int type,
}

void HDF5Impl::dscreate( const int rows, const int cols, const int type,
-                         String dslabel, const int compresslevel, const int* dims_chunks ) const
+                         const String& dslabel, const int compresslevel, const int* dims_chunks ) const
{
    // dataset dims
    int dsizes[2] = { rows, cols };
...
@@ -449,21 +449,21 @@ void HDF5Impl::dscreate( const int rows, const int cols, const int type,
// overload
void HDF5Impl::dscreate( const int n_dims, const int* sizes, const int type,
-                         String dslabel ) const
+                         const String& dslabel ) const
{
    dscreate( n_dims, sizes, type, dslabel, H5_NONE, NULL );
}

// overload
void HDF5Impl::dscreate( const int n_dims, const int* sizes, const int type,
-                         String dslabel, const int compresslevel ) const
+                         const String& dslabel, const int compresslevel ) const
{
    dscreate( n_dims, sizes, type, dslabel, compresslevel, NULL );
}

// overload
void HDF5Impl::dscreate( const vector<int>& sizes, const int type,
-                         String dslabel, const int compresslevel,
+                         const String& dslabel, const int compresslevel,
                         const vector<int>& dims_chunks ) const
{
    CV_Assert( dims_chunks.empty() || dims_chunks.size() == sizes.size() );
...
@@ -473,7 +473,7 @@ void HDF5Impl::dscreate( const vector<int>& sizes, const int type,
...
@@ -473,7 +473,7 @@ void HDF5Impl::dscreate( const vector<int>& sizes, const int type,
}
}
void
HDF5Impl
::
dscreate
(
const
int
n_dims
,
const
int
*
sizes
,
const
int
type
,
void
HDF5Impl
::
dscreate
(
const
int
n_dims
,
const
int
*
sizes
,
const
int
type
,
String
dslabel
,
const
int
compresslevel
,
const
int
*
dims_chunks
)
const
const
String
&
dslabel
,
const
int
compresslevel
,
const
int
*
dims_chunks
)
const
{
{
// compress valid H5_NONE, 0-9
// compress valid H5_NONE, 0-9
CV_Assert
(
compresslevel
>=
H5_NONE
&&
compresslevel
<=
9
);
CV_Assert
(
compresslevel
>=
H5_NONE
&&
compresslevel
<=
9
);
...
@@ -552,27 +552,27 @@ void HDF5Impl::dscreate( const int n_dims, const int* sizes, const int type,
...
@@ -552,27 +552,27 @@ void HDF5Impl::dscreate( const int n_dims, const int* sizes, const int type,
}
}
// overload
// overload
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
String
dslabel
)
const
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
const
String
&
dslabel
)
const
{
{
dsread
(
Array
,
dslabel
,
NULL
,
NULL
);
dsread
(
Array
,
dslabel
,
NULL
,
NULL
);
}
}
// overload
// overload
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
const
String
&
dslabel
,
const
int
*
dims_offset
)
const
const
int
*
dims_offset
)
const
{
{
dsread
(
Array
,
dslabel
,
dims_offset
,
NULL
);
dsread
(
Array
,
dslabel
,
dims_offset
,
NULL
);
}
}
// overload
// overload
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
const
String
&
dslabel
,
const
vector
<
int
>&
dims_offset
,
const
vector
<
int
>&
dims_offset
,
const
vector
<
int
>&
dims_counts
)
const
const
vector
<
int
>&
dims_counts
)
const
{
{
dsread
(
Array
,
dslabel
,
&
dims_offset
[
0
],
&
dims_counts
[
0
]
);
dsread
(
Array
,
dslabel
,
&
dims_offset
[
0
],
&
dims_counts
[
0
]
);
}
}
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dsread
(
OutputArray
Array
,
const
String
&
dslabel
,
const
int
*
dims_offset
,
const
int
*
dims_counts
)
const
const
int
*
dims_offset
,
const
int
*
dims_counts
)
const
{
{
// only Mat support
// only Mat support
...
@@ -672,25 +672,25 @@ void HDF5Impl::dsread( OutputArray Array, String dslabel,
...
@@ -672,25 +672,25 @@ void HDF5Impl::dsread( OutputArray Array, String dslabel,
}
}
// overload
// overload
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
String
dslabel
)
const
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
const
String
&
dslabel
)
const
{
{
dswrite
(
Array
,
dslabel
,
NULL
,
NULL
);
dswrite
(
Array
,
dslabel
,
NULL
,
NULL
);
}
}
// overload
// overload
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
const
String
&
dslabel
,
const
int
*
dims_offset
)
const
const
int
*
dims_offset
)
const
{
{
dswrite
(
Array
,
dslabel
,
dims_offset
,
NULL
);
dswrite
(
Array
,
dslabel
,
dims_offset
,
NULL
);
}
}
// overload
// overload
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
const
String
&
dslabel
,
const
vector
<
int
>&
dims_offset
,
const
vector
<
int
>&
dims_offset
,
const
vector
<
int
>&
dims_counts
)
const
const
vector
<
int
>&
dims_counts
)
const
{
{
dswrite
(
Array
,
dslabel
,
&
dims_offset
[
0
],
&
dims_counts
[
0
]
);
dswrite
(
Array
,
dslabel
,
&
dims_offset
[
0
],
&
dims_counts
[
0
]
);
}
}
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dswrite
(
InputArray
Array
,
const
String
&
dslabel
,
const
int
*
dims_offset
,
const
int
*
dims_counts
)
const
const
int
*
dims_offset
,
const
int
*
dims_counts
)
const
{
{
// only Mat support
// only Mat support
...
@@ -715,6 +715,9 @@ void HDF5Impl::dswrite( InputArray Array, String dslabel,
...
@@ -715,6 +715,9 @@ void HDF5Impl::dswrite( InputArray Array, String dslabel,
dsdims
[
d
]
=
matrix
.
size
[
d
];
dsdims
[
d
]
=
matrix
.
size
[
d
];
}
}
// FixMe: If one of the groups the dataset belongs to does not exist,
// FixMe: dscreate() will fail!
// FixMe: It should be an error if the specified dataset has not been created instead of trying to create it
// pre-create dataset if needed
// pre-create dataset if needed
if
(
hlexists
(
dslabel
)
==
false
)
if
(
hlexists
(
dslabel
)
==
false
)
dscreate
(
n_dims
,
dsizes
,
matrix
.
type
(),
dslabel
);
dscreate
(
n_dims
,
dsizes
,
matrix
.
type
(),
dslabel
);
...
@@ -771,27 +774,27 @@ void HDF5Impl::dswrite( InputArray Array, String dslabel,
...
@@ -771,27 +774,27 @@ void HDF5Impl::dswrite( InputArray Array, String dslabel,
}
}
// overload
// overload
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
String
dslabel
)
const
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
const
String
&
dslabel
)
const
{
{
dsinsert
(
Array
,
dslabel
,
NULL
,
NULL
);
dsinsert
(
Array
,
dslabel
,
NULL
,
NULL
);
}
}
// overload
// overload
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
const
String
&
dslabel
,
const
int
*
dims_offset
)
const
const
int
*
dims_offset
)
const
{
{
dsinsert
(
Array
,
dslabel
,
dims_offset
,
NULL
);
dsinsert
(
Array
,
dslabel
,
dims_offset
,
NULL
);
}
}
// overload
// overload
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
const
String
&
dslabel
,
const
vector
<
int
>&
dims_offset
,
const
vector
<
int
>&
dims_offset
,
const
vector
<
int
>&
dims_counts
)
const
const
vector
<
int
>&
dims_counts
)
const
{
{
dsinsert
(
Array
,
dslabel
,
&
dims_offset
[
0
],
&
dims_counts
[
0
]
);
dsinsert
(
Array
,
dslabel
,
&
dims_offset
[
0
],
&
dims_counts
[
0
]
);
}
}
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
String
dslabel
,
void
HDF5Impl
::
dsinsert
(
InputArray
Array
,
const
String
&
dslabel
,
const
int
*
dims_offset
,
const
int
*
dims_counts
)
const
const
int
*
dims_offset
,
const
int
*
dims_counts
)
const
{
{
// only Mat support
// only Mat support
...
@@ -859,7 +862,7 @@ void HDF5Impl::dsinsert( InputArray Array, String dslabel,
...
@@ -859,7 +862,7 @@ void HDF5Impl::dsinsert( InputArray Array, String dslabel,
// add offset
// add offset
if
(
dims_offset
!=
NULL
)
if
(
dims_offset
!=
NULL
)
nwdims
[
d
]
+=
dims_offset
[
d
];
nwdims
[
d
]
+=
dims_offset
[
d
];
// add counts or matrixsize
// add counts or matrix
size
if
(
dims_counts
!=
NULL
)
if
(
dims_counts
!=
NULL
)
nwdims
[
d
]
+=
dims_counts
[
d
];
nwdims
[
d
]
+=
dims_counts
[
d
];
else
else
...
@@ -910,7 +913,7 @@ void HDF5Impl::dsinsert( InputArray Array, String dslabel,
...
@@ -910,7 +913,7 @@ void HDF5Impl::dsinsert( InputArray Array, String dslabel,
* std::vector<cv::KeyPoint>
* std::vector<cv::KeyPoint>
*/
*/
int
HDF5Impl
::
kpgetsize
(
String
kplabel
,
int
dims_flag
)
const
int
HDF5Impl
::
kpgetsize
(
const
String
&
kplabel
,
int
dims_flag
)
const
{
{
vector
<
int
>
sizes
=
dsgetsize
(
kplabel
,
dims_flag
);
vector
<
int
>
sizes
=
dsgetsize
(
kplabel
,
dims_flag
);
...
@@ -919,7 +922,7 @@ int HDF5Impl::kpgetsize( String kplabel, int dims_flag ) const
...
@@ -919,7 +922,7 @@ int HDF5Impl::kpgetsize( String kplabel, int dims_flag ) const
return
sizes
[
0
];
return
sizes
[
0
];
}
}
void
HDF5Impl
::
kpcreate
(
const
int
size
,
String
kplabel
,
void
HDF5Impl
::
kpcreate
(
const
int
size
,
const
String
&
kplabel
,
const
int
compresslevel
,
const
int
chunks
)
const
const
int
compresslevel
,
const
int
chunks
)
const
{
{
// size valid
// size valid
...
@@ -992,7 +995,7 @@ void HDF5Impl::kpcreate( const int size, String kplabel,
...
@@ -992,7 +995,7 @@ void HDF5Impl::kpcreate( const int size, String kplabel,
H5Sclose
(
dspace
);
H5Sclose
(
dspace
);
}
}
void
HDF5Impl
::
kpwrite
(
const
vector
<
KeyPoint
>
keypoints
,
String
kplabel
,
void
HDF5Impl
::
kpwrite
(
const
vector
<
KeyPoint
>
keypoints
,
const
String
&
kplabel
,
const
int
offset
,
const
int
counts
)
const
const
int
offset
,
const
int
counts
)
const
{
{
CV_Assert
(
keypoints
.
size
()
>
0
);
CV_Assert
(
keypoints
.
size
()
>
0
);
...
@@ -1048,7 +1051,7 @@ void HDF5Impl::kpwrite( const vector<KeyPoint> keypoints, String kplabel,
...
@@ -1048,7 +1051,7 @@ void HDF5Impl::kpwrite( const vector<KeyPoint> keypoints, String kplabel,
H5Dclose
(
dsdata
);
H5Dclose
(
dsdata
);
}
}
void
HDF5Impl
::
kpinsert
(
const
vector
<
KeyPoint
>
keypoints
,
String
kplabel
,
void
HDF5Impl
::
kpinsert
(
const
vector
<
KeyPoint
>
keypoints
,
const
String
&
kplabel
,
const
int
offset
,
const
int
counts
)
const
const
int
offset
,
const
int
counts
)
const
{
{
CV_Assert
(
keypoints
.
size
()
>
0
);
CV_Assert
(
keypoints
.
size
()
>
0
);
...
@@ -1132,7 +1135,7 @@ void HDF5Impl::kpinsert( const vector<KeyPoint> keypoints, String kplabel,
...
@@ -1132,7 +1135,7 @@ void HDF5Impl::kpinsert( const vector<KeyPoint> keypoints, String kplabel,
H5Dclose
(
dsdata
);
H5Dclose
(
dsdata
);
}
}
void
HDF5Impl
::
kpread
(
vector
<
KeyPoint
>&
keypoints
,
String
kplabel
,
void
HDF5Impl
::
kpread
(
vector
<
KeyPoint
>&
keypoints
,
const
String
&
kplabel
,
const
int
offset
,
const
int
counts
)
const
const
int
offset
,
const
int
counts
)
const
{
{
CV_Assert
(
keypoints
.
size
()
==
0
);
CV_Assert
(
keypoints
.
size
()
==
0
);
...
@@ -1193,7 +1196,7 @@ void HDF5Impl::kpread( vector<KeyPoint>& keypoints, String kplabel,
...
@@ -1193,7 +1196,7 @@ void HDF5Impl::kpread( vector<KeyPoint>& keypoints, String kplabel,
H5Dclose
(
dsdata
);
H5Dclose
(
dsdata
);
}
}
CV_EXPORTS
Ptr
<
HDF5
>
open
(
String
HDF5Filename
)
CV_EXPORTS
Ptr
<
HDF5
>
open
(
const
String
&
HDF5Filename
)
{
{
return
makePtr
<
HDF5Impl
>
(
HDF5Filename
);
return
makePtr
<
HDF5Impl
>
(
HDF5Filename
);
}
}
...
...
modules/hdf/test/test_hdf5.cpp
0 → 100644
View file @
f03e415e
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
/**
* @file test_hdf5.cpp
* @author Fangjun Kuang <csukuangfj dot at gmail dot com>
* @date December 2017
*
*/
#include<stdio.h> // for remove()
#include "test_precomp.hpp"
#include <vector>
using namespace cv;

struct HDF5_Test : public testing::Test
{
    virtual void SetUp()
    {
        m_filename = "test.h5";

        // 0 1 2
        // 3 4 5
        m_single_channel.create(2, 3, CV_32F);
        for (size_t i = 0; i < m_single_channel.total(); i++)
        {
            ((float*)m_single_channel.data)[i] = i;
        }

        // 0 1 2 3 4 5
        // 6 7 8 9 10 11
        m_two_channels.create(2, 3, CV_32SC2);
        for (size_t i = 0; i < m_two_channels.total()*m_two_channels.channels(); i++)
        {
            ((int*)m_two_channels.data)[i] = (int)i;
        }
    }

    //! Remove the hdf5 file
    void reset()
    {
        remove(m_filename.c_str());
    }

    String m_filename; //!< filename for testing
    Ptr<hdf::HDF5> m_hdf_io; //!< HDF5 file pointer
    Mat m_single_channel; //!< single channel matrix for test
    Mat m_two_channels; //!< two-channel matrix for test
};

TEST_F(HDF5_Test, create_a_single_group)
{
    reset();

    String group_name = "parent";
    m_hdf_io = hdf::open(m_filename);
    m_hdf_io->grcreate(group_name);

    EXPECT_EQ(m_hdf_io->hlexists(group_name), true);
    EXPECT_EQ(m_hdf_io->hlexists("child"), false);

    m_hdf_io->close();
}

TEST_F(HDF5_Test, create_a_child_group)
{
    reset();

    String parent = "parent";
    String child = parent + "/child";
    m_hdf_io = hdf::open(m_filename);
    m_hdf_io->grcreate(parent);
    m_hdf_io->grcreate(child);

    EXPECT_EQ(m_hdf_io->hlexists(parent), true);
    EXPECT_EQ(m_hdf_io->hlexists(child), true);

    m_hdf_io->close();
}

TEST_F(HDF5_Test, create_dataset)
{
    reset();

    String dataset_single_channel = "/single";
    String dataset_two_channels = "/dual";

    m_hdf_io = hdf::open(m_filename);

    m_hdf_io->dscreate(m_single_channel.rows, m_single_channel.cols,
                       m_single_channel.type(), dataset_single_channel);
    m_hdf_io->dscreate(m_two_channels.rows, m_two_channels.cols,
                       m_two_channels.type(), dataset_two_channels);

    EXPECT_EQ(m_hdf_io->hlexists(dataset_single_channel), true);
    EXPECT_EQ(m_hdf_io->hlexists(dataset_two_channels), true);

    std::vector<int> dims;

    dims = m_hdf_io->dsgetsize(dataset_single_channel, hdf::HDF5::H5_GETDIMS);
    EXPECT_EQ(dims.size(), (size_t)2);
    EXPECT_EQ(dims[0], m_single_channel.rows);
    EXPECT_EQ(dims[1], m_single_channel.cols);

    dims = m_hdf_io->dsgetsize(dataset_two_channels, hdf::HDF5::H5_GETDIMS);
    EXPECT_EQ(dims.size(), (size_t)2);
    EXPECT_EQ(dims[0], m_two_channels.rows);
    EXPECT_EQ(dims[1], m_two_channels.cols);

    int type;
    type = m_hdf_io->dsgettype(dataset_single_channel);
    EXPECT_EQ(type, m_single_channel.type());

    type = m_hdf_io->dsgettype(dataset_two_channels);
    EXPECT_EQ(type, m_two_channels.type());

    m_hdf_io->close();
}

TEST_F(HDF5_Test, write_read_dataset_1)
{
    reset();

    String dataset_single_channel = "/single";
    String dataset_two_channels = "/dual";

    m_hdf_io = hdf::open(m_filename);

    // since the dataset is under the root group, it is created by dswrite() automatically.
    m_hdf_io->dswrite(m_single_channel, dataset_single_channel);
    m_hdf_io->dswrite(m_two_channels, dataset_two_channels);

    EXPECT_EQ(m_hdf_io->hlexists(dataset_single_channel), true);
    EXPECT_EQ(m_hdf_io->hlexists(dataset_two_channels), true);

    // read single channel matrix
    Mat single;
    m_hdf_io->dsread(single, dataset_single_channel);
    EXPECT_EQ(single.type(), m_single_channel.type());
    EXPECT_EQ(single.size(), m_single_channel.size());
    EXPECT_NEAR(norm(single - m_single_channel), 0, 1e-10);

    // read dual channel matrix
    Mat dual;
    m_hdf_io->dsread(dual, dataset_two_channels);
    EXPECT_EQ(dual.type(), m_two_channels.type());
    EXPECT_EQ(dual.size(), m_two_channels.size());
    EXPECT_NEAR(norm(dual - m_two_channels), 0, 1e-10);

    m_hdf_io->close();
}

TEST_F(HDF5_Test, write_read_dataset_2)
{
    reset();
    // create the dataset manually if it is not inside
    // the root group
    String parent = "/parent";
    String dataset_single_channel = parent + "/single";
    String dataset_two_channels = parent + "/dual";

    m_hdf_io = hdf::open(m_filename);
    m_hdf_io->grcreate(parent);
    EXPECT_EQ(m_hdf_io->hlexists(parent), true);

    m_hdf_io->dscreate(m_single_channel.rows, m_single_channel.cols,
                       m_single_channel.type(), dataset_single_channel);
    m_hdf_io->dscreate(m_two_channels.rows, m_two_channels.cols,
                       m_two_channels.type(), dataset_two_channels);

    EXPECT_EQ(m_hdf_io->hlexists(dataset_single_channel), true);
    EXPECT_EQ(m_hdf_io->hlexists(dataset_two_channels), true);

    m_hdf_io->dswrite(m_single_channel, dataset_single_channel);
    m_hdf_io->dswrite(m_two_channels, dataset_two_channels);

    EXPECT_EQ(m_hdf_io->hlexists(dataset_single_channel), true);
    EXPECT_EQ(m_hdf_io->hlexists(dataset_two_channels), true);

    // read single channel matrix
    Mat single;
    m_hdf_io->dsread(single, dataset_single_channel);
    EXPECT_EQ(single.type(), m_single_channel.type());
    EXPECT_EQ(single.size(), m_single_channel.size());
    EXPECT_NEAR(norm(single - m_single_channel), 0, 1e-10);

    // read dual channel matrix
    Mat dual;
    m_hdf_io->dsread(dual, dataset_two_channels);
    EXPECT_EQ(dual.type(), m_two_channels.type());
    EXPECT_EQ(dual.size(), m_two_channels.size());
    EXPECT_NEAR(norm(dual - m_two_channels), 0, 1e-10);

    m_hdf_io->close();
}
modules/hdf/test/test_main.cpp
0 → 100644
View file @
f03e415e
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#include "test_precomp.hpp"
CV_TEST_MAIN("cv")
\ No newline at end of file
modules/hdf/test/test_precomp.hpp
0 → 100644
View file @
f03e415e
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
#ifdef __GNUC__
# pragma GCC diagnostic ignored "-Wmissing-declarations"
# if defined __clang__ || defined __APPLE__
# pragma GCC diagnostic ignored "-Wmissing-prototypes"
# pragma GCC diagnostic ignored "-Wextra"
# endif
#endif
#ifndef __OPENCV_TEST_PRECOMP_HPP__
#define __OPENCV_TEST_PRECOMP_HPP__
#include "opencv2/ts.hpp"
#include "opencv2/core.hpp"
#include "opencv2/hdf.hpp"
#endif
modules/hdf/tutorials/create_groups/how_to_create_groups.markdown
0 → 100644
View file @
f03e415e
Creating Groups {#tutorial_hdf_create_groups}
===============================
Goal
----
This tutorial will show you:

- How to create an HDF5 file?
- How to create a group?
- How to check whether a given group exists or not?
- How to create a subgroup?
Source Code
----
The following code creates two groups: `Group1` and `SubGroup1`, where
`SubGroup1` is a child of `Group1`.

You can download the code from [here][1] or find it in the file
`modules/hdf/samples/create_groups.cpp` of the opencv_contrib source code library.
@snippet samples/create_groups.cpp tutorial
Explanation
----
First, we create an HDF5 file
@snippet samples/create_groups.cpp tutorial_create_file
If the given file does not exist, it will be created; otherwise, it is opened for reading and writing.
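
As a quick illustration, this step boils down to a single call; a minimal sketch, assuming the filename `mytest.h5` used by the sample (any path works):
@code
#include <opencv2/hdf.hpp>

cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open( "mytest.h5" );
@endcode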
Next, we create the group `Group1`
@snippet samples/create_groups.cpp tutorial_create_group

Note that we have to use the function `hlexists` to check whether `/Group1`
already exists before creating it. A group cannot be created with a name
that already exists; otherwise, an error will occur.
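
The existence check and the creation step can be combined into a small guard; a minimal sketch, reusing the `h5io` pointer from the sketch above:
@code
if ( !h5io->hlexists( "/Group1" ) )
    h5io->grcreate( "/Group1" );
@endcode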
Then, we create the subgroup named `SubGroup1`. In order to
indicate that it is a subgroup of `Group1`, we have to
use the group name `/Group1/SubGroup1`:
@snippet samples/create_groups.cpp tutorial_create_subgroup

Note that before creating a subgroup, we have to make sure
that its parent group exists; otherwise, an error will occur.
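
The same guard works for the subgroup; a sketch in which the full path names the parent explicitly:
@code
if ( !h5io->hlexists( "/Group1/SubGroup1" ) )
    h5io->grcreate( "/Group1/SubGroup1" );
@endcode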
In the end, we have to close the file
@snippet samples/create_groups.cpp tutorial_close_file
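
Putting the steps together, a minimal end-to-end sketch (same file and group names as above, mirroring what the sample does) could look like this:
@code
#include <opencv2/hdf.hpp>

int main()
{
    // open or create the file
    cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open( "mytest.h5" );

    // create /Group1 if it does not exist yet
    if ( !h5io->hlexists( "/Group1" ) )
        h5io->grcreate( "/Group1" );

    // create the subgroup; its parent group must already exist
    if ( !h5io->hlexists( "/Group1/SubGroup1" ) )
        h5io->grcreate( "/Group1/SubGroup1" );

    h5io->close();
    return 0;
}
@endcode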
Result
----
There are many tools that can be used to inspect a given HDF5 file, such
as HDFView and h5dump. If you are using Ubuntu, you can install
them with the following command:
@code
sudo apt-get install hdf5-tools hdfview
@endcode
There are also binaries available from the HDF Group official website
<https://support.hdfgroup.org/HDF5/Tutor/tools.html>.
The following figure shows the result visualized with the tool HDFView:
![Figure 1: Results of creating groups and subgroups](pics/create_groups.png)
The output of `h5dump` is:
@code
$ h5dump mytest.h5
HDF5 "mytest.h5" {
GROUP "/" {
   GROUP "Group1" {
      GROUP "SubGroup1" {
      }
   }
}
}
@endcode
[1]: https://github.com/opencv/opencv_contrib/tree/master/modules/hdf/samples/create_groups.cpp
modules/hdf/tutorials/create_read_write_dataset/create_read_write_dataset.markdown
0 → 100644
View file @
f03e415e
Creating, Writing and Reading Datasets {#tutorial_hdf_create_read_write_datasets}
===============================
Goal
----
This tutorial shows you:
- How to create a dataset?
- How to write a `cv::Mat` to a dataset?
- How to read a `cv::Mat` from a dataset?
@note Currently, only reading and writing `cv::Mat` is supported, and the matrix has to be
continuous in memory. Support for other data types has not been implemented yet.
Source Code
----
The following code demonstrates writing a single channel
matrix and a two-channel matrix to datasets and then reading them
back.

You can download the code from [here][1] or find it in the file
`modules/hdf/samples/create_read_write_datasets.cpp` of the opencv_contrib
source code library.
@snippet samples/create_read_write_datasets.cpp tutorial
Explanation
----
The first step for creating a dataset is to open the file
@snippet samples/create_read_write_datasets.cpp tutorial_open_file
For the function `write_root_group_single_channel()`, since
the dataset name `/single` lies inside the root group, we can use
@snippet samples/create_read_write_datasets.cpp tutorial_write_root_single_channel
to write the data directly to the dataset without creating
it beforehand: it is created automatically inside `HDF5::dswrite()`.

@warning This applies only to datasets that reside inside the root group.
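
As an illustration, a minimal sketch of this write path (the matrix values are arbitrary; `h5io` denotes the pointer returned by `cv::hdf::open`):
@code
cv::Mat data = (cv::Mat_<float>(2, 3) << 0, 1, 2, 3, 4, 5);
h5io->dswrite( data, "/single" );  // a dataset under the root group is created on the fly
@endcode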
Of course, we can create the dataset by ourselves:
@snippet samples/create_read_write_datasets.cpp tutorial_create_dataset
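
For a dataset below a group, a hedged sketch of the manual route (the group name `/Group1` is chosen here for illustration; the parent group is created first, since `dscreate()` fails when one of the parent groups is missing, as noted in the source):
@code
if ( !h5io->hlexists( "/Group1" ) )
    h5io->grcreate( "/Group1" );
if ( !h5io->hlexists( "/Group1/single" ) )
    h5io->dscreate( data.rows, data.cols, data.type(), "/Group1/single" );
h5io->dswrite( data, "/Group1/single" );
@endcode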
To read data from a dataset, we use
@snippet samples/create_read_write_datasets.cpp tutorial_read_dataset
by specifying the name of the dataset.
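
A minimal sketch of the read side, reusing the dataset name from above:
@code
cv::Mat out;
h5io->dsread( out, "/single" );  // out is allocated with the stored size and type
@endcode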
We can check that the data read out is exactly the data written before by using
@snippet samples/create_read_write_datasets.cpp tutorial_check_result
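
One possible check, comparing the written and the read matrix through the norm of their difference:
@code
double diff = cv::norm( data - out );
CV_Assert( diff < 1e-10 );
@endcode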
Results
----
Figure 1 shows the result for the file `root_group_single_channel`, visualized
using the tool HDFView. The results for datasets that are not direct children
of the root group are given in Figure 2 and Figure 3, respectively.
![Figure 1: Result for writing a single channel matrix to a dataset inside the root group](pics/root_group_single_channel.png)

![Figure 2: Result for writing a single channel matrix to a dataset not in the root group](pics/single_channel.png)

![Figure 3: Result for writing a two-channel matrix to a dataset not in the root group](pics/two_channels.png)
[1]: https://github.com/opencv/opencv_contrib/tree/master/modules/hdf/samples/create_read_write_datasets.cpp
modules/hdf/tutorials/table_of_content_hdf.markdown
0 → 100644
View file @
f03e415e
The Hierarchical Data Format (hdf) I/O {#tutorial_table_of_content_hdf}
=====================================
Here you will learn how to read and write an HDF5 file using OpenCV.
Currently, only `cv::Mat` is supported.

Note that the HDF5 library has to be installed on your system
to use this module.
- @subpage tutorial_hdf_create_groups

    *Compatibility:* \> OpenCV 3.0

    *Author:* Fangjun Kuang

    You will learn how to create groups and subgroups.

- @subpage tutorial_hdf_create_read_write_datasets

    *Compatibility:* \> OpenCV 3.0

    *Author:* Fangjun Kuang

    You will learn how to create, read and write datasets.