opencv_contrib / Commits / 076f53d6

Commit 076f53d6, authored Jul 27, 2015 by StevenPuttemans
Parent commit: aa11ac48

    fixing facerecognizer tutorials and interface

Showing 14 changed files with 59 additions and 402 deletions (+59, -402)
Changed files:

    modules/face/CMakeLists.txt                    +2    -1
    modules/face/doc/etc/at.txt                    +0    -0    (deleted)
    modules/face/doc/src/facerec_demo.cpp          +0    -169  (deleted)
    modules/face/samples/CMakeLists.txt            +0    -0    (moved)
    modules/face/samples/etc/create_csv.py         +0    -0    (moved)
    modules/face/samples/etc/crop_face.py          +0    -0    (moved)
    modules/face/samples/facerec_at_t.txt          +0    -0    (deleted)
    modules/face/samples/facerec_eigenfaces.cpp    +10   -9
    modules/face/samples/facerec_fisherfaces.cpp   +10   -9
    modules/face/samples/facerec_lbph.cpp          +12   -21
    modules/face/samples/facerec_save_load.cpp     +10   -9
    modules/face/samples/facerec_video.cpp         +6    -6
    modules/face/samples/src/facerec_demo.cpp      +0    -169  (deleted)
    modules/face/tutorials/face_tutorial.markdown  +9    -9
modules/face/CMakeLists.txt

 set(the_description "Face recognition etc")
-ocv_define_module(face opencv_core opencv_imgproc WRAP python)
+ocv_define_module(face opencv_core opencv_imgproc opencv_objdetect WRAP python)
+# NOTE: objdetect module is needed for one of the samples
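For context, the objdetect dependency added here is what the facerec_video.cpp sample further down in this commit needs for cv::CascadeClassifier based face detection. A minimal sketch of that usage, with placeholder file paths that are not part of the commit:

#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/objdetect.hpp"   // provided by the new opencv_objdetect dependency
#include <iostream>
#include <vector>

int main() {
    cv::CascadeClassifier haar_cascade;
    if (!haar_cascade.load("haarcascade_frontalface_default.xml"))  // placeholder cascade path
        return 1;
    cv::Mat frame = cv::imread("frame.png");                        // placeholder input image
    if (frame.empty())
        return 1;
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    std::vector<cv::Rect> faces;
    haar_cascade.detectMultiScale(gray, faces);                     // needs opencv_objdetect at link time
    std::cout << "Detected " << faces.size() << " face(s)" << std::endl;
    return 0;
}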
modules/face/doc/etc/at.txt (deleted, 100644 → 0; diff collapsed)
modules/face/doc/src/facerec_demo.cpp (deleted, 100644 → 0)

/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */
#include "opencv2/core.hpp"
#include "opencv2/face.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace cv::face;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch(src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc != 2) {
        cout << "usage: " << argv[0] << " <csv.ext>" << endl;
        exit(1);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size:
    int height = images[0].rows;
    // The following lines simply get the last images from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create an Eigenfaces model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    // This here is a full PCA, if you just want to keep
    // 10 principal components (read Eigenfaces), then call
    // the factory method like this:
    //
    //      cv::createEigenFaceRecognizer(10);
    //
    // If you want to create a FaceRecognizer with a
    // confidennce threshold, call it with:
    //
    //      cv::createEigenFaceRecognizer(10, 123.0);
    //
    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
    model->train(images, labels);
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Sometimes you'll need to get/set internal model data,
    // which isn't exposed by the public cv::FaceRecognizer.
    // Since each cv::FaceRecognizer is derived from a
    // cv::Algorithm, you can query the data.
    //
    // First we'll use it to set the threshold of the FaceRecognizer
    // to 0.0 without retraining the model. This can be useful if
    // you are evaluating the model:
    //
    model->set("threshold", 0.0);
    // Now the threshold of this model is set to 0.0. A prediction
    // now returns -1, as it's impossible to have a distance below
    // it
    predictedLabel = model->predict(testSample);
    cout << "Predicted class = " << predictedLabel << endl;
    // Here is how to get the eigenvalues of this Eigenfaces model:
    Mat eigenvalues = model->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Eigenfaces):
    Mat W = model->getMat("eigenvectors");
    // From this we will display the (at most) first 10 Eigenfaces:
    for (int i = 0; i < min(10, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // get eigenvector #i
        Mat ev = W.col(i).clone();
        // Reshape to original size & normalize to [0...255] for imshow.
        Mat grayscale = norm_0_255(ev.reshape(1, height));
        // Show the image & apply a Jet colormap for better sensing.
        Mat cgrayscale;
        applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
        imshow(format("%d", i), cgrayscale);
    }
    waitKey(0);
    return 0;
}
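The deleted demo above still relies on the string-based cv::Algorithm accessors (model->set("threshold", 0.0), model->getMat("eigenvalues")), which is exactly the interface the remaining samples in this commit move away from. A minimal sketch of the equivalent calls with the typed interface used below; it assumes images, labels and testSample have already been loaded, for example with a read_csv helper like the one above:

#include "opencv2/core.hpp"
#include "opencv2/face.hpp"
#include <iostream>
#include <vector>

using namespace cv;
using namespace cv::face;

// Sketch only: images, labels and testSample are assumed to be filled elsewhere.
static int run_with_typed_interface(const std::vector<Mat>& images,
                                    const std::vector<int>& labels,
                                    const Mat& testSample) {
    // Typed handle instead of the generic Ptr<FaceRecognizer>:
    Ptr<BasicFaceRecognizer> eigen = createEigenFaceRecognizer();
    eigen->train(images, labels);

    // Old: eigen->getMat("eigenvalues") / getMat("eigenvectors") / getMat("mean")
    Mat eigenvalues = eigen->getEigenValues();
    Mat W           = eigen->getEigenVectors();
    Mat mean        = eigen->getMean();
    std::cout << "Kept " << W.cols << " eigenvectors, " << eigenvalues.total()
              << " eigenvalues, mean of size " << mean.total() << std::endl;

    // The LBPH recognizer exposes typed setters as well.
    // Old: lbph->set("threshold", 0.0)
    Ptr<LBPHFaceRecognizer> lbph = createLBPHFaceRecognizer();
    lbph->train(images, labels);
    lbph->setThreshold(0.0);

    return eigen->predict(testSample);
}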
modules/face/samples/src/CMakeLists.txt → modules/face/samples/CMakeLists.txt (file moved)
modules/face/samples/src/create_csv.py → modules/face/samples/etc/create_csv.py (file moved)
modules/face/samples/src/crop_face.py → modules/face/samples/etc/crop_face.py (file moved)
modules/face/samples/facerec_at_t.txt (deleted, 100644 → 0; diff collapsed)
modules/face/samples/src/facerec_eigenfaces.cpp → modules/face/samples/facerec_eigenfaces.cpp

@@ -19,6 +19,7 @@
 #include "opencv2/core.hpp"
 #include "opencv2/face.hpp"
 #include "opencv2/highgui.hpp"
+#include "opencv2/imgproc.hpp"
 #include <iostream>
 #include <fstream>

@@ -50,7 +51,7 @@ static void read_csv(const string& filename, vector<Mat>& images, vector<int>& l
     std::ifstream file(filename.c_str(), ifstream::in);
     if (!file) {
         string error_message = "No valid input file was given, please check the given filename.";
-        CV_Error(CV_StsBadArg, error_message);
+        CV_Error(Error::StsBadArg, error_message);
     }
     string line, path, classlabel;
     while (getline(file, line)) {

@@ -92,7 +93,7 @@ int main(int argc, const char *argv[]) {
     // Quit if there are not enough images for this demo.
     if(images.size() <= 1) {
         string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
-        CV_Error(CV_StsError, error_message);
+        CV_Error(Error::StsError, error_message);
     }
     // Get the height from the first image. We'll need this
     // later in code to reshape the images to their original

@@ -101,7 +102,7 @@ int main(int argc, const char *argv[]) {
     // The following lines simply get the last images from
     // your dataset and remove it from the vector. This is
     // done, so that the training data (which we learn the
-    // cv::FaceRecognizer on) and the test data we test
+    // cv::BasicFaceRecognizer on) and the test data we test
     // the model with, do not overlap.
     Mat testSample = images[images.size() - 1];
     int testLabel = labels[labels.size() - 1];

@@ -126,7 +127,7 @@ int main(int argc, const char *argv[]) {
     //
     //      cv::createEigenFaceRecognizer(0, 123.0);
     //
-    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
+    Ptr<BasicFaceRecognizer> model = createEigenFaceRecognizer();
     model->train(images, labels);
     // The following line predicts the label of a given
     // test image:

@@ -141,11 +142,11 @@ int main(int argc, const char *argv[]) {
     string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
     cout << result_message << endl;
     // Here is how to get the eigenvalues of this Eigenfaces model:
-    Mat eigenvalues = model->getMat("eigenvalues");
+    Mat eigenvalues = model->getEigenValues();
     // And we can do the same to display the Eigenvectors (read Eigenfaces):
-    Mat W = model->getMat("eigenvectors");
+    Mat W = model->getEigenVectors();
     // Get the sample mean from the training data
-    Mat mean = model->getMat("mean");
+    Mat mean = model->getMean();
     // Display or save:
     if(argc == 2) {
         imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));

@@ -175,8 +176,8 @@ int main(int argc, const char *argv[]) {
     for(int num_components = min(W.cols, 10); num_components < min(W.cols, 300); num_components+=15) {
         // slice the eigenvectors from the model
         Mat evs = Mat(W, Range::all(), Range(0, num_components));
-        Mat projection = subspaceProject(evs, mean, images[0].reshape(1,1));
-        Mat reconstruction = subspaceReconstruct(evs, mean, projection);
+        Mat projection = LDA::subspaceProject(evs, mean, images[0].reshape(1,1));
+        Mat reconstruction = LDA::subspaceReconstruct(evs, mean, projection);
         // Normalize the result:
         reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
         // Display or save:
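Since the projection helpers are now reached through the LDA class and the model data through typed getters, a single reconstruction step can be factored into a small helper. This is a sketch under those assumptions: model is a trained Ptr<BasicFaceRecognizer> as in the sample above, and the caller can feed the result through norm_0_255() for display:

#include "opencv2/core.hpp"
#include "opencv2/face.hpp"
#include <algorithm>

using namespace cv;
using namespace cv::face;

// Reconstruct a face from its projection onto the first num_components eigenvectors.
static Mat reconstruct(const Ptr<BasicFaceRecognizer>& model, const Mat& face, int num_components) {
    Mat W    = model->getEigenVectors();
    Mat mean = model->getMean();
    // Slice the leading eigenvectors from the model:
    Mat evs = Mat(W, Range::all(), Range(0, std::min(num_components, W.cols)));
    Mat projection     = LDA::subspaceProject(evs, mean, face.reshape(1, 1));
    Mat reconstruction = LDA::subspaceReconstruct(evs, mean, projection);
    return reconstruction.reshape(1, face.rows);
}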
modules/face/samples/src/facerec_fisherfaces.cpp → modules/face/samples/facerec_fisherfaces.cpp

@@ -19,6 +19,7 @@
 #include "opencv2/core.hpp"
 #include "opencv2/face.hpp"
 #include "opencv2/highgui.hpp"
+#include "opencv2/imgproc.hpp"
 #include <iostream>
 #include <fstream>

@@ -50,7 +51,7 @@ static void read_csv(const string& filename, vector<Mat>& images, vector<int>& l
     std::ifstream file(filename.c_str(), ifstream::in);
     if (!file) {
         string error_message = "No valid input file was given, please check the given filename.";
-        CV_Error(CV_StsBadArg, error_message);
+        CV_Error(Error::StsBadArg, error_message);
     }
     string line, path, classlabel;
     while (getline(file, line)) {

@@ -92,7 +93,7 @@ int main(int argc, const char *argv[]) {
     // Quit if there are not enough images for this demo.
     if(images.size() <= 1) {
         string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
-        CV_Error(CV_StsError, error_message);
+        CV_Error(Error::StsError, error_message);
     }
     // Get the height from the first image. We'll need this
     // later in code to reshape the images to their original

@@ -101,7 +102,7 @@ int main(int argc, const char *argv[]) {
     // The following lines simply get the last images from
     // your dataset and remove it from the vector. This is
     // done, so that the training data (which we learn the
-    // cv::FaceRecognizer on) and the test data we test
+    // cv::BasicFaceRecognizer on) and the test data we test
     // the model with, do not overlap.
     Mat testSample = images[images.size() - 1];
     int testLabel = labels[labels.size() - 1];

@@ -125,7 +126,7 @@ int main(int argc, const char *argv[]) {
     //
     //      cv::createFisherFaceRecognizer(0, 123.0);
     //
-    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
+    Ptr<BasicFaceRecognizer> model = createFisherFaceRecognizer();
     model->train(images, labels);
     // The following line predicts the label of a given
     // test image:

@@ -140,11 +141,11 @@ int main(int argc, const char *argv[]) {
     string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
     cout << result_message << endl;
     // Here is how to get the eigenvalues of this Eigenfaces model:
-    Mat eigenvalues = model->getMat("eigenvalues");
+    Mat eigenvalues = model->getEigenValues();
     // And we can do the same to display the Eigenvectors (read Eigenfaces):
-    Mat W = model->getMat("eigenvectors");
+    Mat W = model->getEigenVectors();
     // Get the sample mean from the training data
-    Mat mean = model->getMat("mean");
+    Mat mean = model->getMean();
     // Display or save:
     if(argc == 2) {
         imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));

@@ -173,8 +174,8 @@ int main(int argc, const char *argv[]) {
     for(int num_component = 0; num_component < min(16, W.cols); num_component++) {
         // Slice the Fisherface from the model:
         Mat ev = W.col(num_component);
-        Mat projection = subspaceProject(ev, mean, images[0].reshape(1,1));
-        Mat reconstruction = subspaceReconstruct(ev, mean, projection);
+        Mat projection = LDA::subspaceProject(ev, mean, images[0].reshape(1,1));
+        Mat reconstruction = LDA::subspaceReconstruct(ev, mean, projection);
         // Normalize the result:
         reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
         // Display or save:
modules/face/samples/src/facerec_lbph.cpp → modules/face/samples/facerec_lbph.cpp

@@ -32,7 +32,7 @@ static void read_csv(const string& filename, vector<Mat>& images, vector<int>& l
     std::ifstream file(filename.c_str(), ifstream::in);
     if (!file) {
         string error_message = "No valid input file was given, please check the given filename.";
-        CV_Error(CV_StsBadArg, error_message);
+        CV_Error(Error::StsBadArg, error_message);
     }
     string line, path, classlabel;
     while (getline(file, line)) {

@@ -70,16 +70,12 @@ int main(int argc, const char *argv[]) {
     // Quit if there are not enough images for this demo.
     if(images.size() <= 1) {
         string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
-        CV_Error(CV_StsError, error_message);
+        CV_Error(Error::StsError, error_message);
     }
-    // Get the height from the first image. We'll need this
-    // later in code to reshape the images to their original
-    // size:
-    int height = images[0].rows;
     // The following lines simply get the last images from
     // your dataset and remove it from the vector. This is
     // done, so that the training data (which we learn the
-    // cv::FaceRecognizer on) and the test data we test
+    // cv::LBPHFaceRecognizer on) and the test data we test
     // the model with, do not overlap.
     Mat testSample = images[images.size() - 1];
     int testLabel = labels[labels.size() - 1];

@@ -107,7 +103,7 @@ int main(int argc, const char *argv[]) {
     //
     //      cv::createLBPHFaceRecognizer(1,8,8,8,123.0)
     //
-    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
+    Ptr<LBPHFaceRecognizer> model = createLBPHFaceRecognizer();
     model->train(images, labels);
     // The following line predicts the label of a given
     // test image:

@@ -121,16 +117,11 @@ int main(int argc, const char *argv[]) {
     //
     string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
     cout << result_message << endl;
-    // Sometimes you'll need to get/set internal model data,
-    // which isn't exposed by the public cv::FaceRecognizer.
-    // Since each cv::FaceRecognizer is derived from a
-    // cv::Algorithm, you can query the data.
-    //
-    // First we'll use it to set the threshold of the FaceRecognizer
+    // First we'll use it to set the threshold of the LBPHFaceRecognizer
     // to 0.0 without retraining the model. This can be useful if
     // you are evaluating the model:
     //
-    model->set("threshold", 0.0);
+    model->setThreshold(0.0);
     // Now the threshold of this model is set to 0.0. A prediction
     // now returns -1, as it's impossible to have a distance below
     // it

@@ -142,14 +133,14 @@ int main(int argc, const char *argv[]) {
     // within the model:
     cout << "Model Information:" << endl;
     string model_info = format("\tLBPH(radius=%i, neighbors=%i, grid_x=%i, grid_y=%i, threshold=%.2f)",
-            model->getInt("radius"), model->getInt("neighbors"), model->getInt("grid_x"), model->getInt("grid_y"), model->getDouble("threshold"));
+            model->getRadius(), model->getNeighbors(), model->getGridX(), model->getGridY(), model->getThreshold());
     cout << model_info << endl;
     // We could get the histograms for example:
-    vector<Mat> histograms = model->getMatVector("histograms");
+    vector<Mat> histograms = model->getHistograms();
     // But should I really visualize it? Probably the length is interesting:
     cout << "Size of the histograms: " << histograms[0].total() << endl;
     return 0;
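With the LBPH parameters and histograms now exposed through typed getters, inspecting a trained model no longer goes through getInt/getDouble/getMatVector. A small sketch, assuming model was created with createLBPHFaceRecognizer() and trained as in the sample:

#include "opencv2/core.hpp"
#include "opencv2/face.hpp"
#include <iostream>
#include <vector>

using namespace cv;
using namespace cv::face;
using namespace std;

// Print the LBPH parameters of a trained model through the typed getters.
static void print_lbph_info(const Ptr<LBPHFaceRecognizer>& model) {
    string model_info = format("\tLBPH(radius=%i, neighbors=%i, grid_x=%i, grid_y=%i, threshold=%.2f)",
            model->getRadius(), model->getNeighbors(),
            model->getGridX(), model->getGridY(), model->getThreshold());
    cout << model_info << endl;
    vector<Mat> histograms = model->getHistograms();
    if (!histograms.empty())
        cout << "Size of the histograms: " << histograms[0].total() << endl;
}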
modules/face/samples/src/facerec_save_load.cpp → modules/face/samples/facerec_save_load.cpp

@@ -19,6 +19,7 @@
 #include "opencv2/core.hpp"
 #include "opencv2/face.hpp"
 #include "opencv2/highgui.hpp"
+#include "opencv2/imgproc.hpp"
 #include <iostream>
 #include <fstream>

@@ -50,7 +51,7 @@ static void read_csv(const string& filename, vector<Mat>& images, vector<int>& l
     std::ifstream file(filename.c_str(), ifstream::in);
     if (!file) {
         string error_message = "No valid input file was given, please check the given filename.";
-        CV_Error(CV_StsBadArg, error_message);
+        CV_Error(Error::StsBadArg, error_message);
     }
     string line, path, classlabel;
     while (getline(file, line)) {

@@ -92,7 +93,7 @@ int main(int argc, const char *argv[]) {
     // Quit if there are not enough images for this demo.
     if(images.size() <= 1) {
         string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
-        CV_Error(CV_StsError, error_message);
+        CV_Error(Error::StsError, error_message);
     }
     // Get the height from the first image. We'll need this
     // later in code to reshape the images to their original

@@ -126,7 +127,7 @@ int main(int argc, const char *argv[]) {
     //
     //      cv::createEigenFaceRecognizer(0, 123.0);
     //
-    Ptr<FaceRecognizer> model0 = createEigenFaceRecognizer();
+    Ptr<BasicFaceRecognizer> model0 = createEigenFaceRecognizer();
     model0->train(images, labels);
     // save the model to eigenfaces_at.yaml
     model0->save("eigenfaces_at.yml");

@@ -134,7 +135,7 @@ int main(int argc, const char *argv[]) {
     //
     // Now create a new Eigenfaces Recognizer
     //
-    Ptr<FaceRecognizer> model1 = createEigenFaceRecognizer();
+    Ptr<BasicFaceRecognizer> model1 = createEigenFaceRecognizer();
     model1->load("eigenfaces_at.yml");
     // The following line predicts the label of a given
     // test image:

@@ -149,11 +150,11 @@ int main(int argc, const char *argv[]) {
     string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
     cout << result_message << endl;
     // Here is how to get the eigenvalues of this Eigenfaces model:
-    Mat eigenvalues = model1->getMat("eigenvalues");
+    Mat eigenvalues = model1->getEigenValues();
     // And we can do the same to display the Eigenvectors (read Eigenfaces):
-    Mat W = model1->getMat("eigenvectors");
+    Mat W = model1->getEigenVectors();
     // Get the sample mean from the training data
-    Mat mean = model1->getMat("mean");
+    Mat mean = model1->getMean();
     // Display or save:
     if(argc == 2) {
         imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));

@@ -182,8 +183,8 @@ int main(int argc, const char *argv[]) {
     for(int num_components = 10; num_components < 300; num_components+=15) {
         // slice the eigenvectors from the model
         Mat evs = Mat(W, Range::all(), Range(0, num_components));
-        Mat projection = subspaceProject(evs, mean, images[0].reshape(1,1));
-        Mat reconstruction = subspaceReconstruct(evs, mean, projection);
+        Mat projection = LDA::subspaceProject(evs, mean, images[0].reshape(1,1));
+        Mat reconstruction = LDA::subspaceReconstruct(evs, mean, projection);
         // Normalize the result:
         reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
         // Display or save:
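The save/load flow itself is unchanged; only the handle type becomes Ptr<BasicFaceRecognizer>. A compact sketch of the round trip, assuming images, labels and testSample are already loaded and using the same YAML filename as the sample:

#include "opencv2/core.hpp"
#include "opencv2/face.hpp"
#include <vector>

using namespace cv;
using namespace cv::face;

// Train, save to YAML, reload into a fresh recognizer and predict.
static int train_save_reload_predict(const std::vector<Mat>& images,
                                     const std::vector<int>& labels,
                                     const Mat& testSample) {
    Ptr<BasicFaceRecognizer> model0 = createEigenFaceRecognizer();
    model0->train(images, labels);
    model0->save("eigenfaces_at.yml");

    Ptr<BasicFaceRecognizer> model1 = createEigenFaceRecognizer();
    model1->load("eigenfaces_at.yml");
    return model1->predict(testSample);
}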
modules/face/samples/src/facerec_video.cpp → modules/face/samples/facerec_video.cpp

@@ -34,7 +34,7 @@ static void read_csv(const string& filename, vector<Mat>& images, vector<int>& l
     std::ifstream file(filename.c_str(), ifstream::in);
     if (!file) {
         string error_message = "No valid input file was given, please check the given filename.";
-        CV_Error(CV_StsBadArg, error_message);
+        CV_Error(Error::StsBadArg, error_message);
     }
     string line, path, classlabel;
     while (getline(file, line)) {

@@ -79,7 +79,7 @@ int main(int argc, const char *argv[]) {
     int im_width = images[0].cols;
     int im_height = images[0].rows;
     // Create a FaceRecognizer and train it on the given images:
-    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
+    Ptr<BasicFaceRecognizer> model = createFisherFaceRecognizer();
     model->train(images, labels);
     // That's it for learning the Face Recognition model. You now
     // need to create the classifier for the task of Face Detection.

@@ -103,14 +103,14 @@ int main(int argc, const char *argv[]) {
         Mat original = frame.clone();
         // Convert the current frame to grayscale:
         Mat gray;
-        cvtColor(original, gray, CV_BGR2GRAY);
+        cvtColor(original, gray, COLOR_BGR2GRAY);
         // Find the faces in the frame:
         vector< Rect_<int> > faces;
         haar_cascade.detectMultiScale(gray, faces);
         // At this point you have the position of the faces in
         // faces. Now we'll get the faces, make a prediction and
         // annotate it in the video. Cool or what?
-        for(int i = 0; i < faces.size(); i++) {
+        for(size_t i = 0; i < faces.size(); i++) {
             // Process face by face:
             Rect face_i = faces[i];
             // Crop the face from the image. So simple with OpenCV C++:

@@ -131,7 +131,7 @@ int main(int argc, const char *argv[]) {
             int prediction = model->predict(face_resized);
             // And finally write all we've found out to the original image!
             // First of all draw a green rectangle around the detected face:
-            rectangle(original, face_i, CV_RGB(0, 255, 0), 1);
+            rectangle(original, face_i, Scalar(0, 255, 0), 1);
             // Create the text we will annotate the box with:
             string box_text = format("Prediction = %d", prediction);
             // Calculate the position for annotated text (make sure we don't

@@ -139,7 +139,7 @@ int main(int argc, const char *argv[]) {
             int pos_x = std::max(face_i.tl().x - 10, 0);
             int pos_y = std::max(face_i.tl().y - 10, 0);
             // And now put it into the image:
-            putText(original, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, CV_RGB(0,255,0), 2.0);
+            putText(original, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, Scalar(0,255,0), 2);
         }
         // Show the result:
         imshow("face_recognizer", original);
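Putting the facerec_video.cpp changes together (COLOR_BGR2GRAY instead of CV_BGR2GRAY, a size_t loop index, and Scalar colors instead of CV_RGB), the per-frame detection and annotation step looks roughly as follows. This is a sketch: model, haar_cascade and the training image size im_width/im_height are assumed to be set up as in the sample.

#include "opencv2/core.hpp"
#include "opencv2/face.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/objdetect.hpp"
#include <algorithm>
#include <vector>

using namespace cv;
using namespace cv::face;

// Detect faces in one frame, predict their labels and draw the annotations in place.
static void annotate_frame(Mat& original, const Ptr<BasicFaceRecognizer>& model,
                           CascadeClassifier& haar_cascade, int im_width, int im_height) {
    Mat gray;
    cvtColor(original, gray, COLOR_BGR2GRAY);
    std::vector<Rect> faces;
    haar_cascade.detectMultiScale(gray, faces);
    for (size_t i = 0; i < faces.size(); i++) {
        Rect face_i = faces[i];
        // Crop and resize the face to the size the recognizer was trained on:
        Mat face_resized;
        resize(gray(face_i), face_resized, Size(im_width, im_height), 1.0, 1.0, INTER_CUBIC);
        int prediction = model->predict(face_resized);
        // Green rectangle and label, using Scalar instead of CV_RGB:
        rectangle(original, face_i, Scalar(0, 255, 0), 1);
        String box_text = format("Prediction = %d", prediction);
        int pos_x = std::max(face_i.tl().x - 10, 0);
        int pos_y = std::max(face_i.tl().y - 10, 0);
        putText(original, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, Scalar(0, 255, 0), 2);
    }
}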
modules/face/samples/src/facerec_demo.cpp (deleted, 100755 → 0)

The content of this deleted file is identical to modules/face/doc/src/facerec_demo.cpp shown above.
modules/face/tutorials/face_tutorial.markdown

@@ -246,7 +246,7 @@ every source code listing is commented in detail, so you should have no problems
 The source code for this demo application is also available in the src folder coming with this
 documentation:
-@include src/facerec_eigenfaces.cpp
+@include face/samples/facerec_eigenfaces.cpp
 I've used the jet colormap, so you can see how the grayscale values are distributed within the
 specific Eigenfaces. You can see, that the Eigenfaces do not only encode facial features, but also

@@ -263,8 +263,8 @@ let's see how many Eigenfaces are needed for a good reconstruction. I'll do a su
 for(int num_components = 10; num_components < 300; num_components+=15) {
     // slice the eigenvectors from the model
     Mat evs = Mat(W, Range::all(), Range(0, num_components));
-    Mat projection = subspaceProject(evs, mean, images[0].reshape(1,1));
-    Mat reconstruction = subspaceReconstruct(evs, mean, projection);
+    Mat projection = LDA::subspaceProject(evs, mean, images[0].reshape(1,1));
+    Mat reconstruction = LDA::subspaceReconstruct(evs, mean, projection);
     // Normalize the result:
     reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
     // Display or save:

@@ -370,7 +370,7 @@ given by:
 The source code for this demo application is also available in the src folder coming with this
 documentation:
-@include src/facerec_fisherfaces.cpp
+@include face/samples/facerec_fisherfaces.cpp
 For this example I am going to use the Yale Facedatabase A, just because the plots are nicer. Each
 Fisherface has the same length as an original image, thus it can be displayed as an image. The demo

@@ -398,8 +398,8 @@ Fisherfaces describes:
 for(int num_component = 0; num_component < min(16, W.cols); num_component++) {
     // Slice the Fisherface from the model:
     Mat ev = W.col(num_component);
-    Mat projection = subspaceProject(ev, mean, images[0].reshape(1,1));
-    Mat reconstruction = subspaceReconstruct(ev, mean, projection);
+    Mat projection = LDA::subspaceProject(ev, mean, images[0].reshape(1,1));
+    Mat reconstruction = LDA::subspaceReconstruct(ev, mean, projection);
     // Normalize the result:
     reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
     // Display or save:

@@ -528,7 +528,7 @@ Patterns Histograms*.
 The source code for this demo application is also available in the src folder coming with this
 documentation:
-@include src/facerec_lbph.cpp
+@include face/samples/facerec_lbph.cpp
 Conclusion {#tutorial_face_conclusion}
 ----------

@@ -658,7 +658,7 @@ at/s17/3.pgm;1
 Here is the script, if you can't find it:
-@verbinclude face/samples/src/create_csv.py
+@verbinclude face/samples/etc/create_csv.py
 ### Aligning Face Images {#tutorial_face_appendix_align}

@@ -677,7 +677,7 @@ where:
 If you are using the same *offset_pct* and *dest_sz* for your images, they are all aligned at the
 eyes.
-@verbinclude face/samples/src/crop_face.py
+@verbinclude face/samples/etc/crop_face.py
 Imagine we are given [this photo of Arnold
 Schwarzenegger](http://en.wikipedia.org/wiki/File:Arnold_Schwarzenegger_edit%28ws%29.jpg), which is