Commit e85a802a authored by kurnianggoro, committed by Alexander Alekhin

Merge pull request #1257 from kurnianggoro:facelandmark

GSOC17 - Facemark API (#1257)

* Initial commit of facemark API

Initial structure of the facemark API and AAM header

* make the training function virtual

* Add: dataset parser

* Bug fix: clear the container before adding points

* Add: AAM training - procrustes analysis

* Add AAM model

* Added training function for AAM

* Building bot fixes: remove training overload, explicit cast to float for atof

* + add dependency: imgcodecs

* Build bot fixes: add imgproc.hpp and type casting

* Building bot fix: type casting

* fix the AAM training to match the Matlab version

fewer model parameters; change the image warp method and the feature extraction method

* add: AAM fitting

added several functionalities for fitting

* fix warnings

* Add: transformation for the initial fitting

* add sample file for aam implementation

* fix warning

* Add LBF header

* loadTrainingData: throw an error message if the file does not exist

* add: LBF prepare training data

* add: data augmentation

* change to double

* add: getMeanShape

* shuffling the dataset and parameters initialization

* add: initial structure of LBF class

* add: getDeltaShapes

Difference between the current shape and the desired shape

* add: random forest training

* generate lbf features

* global regression

* save training data

* fix the parameter initialization

* set the default parameters

* add: initial version of lbf sample

* update the current shape

* compute error

* add: prediction function

* fix some warnings

* fitting function

the result is misaligned and should be double-checked

* add: fitting in the demo

* add dependencies

* Add: tutorial

* add: load model

* fixing training

* use user defined face detector

* Documents, tests, and samples

* Allow custom parameters

* Cleaning up

* Custom parameters for default detector, training, and get custom data

* AAM scales

* minor fixes, update the opencv_extra files

* change path to lbp cascade

* face: avoid memory leaks

* utilize the filestorage for the model, fixing some minor issues

* remove the liblinear dependency

* fix the aam test, avoiding to write any files

* use RNG and change the test files
@@ -40,7 +40,7 @@ the use of this software, even if advised of the possibility of such damage.
 #define __OPENCV_FACE_HPP__
 /**
-@defgroup face Face Recognition
+@defgroup face Face Analysis
 - @ref face_changelog
 - @ref tutorial_face_main
@@ -374,5 +374,7 @@ protected:
 }}
 #include "opencv2/face/facerec.hpp"
+#include "opencv2/face/facemark.hpp"
+#include "opencv2/face/facemarkLBF.hpp"
+#include "opencv2/face/facemarkAAM.hpp"
 #endif
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
This file was part of GSoC Project: Facemark API for OpenCV
Final report: https://gist.github.com/kurnianggoro/74de9121e122ad0bd825176751d47ecc
Student: Laksono Kurnianggoro
Mentor: Delia Passalacqua
*/
#ifndef __OPENCV_FACELANDMARK_HPP__
#define __OPENCV_FACELANDMARK_HPP__
/**
@defgroup face Face Analysis
- @ref tutorial_table_of_content_facemark
- The Facemark API
*/
#include "opencv2/face.hpp"
#include "opencv2/objdetect.hpp"
#include "opencv2/objdetect/objdetect_c.h"
#include "opencv2/imgproc/types_c.h"
namespace cv {
namespace face {
//! @addtogroup face
//! @{
struct CV_EXPORTS_W CParams{
String cascade; //!< the face detector
double scaleFactor; //!< Parameter specifying how much the image size is reduced at each image scale.
int minNeighbors; //!< Parameter specifying how many neighbors each candidate rectangle should have to retain it.
Size minSize; //!< Minimum possible object size.
Size maxSize; //!< Maximum possible object size.
CParams(
String cascade_model,
double sf = 1.1,
int minN = 3,
Size minSz = Size(30, 30),
Size maxSz = Size()
);
};
/** @brief Default face detector
This function is mainly utilized by the implementation of a Facemark algorithm.
End users are advised to use Facemark::getFaces instead; a custom detector can be
supplied to the algorithm via Facemark::setFaceDetector.
@param image The input image to be processed.
@param faces Output of the function, holding the regions of interest of the detected faces.
Each face is stored in a cv::Rect container.
@param extra_params extra parameters
<B>Example of usage</B>
@code
std::vector<cv::Rect> faces;
CParams params("haarcascade_frontalface_alt.xml");
cv::face::getFaces(frame, faces, &params);
for(size_t j=0;j<faces.size();j++){
cv::rectangle(frame, faces[j], cv::Scalar(255,0,255));
}
cv::imshow("detection", frame);
@endcode
*/
/*other option: move this function inside Facemark as default face detector*/
CV_EXPORTS bool getFaces( InputArray image,
OutputArray faces,
void * extra_params
);
/** @brief A utility to load the list of paths to training images and annotation files.
@param imageList A text file containing the paths to the training images.
@param annotationList A text file containing the paths to the training annotations.
@param images The loaded paths of the training images.
@param annotations The loaded paths of the annotation files.
Example of usage:
@code
String imageFiles = "images_path.txt";
String ptsFiles = "annotations_path.txt";
std::vector<String> images_train;
std::vector<String> landmarks_train;
loadDatasetList(imageFiles,ptsFiles,images_train,landmarks_train);
@endcode
*/
CV_EXPORTS_W bool loadDatasetList(String imageList,
String annotationList,
std::vector<String> & images,
std::vector<String> & annotations);
/** @brief A utility to load facial landmark dataset from a single file.
@param filename The filename of a file that contains the dataset information.
Each line contains the filename of an image followed by
pairs of x and y values of facial landmark points separated by a space.
Example
@code
/home/user/ibug/image_003_1.jpg 336.820955 240.864510 334.238298 260.922709 335.266918 ...
/home/user/ibug/image_005_1.jpg 376.158428 230.845712 376.736984 254.924635 383.265403 ...
@endcode
@param images A vector where each element represents the filename of an image in the dataset.
Images are not loaded by default, to save memory.
@param facePoints The loaded landmark points for all training data.
@param delim The delimiter between elements; the default value is a whitespace character.
@param offset An offset value to adjust the loaded points.
<B>Example of usage</B>
@code
cv::String imageFiles = "../data/images_train.txt";
cv::String ptsFiles = "../data/points_train.txt";
std::vector<String> images;
std::vector<std::vector<Point2f> > facePoints;
loadTrainingData(imageFiles, ptsFiles, images, facePoints, 0.0);
@endcode
*/
CV_EXPORTS_W bool loadTrainingData( String filename , std::vector<String> & images,
OutputArray facePoints,
char delim = ' ', float offset = 0.0);
/** @brief A utility to load facial landmark information from the dataset.
@param imageList A file containing the list of image filenames in the training dataset.
@param groundTruth A file containing the list of filenames
where the landmark points information is stored.
The content of each file should follow the standard format (see face::loadFacePoints).
@param images A vector where each element represents the filename of an image in the dataset.
Images are not loaded by default, to save memory.
@param facePoints The loaded landmark points for all training data.
@param offset An offset value to adjust the loaded points.
<B>Example of usage</B>
@code
cv::String imageFiles = "../data/images_train.txt";
cv::String ptsFiles = "../data/points_train.txt";
std::vector<String> images;
std::vector<std::vector<Point2f> > facePoints;
loadTrainingData(imageFiles, ptsFiles, images, facePoints, 0.0);
@endcode
example of content in the images_train.txt
@code
/home/user/ibug/image_003_1.jpg
/home/user/ibug/image_004_1.jpg
/home/user/ibug/image_005_1.jpg
/home/user/ibug/image_006.jpg
@endcode
example of content in the points_train.txt
@code
/home/user/ibug/image_003_1.pts
/home/user/ibug/image_004_1.pts
/home/user/ibug/image_005_1.pts
/home/user/ibug/image_006.pts
@endcode
*/
CV_EXPORTS_W bool loadTrainingData( String imageList, String groundTruth,
std::vector<String> & images,
OutputArray facePoints,
float offset = 0.0);
/** @brief A utility to load facial landmark information from a given file.
@param filename The filename of the file that contains the facial landmark data.
@param points The loaded facial landmark points.
@param offset An offset value to adjust the loaded points.
<B>Example of usage</B>
@code
std::vector<Point2f> points;
face::loadFacePoints("filename.txt", points, 0.0);
@endcode
The annotation file should follow the default format which is
@code
version: 1
n_points: 68
{
212.716603 499.771793
230.232816 566.290071
...
}
@endcode
where n_points is the number of points, and each point is represented by its x and y position.
*/
CV_EXPORTS_W bool loadFacePoints( String filename, OutputArray points,
float offset = 0.0);
/** @brief Utility to draw the detected facial landmark points
@param image The input image to be processed.
@param points The points which will be drawn.
@param color The color of points in BGR format represented by cv::Scalar.
<B>Example of usage</B>
@code
std::vector<Rect> faces;
std::vector<std::vector<Point2f> > landmarks;
facemark->getFaces(img, faces);
facemark->fit(img, faces, landmarks);
for(size_t j=0;j<faces.size();j++){
face::drawFacemarks(img, landmarks[j], Scalar(0,0,255));
}
@endcode
*/
CV_EXPORTS_W void drawFacemarks( InputOutputArray image, InputArray points,
Scalar color = Scalar(255,0,0));
/** @brief Abstract base class for all facemark models
All facemark models in OpenCV are derived from the abstract base class Facemark, which
provides unified access to all facemark algorithms in OpenCV.
To utilize this API in your program, please take a look at the @ref tutorial_table_of_content_facemark
### Description
Facemark is a base class which provides universal access to any specific facemark algorithm.
Therefore, users should instantiate the desired algorithm before using it in their application.
Here is an example of how to instantiate a facemark algorithm:
@code
// Using Facemark in your code:
Ptr<Facemark> facemark = FacemarkLBF::create();
@endcode
The typical pipeline for facemark detection is as follows:
- (Non-mandatory) Set a user defined face detector using Facemark::setFaceDetector.
The facemark algorithms are designed to fit facial points to a detected face;
therefore, the face information should be provided to the facemark algorithm.
Some algorithms might provide a default face detector.
However, users might prefer to use their own face detector to obtain the best possible detection results.
- (Non-mandatory) Train the model for a specific algorithm using Facemark::training.
In this case, the model is saved automatically by the algorithm.
If the user already has a trained model, this step can be omitted.
- Load the trained model using Facemark::loadModel.
- Perform the fitting via Facemark::fit, as sketched below.
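A minimal sketch of this pipeline, assuming a previously trained model file
"lbf.model" (a placeholder path) and an available face detector:
@code
Ptr<Facemark> facemark = FacemarkLBF::create();
facemark->loadModel("lbf.model"); // assumed: a model trained beforehand
Mat img = imread("image.jpg");
std::vector<Rect> faces;
std::vector<std::vector<Point2f> > landmarks;
facemark->getFaces(img, faces); // uses the default or a user-defined detector
facemark->fit(img, faces, landmarks);
@endcode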
*/
class CV_EXPORTS_W Facemark : public virtual Algorithm
{
public:
virtual void read( const FileNode& fn )=0;
virtual void write( FileStorage& fs ) const=0;
/** @brief Add one training sample to the trainer.
@param image Input image.
@param landmarks The ground truth of facial landmark points corresponding to the image.
<B>Example of usage</B>
@code
String imageFiles = "../data/images_train.txt";
String ptsFiles = "../data/points_train.txt";
std::vector<String> images_train;
std::vector<String> landmarks_train;
// load the list of dataset: image paths and landmark file paths
loadDatasetList(imageFiles,ptsFiles,images_train,landmarks_train);
Mat image;
std::vector<Point2f> facial_points;
for(size_t i=0;i<images_train.size();i++){
image = imread(images_train[i].c_str());
loadFacePoints(landmarks_train[i],facial_points);
facemark->addTrainingSample(image, facial_points);
}
@endcode
The contents of the training files should follow the standard format.
Here are examples of the contents of these files.
example of content in the images_train.txt
@code
/home/user/ibug/image_003_1.jpg
/home/user/ibug/image_004_1.jpg
/home/user/ibug/image_005_1.jpg
/home/user/ibug/image_006.jpg
@endcode
example of content in the points_train.txt
@code
/home/user/ibug/image_003_1.pts
/home/user/ibug/image_004_1.pts
/home/user/ibug/image_005_1.pts
/home/user/ibug/image_006.pts
@endcode
*/
virtual bool addTrainingSample(InputArray image, InputArray landmarks)=0;
/** @brief Trains a Facemark algorithm using the given dataset.
Before the training process, training samples should be added to the trainer
using the Facemark::addTrainingSample function.
@param parameters Optional extra parameters (algorithm dependent).
<B>Example of usage</B>
@code
FacemarkLBF::Params params;
params.model_filename = "ibug68.model"; // filename to save the trained model
Ptr<Facemark> facemark = FacemarkLBF::create(params);
// add training samples (see Facemark::addTrainingSample)
facemark->training();
@endcode
*/
virtual void training(void* parameters=0)=0;
/** @brief A function to load the trained model before the fitting process.
@param model A string representing the filename of a trained model.
<B>Example of usage</B>
@code
facemark->loadModel("../data/lbf.model");
@endcode
*/
virtual void loadModel(String model)=0;
// virtual void saveModel(String fs)=0;
/** @brief Detects facial landmarks from an image for the given faces.
@param image Input image.
@param faces Detected faces as the regions of interest, each stored in a cv::Rect container,
for which the landmarks will be fitted.
@param landmarks The detected landmark points for each face.
@param config Optional algorithm-specific runtime parameters.
<B>Example of usage</B>
@code
Mat image = imread("image.jpg");
std::vector<Rect> faces;
std::vector<std::vector<Point2f> > landmarks;
facemark->fit(image, faces, landmarks);
@endcode
*/
virtual bool fit( InputArray image,
InputArray faces,
InputOutputArray landmarks,
void * config = 0 )=0;
/** @brief Set a user defined face detector for the Facemark algorithm.
@param f The user defined face detector function
<B>Example of usage</B>
@code
facemark->setFaceDetector(myDetector);
@endcode
Example of a user defined face detector
@code
bool myDetector( InputArray image, OutputArray ROIs ){
std::vector<Rect> & faces = *(std::vector<Rect>*) ROIs.getObj();
faces.clear();
Mat img = image.getMat();
// -------- do something --------
return true;
}
@endcode
*/
virtual bool setFaceDetector(bool(*f)(InputArray , OutputArray, void * ))=0;
/** @brief Detect faces from a given image using default or user defined face detector.
Some algorithms might not provide a default face detector.
@param image Input image.
@param faces Output of the function, holding the regions of interest of the detected faces.
Each face is stored in a cv::Rect container.
@param extra_params Optional extra-parameters for the face detector function.
<B>Example of usage</B>
@code
std::vector<cv::Rect> faces;
facemark->getFaces(img, faces);
for(size_t j=0;j<faces.size();j++){
cv::rectangle(img, faces[j], cv::Scalar(255,0,255));
}
@endcode
*/
virtual bool getFaces( InputArray image , OutputArray faces, void * extra_params=0)=0;
/** @brief Get data from an algorithm
@param items The obtained data, algorithm dependent.
<B>Example of usage</B>
@code
Ptr<FacemarkAAM> facemark = FacemarkAAM::create();
facemark->loadModel("AAM.yml");
FacemarkAAM::Data data;
facemark->getData(&data);
std::vector<Point2f> s0 = data.s0;
cout<<s0<<endl;
@endcode
*/
virtual bool getData(void * items=0)=0;
}; /* Facemark*/
//! @}
} /* namespace face */
} /* namespace cv */
#endif //__OPENCV_FACELANDMARK_HPP__
#ifndef __OPENCV_FACEMARK_AAM_HPP__
#define __OPENCV_FACEMARK_AAM_HPP__
#include "opencv2/face/facemark.hpp"
namespace cv {
namespace face {
//! @addtogroup face
//! @{
class CV_EXPORTS_W FacemarkAAM : public Facemark
{
public:
struct CV_EXPORTS Params
{
/**
* \brief Constructor
*/
Params();
/**
* \brief Read parameters from file, currently unused
*/
void read(const FileNode& /*fn*/);
/**
* \brief Write parameters to file, currently unused
*/
void write(FileStorage& /*fs*/) const;
std::string model_filename;
int m;
int n;
int n_iter;
bool verbose;
bool save_model;
int max_m, max_n, texture_max_m;
std::vector<float>scales;
};
/**
* \brief Optional parameter for the fitting process.
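*
* A minimal sketch of passing per-face configurations to the fitting function,
* assuming R, T, and scale were estimated beforehand (e.g. as in the AAM demo);
* the values here are placeholders:
* @code
* std::vector<FacemarkAAM::Config> confs;
* confs.push_back(FacemarkAAM::Config(R, T, scale, 0));
* facemark->fit(image, faces, landmarks, (void*)&confs);
* @endcode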
*/
struct CV_EXPORTS Config
{
Config( Mat rot = Mat::eye(2,2,CV_32F),
Point2f trans = Point2f(0.0,0.0),
float scaling = 1.0,
int scale_id=0
);
Mat R;
Point2f t;
float scale;
int model_scale_idx;
};
/**
* \brief Data container for the facemark::getData function
*/
struct CV_EXPORTS Data
{
std::vector<Point2f> s0;
};
/**
* \brief The model of AAM Algorithm
*/
struct CV_EXPORTS Model
{
int npts; //!< unused, to be removed
int max_n; //!< unused, to be removed
std::vector<float>scales;
//!< defines the scales considered to build the model
/*warping*/
std::vector<Vec3i> triangles;
//!< each element contains 3 values, the indices of the facemarks that form one triangle (obtained using Delaunay triangulation)
struct Texture{
int max_m; //!< unused, to be removed
Rect resolution;
//!< resolution of the current scale
Mat A;
//!< gray values of all face regions in the dataset, projected into the PCA space
Mat A0;
//!< average of the gray values of all face regions in the dataset
Mat AA;
//!< gray values of all eroded face regions in the dataset, projected into the PCA space
Mat AA0;
//!< average of the gray values of all eroded face regions in the dataset
std::vector<std::vector<Point> > textureIdx;
//!< warping indices for each Delaunay triangle region formed by 3 facemarks
std::vector<Point2f> base_shape;
//!< base shape, normalized to fit an image at the current detection resolution
std::vector<int> ind1;
//!< pixel indices used in the mapping process to obtain the gray values of the face region
std::vector<int> ind2;
//!< pixel indices used in the mapping process to obtain the gray values of the eroded face region
};
std::vector<Texture> textures;
//!< a container that holds the texture data for each scale of fitting
/*shape*/
std::vector<Point2f> s0;
//!< the base shape obtained from the training dataset
Mat S,Q;
//!< the encoded shapes from the training data
};
//! initializer
static Ptr<FacemarkAAM> create(const FacemarkAAM::Params &parameters = FacemarkAAM::Params() );
virtual ~FacemarkAAM() {}
}; /* AAM */
//! @}
} /* namespace face */
} /* namespace cv */
#endif
#ifndef __OPENCV_FACEMARK_LBF_HPP__
#define __OPENCV_FACEMARK_LBF_HPP__
#include "opencv2/face/facemark.hpp"
namespace cv {
namespace face {
//! @addtogroup face
//! @{
class CV_EXPORTS_W FacemarkLBF : public Facemark
{
public:
struct CV_EXPORTS Params
{
/**
* \brief Constructor
*/
Params();
double shape_offset;
//!< offset for the loaded face landmark points
String cascade_face;
//!< filename of the face detector model
bool verbose;
//!< show the training print-out
int n_landmarks;
//!< number of landmark points
int initShape_n;
//!< multiplier for augmenting the training data
int stages_n;
//!< number of refinement stages
int tree_n;
//!< number of trees in the model for each landmark point refinement
int tree_depth;
//!< the depth of the decision trees; defines the size of the feature
double bagging_overlap;
//!< overlap ratio for training the LBF feature
std::string model_filename;
//!< filename where the trained model will be saved
bool save_model; //!< flag to save the trained model or not
unsigned int seed; //!< seed for shuffling the training data
std::vector<int> feats_m;
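//!< (presumed) number of candidate pixel features sampled at each refinement stage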
std::vector<double> radius_m;
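//!< (presumed) radius of the feature sampling region at each stage, relative to the face size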
std::vector<int> pupils[2];
//!< indices of the facemark points on the pupils of the left and right eyes
Rect detectROI;
void read(const FileNode& /*fn*/);
void write(FileStorage& /*fs*/) const;
};
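/* A minimal sketch of configuring the parameters and creating an LBF instance
(paths are placeholders; this mirrors the LBF demo below):
FacemarkLBF::Params params;
params.model_filename = "lbf.model";
params.cascade_face = "face_cascade.xml";
Ptr<Facemark> facemark = FacemarkLBF::create(params);
*/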
class BBox {
public:
BBox();
~BBox();
BBox(double x, double y, double w, double h);
cv::Mat project(const cv::Mat &shape) const;
cv::Mat reproject(const cv::Mat &shape) const;
double x, y;
double x_center, y_center;
double x_scale, y_scale;
double width, height;
};
static Ptr<FacemarkLBF> create(const FacemarkLBF::Params &parameters = FacemarkLBF::Params() );
virtual ~FacemarkLBF(){}
}; /* LBF */
//! @}
} /* namespace face */
}/* namespace cv */
#endif
/*----------------------------------------------
* Usage:
* facemark_demo_aam <face_cascade_model> <eyes_cascade_model> <training_images> <annotation_files> [test_files]
*
* Example:
* facemark_demo_aam ../face_cascade.xml ../eyes_cascade.xml ../images_train.txt ../points_train.txt ../test.txt
*
* Notes:
* the user should provide the list of training images
* accompanied by their corresponding landmark locations in separate files.
* example of contents for images_train.txt:
* ../trainset/image_0001.png
* ../trainset/image_0002.png
* example of contents for points_train.txt:
* ../trainset/image_0001.pts
* ../trainset/image_0002.pts
* where the image_xxxx.pts contains the position of each face landmark.
* example of the contents:
* version: 1
* n_points: 68
* {
* 115.167660 220.807529
* 116.164839 245.721357
* 120.208690 270.389841
* ...
* }
* example of the dataset is available at https://ibug.doc.ic.ac.uk/download/annotations/lfpw.zip
*--------------------------------------------------*/
#include <stdio.h>
#include <fstream>
#include <sstream>
#include "opencv2/core.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/face.hpp"
#include <iostream>
#include <string>
#include <ctime>
using namespace std;
using namespace cv;
using namespace cv::face;
bool myDetector( InputArray image, OutputArray ROIs, CascadeClassifier face_cascade);
bool getInitialFitting(Mat image, Rect face, std::vector<Point2f> s0,
CascadeClassifier eyes_cascade, Mat & R, Point2f & Trans, float & scale);
bool parseArguments(int argc, char** argv, CommandLineParser & , String & cascade,
String & model, String & images, String & annotations, String & testImages
);
int main(int argc, char** argv )
{
CommandLineParser parser(argc, argv,"");
String cascade_path,eyes_cascade_path,images_path, annotations_path, test_images_path;
if(!parseArguments(argc, argv, parser,cascade_path,eyes_cascade_path,images_path, annotations_path, test_images_path))
return -1;
//! [instance_creation]
/*create the facemark instance*/
FacemarkAAM::Params params;
params.scales.push_back(2.0);
params.scales.push_back(4.0);
params.model_filename = "AAM.yaml";
Ptr<FacemarkAAM> facemark = FacemarkAAM::create(params);
//! [instance_creation]
//! [load_dataset]
/*Loads the dataset*/
std::vector<String> images_train;
std::vector<String> landmarks_train;
loadDatasetList(images_path,annotations_path,images_train,landmarks_train);
//! [load_dataset]
//! [add_samples]
Mat image;
std::vector<Point2f> facial_points;
for(size_t i=0;i<images_train.size();i++){
image = imread(images_train[i].c_str());
loadFacePoints(landmarks_train[i],facial_points);
facemark->addTrainingSample(image, facial_points);
}
//! [add_samples]
//! [training]
/* the trained model will be saved to AAM.yaml */
facemark->training();
//! [training]
//! [load_test_images]
/*test using some images*/
String testFiles(images_path), testPts(annotations_path);
if(!test_images_path.empty()){
testFiles = test_images_path;
testPts = test_images_path; //unused
}
std::vector<String> images;
std::vector<String> facePoints;
loadDatasetList(testFiles, testPts, images, facePoints);
//! [load_test_images]
//! [transformation_variables]
float scale;
Point2f T;
Mat R;
//! [transformation_variables]
//! [base_shape]
FacemarkAAM::Data data;
facemark->getData(&data);
std::vector<Point2f> s0 = data.s0;
//! [base_shape]
//! [fitting]
/*fitting process*/
std::vector<Rect> faces;
//! [load_cascade_models]
CascadeClassifier face_cascade(cascade_path);
CascadeClassifier eyes_cascade(eyes_cascade_path);
//! [load_cascade_models]
for(int i=0;i<(int)images.size();i++){
printf("image #%i ", i);
//! [detect_face]
image = imread(images[i]);
myDetector(image, faces, face_cascade);
//! [detect_face]
if(faces.size()>0){
//! [get_initialization]
std::vector<FacemarkAAM::Config> conf;
std::vector<Rect> faces_eyes;
for(unsigned j=0;j<faces.size();j++){
if(getInitialFitting(image,faces[j],s0,eyes_cascade, R,T,scale)){
conf.push_back(FacemarkAAM::Config(R,T,scale,(int)params.scales.size()-1));
faces_eyes.push_back(faces[j]);
}
}
//! [get_initialization]
//! [fitting_process]
if(conf.size()>0){
printf(" - face with eyes found %i ", (int)conf.size());
std::vector<std::vector<Point2f> > landmarks;
double newtime = (double)getTickCount();
facemark->fit(image, faces_eyes, landmarks, (void*)&conf);
double fittime = ((getTickCount() - newtime)/getTickFrequency());
for(unsigned j=0;j<landmarks.size();j++){
drawFacemarks(image, landmarks[j],Scalar(0,255,0));
}
printf("%f ms\n",fittime*1000);
imshow("fitting", image);
waitKey(0);
}else{
printf("initialization cannot be computed - skipping\n");
}
//! [fitting_process]
}
} //for
//! [fitting]
}
bool myDetector( InputArray image, OutputArray ROIs, CascadeClassifier face_cascade){
Mat gray;
std::vector<Rect> & faces = *(std::vector<Rect>*) ROIs.getObj();
faces.clear();
if(image.channels()>1){
cvtColor(image.getMat(),gray,CV_BGR2GRAY);
}else{
gray = image.getMat().clone();
}
equalizeHist( gray, gray );
face_cascade.detectMultiScale( gray, faces, 1.2, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}
bool getInitialFitting(Mat image, Rect face, std::vector<Point2f> s0 ,CascadeClassifier eyes_cascade, Mat & R, Point2f & Trans, float & scale){
std::vector<Point2f> mybase;
std::vector<Point2f> T;
std::vector<Point2f> base = Mat(Mat(s0)+Scalar(image.cols/2,image.rows/2)).reshape(2);
std::vector<Point2f> base_shape;
Point2f e1 = Point2f((float)((base[39].x+base[36].x)/2.0),(float)((base[39].y+base[36].y)/2.0)); //eye1
Point2f e2 = Point2f((float)((base[45].x+base[42].x)/2.0),(float)((base[45].y+base[42].y)/2.0)); //eye2
if(face.width==0 || face.height==0) return false;
std::vector<Point2f> eye;
bool found=false;
Mat faceROI = image( face);
std::vector<Rect> eyes;
//-- In each face, detect eyes
eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CV_HAAR_SCALE_IMAGE, Size(20, 20) );
if(eyes.size()==2){
found = true;
int j=0;
Point2f c1( (float)(face.x + eyes[j].x + eyes[j].width*0.5), (float)(face.y + eyes[j].y + eyes[j].height*0.5));
j=1;
Point2f c2( (float)(face.x + eyes[j].x + eyes[j].width*0.5), (float)(face.y + eyes[j].y + eyes[j].height*0.5));
Point2f pivot;
double a0,a1;
if(c1.x<c2.x){
pivot = c1;
a0 = atan2(c2.y-c1.y, c2.x-c1.x);
}else{
pivot = c2;
a0 = atan2(c1.y-c2.y, c1.x-c2.x);
}
scale = (float)(norm(Mat(c1)-Mat(c2))/norm(Mat(e1)-Mat(e2)));
mybase= Mat(Mat(s0)*scale).reshape(2);
Point2f ey1 = Point2f((float)((mybase[39].x+mybase[36].x)/2.0),(float)((mybase[39].y+mybase[36].y)/2.0));
Point2f ey2 = Point2f((float)((mybase[45].x+mybase[42].x)/2.0),(float)((mybase[45].y+mybase[42].y)/2.0));
#define TO_DEGREE (180.0/3.14159265)
a1 = atan2(ey2.y-ey1.y, ey2.x-ey1.x);
Mat rot = getRotationMatrix2D(Point2f(0,0), (a1-a0)*TO_DEGREE, 1.0);
rot(Rect(0,0,2,2)).convertTo(R, CV_32F);
base_shape = Mat(Mat(R*scale*Mat(Mat(s0).reshape(1)).t()).t()).reshape(2);
ey1 = Point2f((float)((base_shape[39].x+base_shape[36].x)/2.0),(float)((base_shape[39].y+base_shape[36].y)/2.0));
ey2 = Point2f((float)((base_shape[45].x+base_shape[42].x)/2.0),(float)((base_shape[45].y+base_shape[42].y)/2.0));
T.push_back(Point2f(pivot.x-ey1.x,pivot.y-ey1.y));
Trans = Point2f(pivot.x-ey1.x,pivot.y-ey1.y);
return true;
}else{
Trans = Point2f( (float)(face.x + face.width*0.5),(float)(face.y + face.height*0.5));
}
return found;
}
bool parseArguments(int argc, char** argv, CommandLineParser & parser,
String & cascade,
String & model,
String & images,
String & annotations,
String & test_images
){
const String keys =
"{ @f face-cascade | | (required) path to the cascade model file for the face detector }"
"{ @e eyes-cascade | | (required) path to the cascade model file for the eyes detector }"
"{ @i images | | (required) path of a text file contains the list of paths to all training images}"
"{ @a annotations | | (required) Path of a text file contains the list of paths to all annotations files}"
"{ @t test-images | | Path of a text file contains the list of paths to the test images}"
"{ help h usage ? | | facemark_demo_aam -face-cascade -eyes-cascade -images -annotations [-t]\n"
" example: facemark_demo_aam ../face_cascade.xml ../eyes_cascade.xml ../images_train.txt ../points_train.txt ../test.txt}"
;
parser = CommandLineParser(argc, argv,keys);
parser.about("hello");
if (parser.has("help")){
parser.printMessage();
return false;
}
cascade = String(parser.get<String>("face-cascade"));
model = String(parser.get<string>("eyes-cascade"));
images = String(parser.get<string>("images"));
annotations = String(parser.get<string>("annotations"));
test_images = String(parser.get<string>("test-images"));
if(cascade.empty() || model.empty() || images.empty() || annotations.empty()){
std::cerr << "one or more required arguments are not found" << '\n';
cout<<"face-cascade : "<<cascade.c_str()<<endl;
cout<<"eyes-cascade : "<<model.c_str()<<endl;
cout<<"images : "<<images.c_str()<<endl;
cout<<"annotations : "<<annotations.c_str()<<endl;
parser.printMessage();
return false;
}
return true;
}
/*----------------------------------------------
* Usage:
* facemark_demo_lbf <face_cascade_model> <saved_model_filename> <training_images> <annotation_files> [test_files]
*
* Example:
* facemark_demo_lbf ../face_cascade.xml ../LBF.model ../images_train.txt ../points_train.txt ../test.txt
*
* Notes:
* the user should provide the list of training images
* accompanied by their corresponding landmark locations in separate files.
* example of contents for images_train.txt:
* ../trainset/image_0001.png
* ../trainset/image_0002.png
* example of contents for points_train.txt:
* ../trainset/image_0001.pts
* ../trainset/image_0002.pts
* where the image_xxxx.pts contains the position of each face landmark.
* example of the contents:
* version: 1
* n_points: 68
* {
* 115.167660 220.807529
* 116.164839 245.721357
* 120.208690 270.389841
* ...
* }
* example of the dataset is available at https://ibug.doc.ic.ac.uk/download/annotations/ibug.zip
*--------------------------------------------------*/
#include <stdio.h>
#include <fstream>
#include <sstream>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/face.hpp"
using namespace std;
using namespace cv;
using namespace cv::face;
CascadeClassifier face_cascade;
bool myDetector( InputArray image, OutputArray roi, void * config=0 );
bool parseArguments(int argc, char** argv, CommandLineParser & , String & cascade,
String & model, String & images, String & annotations, String & testImages
);
int main(int argc, char** argv)
{
CommandLineParser parser(argc, argv,"");
String cascade_path,model_path,images_path, annotations_path, test_images_path;
if(!parseArguments(argc, argv, parser,cascade_path,model_path,images_path, annotations_path, test_images_path))
return -1;
/*create the facemark instance*/
FacemarkLBF::Params params;
params.model_filename = model_path;
params.cascade_face = cascade_path;
Ptr<Facemark> facemark = FacemarkLBF::create(params);
face_cascade.load(params.cascade_face.c_str());
facemark->setFaceDetector(myDetector);
/*Loads the dataset*/
std::vector<String> images_train;
std::vector<String> landmarks_train;
loadDatasetList(images_path,annotations_path,images_train,landmarks_train);
Mat image;
std::vector<Point2f> facial_points;
for(size_t i=0;i<images_train.size();i++){
printf("%i/%i :: %s\n", (int)(i+1), (int)images_train.size(),images_train[i].c_str());
image = imread(images_train[i].c_str());
loadFacePoints(landmarks_train[i],facial_points);
facemark->addTrainingSample(image, facial_points);
}
/*train the Algorithm*/
facemark->training();
/*test using some images*/
String testFiles(images_path), testPts(annotations_path);
if(!test_images_path.empty()){
testFiles = test_images_path;
testPts = test_images_path; //unused
}
std::vector<String> images;
std::vector<String> facePoints;
loadDatasetList(testFiles, testPts, images, facePoints);
std::vector<Rect> rects;
CascadeClassifier cc(params.cascade_face.c_str());
for(size_t i=0;i<images.size();i++){
std::vector<std::vector<Point2f> > landmarks;
cout<<images[i];
Mat img = imread(images[i]);
facemark->getFaces(img, rects);
facemark->fit(img, rects, landmarks);
for(size_t j=0;j<rects.size();j++){
drawFacemarks(img, landmarks[j], Scalar(0,0,255));
rectangle(img, rects[j], Scalar(255,0,255));
}
if(rects.size()>0){
cout<<endl;
imshow("result", img);
waitKey(0);
}else{
cout<<"face not found"<<endl;
}
}
}
bool myDetector( InputArray image, OutputArray roi, void * config ){
Mat gray;
std::vector<Rect> & faces = *(std::vector<Rect>*) roi.getObj();
faces.clear();
if(config!=0){
//do nothing
}
if(image.channels()>1){
cvtColor(image,gray,CV_BGR2GRAY);
}else{
gray = image.getMat().clone();
}
equalizeHist( gray, gray );
face_cascade.detectMultiScale( gray, faces, 1.4, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}
bool parseArguments(int argc, char** argv, CommandLineParser & parser,
String & cascade,
String & model,
String & images,
String & annotations,
String & test_images
){
const String keys =
"{ @c cascade | | (required) path to the face cascade xml file fo the face detector }"
"{ @i images | | (required) path of a text file contains the list of paths to all training images}"
"{ @a annotations | | (required) Path of a text file contains the list of paths to all annotations files}"
"{ @m model | | (required) path to save the trained model }"
"{ t test-images | | Path of a text file contains the list of paths to the test images}"
"{ help h usage ? | | facemark_demo_lbf -cascade -images -annotations -model [-t] \n"
" example: facemark_demo_lbf ../face_cascade.xml ../images_train.txt ../points_train.txt ../lbf.model}"
;
parser = CommandLineParser(argc, argv,keys);
parser.about("hello");
if (parser.has("help")){
parser.printMessage();
return false;
}
cascade = String(parser.get<String>("cascade"));
model = String(parser.get<string>("model"));
images = String(parser.get<string>("images"));
annotations = String(parser.get<string>("annotations"));
test_images = String(parser.get<string>("t"));
cout<<"cascade : "<<cascade.c_str()<<endl;
cout<<"model : "<<model.c_str()<<endl;
cout<<"images : "<<images.c_str()<<endl;
cout<<"annotations : "<<annotations.c_str()<<endl;
if(cascade.empty() || model.empty() || images.empty() || annotations.empty()){
std::cerr << "one or more required arguments are not found" << '\n';
parser.printMessage();
return false;
}
return true;
}
/*----------------------------------------------
* Usage:
* facemark_lbf_fitting <face_cascade_model> <lbf_model> <video_name>
*
* example:
* facemark_lbf_fitting ../face_cascade.xml ../LBF.model ../video.mp4
*
* note: do not forget to provide the LBF_MODEL and DETECTOR_MODEL
* the models are available at opencv_contrib/modules/face/data/
*--------------------------------------------------*/
#include <stdio.h>
#include <ctime>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/face.hpp"
using namespace std;
using namespace cv;
using namespace cv::face;
CascadeClassifier face_cascade;
bool myDetector( InputArray image, OutputArray ROIs, void * config = 0);
bool parseArguments(int argc, char** argv, CommandLineParser & parser,
String & cascade, String & model,String & video);
int main(int argc, char** argv ){
CommandLineParser parser(argc, argv,"");
String cascade_path,model_path,images_path, video_path;
if(!parseArguments(argc, argv, parser,cascade_path,model_path,video_path))
return -1;
face_cascade.load(cascade_path);
FacemarkLBF::Params params;
params.model_filename = model_path;
params.cascade_face = cascade_path;
Ptr<Facemark> facemark = FacemarkLBF::create(params);
facemark->setFaceDetector(myDetector);
facemark->loadModel(params.model_filename.c_str());
VideoCapture capture(video_path);
Mat frame;
if( !capture.isOpened() ){
printf("Error when reading vide\n");
return 0;
}
Mat img;
String text;
char buff[255];
double fittime;
int nfaces;
std::vector<Rect> rects,rects_scaled;
std::vector<std::vector<Point2f> > landmarks;
CascadeClassifier cc(params.cascade_face.c_str());
namedWindow( "w", 1);
for( ; ; )
{
capture >> frame;
if(frame.empty())
break;
double __time__ = (double)getTickCount();
float scale = (float)(400.0/frame.cols);
resize(frame, img, Size((int)(frame.cols*scale), (int)(frame.rows*scale)));
facemark->getFaces(img, rects);
rects_scaled.clear();
for(int j=0;j<(int)rects.size();j++){
rects_scaled.push_back(Rect(
(int)(rects[j].x/scale),
(int)(rects[j].y/scale),
(int)(rects[j].width/scale),
(int)(rects[j].height/scale)));
}
rects = rects_scaled;
fittime=0;
nfaces = (int)rects.size();
if(rects.size()>0){
double newtime = (double)getTickCount();
facemark->fit(frame, rects, landmarks);
fittime = ((getTickCount() - newtime)/getTickFrequency());
for(int j=0;j<(int)rects.size();j++){
landmarks[j] = Mat(Mat(landmarks[j]));
drawFacemarks(frame, landmarks[j], Scalar(0,0,255));
}
}
double fps = (getTickFrequency()/(getTickCount() - __time__));
sprintf(buff, "faces: %i %03.2f fps, fit:%03.0f ms",nfaces,fps,fittime*1000);
text = buff;
putText(frame, text, Point(20,40), FONT_HERSHEY_PLAIN , 2.0,Scalar::all(255), 2, 8);
imshow("w", frame);
waitKey(1); // waits to display frame
}
waitKey(0); // key press to close window
}
bool myDetector( InputArray image, OutputArray ROIs, void * config ){
Mat gray;
std::vector<Rect> & faces = *(std::vector<Rect>*) ROIs.getObj();
faces.clear();
if(config!=0){
//do nothing
}
if(image.channels()>1){
cvtColor(image.getMat(),gray,CV_BGR2GRAY);
}else{
gray = image.getMat().clone();
}
equalizeHist( gray, gray );
face_cascade.detectMultiScale( gray, faces, 1.4, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}
bool parseArguments(int argc, char** argv, CommandLineParser & parser,
String & cascade,
String & model,
String & video
){
const String keys =
"{ @c cascade | | (required) path to the cascade model file for the face detector }"
"{ @m model | | (required) path to the trained model }"
"{ @v video | | (required) path input video}"
"{ help h usage ? | | facemark_lbf_fitting -cascade -model -video [-t]\n"
" example: facemark_lbf_fitting ../face_cascade.xml ../LBF.model ../video.mp4}"
;
parser = CommandLineParser(argc, argv,keys);
parser.about("hello");
if (parser.has("help")){
parser.printMessage();
return false;
}
cascade = String(parser.get<String>("cascade"));
model = String(parser.get<string>("model"));
video = String(parser.get<string>("video"));
if(cascade.empty() || model.empty() || video.empty() ){
std::cerr << "one or more required arguments are not found" << '\n';
cout<<"cascade : "<<cascade.c_str()<<endl;
cout<<"model : "<<model.c_str()<<endl;
cout<<"video : "<<video.c_str()<<endl;
parser.printMessage();
return false;
}
return true;
}
#include "opencv2/face.hpp"
#include "opencv2/core.hpp"
#include "precomp.hpp"
/*dataset parser*/
#include <fstream>
#include <sstream>
#include <string>
#include <stdlib.h> /* atoi */
namespace cv {
namespace face {
CParams::CParams(String s, double sf, int minN, Size minSz, Size maxSz){
cascade = s;
scaleFactor = sf;
minNeighbors = minN;
minSize = minSz;
maxSize = maxSz;
}
bool getFaces(InputArray image, OutputArray faces, void * parameters){
Mat gray;
std::vector<Rect> roi;
if(parameters!=0){
CParams * params = (CParams *)parameters;
cvtColor( image.getMat(), gray, CV_BGR2GRAY );
equalizeHist( gray, gray );
CascadeClassifier face_cascade;
if( !face_cascade.load( params->cascade ) ){ printf("--(!)Error loading face_cascade\n"); return false; };
face_cascade.detectMultiScale( gray, roi, params->scaleFactor, params->minNeighbors, 0|CV_HAAR_SCALE_IMAGE, params->minSize, params->maxSize);
Mat(roi).copyTo(faces);
return true;
}else{
return false;
}
}
bool loadDatasetList(String imageList, String groundTruth, std::vector<String> & images, std::vector<String> & landmarks){
std::string line;
/*clear the output containers*/
images.clear();
landmarks.clear();
/*open the files*/
std::ifstream infile;
infile.open(imageList.c_str(), std::ios::in);
std::ifstream ss_gt;
ss_gt.open(groundTruth.c_str(), std::ios::in);
if ((!infile) || !(ss_gt)) {
printf("No valid input file was given, please check the given filename.\n");
return false;
}
/*load the images path*/
while (getline (infile, line)){
images.push_back(line);
}
/*load the points*/
while (getline (ss_gt, line)){
landmarks.push_back(line);
}
return true;
}
bool loadTrainingData(String filename, std::vector<String> & images, OutputArray _facePoints, char delim, float offset){
std::string line;
std::string item;
std::vector<Point2f> pts;
std::vector<float> raw;
std::vector<std::vector<Point2f> > & facePoints =
*(std::vector<std::vector<Point2f> >*) _facePoints.getObj();
std::ifstream infile;
infile.open(filename.c_str(), std::ios::in);
if (!infile) {
std::string error_message = "No valid input file was given, please check the given filename.";
CV_Error(CV_StsBadArg, error_message);
}
/*clear the output containers*/
images.clear();
facePoints.clear();
/*the main loading process*/
while (getline (infile, line)){
std::istringstream ss(line); // string stream for the current line
/*pop the image path*/
getline (ss, item, delim);
images.push_back(item);
/*load all numbers*/
raw.clear();
while (getline (ss, item, delim)){
raw.push_back((float)atof(item.c_str()));
}
/*convert to opencv points*/
pts.clear();
for(unsigned i = 0;i< raw.size();i+=2){
pts.push_back(Point2f(raw[i]+offset,raw[i+1]+offset));
}
facePoints.push_back(pts);
} // main loading process
return true;
}
bool loadTrainingData(String imageList, String groundTruth, std::vector<String> & images, OutputArray _facePoints, float offset){
std::string line;
std::vector<Point2f> facePts;
std::vector<std::vector<Point2f> > & facePoints =
*(std::vector<std::vector<Point2f> >*) _facePoints.getObj();
/*clear the output containers*/
images.clear();
facePoints.clear();
/*load the images path*/
std::ifstream infile;
infile.open(imageList.c_str(), std::ios::in);
if (!infile) {
std::string error_message = "No valid input file was given, please check the given filename.";
CV_Error(CV_StsBadArg, error_message);
}
while (getline (infile, line)){
images.push_back(line);
}
/*load the points*/
std::ifstream ss_gt(groundTruth.c_str());
while (getline (ss_gt, line)){
facePts.clear();
loadFacePoints(line, facePts, offset);
facePoints.push_back(facePts);
}
return true;
}
bool loadFacePoints(String filename, OutputArray points, float offset){
std::vector<Point2f> & pts = *(std::vector<Point2f> *)points.getObj();
std::string line, item;
std::ifstream infile(filename.c_str());
/*pop the version*/
std::getline(infile, line);
CV_Assert(line.compare(0,7,"version")==0);
/*pop the number of points*/
std::getline(infile, line);
CV_Assert(line.compare(0,8,"n_points")==0);
/*get the number of points*/
std::string item_npts;
int npts;
std::istringstream linestream(line);
linestream>>item_npts>>npts;
/*pop out '{' character*/
std::getline(infile, line);
/*main process*/
int cnt = 0;
std::string x, y;
pts.clear();
while (std::getline(infile, line) && cnt<npts )
{
cnt+=1;
std::istringstream ss(line);
ss>>x>>y;
pts.push_back(Point2f((float)atof(x.c_str())+offset,(float)atof(y.c_str())+offset));
}
return true;
}
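/*
 Reconstructed from the checks above, loadFacePoints() expects the .pts
 annotation layout (the coordinate values below are just examples); every
 point is shifted by `offset` before being stored:

     version: 1
     n_points: 68
     {
     212.716603 499.771793
     230.232816 566.290071
     ...
     }
*/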
void drawFacemarks(InputOutputArray image, InputArray points, Scalar color){
Mat img = image.getMat();
std::vector<Point2f> pts = *(std::vector<Point2f>*)points.getObj();
for(size_t i=0;i<pts.size();i++){
circle(img, pts[i],3, color,-1);
}
} //drawPoints
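/*
 Usage sketch for drawFacemarks() (illustrative): draw one fitted shape in red.

     std::vector<Point2f> shape; // e.g. one element of the fit() output
     drawFacemarks(frame, shape, Scalar(0, 0, 255));
*/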
} /* namespace face */
} /* namespace cv */
/*
This file was part of GSoC Project: Facemark API for OpenCV
Final report: https://gist.github.com/kurnianggoro/74de9121e122ad0bd825176751d47ecc
Student: Laksono Kurnianggoro
Mentor: Delia Passalacqua
*/
#include "opencv2/face.hpp"
#include "precomp.hpp"
namespace cv {
namespace face {
/*
* Parameters
*/
FacemarkAAM::Params::Params(){
model_filename = "";
m = 200;
n = 10;
n_iter = 50;
verbose = true;
save_model = true;
scales.push_back(1.0);
max_m = 550;
max_n = 136;
texture_max_m = 145;
}
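/*
 The defaults above can be overridden before creating the model; a minimal
 sketch (the model filename is a placeholder):

     FacemarkAAM::Params params;
     params.model_filename = "aam.yaml";
     params.n_iter = 100;          // run more fitting iterations
     params.scales.push_back(2.0); // add a coarser pyramid scale
     Ptr<FacemarkAAM> facemark = FacemarkAAM::create(params);
*/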
FacemarkAAM::Config::Config(Mat rot, Point2f trans, float scaling,int scale_id){
R = rot.clone();
t = trans;
scale = scaling;
model_scale_idx = scale_id;
}
void FacemarkAAM::Params::read( const cv::FileNode& fn ){
*this = FacemarkAAM::Params();
if (!fn["model_filename"].empty()) fn["model_filename"] >> model_filename;
if (!fn["m"].empty()) fn["m"] >> m;
if (!fn["n"].empty()) fn["n"] >> m;
if (!fn["n_iter"].empty()) fn["n_iter"] >> m;
if (!fn["verbose"].empty()) fn["verbose"] >> m;
if (!fn["max_m"].empty()) fn["max_m"] >> m;
if (!fn["max_n"].empty()) fn["max_n"] >> m;
if (!fn["texture_max_m"].empty()) fn["texture_max_m"] >> m;
if (!fn["scales"].empty()) fn["scales"] >> m;
}
void FacemarkAAM::Params::write( cv::FileStorage& fs ) const{
fs << "model_filename" << model_filename;
fs << "m" << m;
fs << "n" << n;
fs << "n_iter" << n_iter;
fs << "verbose" << verbose;
fs << "max_m" << verbose;
fs << "max_n" << verbose;
fs << "texture_max_m" << verbose;
fs << "scales" << verbose;
}
class FacemarkAAMImpl : public FacemarkAAM {
public:
FacemarkAAMImpl( const FacemarkAAM::Params &parameters = FacemarkAAM::Params() );
void read( const FileNode& /*fn*/ );
void write( FileStorage& /*fs*/ ) const;
void saveModel(String fs);
void loadModel(String fs);
bool setFaceDetector(bool(*f)(InputArray , OutputArray, void * ));
bool getFaces( InputArray image ,OutputArray faces, void * extra_params);
bool getData(void * items);
protected:
bool fit( InputArray image, InputArray faces, InputOutputArray landmarks, void * runtime_params);//!< from many ROIs
bool fitImpl( const Mat image, std::vector<Point2f>& landmarks,const Mat R,const Point2f T,const float scale, const int sclIdx=0 );
bool addTrainingSample(InputArray image, InputArray landmarks);
void training(void* parameters);
Mat procrustes(std::vector<Point2f> , std::vector<Point2f> , Mat & , Scalar & , float & );
void calcMeanShape(std::vector<std::vector<Point2f> > ,std::vector<Point2f> & );
void procrustesAnalysis(std::vector<std::vector<Point2f> > , std::vector<std::vector<Point2f> > & , std::vector<Point2f> & );
inline Mat linearize(Mat );
inline Mat linearize(std::vector<Point2f> );
void getProjection(const Mat , Mat &, int );
void calcSimilarityEig(std::vector<Point2f> ,Mat , Mat & , Mat & );
Mat orthonormal(Mat );
void delaunay(std::vector<Point2f> , std::vector<Vec3i> & );
Mat createMask(std::vector<Point2f> , Rect );
Mat createTextureBase(std::vector<Point2f> , std::vector<Vec3i> , Rect , std::vector<std::vector<Point> > & );
Mat warpImage(const Mat ,const std::vector<Point2f> ,const std::vector<Point2f> ,
const std::vector<Vec3i> , const Rect , const std::vector<std::vector<Point> > );
template <class T>
Mat getFeature(const Mat , std::vector<int> map);
void createMaskMapping(const Mat mask, const Mat mask2, std::vector<int> & , std::vector<int> &, std::vector<int> &);
void warpUpdate(std::vector<Point2f> & shape, Mat delta, std::vector<Point2f> s0, Mat S, Mat Q, std::vector<Vec3i> triangles,std::vector<std::vector<int> > Tp);
Mat computeWarpParts(std::vector<Point2f> curr_shape,std::vector<Point2f> s0, Mat ds0, std::vector<Vec3i> triangles,std::vector<std::vector<int> > Tp);
void image_jacobian(const Mat gx, const Mat gy, const Mat Jx, const Mat Jy, Mat & G);
void gradient(const Mat M, Mat & gx, Mat & gy);
void createWarpJacobian(Mat S, Mat Q, std::vector<Vec3i> , Model::Texture & T, Mat & Wx_dp, Mat & Wy_dp, std::vector<std::vector<int> > & Tp);
std::vector<Mat> images;
std::vector<std::vector<Point2f> > facePoints;
FacemarkAAM::Params params;
FacemarkAAM::Model AAM;
bool(*faceDetector)(InputArray , OutputArray, void *);
bool isSetDetector;
private:
bool isModelTrained;
};
/*
* Constructor
*/
Ptr<FacemarkAAM> FacemarkAAM::create(const FacemarkAAM::Params &parameters){
return Ptr<FacemarkAAMImpl>(new FacemarkAAMImpl(parameters));
}
FacemarkAAMImpl::FacemarkAAMImpl( const FacemarkAAM::Params &parameters ) :
params( parameters )
{
isSetDetector =false;
isModelTrained = false;
}
void FacemarkAAMImpl::read( const cv::FileNode& fn ){
params.read( fn );
}
void FacemarkAAMImpl::write( cv::FileStorage& fs ) const {
params.write( fs );
}
bool FacemarkAAMImpl::setFaceDetector(bool(*f)(InputArray , OutputArray, void *)){
faceDetector = f;
isSetDetector = true;
return true;
}
bool FacemarkAAMImpl::getFaces( InputArray image , OutputArray roi, void * extra_params){
if(!isSetDetector){
return false;
}
if(extra_params!=0){
//do nothing
}
std::vector<Rect> faces;
faces.clear();
faceDetector(image.getMat(), faces, extra_params);
Mat(faces).copyTo(roi);
return true;
}
bool FacemarkAAMImpl::getData(void * items){
if(items==0){
return true;
}else{
Data * data = (Data*)items;
data->s0 = AAM.s0;
return true;
}
}
bool FacemarkAAMImpl::addTrainingSample(InputArray image, InputArray landmarks){
std::vector<Point2f> & _landmarks = *(std::vector<Point2f>*)landmarks.getObj();
images.push_back(image.getMat());
facePoints.push_back(_landmarks);
return true;
}
void FacemarkAAMImpl::training(void* parameters){
if(parameters!=0){/*do nothing*/}
if (images.size()<1) {
std::string error_message =
"Training data is not provided. Consider to add using addTrainingSample() function!";
CV_Error(CV_StsBadArg, error_message);
}
if(strcmp(params.model_filename.c_str(),"")==0 && params.save_model){
std::string error_message = "The model_filename parameter should be set!";
CV_Error(CV_StsBadArg, error_message);
}
std::vector<std::vector<Point2f> > normalized;
Mat erode_kernel = getStructuringElement(MORPH_RECT, Size(3,3), Point(1,1));
Mat image;
int param_max_m = params.max_m;//550;
int param_max_n = params.max_n;//136;
AAM.scales = params.scales;
AAM.textures.resize(AAM.scales.size());
/*-------------- A. Load the training data---------*/
procrustesAnalysis(facePoints, normalized,AAM.s0);
/*-------------- B. Create the shape model---------*/
Mat s0_lin = linearize(AAM.s0).t() ;
// linearize all shapes data, all x and then all y for each shape
Mat M;
for(unsigned i=0;i<normalized.size();i++){
M.push_back(linearize(normalized[i]).t()-s0_lin);
}
/* get PCA Projection vectors */
Mat S;
getProjection(M.t(),S,param_max_n);
/* Create similarity eig*/
calcSimilarityEig(AAM.s0,S,AAM.Q,AAM.S);
/* ----------C. Create the coordinate frame ------------*/
delaunay(AAM.s0,AAM.triangles);
for(size_t scale=0; scale<AAM.scales.size();scale++){
AAM.textures[scale].max_m = params.texture_max_m;//145;
if(params.verbose) printf("Training for scale %f ...\n", AAM.scales[scale]);
Mat s0_scaled_m = Mat(AAM.s0)/AAM.scales[scale]; // scale the shape
std::vector<Point2f> s0_scaled = s0_scaled_m.reshape(2); //convert to points
/*get the min and max of x and y coordinate*/
double min_x, max_x, min_y, max_y;
s0_scaled_m = s0_scaled_m.reshape(1);
Mat s0_scaled_x = s0_scaled_m.col(0);
Mat s0_scaled_y = s0_scaled_m.col(1);
minMaxIdx(s0_scaled_x, &min_x, &max_x);
minMaxIdx(s0_scaled_y, &min_y, &max_y);
std::vector<Point2f> base_shape = Mat(Mat(s0_scaled)-Scalar(min_x-2.0,min_y-2.0)).reshape(2);
AAM.textures[scale].base_shape = base_shape;
AAM.textures[scale].resolution = Rect(0,0,(int)ceil(max_x-min_x+3),(int)ceil(max_y-min_y+3));
Mat base_texture = createTextureBase(base_shape, AAM.triangles, AAM.textures[scale].resolution, AAM.textures[scale].textureIdx);
Mat mask1 = base_texture>0;
Mat mask2;
erode(mask1, mask1, erode_kernel);
erode(mask1, mask2, erode_kernel);
Mat warped;
std::vector<int> fe_map;
createMaskMapping(mask1,mask2, AAM.textures[scale].ind1, AAM.textures[scale].ind2,fe_map);//ok
/* ------------ Part D. Get textures -------------*/
Mat texture_feats, feat;
if(params.verbose) printf("(1/4) Feature extraction ...\n");
for(size_t i=0; i<images.size();i++){
if(params.verbose) printf("extract features from image #%i/%i\n", (int)(i+1), (int)images.size());
warped = warpImage(images[i],base_shape, facePoints[i], AAM.triangles, AAM.textures[scale].resolution,AAM.textures[scale].textureIdx);
feat = getFeature<uchar>(warped, AAM.textures[scale].ind1);
texture_feats.push_back(feat.t());
}
Mat T= texture_feats.t();
/* -------------- E. Create the texture model -----------------*/
reduce(T,AAM.textures[scale].A0,1, CV_REDUCE_AVG);
if(params.verbose) printf("(2/4) Compute the feature average ...\n");
Mat A0_mtx = repeat(AAM.textures[scale].A0,1,T.cols);
Mat textures_normalized = T - A0_mtx;
if(params.verbose) printf("(3/4) Projecting the features ...\n");
getProjection(textures_normalized, AAM.textures[scale].A ,param_max_m);
AAM.textures[scale].AA0 = getFeature<float>(AAM.textures[scale].A0, fe_map);
if(params.verbose) printf("(4/4) Extraction of the eroded face features ...\n");
Mat U_data, ud;
for(int i =0;i<AAM.textures[scale].A.cols;i++){
Mat c = AAM.textures[scale].A.col(i);
ud = getFeature<float>(c,fe_map);
U_data.push_back(ud.t());
}
Mat U = U_data.t();
AAM.textures[scale].AA = orthonormal(U);
} // scale
images.clear();
if(params.save_model){
if(params.verbose) printf("Saving the model\n");
saveModel(params.model_filename);
}
isModelTrained = true;
if(params.verbose) printf("Training is completed\n");
}
bool FacemarkAAMImpl::fit( InputArray image, InputArray roi, InputOutputArray _landmarks, void * runtime_params)
{
std::vector<Rect> & faces = *(std::vector<Rect> *)roi.getObj();
if(faces.size()<1) return false;
std::vector<std::vector<Point2f> > & landmarks =
*(std::vector<std::vector<Point2f> >*) _landmarks.getObj();
landmarks.resize(faces.size());
Mat img = image.getMat();
if(runtime_params!=0){
std::vector<Config> conf = *(std::vector<Config>*)runtime_params;
if (conf.size()!=faces.size()) {
std::string error_message =
"Number of faces and extra_parameters are different!";
CV_Error(CV_StsBadArg, error_message);
}
for(size_t i=0; i<conf.size();i++){
fitImpl(img, landmarks[i], conf[i].R,conf[i].t, conf[i].scale, conf[i].model_scale_idx);
}
}else{
Mat R = Mat::eye(2, 2, CV_32F);
Point2f t = Point2f((float)(img.cols/2.0),(float)(img.rows/2.0));
float scale = 1.0;
for(unsigned i=0; i<faces.size();i++){
fitImpl(img, landmarks[i], R,t, scale);
}
}
return true;
}
bool FacemarkAAMImpl::fitImpl( const Mat image, std::vector<Point2f>& landmarks, const Mat R, const Point2f T, const float scale, int _scl){
if (landmarks.size()>0)
landmarks.clear();
CV_Assert(isModelTrained);
int param_n = params.n, param_m = params.m;
int scl = _scl<(int)AAM.scales.size()?_scl:(int)AAM.scales.size();
/*variables*/
std::vector<Point2f> s0 = Mat(Mat(AAM.s0)/AAM.scales[scl]).reshape(2);
/*pre-computation*/
Mat S = Mat(AAM.S, Range::all(), Range(0,param_n>AAM.S.cols?AAM.S.cols:param_n)).clone(); // chop the shape data
std::vector<std::vector<int> > Tp;
Mat Wx_dp, Wy_dp;
createWarpJacobian(S, AAM.Q, AAM.triangles, AAM.textures[scl],Wx_dp, Wy_dp, Tp);
std::vector<Point2f> s0_init = Mat(Mat(R*scale*AAM.scales[scl]*Mat(Mat(s0).reshape(1)).t()).t()).reshape(2);
std::vector<Point2f> curr_shape = Mat(Mat(s0_init)+Scalar(T.x,T.y));
curr_shape = Mat(1.0/scale*Mat(curr_shape)).reshape(2);
Mat imgray;
Mat img;
if(image.channels()>1){
cvtColor(image,imgray,CV_BGR2GRAY);
}else{
imgray = image;
}
resize(imgray,img,Size(int(image.cols/scale),int(image.rows/scale)));// MATLAB uses bicubic interpolation and produces float values
/*chop the textures model*/
int maxCol = param_m;
if(AAM.textures[scl].A.cols<param_m)maxCol = AAM.textures[scl].A.cols;
if(AAM.textures[scl].AA.cols<maxCol)maxCol = AAM.textures[scl].AA.cols;
Mat A = Mat(AAM.textures[scl].A,Range(0,AAM.textures[scl].A.rows), Range(0,maxCol)).clone();
Mat AA = Mat(AAM.textures[scl].AA,Range(0,AAM.textures[scl].AA.rows), Range(0,maxCol)).clone();
/*iteratively update the fitting*/
Mat I, II, warped, c, gx, gy, Irec, Irec_feat, dc;
for(int t=0;t<params.n_iter;t++){
warped = warpImage(img,AAM.textures[scl].base_shape, curr_shape,
AAM.triangles,
AAM.textures[scl].resolution ,
AAM.textures[scl].textureIdx);
I = getFeature<uchar>(warped, AAM.textures[scl].ind1);
II = getFeature<uchar>(warped, AAM.textures[scl].ind2);
if(t==0){
c = A.t()*(I-AAM.textures[scl].A0); // slightly different from MATLAB, probably due to the datatype
}else{
c = c+dc;
}
Irec_feat = (AAM.textures[scl].A0+A*c);
Irec = Mat::zeros(AAM.textures[scl].resolution.width, AAM.textures[scl].resolution.height, CV_32FC1);
for(int j=0;j<(int)AAM.textures[scl].ind1.size();j++){
Irec.at<float>(AAM.textures[scl].ind1[j]) = Irec_feat.at<float>(j);
}
Mat irec = Irec.t();
gradient(irec, gx, gy);
Mat Jc;
image_jacobian(Mat(gx.t()).reshape(0,1).t(),Mat(gy.t()).reshape(0,1).t(),Wx_dp, Wy_dp,Jc);
Mat J;
std::vector<float> Irec_vec;
for(size_t j=0;j<AAM.textures[scl].ind2.size();j++){
J.push_back(Jc.row(AAM.textures[scl].ind2[j]));
Irec_vec.push_back(Irec.at<float>(AAM.textures[scl].ind2[j]));
}
/*compute Jfsic and Hfsic*/
Mat Jfsic = J - AA*(AA.t()*J);
Mat Hfsic = Jfsic.t()*Jfsic;
Mat iHfsic;
invert(Hfsic, iHfsic);
/*compute dp dq and dc*/
Mat dqp = iHfsic*Jfsic.t()*(II-AAM.textures[scl].AA0);
dc = AA.t()*(II-Mat(Irec_vec)-J*dqp);
warpUpdate(curr_shape, dqp, s0,S, AAM.Q, AAM.triangles,Tp);
}
landmarks = Mat(scale*Mat(curr_shape)).reshape(2);
return true;
}
void FacemarkAAMImpl::saveModel(String s){
FileStorage fs(s.c_str(),FileStorage::WRITE_BASE64);
fs << "AAM_tri" << AAM.triangles;
fs << "scales" << AAM.scales;
fs << "s0" << AAM.s0;
fs << "S" << AAM.S;
fs << "Q" << AAM.Q;
String x;
for(int i=0;i< (int)AAM.scales.size();i++){
x = cv::format("scale%i_max_m",i);
fs << x << AAM.textures[i].max_m;
x = cv::format("scale%i_resolution",i);
fs << x << AAM.textures[i].resolution;
x = cv::format("scale%i_textureIdx",i);
fs << x << AAM.textures[i].textureIdx;
x = cv::format("scale%i_base_shape",i);
fs << x << AAM.textures[i].base_shape;
x = cv::format("scale%i_A",i);
fs << x << AAM.textures[i].A;
x = cv::format("scale%i_A0",i);
fs << x << AAM.textures[i].A0;
x = cv::format("scale%i_AA",i);
fs << x << AAM.textures[i].AA;
x = cv::format("scale%i_AA0",i);
fs << x << AAM.textures[i].AA0;
x = cv::format("scale%i_ind1",i);
fs << x << AAM.textures[i].ind1;
x = cv::format("scale%i_ind2",i);
fs << x << AAM.textures[i].ind2;
}
fs.release();
if(params.verbose) printf("The model is successfully saved! \n");
}
void FacemarkAAMImpl::loadModel(String s){
FileStorage fs(s.c_str(),FileStorage::READ);
String x;
fs["AAM_tri"] >> AAM.triangles;
fs["scales"] >> AAM.scales;
fs["s0"] >> AAM.s0;
fs["S"] >> AAM.S;
fs["Q"] >> AAM.Q;
AAM.textures.resize(AAM.scales.size());
for(int i=0;i< (int)AAM.scales.size();i++){
x = cv::format("scale%i_max_m",i);
fs[x] >> AAM.textures[i].max_m;
x = cv::format("scale%i_resolution",i);
fs[x] >> AAM.textures[i].resolution;
x = cv::format("scale%i_textureIdx",i);
fs[x] >> AAM.textures[i].textureIdx;
x = cv::format("scale%i_base_shape",i);
fs[x] >> AAM.textures[i].base_shape;
x = cv::format("scale%i_A",i);
fs[x] >> AAM.textures[i].A;
x = cv::format("scale%i_A0",i);
fs[x] >> AAM.textures[i].A0;
x = cv::format("scale%i_AA",i);
fs[x] >> AAM.textures[i].AA;
x = cv::format("scale%i_AA0",i);
fs[x] >> AAM.textures[i].AA0;
x = cv::format("scale%i_ind1",i);
fs[x] >> AAM.textures[i].ind1;
x = cv::format("scale%i_ind2",i);
fs[x] >> AAM.textures[i].ind2;
}
fs.release();
isModelTrained = true;
if(params.verbose) printf("the model has been loaded\n");
}
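/*
 End-to-end sketch for the AAM fitter (illustrative; the model file is a
 placeholder and myDetector is a user function with the setFaceDetector()
 signature). Passing 0 as runtime_params makes fit() use the default
 initialization computed from the image center.

     Ptr<FacemarkAAM> facemark = FacemarkAAM::create();
     facemark->setFaceDetector(myDetector);
     facemark->loadModel("aam.yaml");
     std::vector<Rect> faces;
     facemark->getFaces(img, faces, 0);
     std::vector<std::vector<Point2f> > landmarks;
     facemark->fit(img, faces, landmarks, 0);
*/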
Mat FacemarkAAMImpl::procrustes(std::vector<Point2f> P, std::vector<Point2f> Q, Mat & rot, Scalar & trans, float & scale){
// calculate average
Scalar mx = mean(P);
Scalar my = mean(Q);
// zero centered data
Mat X0 = Mat(P) - mx;
Mat Y0 = Mat(Q) - my;
// calculate magnitude
Mat Xs, Ys;
multiply(X0,X0,Xs);
multiply(Y0,Y0,Ys);
// calculate the sum
Mat sumXs, sumYs;
reduce(Xs,sumXs, 0, CV_REDUCE_SUM);
reduce(Ys,sumYs, 0, CV_REDUCE_SUM);
//calculate the normrnd
double normX = sqrt(Mat(sumXs.reshape(1)).at<float>(0)+Mat(sumXs.reshape(1)).at<float>(1));
double normY = sqrt(Mat(sumYs.reshape(1)).at<float>(0)+Mat(sumYs.reshape(1)).at<float>(1));
//normalization
X0 = X0/normX;
Y0 = Y0/normY;
//reshape, convert to 2D Matrix
Mat Xn=X0.reshape(1);
Mat Yn=Y0.reshape(1);
//calculate the covariance matrix
Mat M = Xn.t()*Yn;
// decompose
Mat U,S,Vt;
SVD::compute(M, S, U, Vt);
// extract the transformations
scale = (S.at<float>(0)+S.at<float>(1))*(float)normX/(float)normY;
rot = Vt.t()*U.t();
Mat muX(mx),mX; muX.pop_back();muX.pop_back();
Mat muY(my),mY; muY.pop_back();muY.pop_back();
muX.convertTo(mX,CV_32FC1);
muY.convertTo(mY,CV_32FC1);
Mat t = mX.t()-scale*mY.t()*rot;
trans[0] = t.at<float>(0);
trans[1] = t.at<float>(1);
// calculate the recovered form
Mat Qmat = Mat(Q).reshape(1);
return Mat(scale*Qmat*rot+trans).clone();
}
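/*
 In brief, procrustes() solves the orthogonal Procrustes problem: with both
 point sets centered and normalized, the covariance M = Xn^T * Yn is
 decomposed via SVD as M = U * diag(S) * Vt, giving the rotation
 rot = Vt^T * U^T, the scale (S(0)+S(1)) * normX / normY, and the translation
 that maps the mean of Q onto the mean of P.
*/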
void FacemarkAAMImpl::procrustesAnalysis(std::vector<std::vector<Point2f> > shapes, std::vector<std::vector<Point2f> > & normalized, std::vector<Point2f> & new_mean){
std::vector<Scalar> mean_every_shape;
mean_every_shape.resize(shapes.size());
// calculate the mean of every shape
for(size_t i=0; i< shapes.size();i++){
mean_every_shape[i] = mean(shapes[i]);
}
//normalize every shape
normalized.clear();
for(size_t i=0; i< shapes.size();i++){
normalized.push_back((Mat)(Mat(shapes[i]) - mean_every_shape[i]));
}
// calculate the mean shape
std::vector<Point2f> mean_shape;
calcMeanShape(normalized, mean_shape);
// update the mean shape and normalized shapes iteratively
int maxIter = 100;
Mat R;
Scalar t;
float s;
Mat aligned;
for(int i=0;i<maxIter;i++){
// align
for(unsigned k=0;k< normalized.size();k++){
aligned=procrustes(mean_shape, normalized[k], R, t, s);
aligned.reshape(2).copyTo(normalized[k]);
}
//calc new mean
calcMeanShape(normalized, new_mean);
// align the new mean
aligned=procrustes(mean_shape, new_mean, R, t, s);
// update
aligned.reshape(2).copyTo(mean_shape);
}
}
void FacemarkAAMImpl::calcMeanShape(std::vector<std::vector<Point2f> > shapes,std::vector<Point2f> & mean){
mean.resize(shapes[0].size());
Point2f tmp;
for(unsigned i=0;i<shapes[0].size();i++){
tmp.x=0;
tmp.y=0;
for(unsigned k=0;k< shapes.size();k++){
tmp.x+= shapes[k][i].x;
tmp.y+= shapes[k][i].y;
}
tmp.x/=shapes.size();
tmp.y/=shapes.size();
mean[i] = tmp;
}
}
void FacemarkAAMImpl::getProjection(const Mat M, Mat & P, int n){
Mat U,S,Vt,S1, Ut;
int k;
if(M.rows < M.cols){
// SVD::compute(M*M.t(), S, U, Vt);
eigen(M*M.t(), S, Ut); U=Ut.t();
// take the smallest of: the number of non-zero eigenvalues,
// the requested dimension n, and the matrix rows/columns
// threshold(S,S1,0.00001,1,THRESH_BINARY);
k= S.rows; //countNonZero(S1);
if(k>n)k=n;
if(k>M.rows)k=M.rows;
if(k>M.cols)k=M.cols;
// keep only the first k eigenvector columns
U.colRange(0,k).copyTo(P);
}else{
// SVD::compute(M.t()*M, S, U, Vt);
eigen(M.t()*M, S, Ut);U=Ut.t();
// threshold(S,S1,0.00001,1,THRESH_BINARY);
k= S.rows; //countNonZero(S1);
if(k>n)k=n;
if(k>M.rows)k=M.rows;
if(k>M.cols)k=M.cols;
// keep only the first k eigenvalues
Mat D = Mat::zeros(k,k,CV_32FC1);
Mat diag = D.diag();
Mat s; pow(S,-0.5,s);
s(Range(0,k), Range::all()).copyTo(diag);
// keep only the first k eigenvector columns,
P = Mat(M*U.colRange(0,k)*D).clone();
}
}
Mat FacemarkAAMImpl::orthonormal(Mat Mo){
Mat M;
Mo.convertTo(M,CV_32FC1);
// TODO: float precision is only ~1e-7, but the MATLAB version uses thresh=2.2204e-16
float thresh = (float)2.2204e-6;
Mat O = Mat::zeros(M.rows, M.cols, CV_32FC1);
int k = 0; //storing index
Mat w,nv;
float n;
for(int i=0;i<M.cols;i++){
Mat v = M.col(i); // processed column to orthogonalize
// subtract projection over previous vectors
for(int j=0;j<k;j++){
Mat o=O.col(j);
w = v-o*(o.t()*v);
w.copyTo(v);
}
// only keep non-zero vectors
n = (float)norm(v);
if(n>thresh){
Mat ok=O.col(k);
// nv=v/n;
normalize(v,nv);
nv.copyTo(ok);
k+=1;
}
}
return O.colRange(0,k).clone();
}
void FacemarkAAMImpl::calcSimilarityEig(std::vector<Point2f> s0,Mat S, Mat & Q_orth, Mat & S_orth){
int npts = (int)s0.size();
Mat Q = Mat::zeros(2*npts,4,CV_32FC1);
Mat c0 = Q.col(0);
Mat c1 = Q.col(1);
Mat c2 = Q.col(2);
Mat c3 = Q.col(3);
/*c0 = s0(:)*/
Mat w = linearize(s0);
// w.convertTo(w, CV_64FC1);
w.copyTo(c0);
/*c1 = [-s0(npts:2*npts); s0(0:npts-1)]*/
Mat s0_mat = Mat(s0).reshape(1);
// s0_mat.convertTo(s0_mat, CV_64FC1);
Mat swapper = Mat::zeros(2,npts,CV_32FC1);
Mat s00 = s0_mat.col(0);
Mat s01 = s0_mat.col(1);
Mat sw0 = swapper.row(0);
Mat sw1 = swapper.row(1);
Mat(s00.t()).copyTo(sw1);
s01 = -s01;
Mat(s01.t()).copyTo(sw0);
Mat(swapper.reshape(1,2*npts)).copyTo(c1);
/*c2 - [ones(npts); zeros(npts)]*/
Mat ones = Mat::ones(1,npts,CV_32FC1);
Mat c2_mat = Mat::zeros(2,npts,CV_32FC1);
Mat c20 = c2_mat.row(0);
ones.copyTo(c20);
Mat(c2_mat.reshape(1,2*npts)).copyTo(c2);
/*c3 - [zeros(npts); ones(npts)]*/
Mat c3_mat = Mat::zeros(2,npts,CV_32FC1);
Mat c31 = c3_mat.row(1);
ones.copyTo(c31);
Mat(c3_mat.reshape(1,2*npts)).copyTo(c3);
Mat Qo = orthonormal(Q);
Mat all = Qo.t();
all.push_back(S.t());
Mat allOrth = orthonormal(all.t());
Q_orth = allOrth.colRange(0,4).clone();
S_orth = allOrth.colRange(4,allOrth.cols).clone();
}
inline Mat FacemarkAAMImpl::linearize(Mat s){ // all x values and then all y values
return Mat(s.reshape(1).t()).reshape(1,2*s.rows);
}
inline Mat FacemarkAAMImpl::linearize(std::vector<Point2f> s){ // all x values and then all y values
return linearize(Mat(s));
}
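/*
 Layout example for linearize(): the shape [(x1,y1),(x2,y2),(x3,y3)] becomes
 the 6x1 column [x1 x2 x3 y1 y2 y3]^T, i.e. all x coordinates first, then all
 y coordinates, matching the shape-model convention used in training().
*/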
void FacemarkAAMImpl::delaunay(std::vector<Point2f> s, std::vector<Vec3i> & triangles){
triangles.clear();
std::vector<int> idx;
std::vector<Vec6f> tp;
double min_x, max_x, min_y, max_y;
Mat S = Mat(s).reshape(1);
Mat s_x = S.col(0);
Mat s_y = S.col(1);
minMaxIdx(s_x, &min_x, &max_x);
minMaxIdx(s_y, &min_y, &max_y);
// TODO: set the rectangle as configurable parameter
Subdiv2D subdiv(Rect(-500,-500,1000,1000));
subdiv.insert(s);
int a,b;
subdiv.locate(s.back(),a,b);
idx.resize(b+1);
Point2f p;
for(unsigned i=0;i<s.size();i++){
subdiv.locate(s[i],a,b);
idx[b] = i;
}
int v1,v2,v3;
subdiv.getTriangleList(tp);
for(unsigned i=0;i<tp.size();i++){
Vec6f t = tp[i];
//accept only triangles whose vertices all lie inside the shape's bounding range
if(t[0]>=min_x && t[0]<=max_x && t[1]>=min_y && t[1]<=max_y
&& t[2]>=min_x && t[2]<=max_x && t[3]>=min_y && t[3]<=max_y
&& t[4]>=min_x && t[4]<=max_x && t[5]>=min_y && t[5]<=max_y
){
subdiv.locate(Point2f(t[0],t[1]),a,v1);
subdiv.locate(Point2f(t[2],t[3]),a,v2);
subdiv.locate(Point2f(t[4],t[5]),a,v3);
triangles.push_back(Vec3i(idx[v1],idx[v2],idx[v3]));
} //if
} // for
}
Mat FacemarkAAMImpl::createMask(std::vector<Point2f> base_shape, Rect res){
Mat mask = Mat::zeros(res.height, res.width, CV_8U);
std::vector<Point> hull;
std::vector<Point> shape;
Mat(base_shape).convertTo(shape, CV_32S);
convexHull(shape,hull);
fillConvexPoly(mask, &hull[0], (int)hull.size(), 255, 8 ,0);
return mask.clone();
}
Mat FacemarkAAMImpl::createTextureBase(std::vector<Point2f> shape, std::vector<Vec3i> triangles, Rect res, std::vector<std::vector<Point> > & textureIdx){
// the mask is CV_8U, so at most 255 triangles are supported
Mat mask = Mat::zeros(res.height, res.width, CV_8U);
std::vector<Point2f> p(3);
textureIdx.clear();
for(size_t i=0;i<triangles.size();i++){
p[0] = shape[triangles[i][0]];
p[1] = shape[triangles[i][1]];
p[2] = shape[triangles[i][2]];
std::vector<Point> polygon;
approxPolyDP(p,polygon, 1.0, true);
fillConvexPoly(mask, &polygon[0], (int)polygon.size(), (double)i+1,8,0 );
std::vector<Point> list;
for(int y=0;y<res.height;y++){
for(int x=0;x<res.width;x++){
if(mask.at<uchar>(y,x)==(uchar)(i+1)){
list.push_back(Point(x,y));
}
}
}
textureIdx.push_back(list);
}
return mask.clone();
}
Mat FacemarkAAMImpl::warpImage(
const Mat img, const std::vector<Point2f> target_shape,
const std::vector<Point2f> curr_shape, const std::vector<Vec3i> triangles,
const Rect res, const std::vector<std::vector<Point> > textureIdx)
{
// TODO: this part can be optimized; collect transformation pairs from all triangles first, then do a single remapping
Mat warped = Mat::zeros(res.height, res.width, CV_8U);
Mat warped2 = Mat::zeros(res.height, res.width, CV_8U);
Mat image,part, warped_part;
if(img.channels()>1){
cvtColor(img,image,CV_BGR2GRAY);
}else{
image = img;
}
Mat A,R,t;
A = Mat::zeros(2,3,CV_64F);
std::vector<Point2f> target(3),source(3);
std::vector<Point> polygon;
for(size_t i=0;i<triangles.size();i++){
target[0] = target_shape[triangles[i][0]];
target[1] = target_shape[triangles[i][1]];
target[2] = target_shape[triangles[i][2]];
source[0] = curr_shape[triangles[i][0]];
source[1] = curr_shape[triangles[i][1]];
source[2] = curr_shape[triangles[i][2]];
Mat target_mtx = Mat(target).reshape(1)-1.0;
Mat source_mtx = Mat(source).reshape(1)-1.0;
Mat U = target_mtx.col(0);
Mat V = target_mtx.col(1);
Mat X = source_mtx.col(0);
Mat Y = source_mtx.col(1);
double denominator = (target[1].x-target[0].x)*(target[2].y-target[0].y)-
(target[1].y-target[0].y)*(target[2].x-target[0].x);
// denominator = 1.0/denominator;
A.at<double>(0) = ((target[2].y-target[0].y)*(source[1].x-source[0].x)-
(target[1].y-target[0].y)*(source[2].x-source[0].x))/denominator;
A.at<double>(1) = ((target[1].x-target[0].x)*(source[2].x-source[0].x)-
(target[2].x-target[0].x)*(source[1].x-source[0].x))/denominator;
A.at<double>(2) =X.at<float>(0) + ((V.at<float>(0) * (U.at<float>(2) - U.at<float>(0)) - U.at<float>(0)*(V.at<float>(2) - V.at<float>(0))) * (X.at<float>(1) - X.at<float>(0)) + (U.at<float>(0) * (V.at<float>(1) - V.at<float>(0)) - V.at<float>(0)*(U.at<float>(1) - U.at<float>(0))) * (X.at<float>(2) - X.at<float>(0))) / denominator;
A.at<double>(3) =((V.at<float>(2) - V.at<float>(0)) * (Y.at<float>(1) - Y.at<float>(0)) - (V.at<float>(1) - V.at<float>(0)) * (Y.at<float>(2) - Y.at<float>(0))) / denominator;
A.at<double>(4) = ((U.at<float>(1) - U.at<float>(0)) * (Y.at<float>(2) - Y.at<float>(0)) - (U.at<float>(2) - U.at<float>(0)) * (Y.at<float>(1) - Y.at<float>(0))) / denominator;
A.at<double>(5) = Y.at<float>(0) + ((V.at<float>(0) * (U.at<float>(2) - U.at<float>(0)) - U.at<float>(0) * (V.at<float>(2) - V.at<float>(0))) * (Y.at<float>(1) - Y.at<float>(0)) + (U.at<float>(0) * (V.at<float>(1) - V.at<float>(0)) - V.at<float>(0)*(U.at<float>(1) - U.at<float>(0))) * (Y.at<float>(2) - Y.at<float>(0))) / denominator;
// A = getAffineTransform(target,source);
R=A.colRange(0,2);
t=A.colRange(2,3);
Mat pts_ori = Mat(textureIdx[i]).reshape(1);
Mat pts = pts_ori.t(); //matlab
Mat bx = pts_ori.col(0);
Mat by = pts_ori.col(1);
Mat base_ind = (by-1)*res.width+bx;
Mat pts_f;
pts.convertTo(pts_f,CV_64FC1);
pts_f.push_back(Mat::ones(1,(int)textureIdx[i].size(),CV_64FC1));
Mat trans = (A*pts_f).t();
Mat T; trans.convertTo(T, CV_32S); // this rounding makes the result slightly different from MATLAB
Mat mx = T.col(0);
Mat my = T.col(1);
Mat ind = (my-1)*image.cols+mx;
int maxIdx = image.rows*image.cols;
int idx;
for(int k=0;k<ind.rows;k++){
idx=ind.at<int>(k);
if(idx>=0 && idx<maxIdx){
warped.at<uchar>(base_ind.at<int>(k)) = (uchar)(image.at<uchar>(idx));
}
}
warped.copyTo(warped2);
}
return warped2.clone();
}
template <class T>
Mat FacemarkAAMImpl::getFeature(const Mat m, std::vector<int> map){
std::vector<float> feat;
Mat M = m.t();//matlab
for(size_t i=0;i<map.size();i++){
feat.push_back((float)M.at<T>(map[i]));
}
return Mat(feat).clone();
}
void FacemarkAAMImpl::createMaskMapping(const Mat m1, const Mat m2, std::vector<int> & ind1, std::vector<int> & ind2, std::vector<int> & ind3){
int cnt = 0, idx=0;
ind1.clear();
ind2.clear();
ind3.clear();
Mat mask = m1.t();//matlab
Mat mask2 = m2.t();//matlab
for(int i=0;i<mask.rows;i++){
for(int j=0;j<mask.cols;j++){
if(mask.at<uchar>(i,j)>0){
if(mask2.at<uchar>(i,j)>0){
ind2.push_back(idx);
ind3.push_back(cnt);
}
ind1.push_back(idx);
cnt +=1;
}
idx+=1;
} // j
} // i
}
void FacemarkAAMImpl::image_jacobian(const Mat gx, const Mat gy, const Mat Jx, const Mat Jy, Mat & G){
Mat Gx = repeat(gx,1,Jx.cols);
Mat Gy = repeat(gy,1,Jx.cols);
Mat G1,G2;
multiply(Gx,Jx,G1);
multiply(Gy,Jy,G2);
G=G1+G2;
}
void FacemarkAAMImpl::warpUpdate(std::vector<Point2f> & shape, Mat delta, std::vector<Point2f> s0, Mat S, Mat Q, std::vector<Vec3i> triangles,std::vector<std::vector<int> > Tp){
std::vector<Point2f> new_shape;
int nSimEig = 4;
/*get dr, dp and compute ds0*/
Mat dr = -Mat(delta, Range(0,nSimEig));
Mat dp = -Mat(delta, Range(nSimEig, delta.rows));
Mat ds0 = S*dp + Q*dr;
Mat ds0_mat = Mat::zeros((int)s0.size(),2, CV_32FC1);
Mat c0 = ds0_mat.col(0);
Mat c1 = ds0_mat.col(1);
Mat(ds0, Range(0,(int)s0.size())).copyTo(c0);
Mat(ds0, Range((int)s0.size(),(int)s0.size()*2)).copyTo(c1);
Mat s_new = computeWarpParts(shape,s0,ds0_mat, triangles, Tp);
Mat diff =linearize(Mat(s_new - Mat(s0).reshape(1)));
Mat r = Q.t()*diff;
Mat p = S.t()*diff;
Mat s = linearize(s0) +S*p + Q*r;
Mat(Mat(s.t()).reshape(0,2).t()).reshape(2).copyTo(shape);
}
Mat FacemarkAAMImpl::computeWarpParts(std::vector<Point2f> curr_shape,std::vector<Point2f> s0, Mat ds0, std::vector<Vec3i> triangles,std::vector<std::vector<int> > Tp){
std::vector<Point2f> new_shape;
std::vector<Point2f> ds = ds0.reshape(2);
float mx,my;
Mat A;
std::vector<Point2f> target(3),source(3);
std::vector<double> p(3);
p[2] = 1;
for(size_t i=0;i<s0.size();i++){
p[0] = s0[i].x + ds[i].x;
p[1] = s0[i].y + ds[i].y;
std::vector<Point2f> v;
std::vector<float>vx, vy;
for(size_t j=0;j<Tp[i].size();j++){
int idx = Tp[i][j];
target[0] = s0[triangles[idx][0]];
target[1] = s0[triangles[idx][1]];
target[2] = s0[triangles[idx][2]];
source[0] = curr_shape[triangles[idx][0]];
source[1] = curr_shape[triangles[idx][1]];
source[2] = curr_shape[triangles[idx][2]];
A = getAffineTransform(target,source);
Mat(A*Mat(p)).reshape(2).copyTo(v);
vx.push_back(v[0].x);
vy.push_back(v[0].y);
}// j
/*find the median*/
size_t n = vx.size()/2;
nth_element(vx.begin(), vx.begin()+n, vx.end());
mx = vx[n];
nth_element(vy.begin(), vy.begin()+n, vy.end());
my = vy[n];
new_shape.push_back(Point2f(mx,my));
} // s0.size()
return Mat(new_shape).reshape(1).clone();
}
void FacemarkAAMImpl::gradient(const Mat M, Mat & gx, Mat & gy){
gx = Mat::zeros(M.size(),CV_32FC1);
gy = Mat::zeros(M.size(),CV_32FC1);
/*gx*/
for(int i=0;i<M.rows;i++){
for(int j=0;j<M.cols;j++){
if(j>0 && j<M.cols-1){
gx.at<float>(i,j) = ((float)0.5)*(M.at<float>(i,j+1)-M.at<float>(i,j-1));
}else if (j==0){
gx.at<float>(i,j) = M.at<float>(i,j+1)-M.at<float>(i,j);
}else{
gx.at<float>(i,j) = M.at<float>(i,j)-M.at<float>(i,j-1);
}
}
}
/*gy*/
for(int i=0;i<M.rows;i++){
for(int j=0;j<M.cols;j++){
if(i>0 && i<M.rows-1){
gy.at<float>(i,j) = ((float)0.5)*(M.at<float>(i+1,j)-M.at<float>(i-1,j));
}else if (i==0){
gy.at<float>(i,j) = M.at<float>(i+1,j)-M.at<float>(i,j);
}else{
gy.at<float>(i,j) = M.at<float>(i,j)-M.at<float>(i-1,j);
}
}
}
}
void FacemarkAAMImpl::createWarpJacobian(Mat S, Mat Q, std::vector<Vec3i> triangles, Model::Texture & T, Mat & Wx_dp, Mat & Wy_dp, std::vector<std::vector<int> > & Tp){
std::vector<Point2f> base_shape = T.base_shape;
Rect resolution = T.resolution;
std::vector<std::vector<int> >triangles_on_a_point;
int npts = (int)base_shape.size();
Mat dW_dxdyt ;
/*get triangles for each point*/
std::vector<int> trianglesIdx;
triangles_on_a_point.resize(npts);
for(int i=0;i<(int)triangles.size();i++){
triangles_on_a_point[triangles[i][0]].push_back(i);
triangles_on_a_point[triangles[i][1]].push_back(i);
triangles_on_a_point[triangles[i][2]].push_back(i);
}
Tp = triangles_on_a_point;
/*calculate dW_dxdy*/
float v0x,v0y,v1x,v1y,v2x,v2y, denominator;
for(int k=0;k<npts;k++){
Mat acc = Mat::zeros(resolution.height, resolution.width, CV_32F);
/*for each triangle on k-th point*/
for(size_t i=0;i<triangles_on_a_point[k].size();i++){
int tId = triangles_on_a_point[k][i];
Vec3i v;
if(triangles[tId][0]==k ){
v=Vec3i(triangles[tId][0],triangles[tId][1],triangles[tId][2]);
}else if(triangles[tId][1]==k){
v=Vec3i(triangles[tId][1],triangles[tId][0],triangles[tId][2]);
}else{
v=Vec3i(triangles[tId][2],triangles[tId][0],triangles[tId][1]);
}
v0x = base_shape[v[0]].x;
v0y = base_shape[v[0]].y;
v1x = base_shape[v[1]].x;
v1y = base_shape[v[1]].y;
v2x = base_shape[v[2]].x;
v2y = base_shape[v[2]].y;
denominator = (v1x-v0x)*(v2y-v0y)-(v1y-v0y)*(v2x-v0x);
Mat pixels = Mat(T.textureIdx[tId]).reshape(1); // same, just different order
Mat p;
pixels.convertTo(p,CV_32F, 1.0,1.0); // MATLAB uses 1-based indexing, hence the +1 offset
Mat x = p.col(0);
Mat y = p.col(1);
Mat alpha = (x-v0x)*(v2y-v0y)-(y-v0y)*(v2x-v0x);
Mat beta = (v1x-v0x)*(y-v0y)-(v1y-v0y)*(x-v0x);
Mat res = 1.0 - alpha/denominator - beta/denominator; // same just different order
/*remap to image form*/
Mat dx = Mat::zeros(resolution.height, resolution.width, CV_32F);
for(int j=0;j<res.rows;j++){
dx.at<float>((int)(y.at<float>(j)-1.0), (int)(x.at<float>(j)-1.0)) = res.at<float>(j); // matlab use offset
};
acc = acc+dx;
}
Mat vectorized = Mat(acc.t()).reshape(0,1);
dW_dxdyt.push_back(vectorized.clone());
}// k
Mat dx_dp;
hconcat(Q, S, dx_dp);
Mat dW_dxdy = dW_dxdyt.t();
Wx_dp = dW_dxdy* Mat(dx_dp,Range(0,npts));
Wy_dp = dW_dxdy* Mat(dx_dp,Range(npts,2*npts));
} //createWarpJacobian
} /* namespace face */
} /* namespace cv */
/*
This file was part of GSoC Project: Facemark API for OpenCV
Final report: https://gist.github.com/kurnianggoro/74de9121e122ad0bd825176751d47ecc
Student: Laksono Kurnianggoro
Mentor: Delia Passalacqua
*/
#include "opencv2/face.hpp"
#include "precomp.hpp"
#include <fstream>
#include <cmath>
#include <ctime>
#include <cstdio>
#include <cassert>
#include <cstdarg>
namespace cv {
namespace face {
#define TIMER_BEGIN { double __time__ = (double)getTickCount();
#define TIMER_NOW ((getTickCount() - __time__) / getTickFrequency())
#define TIMER_END }
#define SIMILARITY_TRANSFORM(x, y, scale, rotate) do { \
double x_tmp = scale * (rotate(0, 0)*x + rotate(0, 1)*y); \
double y_tmp = scale * (rotate(1, 0)*x + rotate(1, 1)*y); \
x = x_tmp; y = y_tmp; \
} while(0)
FacemarkLBF::Params::Params(){
cascade_face = "";
shape_offset = 0.0;
n_landmarks = 68;
initShape_n = 10;
stages_n=5;
tree_n=6;
tree_depth=5;
bagging_overlap = 0.4;
model_filename = "";
save_model = true;
verbose = true;
seed = 0;
int _pupils[][6] = { { 36, 37, 38, 39, 40, 41 }, { 42, 43, 44, 45, 46, 47 } };
for (int i = 0; i < 6; i++) {
pupils[0].push_back(_pupils[0][i]);
pupils[1].push_back(_pupils[1][i]);
}
int _feats_m[] = { 500, 500, 500, 300, 300, 300, 200, 200, 200, 100 };
double _radius_m[] = { 0.3, 0.2, 0.15, 0.12, 0.10, 0.10, 0.08, 0.06, 0.06, 0.05 };
for (int i = 0; i < 10; i++) {
feats_m.push_back(_feats_m[i]);
radius_m.push_back(_radius_m[i]);
}
detectROI = Rect(-1,-1,-1,-1);
}
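/*
 Training-time configuration sketch (paths are placeholders):

     FacemarkLBF::Params params;
     params.cascade_face = "haarcascade_frontalface_alt.xml"; // used by the default detector
     params.model_filename = "lbf.model";                     // written when save_model is true
     params.n_landmarks = 68; // only 29- and 68-point markups are supported
     Ptr<FacemarkLBF> facemark = FacemarkLBF::create(params);
*/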
void FacemarkLBF::Params::read( const cv::FileNode& fn ){
*this = FacemarkLBF::Params();
if (!fn["verbose"].empty())
fn["verbose"] >> verbose;
}
void FacemarkLBF::Params::write( cv::FileStorage& fs ) const{
fs << "verbose" << verbose;
}
class FacemarkLBFImpl : public FacemarkLBF {
public:
FacemarkLBFImpl( const FacemarkLBF::Params &parameters = FacemarkLBF::Params() );
void read( const FileNode& /*fn*/ );
void write( FileStorage& /*fs*/ ) const;
void loadModel(String fs);
bool setFaceDetector(bool(*f)(InputArray , OutputArray, void * extra_params ));
bool getFaces( InputArray image , OutputArray faces, void * extra_params);
bool getData(void * items);
Params params;
protected:
bool fit( InputArray image, InputArray faces, InputOutputArray landmarks, void * runtime_params );//!< from many ROIs
bool fitImpl( const Mat image, std::vector<Point2f> & landmarks );//!< from a face
bool addTrainingSample(InputArray image, InputArray landmarks);
void training(void* parameters);
Rect getBBox(Mat &img, const Mat_<double> shape);
void prepareTrainingData(Mat img, std::vector<Point2f> facePoints,
std::vector<Mat> & cropped, std::vector<Mat> & shapes, std::vector<BBox> &boxes);
void data_augmentation(std::vector<Mat> &imgs, std::vector<Mat> &gt_shapes, std::vector<BBox> &bboxes);
Mat getMeanShape(std::vector<Mat> &gt_shapes, std::vector<BBox> &bboxes);
bool configFaceDetector();
bool defaultFaceDetector(const Mat image, std::vector<Rect> & faces);
CascadeClassifier face_cascade;
bool(*faceDetector)(InputArray , OutputArray, void * );
bool isSetDetector;
/*training data*/
std::vector<std::vector<Point2f> > data_facemarks; //original position
std::vector<Mat> data_faces; //face ROI
std::vector<BBox> data_boxes;
std::vector<Mat> data_shapes; //position in the face ROI
private:
bool isModelTrained;
/*---------------LBF Class---------------------*/
class LBF {
public:
void calcSimilarityTransform(const Mat &shape1, const Mat &shape2, double &scale, Mat &rotate);
std::vector<Mat> getDeltaShapes(std::vector<Mat> &gt_shapes, std::vector<Mat> &current_shapes,
std::vector<BBox> &bboxes, Mat &mean_shape);
double calcVariance(const Mat &vec);
double calcVariance(const std::vector<double> &vec);
double calcMeanError(std::vector<Mat> &gt_shapes, std::vector<Mat> &current_shapes, int landmark_n , std::vector<int> &left, std::vector<int> &right );
};
/*---------------RandomTree Class---------------------*/
class RandomTree : public LBF {
public:
RandomTree(){};
~RandomTree(){};
void initTree(int landmark_id, int depth, std::vector<int>, std::vector<double>);
void train(std::vector<Mat> &imgs, std::vector<Mat> &current_shapes, std::vector<BBox> &bboxes,
std::vector<Mat> &delta_shapes, Mat &mean_shape, std::vector<int> &index, int stage);
void splitNode(std::vector<cv::Mat> &imgs, std::vector<cv::Mat> &current_shapes, std::vector<BBox> &bboxes,
cv::Mat &delta_shapes, cv::Mat &mean_shape, std::vector<int> &root, int idx, int stage);
void write(FileStorage fs, int forestId, int i, int j);
void read(FileStorage fs, int forestId, int i, int j);
int depth;
int nodes_n;
int landmark_id;
cv::Mat_<double> feats;
std::vector<int> thresholds;
std::vector<int> params_feats_m;
std::vector<double> params_radius_m;
};
/*---------------RandomForest Class---------------------*/
class RandomForest : public LBF {
public:
RandomForest(){};
~RandomForest(){};
void initForest(int landmark_n, int trees_n, int tree_depth, double , std::vector<int>, std::vector<double>, bool);
void train(std::vector<cv::Mat> &imgs, std::vector<cv::Mat> &current_shapes, \
std::vector<BBox> &bboxes, std::vector<cv::Mat> &delta_shapes, cv::Mat &mean_shape, int stage);
Mat generateLBF(Mat &img, Mat &current_shape, BBox &bbox, Mat &mean_shape);
void write(FileStorage fs, int forestId);
void read(FileStorage fs, int forestId);
bool verbose;
int landmark_n;
int trees_n, tree_depth;
double overlap_ratio;
std::vector<std::vector<RandomTree> > random_trees;
std::vector<int> feats_m;
std::vector<double> radius_m;
};
/*---------------Regressor Class---------------------*/
class Regressor : public LBF {
protected:
struct feature_node{
int index;
double value;
};
public:
Regressor(){};
~Regressor(){};
void initRegressor(Params);
void trainRegressor(std::vector<cv::Mat> &imgs, std::vector<cv::Mat> &gt_shapes, \
std::vector<cv::Mat> &current_shapes, std::vector<BBox> &bboxes, \
cv::Mat &mean_shape, int start_from, Params );
Mat globalRegressionPredict(const Mat &lbf, int stage);
Mat predict(Mat &img, BBox &bbox);
void write(FileStorage fs, Params config);
void read(FileStorage fs, Params & config);
void globalRegressionTrain(
std::vector<Mat> &lbfs, std::vector<Mat> &delta_shapes,
int stage, Params config
);
Mat supportVectorRegression(
feature_node **x, double *y, int nsamples, int feat_size, bool verbose=0
);
int stages_n;
int landmark_n;
cv::Mat mean_shape;
std::vector<RandomForest> random_forests;
std::vector<cv::Mat> gl_regression_weights;
}; // LBF
Regressor regressor;
}; // class
/*
* Constructor
*/
Ptr<FacemarkLBF> FacemarkLBF::create(const FacemarkLBF::Params &parameters){
return Ptr<FacemarkLBFImpl>(new FacemarkLBFImpl(parameters));
}
FacemarkLBFImpl::FacemarkLBFImpl( const FacemarkLBF::Params &parameters )
{
isSetDetector =false;
isModelTrained = false;
params = parameters;
}
bool FacemarkLBFImpl::setFaceDetector(bool(*f)(InputArray , OutputArray, void * extra_params )){
faceDetector = f;
isSetDetector = true;
return true;
}
bool FacemarkLBFImpl::getFaces( InputArray image , OutputArray roi, void * extra_params){
if(!isSetDetector){
return false;
}
if(extra_params!=0){
//do nothing
}
std::vector<Rect> & faces = *(std::vector<Rect>*)roi.getObj();
faces.clear();
faceDetector(image.getMat(), faces, extra_params);
return true;
}
bool FacemarkLBFImpl::configFaceDetector(){
if(!isSetDetector){
/*check the cascade classifier file*/
std::ifstream infile;
infile.open(params.cascade_face.c_str(), std::ios::in);
if (!infile) {
std::string error_message = "The cascade classifier model is not found.";
CV_Error(CV_StsBadArg, error_message);
return false;
}
face_cascade.load(params.cascade_face.c_str());
}
return true;
}
bool FacemarkLBFImpl::defaultFaceDetector(const Mat image, std::vector<Rect> & faces){
Mat gray;
faces.clear();
if(image.channels()>1){
cvtColor(image,gray,CV_BGR2GRAY);
}else{
gray = image;
}
equalizeHist( gray, gray );
face_cascade.detectMultiScale( gray, faces, 1.05, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}
bool FacemarkLBFImpl::getData(void * items){
if(items!=0){
// do nothing
}
return true;
}
bool FacemarkLBFImpl::addTrainingSample(InputArray image, InputArray landmarks){
std::vector<Point2f> & _landmarks = *(std::vector<Point2f>*)landmarks.getObj();
configFaceDetector();
prepareTrainingData(image.getMat(), _landmarks, data_faces, data_shapes, data_boxes);
return true;
}
void FacemarkLBFImpl::training(void* parameters){
if(parameters!=0){/*do nothing*/}
if (data_faces.size()<1) {
std::string error_message =
"Training data is not provided. Consider to add using addTrainingSample() function!";
CV_Error(CV_StsBadArg, error_message);
}
if(strcmp(params.cascade_face.c_str(),"")==0
||(strcmp(params.model_filename.c_str(),"")==0 && params.save_model)
){
std::string error_message = "The parameter cascade_face and model_filename should be set!";
CV_Error(CV_StsBadArg, error_message);
}
// flip the image and swap the landmark position
data_augmentation(data_faces, data_shapes, data_boxes);
Mat mean_shape = getMeanShape(data_shapes, data_boxes);
int N = (int)data_faces.size();
int L = N*params.initShape_n;
std::vector<Mat> imgs(L), gt_shapes(L), current_shapes(L);
std::vector<BBox> bboxes(L);
RNG rng(params.seed);
for (int i = 0; i < N; i++) {
for (int j = 0; j < params.initShape_n; j++) {
int idx = i*params.initShape_n + j;
int k = 0;
do {
k = rng.uniform(0, N);
} while (k == i);
imgs[idx] = data_faces[i];
gt_shapes[idx] = data_shapes[i];
bboxes[idx] = data_boxes[i];
current_shapes[idx] = data_boxes[i].reproject(data_boxes[k].project(data_shapes[k]));
}
}
regressor.initRegressor(params);
regressor.trainRegressor(imgs, gt_shapes, current_shapes, bboxes, mean_shape, 0, params);
if(params.save_model){
FileStorage fs(params.model_filename.c_str(),FileStorage::WRITE_BASE64);
regressor.write(fs, params);
}
isModelTrained = true;
}
bool FacemarkLBFImpl::fit( InputArray image, InputArray roi, InputOutputArray _landmarks, void * runtime_params )
{
if(runtime_params!=0){
// do nothing
}
std::vector<Rect> & faces = *(std::vector<Rect> *)roi.getObj();
if(faces.size()<1) return false;
std::vector<std::vector<Point2f> > & landmarks =
*(std::vector<std::vector<Point2f> >*) _landmarks.getObj();
landmarks.resize(faces.size());
for(unsigned i=0; i<faces.size();i++){
params.detectROI = faces[i];
fitImpl(image.getMat(), landmarks[i]);
}
return true;
}
bool FacemarkLBFImpl::fitImpl( const Mat image, std::vector<Point2f>& landmarks){
if (landmarks.size()>0)
landmarks.clear();
if (!isModelTrained) {
std::string error_message = "The LBF model is not trained yet. Please provide a trained model.";
CV_Error(CV_StsBadArg, error_message);
}
Mat img;
if(image.channels()>1){
cvtColor(image,img,CV_BGR2GRAY);
}else{
img = image;
}
Rect box;
if (params.detectROI.width>0){
box = params.detectROI;
}else{
std::vector<Rect> rects;
if(!isSetDetector){
defaultFaceDetector(img, rects);
}else{
faceDetector(img, rects,0);
}
if (rects.size() == 0) return false; // failed to detect a face
box = rects[0];
}
double min_x, min_y, max_x, max_y;
min_x = std::max(0., (double)box.x - box.width / 2);
max_x = std::min(img.cols - 1., (double)box.x+box.width + box.width / 2);
min_y = std::max(0., (double)box.y - box.height / 2);
max_y = std::min(img.rows - 1., (double)box.y + box.height + box.height / 2);
double w = max_x - min_x;
double h = max_y - min_y;
BBox bbox(box.x - min_x, box.y - min_y, box.width, box.height);
Mat crop = img(Rect((int)min_x, (int)min_y, (int)w, (int)h)).clone();
Mat shape = regressor.predict(crop, bbox);
if(params.detectROI.width>0){
landmarks = Mat(shape.reshape(2)+Scalar(min_x, min_y));
params.detectROI.width = -1;
}else{
landmarks = Mat(shape.reshape(2)+Scalar(min_x, min_y));
}
return 1;
}
void FacemarkLBFImpl::read( const cv::FileNode& fn ){
params.read( fn );
}
void FacemarkLBFImpl::write( cv::FileStorage& fs ) const {
params.write( fs );
}
void FacemarkLBFImpl::loadModel(String s){
if(params.verbose) printf("loading data from : %s\n", s.c_str());
std::ifstream infile;
infile.open(s.c_str(), std::ios::in);
if (!infile) {
std::string error_message = "No valid input file was given, please check the given filename.";
CV_Error(CV_StsBadArg, error_message);
}
FileStorage fs(s.c_str(),FileStorage::READ);
regressor.read(fs, params);
isModelTrained = true;
}
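/*
 Detection-time sketch (illustrative; "lbf.model" is a placeholder): after
 loadModel(), fit() runs the cascaded regressor once per supplied ROI.

     Ptr<FacemarkLBF> facemark = FacemarkLBF::create();
     facemark->loadModel("lbf.model");
     std::vector<Rect> faces; // e.g. from CascadeClassifier::detectMultiScale
     std::vector<std::vector<Point2f> > landmarks;
     facemark->fit(img, faces, landmarks, 0);
*/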
Rect FacemarkLBFImpl::getBBox(Mat &img, const Mat_<double> shape) {
std::vector<Rect> rects;
if(!isSetDetector){
defaultFaceDetector(img, rects);
}else{
faceDetector(img, rects,0);
}
if (rects.size() == 0) return Rect(-1, -1, -1, -1);
double center_x=0, center_y=0, min_x, max_x, min_y, max_y;
min_x = shape(0, 0);
max_x = shape(0, 0);
min_y = shape(0, 1);
max_y = shape(0, 1);
for (int i = 0; i < shape.rows; i++) {
center_x += shape(i, 0);
center_y += shape(i, 1);
min_x = std::min(min_x, shape(i, 0));
max_x = std::max(max_x, shape(i, 0));
min_y = std::min(min_y, shape(i, 1));
max_y = std::max(max_y, shape(i, 1));
}
center_x /= shape.rows;
center_y /= shape.rows;
for (int i = 0; i < (int)rects.size(); i++) {
Rect r = rects[i];
if (max_x - min_x > r.width*1.5) continue;
if (max_y - min_y > r.height*1.5) continue;
if (abs(center_x - (r.x + r.width / 2)) > r.width / 2) continue;
if (abs(center_y - (r.y + r.height / 2)) > r.height / 2) continue;
return r;
}
return Rect(-1, -1, -1, -1);
}
void FacemarkLBFImpl::prepareTrainingData(Mat img, std::vector<Point2f> facePoints,
std::vector<Mat> & cropped, std::vector<Mat> & shapes, std::vector<BBox> &boxes)
{
if(img.channels()>1){
cvtColor(img,img,CV_BGR2GRAY);
}
Mat shape;
Mat _shape = Mat(facePoints).reshape(1);
Rect box = getBBox(img, _shape);
if(box.x != -1){
_shape.convertTo(shape, CV_64FC1);
Mat sx = shape.col(0);
Mat sy = shape.col(1);
double min_x, max_x, min_y, max_y;
minMaxIdx(sx, &min_x, &max_x);
minMaxIdx(sy, &min_y, &max_y);
min_x = std::max(0., min_x - (double)box.width / 2.);
max_x = std::min(img.cols - 1., max_x + (double)box.width / 2.);
min_y = std::max(0., min_y - (double)box.height / 2.);
max_y = std::min(img.rows - 1., max_y + (double)box.height / 2.);
double w = max_x - min_x;
double h = max_y - min_y;
shape = Mat(shape.reshape(2)-Scalar(min_x, min_y)).reshape(1);
boxes.push_back(BBox(box.x - min_x, box.y - min_y, box.width, box.height));
Mat crop = img(Rect((int)min_x, (int)min_y, (int)w, (int)h)).clone();
cropped.push_back(crop);
shapes.push_back(shape);
} // if box is valid
} // prepareTrainingData
void FacemarkLBFImpl::data_augmentation(std::vector<Mat> &imgs, std::vector<Mat> &gt_shapes, std::vector<BBox> &bboxes) {
int N = (int)imgs.size();
imgs.reserve(2 * N);
gt_shapes.reserve(2 * N);
bboxes.reserve(2 * N);
for (int i = 0; i < N; i++) {
Mat img_flipped;
Mat_<double> gt_shape_flipped(gt_shapes[i].size());
flip(imgs[i], img_flipped, 1);
int w = img_flipped.cols - 1;
// int h = img_flipped.rows - 1;
for (int k = 0; k < gt_shapes[i].rows; k++) {
gt_shape_flipped(k, 0) = w - gt_shapes[i].at<double>(k, 0);
gt_shape_flipped(k, 1) = gt_shapes[i].at<double>(k, 1);
}
int x_b, y_b, w_b, h_b;
x_b = w - (int)bboxes[i].x - (int)bboxes[i].width;
y_b = (int)bboxes[i].y;
w_b = (int)bboxes[i].width;
h_b = (int)bboxes[i].height;
BBox bbox_flipped(x_b, y_b, w_b, h_b);
imgs.push_back(img_flipped);
gt_shapes.push_back(gt_shape_flipped);
bboxes.push_back(bbox_flipped);
}
#define SWAP(shape, i, j) do { \
double tmp = shape.at<double>(i-1, 0); \
shape.at<double>(i-1, 0) = shape.at<double>(j-1, 0); \
shape.at<double>(j-1, 0) = tmp; \
tmp = shape.at<double>(i-1, 1); \
shape.at<double>(i-1, 1) = shape.at<double>(j-1, 1); \
shape.at<double>(j-1, 1) = tmp; \
} while(0)
if (params.n_landmarks == 29) {
for (int i = N; i < (int)gt_shapes.size(); i++) {
SWAP(gt_shapes[i], 1, 2);
SWAP(gt_shapes[i], 3, 4);
SWAP(gt_shapes[i], 5, 7);
SWAP(gt_shapes[i], 6, 8);
SWAP(gt_shapes[i], 13, 15);
SWAP(gt_shapes[i], 9, 10);
SWAP(gt_shapes[i], 11, 12);
SWAP(gt_shapes[i], 17, 18);
SWAP(gt_shapes[i], 14, 16);
SWAP(gt_shapes[i], 19, 20);
SWAP(gt_shapes[i], 23, 24);
}
}
else if (params.n_landmarks == 68) {
for (int i = N; i < (int)gt_shapes.size(); i++) {
for (int k = 1; k <= 8; k++) SWAP(gt_shapes[i], k, 18 - k);
for (int k = 18; k <= 22; k++) SWAP(gt_shapes[i], k, 45 - k);
for (int k = 37; k <= 40; k++) SWAP(gt_shapes[i], k, 83 - k);
SWAP(gt_shapes[i], 42, 47);
SWAP(gt_shapes[i], 41, 48);
SWAP(gt_shapes[i], 32, 36);
SWAP(gt_shapes[i], 33, 35);
for (int k = 49; k <= 51; k++) SWAP(gt_shapes[i], k, 104 - k);
SWAP(gt_shapes[i], 60, 56);
SWAP(gt_shapes[i], 59, 57);
SWAP(gt_shapes[i], 61, 65);
SWAP(gt_shapes[i], 62, 64);
SWAP(gt_shapes[i], 68, 66);
}
}
else {
printf("Wrong n_landmarks, currently only 29 and 68 landmark points are supported");
}
#undef SWAP
}
FacemarkLBFImpl::BBox::BBox() {}
FacemarkLBFImpl::BBox::~BBox() {}
FacemarkLBFImpl::BBox::BBox(double _x, double _y, double w, double h) {
x = _x;
y = _y;
width = w;
height = h;
x_center = x + w / 2.;
y_center = y + h / 2.;
x_scale = w / 2.;
y_scale = h / 2.;
}
// Project an absolute shape into the relative frame bound to this bbox
Mat FacemarkLBFImpl::BBox::project(const Mat &shape) const {
Mat_<double> res(shape.rows, shape.cols);
const Mat_<double> &shape_ = (Mat_<double>)shape;
for (int i = 0; i < shape.rows; i++) {
res(i, 0) = (shape_(i, 0) - x_center) / x_scale;
res(i, 1) = (shape_(i, 1) - y_center) / y_scale;
}
return res;
}
// Reproject a relative shape back to absolute coordinates using this bbox
Mat FacemarkLBFImpl::BBox::reproject(const Mat &shape) const {
Mat_<double> res(shape.rows, shape.cols);
const Mat_<double> &shape_ = (Mat_<double>)shape;
for (int i = 0; i < shape.rows; i++) {
res(i, 0) = shape_(i, 0)*x_scale + x_center;
res(i, 1) = shape_(i, 1)*y_scale + y_center;
}
return res;
}
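/*
 Numeric example of the two mappings above (values picked for illustration):
 for BBox(0, 0, 100, 100), x_center = y_center = 50 and x_scale = y_scale = 50,
 so project() sends the absolute point (75, 25) to (0.5, -0.5) and reproject()
 maps it back; shapes are thus handled in a bbox-relative [-1, 1] frame.
*/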
Mat FacemarkLBFImpl::getMeanShape(std::vector<Mat> &gt_shapes, std::vector<BBox> &bboxes) {
int N = (int)gt_shapes.size();
Mat mean_shape = Mat::zeros(gt_shapes[0].rows, 2, CV_64FC1);
for (int i = 0; i < N; i++) {
mean_shape += bboxes[i].project(gt_shapes[i]);
}
mean_shape /= N;
return mean_shape;
}
// Similarity Transform, project shape2 to shape1
// p1 ~= scale * rotate * p2, p1 and p2 are vector in math
void FacemarkLBFImpl::LBF::calcSimilarityTransform(const Mat &shape1, const Mat &shape2, double &scale, Mat &rotate) {
Mat_<double> rotate_(2, 2);
double x1_center, y1_center, x2_center, y2_center;
x1_center = cv::mean(shape1.col(0))[0];
y1_center = cv::mean(shape1.col(1))[0];
x2_center = cv::mean(shape2.col(0))[0];
y2_center = cv::mean(shape2.col(1))[0];
Mat temp1(shape1.rows, shape1.cols, CV_64FC1);
Mat temp2(shape2.rows, shape2.cols, CV_64FC1);
temp1.col(0) = shape1.col(0) - x1_center;
temp1.col(1) = shape1.col(1) - y1_center;
temp2.col(0) = shape2.col(0) - x2_center;
temp2.col(1) = shape2.col(1) - y2_center;
Mat_<double> covar1, covar2;
Mat_<double> mean1, mean2;
calcCovarMatrix(temp1, covar1, mean1, CV_COVAR_COLS);
calcCovarMatrix(temp2, covar2, mean2, CV_COVAR_COLS);
double s1 = sqrt(cv::norm(covar1));
double s2 = sqrt(cv::norm(covar2));
scale = s1 / s2;
temp1 /= s1;
temp2 /= s2;
double num = temp1.col(1).dot(temp2.col(0)) - temp1.col(0).dot(temp2.col(1));
double den = temp1.col(0).dot(temp2.col(0)) + temp1.col(1).dot(temp2.col(1));
double normed = sqrt(num*num + den*den);
double sin_theta = num / normed;
double cos_theta = den / normed;
rotate_(0, 0) = cos_theta; rotate_(0, 1) = -sin_theta;
rotate_(1, 0) = sin_theta; rotate_(1, 1) = cos_theta;
rotate = rotate_;
}
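/*
 The transform above is the least-squares similarity alignment of shape2 onto
 shape1: after centering both shapes, scale = s1/s2 comes from the covariance
 norms, and sin(theta)/cos(theta) follow from the normalized cross and dot
 products of the centered coordinates, so that p1 ~= scale * rotate * p2.
*/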
// Get relative delta_shapes for predicting target
std::vector<Mat> FacemarkLBFImpl::LBF::getDeltaShapes(std::vector<Mat> &gt_shapes, std::vector<Mat> &current_shapes,
std::vector<BBox> &bboxes, Mat &mean_shape) {
std::vector<Mat> delta_shapes;
int N = (int)gt_shapes.size();
delta_shapes.resize(N);
double scale;
Mat_<double> rotate;
for (int i = 0; i < N; i++) {
delta_shapes[i] = bboxes[i].project(gt_shapes[i]) - bboxes[i].project(current_shapes[i]);
calcSimilarityTransform(mean_shape, bboxes[i].project(current_shapes[i]), scale, rotate);
// delta_shapes[i] = scale * delta_shapes[i] * rotate.t(); // the result is better without this part
}
return delta_shapes;
}
double FacemarkLBFImpl::LBF::calcVariance(const Mat &vec) {
double m1 = cv::mean(vec)[0];
double m2 = cv::mean(vec.mul(vec))[0];
double variance = m2 - m1*m1;
return variance;
}
double FacemarkLBFImpl::LBF::calcVariance(const std::vector<double> &vec) {
if (vec.size() == 0) return 0.;
Mat_<double> vec_(vec);
double m1 = cv::mean(vec_)[0];
double m2 = cv::mean(vec_.mul(vec_))[0];
double variance = m2 - m1*m1;
return variance;
}
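// Mean landmark error over all samples, normalized by the inter-pupillary
// distance; `left` and `right` hold the landmark indices of the two eyes.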
double FacemarkLBFImpl::LBF::calcMeanError(std::vector<Mat> &gt_shapes, std::vector<Mat> &current_shapes, int landmark_n , std::vector<int> &left, std::vector<int> &right ) {
int N = (int)gt_shapes.size();
double e = 0;
// every train data
for (int i = 0; i < N; i++) {
const Mat_<double> &gt_shape = (Mat_<double>)gt_shapes[i];
const Mat_<double> &current_shape = (Mat_<double>)current_shapes[i];
double x1, y1, x2, y2;
x1 = x2 = y1 = y2 = 0;
for (int j = 0; j < (int)left.size(); j++) {
x1 += gt_shape(left[j], 0);
y1 += gt_shape(left[j], 1);
}
for (int j = 0; j < (int)right.size(); j++) {
x2 += gt_shape(right[j], 0);
y2 += gt_shape(right[j], 1);
}
x1 /= left.size(); y1 /= left.size();
x2 /= right.size(); y2 /= right.size();
double pupils_distance = sqrt((x2 - x1)*(x2 - x1) + (y2 - y1)*(y2 - y1));
// every landmark
double e_ = 0;
for (int j = 0; j < landmark_n; j++) {
e_ += norm(gt_shape.row(j) - current_shape.row(j));
}
e += e_ / pupils_distance;
}
e /= N*landmark_n;
return e;
}
/*---------------RandomTree Implementation---------------------*/
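// Each random tree splits the samples on pixel-intensity-difference features
// sampled within a per-stage radius around its landmark; at every node the
// candidate feature/threshold pair that maximizes the variance reduction of
// the shape residuals is kept (see splitNode).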
void FacemarkLBFImpl::RandomTree::initTree(int _landmark_id, int _depth, std::vector<int> feats_m, std::vector<double> radius_m) {
landmark_id = _landmark_id;
depth = _depth;
nodes_n = 1 << depth;
feats = Mat::zeros(nodes_n, 4, CV_64FC1);
thresholds.resize(nodes_n);
params_feats_m = feats_m;
params_radius_m = radius_m;
}
void FacemarkLBFImpl::RandomTree::train(std::vector<Mat> &imgs, std::vector<Mat> &current_shapes, std::vector<BBox> &bboxes,
std::vector<Mat> &delta_shapes, Mat &mean_shape, std::vector<int> &index, int stage) {
Mat_<double> delta_shapes_((int)delta_shapes.size(), 2);
for (int i = 0; i < (int)delta_shapes.size(); i++) {
delta_shapes_(i, 0) = delta_shapes[i].at<double>(landmark_id, 0);
delta_shapes_(i, 1) = delta_shapes[i].at<double>(landmark_id, 1);
}
splitNode(imgs, current_shapes, bboxes, delta_shapes_, mean_shape, index, 1, stage);
}
void FacemarkLBFImpl::RandomTree::splitNode(std::vector<Mat> &imgs, std::vector<Mat> &current_shapes, std::vector<BBox> &bboxes,
Mat &delta_shapes, Mat &mean_shape, std::vector<int> &root, int idx, int stage) {
int N = (int)root.size();
if (N == 0) {
thresholds[idx] = 0;
feats.row(idx).setTo(0);
std::vector<int> left, right;
// split left and right child in DFS
if (2 * idx < feats.rows / 2)
splitNode(imgs, current_shapes, bboxes, delta_shapes, mean_shape, left, 2 * idx, stage);
if (2 * idx + 1 < feats.rows / 2)
splitNode(imgs, current_shapes, bboxes, delta_shapes, mean_shape, right, 2 * idx + 1, stage);
return;
}
int feats_m = params_feats_m[stage];
double radius_m = params_radius_m[stage];
Mat_<double> candidate_feats(feats_m, 4);
RNG rng(getTickCount());
// generate feature pool
for (int i = 0; i < feats_m; i++) {
double x1, y1, x2, y2;
x1 = rng.uniform(-1., 1.); y1 = rng.uniform(-1., 1.);
x2 = rng.uniform(-1., 1.); y2 = rng.uniform(-1., 1.);
if (x1*x1 + y1*y1 > 1. || x2*x2 + y2*y2 > 1.) {
i--;
continue;
}
candidate_feats[i][0] = x1 * radius_m;
candidate_feats[i][1] = y1 * radius_m;
candidate_feats[i][2] = x2 * radius_m;
candidate_feats[i][3] = y2 * radius_m;
}
// calc features
Mat_<int> densities(feats_m, N);
for (int i = 0; i < N; i++) {
double scale;
Mat_<double> rotate;
const Mat_<double> &current_shape = (Mat_<double>)current_shapes[root[i]];
BBox &bbox = bboxes[root[i]];
Mat &img = imgs[root[i]];
calcSimilarityTransform(bbox.project(current_shape), mean_shape, scale, rotate);
for (int j = 0; j < feats_m; j++) {
double x1 = candidate_feats(j, 0);
double y1 = candidate_feats(j, 1);
double x2 = candidate_feats(j, 2);
double y2 = candidate_feats(j, 3);
SIMILARITY_TRANSFORM(x1, y1, scale, rotate);
SIMILARITY_TRANSFORM(x2, y2, scale, rotate);
x1 = x1*bbox.x_scale + current_shape(landmark_id, 0);
y1 = y1*bbox.y_scale + current_shape(landmark_id, 1);
x2 = x2*bbox.x_scale + current_shape(landmark_id, 0);
y2 = y2*bbox.y_scale + current_shape(landmark_id, 1);
x1 = max(0., min(img.cols - 1., x1)); y1 = max(0., min(img.rows - 1., y1));
x2 = max(0., min(img.cols - 1., x2)); y2 = max(0., min(img.rows - 1., y2));
densities(j, i) = (int)img.at<uchar>(int(y1), int(x1)) - (int)img.at<uchar>(int(y2), int(x2));
}
}
Mat_<int> densities_sorted;
cv::sort(densities, densities_sorted, SORT_EVERY_ROW + SORT_ASCENDING);
//select a feat which reduces maximum variance
double variance_all = (calcVariance(delta_shapes.col(0)) + calcVariance(delta_shapes.col(1)))*N;
double variance_reduce_max = 0;
int threshold = 0;
int feat_id = 0;
std::vector<double> left_x, left_y, right_x, right_y;
left_x.reserve(N); left_y.reserve(N);
right_x.reserve(N); right_y.reserve(N);
for (int j = 0; j < feats_m; j++) {
left_x.clear(); left_y.clear();
right_x.clear(); right_y.clear();
int threshold_ = densities_sorted(j, (int)(N*rng.uniform(0.05, 0.95)));
for (int i = 0; i < N; i++) {
if (densities(j, i) < threshold_) {
left_x.push_back(delta_shapes.at<double>(root[i], 0));
left_y.push_back(delta_shapes.at<double>(root[i], 1));
}
else {
right_x.push_back(delta_shapes.at<double>(root[i], 0));
right_y.push_back(delta_shapes.at<double>(root[i], 1));
}
}
double variance_ = (calcVariance(left_x) + calcVariance(left_y))*left_x.size() + \
(calcVariance(right_x) + calcVariance(right_y))*right_x.size();
double variance_reduce = variance_all - variance_;
if (variance_reduce > variance_reduce_max) {
variance_reduce_max = variance_reduce;
threshold = threshold_;
feat_id = j;
}
}
thresholds[idx] = threshold;
feats(idx, 0) = candidate_feats(feat_id, 0); feats(idx, 1) = candidate_feats(feat_id, 1);
feats(idx, 2) = candidate_feats(feat_id, 2); feats(idx, 3) = candidate_feats(feat_id, 3);
// generate left and right child
std::vector<int> left, right;
left.reserve(N);
right.reserve(N);
for (int i = 0; i < N; i++) {
if (densities(feat_id, i) < threshold) left.push_back(root[i]);
else right.push_back(root[i]);
}
// split left and right child in DFS
if (2 * idx < feats.rows / 2)
splitNode(imgs, current_shapes, bboxes, delta_shapes, mean_shape, left, 2 * idx, stage);
if (2 * idx + 1 < feats.rows / 2)
splitNode(imgs, current_shapes, bboxes, delta_shapes, mean_shape, right, 2 * idx + 1, stage);
}
void FacemarkLBFImpl::RandomTree::write(FileStorage fs, int k, int i, int j) {
String x;
x = cv::format("tree_%i_%i_%i",k,i,j);
fs << x << feats;
x = cv::format("thresholds_%i_%i_%i",k,i,j);
fs << x << thresholds;
}
void FacemarkLBFImpl::RandomTree::read(FileStorage fs, int k, int i, int j) {
String x;
x = cv::format("tree_%i_%i_%i",k,i,j);
fs[x] >> feats;
x = cv::format("thresholds_%i_%i_%i",k,i,j);
fs[x] >> thresholds;
}
/*---------------RandomForest Implementation---------------------*/
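// One forest per landmark: every tree is trained on an overlapping slice of
// the training data (the amount of sharing is set by overlap_ratio), which
// serves as a simple form of bagging.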
void FacemarkLBFImpl::RandomForest::initForest(
int _landmark_n,
int _trees_n,
int _tree_depth,
double _overlap_ratio,
std::vector<int>_feats_m,
std::vector<double>_radius_m,
bool verbose_mode
) {
trees_n = _trees_n;
landmark_n = _landmark_n;
tree_depth = _tree_depth;
overlap_ratio = _overlap_ratio;
feats_m = _feats_m;
radius_m = _radius_m;
verbose = verbose_mode;
random_trees.resize(landmark_n);
for (int i = 0; i < landmark_n; i++) {
random_trees[i].resize(trees_n);
for (int j = 0; j < trees_n; j++) random_trees[i][j].initTree(i, tree_depth, feats_m, radius_m);
}
}
void FacemarkLBFImpl::RandomForest::train(std::vector<Mat> &imgs, std::vector<Mat> &current_shapes, \
std::vector<BBox> &bboxes, std::vector<Mat> &delta_shapes, Mat &mean_shape, int stage) {
int N = (int)imgs.size();
int Q = int(N / ((1. - overlap_ratio) * trees_n));
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < landmark_n; i++) {
TIMER_BEGIN
std::vector<int> root;
for (int j = 0; j < trees_n; j++) {
int start = max(0, int(floor(j*Q - j*Q*overlap_ratio)));
int end = min(int(start + Q + 1), N);
int L = end - start;
root.resize(L);
for (int k = 0; k < L; k++) root[k] = start + k;
random_trees[i][j].train(imgs, current_shapes, bboxes, delta_shapes, mean_shape, root, stage);
}
if(verbose) printf("Train %2dth of %d landmark Done, it costs %.4lf s\n", i+1, landmark_n, TIMER_NOW);
TIMER_END
}
}
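// Binary feature extraction: each tree is traversed down to a leaf using its
// pixel-difference tests, and the reached leaf is encoded as a global sparse
// index; concatenating these over all landmarks and trees yields the local
// binary features (LBF) of the current shape.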
Mat FacemarkLBFImpl::RandomForest::generateLBF(Mat &img, Mat &current_shape, BBox &bbox, Mat &mean_shape) {
Mat_<int> lbf_feat(1, landmark_n*trees_n);
double scale;
Mat_<double> rotate;
calcSimilarityTransform(bbox.project(current_shape), mean_shape, scale, rotate);
int base = 1 << (tree_depth - 1);
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < landmark_n; i++) {
for (int j = 0; j < trees_n; j++) {
RandomTree &tree = random_trees[i][j];
int code = 0;
int idx = 1;
for (int k = 1; k < tree.depth; k++) {
double x1 = tree.feats(idx, 0);
double y1 = tree.feats(idx, 1);
double x2 = tree.feats(idx, 2);
double y2 = tree.feats(idx, 3);
SIMILARITY_TRANSFORM(x1, y1, scale, rotate);
SIMILARITY_TRANSFORM(x2, y2, scale, rotate);
x1 = x1*bbox.x_scale + current_shape.at<double>(i, 0);
y1 = y1*bbox.y_scale + current_shape.at<double>(i, 1);
x2 = x2*bbox.x_scale + current_shape.at<double>(i, 0);
y2 = y2*bbox.y_scale + current_shape.at<double>(i, 1);
x1 = max(0., min(img.cols - 1., x1)); y1 = max(0., min(img.rows - 1., y1));
x2 = max(0., min(img.cols - 1., x2)); y2 = max(0., min(img.rows - 1., y2));
int density = img.at<uchar>(int(y1), int(x1)) - img.at<uchar>(int(y2), int(x2));
code <<= 1;
if (density < tree.thresholds[idx]) {
idx = 2 * idx;
}
else {
code += 1;
idx = 2 * idx + 1;
}
}
lbf_feat(i*trees_n + j) = (i*trees_n + j)*base + code;
}
}
return lbf_feat;
}
void FacemarkLBFImpl::RandomForest::write(FileStorage fs, int k) {
for (int i = 0; i < landmark_n; i++) {
for (int j = 0; j < trees_n; j++) {
random_trees[i][j].write(fs,k,i,j);
}
}
}
void FacemarkLBFImpl::RandomForest::read(FileStorage fs,int k)
{
for (int i = 0; i < landmark_n; i++) {
for (int j = 0; j < trees_n; j++) {
random_trees[i][j].initTree(i, tree_depth, feats_m, radius_m);
random_trees[i][j].read(fs,k,i,j);
}
}
}
/*---------------Regressor Implementation---------------------*/
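// Cascaded regression: at every stage the random forest produces the LBF of
// the current shape, a global linear regressor (trained via the SVR solver
// below) maps it to a shape increment, and the increment is applied in the
// mean-shape frame through the inverse similarity transform.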
void FacemarkLBFImpl::Regressor::initRegressor(Params config) {
stages_n = config.stages_n;
landmark_n = config.n_landmarks;
random_forests.resize(stages_n);
for (int i = 0; i < stages_n; i++)
random_forests[i].initForest(
config.n_landmarks,
config.tree_n,
config.tree_depth,
config.bagging_overlap,
config.feats_m,
config.radius_m,
config.verbose
);
mean_shape.create(config.n_landmarks, 2, CV_64FC1);
gl_regression_weights.resize(stages_n);
int F = config.n_landmarks * config.tree_n * (1 << (config.tree_depth - 1));
for (int i = 0; i < stages_n; i++) {
gl_regression_weights[i].create(2 * config.n_landmarks, F, CV_64FC1);
}
}
void FacemarkLBFImpl::Regressor::trainRegressor(std::vector<Mat> &imgs, std::vector<Mat> &gt_shapes, std::vector<Mat> &current_shapes,
std::vector<BBox> &bboxes, Mat &mean_shape_, int start_from, Params config) {
assert(start_from >= 0 && start_from < stages_n);
mean_shape = mean_shape_;
int N = (int)imgs.size();
for (int k = start_from; k < stages_n; k++) {
std::vector<Mat> delta_shapes = getDeltaShapes(gt_shapes, current_shapes, bboxes, mean_shape);
// train random forest
if(config.verbose) printf("training random forest %dth of %d stages, ",k+1, stages_n);
TIMER_BEGIN
random_forests[k].train(imgs, current_shapes, bboxes, delta_shapes, mean_shape, k);
if(config.verbose) printf("costs %.4lf s\n", TIMER_NOW);
TIMER_END
// generate lbf of every train data
std::vector<Mat> lbfs;
lbfs.resize(N);
for (int i = 0; i < N; i++) {
lbfs[i] = random_forests[k].generateLBF(imgs[i], current_shapes[i], bboxes[i], mean_shape);
}
// global regression
if(config.verbose) printf("start train global regression of %dth stage\n", k);
TIMER_BEGIN
globalRegressionTrain(lbfs, delta_shapes, k, config);
if(config.verbose) printf("end of train global regression of %dth stage, costs %.4lf s\n", k, TIMER_NOW);
TIMER_END
// update current_shapes
double scale;
Mat rotate;
for (int i = 0; i < N; i++) {
Mat delta_shape = globalRegressionPredict(lbfs[i], k);
calcSimilarityTransform(bboxes[i].project(current_shapes[i]), mean_shape, scale, rotate);
current_shapes[i] = bboxes[i].reproject(bboxes[i].project(current_shapes[i]) + scale * delta_shape * rotate.t());
}
// calc mean error
double e = calcMeanError(gt_shapes, current_shapes, config.n_landmarks, config.pupils[0],config.pupils[1]);
if(config.verbose) printf("Train %dth stage Done with Error = %lf\n", k, e);
} // for int k
}//Regressor::training
void FacemarkLBFImpl::Regressor::globalRegressionTrain(
std::vector<Mat> &lbfs, std::vector<Mat> &delta_shapes,
int stage, Params config
) {
int N = (int)lbfs.size();
int M = lbfs[0].cols;
int F = config.n_landmarks*config.tree_n*(1 << (config.tree_depth - 1));
int landmark_n_ = delta_shapes[0].rows;
feature_node **X = (feature_node **)malloc(N * sizeof(feature_node *));
double **Y = (double **)malloc(landmark_n_ * 2 * sizeof(double *));
for (int i = 0; i < N; i++) {
X[i] = (feature_node *)malloc((M + 1) * sizeof(feature_node));
for (int j = 0; j < M; j++) {
X[i][j].index = lbfs[i].at<int>(0, j) + 1; // index starts from 1
X[i][j].value = 1;
}
X[i][M].index = -1;
X[i][M].value = -1;
}
for (int i = 0; i < landmark_n_; i++) {
Y[2 * i] = (double *)malloc(N*sizeof(double));
Y[2 * i + 1] = (double *)malloc(N*sizeof(double));
for (int j = 0; j < N; j++) {
Y[2 * i][j] = delta_shapes[j].at<double>(i, 0);
Y[2 * i + 1][j] = delta_shapes[j].at<double>(i, 1);
}
}
double *y;
Mat weights;
for(int i=0; i< landmark_n_; i++){
y = Y[2 * i];
Mat wx = supportVectorRegression(X,y,N,F,config.verbose);
weights.push_back(wx);
y = Y[2 * i + 1];
Mat wy = supportVectorRegression(X,y,N,F,config.verbose);
weights.push_back(wy);
}
gl_regression_weights[stage] = weights;
// free
for (int i = 0; i < N; i++) free(X[i]);
for (int i = 0; i < 2 * landmark_n_; i++) free(Y[i]);
free(X);
free(Y);
} // Regressor:globalRegressionTrain
/*adapted from the liblinear library*/
/* TODO: change feature_node to MAT
* as the index in feature_node is only used for "counter"
*/
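// Dual coordinate descent solver for L2-regularized support vector regression
// (with epsilon p = 0 here), exploiting the sparse binary structure of the
// LBF input vectors.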
Mat FacemarkLBFImpl::Regressor::supportVectorRegression(
feature_node **x, double *y, int nsamples, int feat_size, bool verbose
){
#define GETI(i) ((int) y[i])
std::vector<double> w;
w.resize(feat_size);
RNG rng(0);
int l = nsamples; // n-samples
double C = 1./(double)nsamples;
double p = 0;
int w_size = feat_size; //feat size
double eps = 0.00001;
int i, s, iter = 0;
int max_iter = 1000;
int active_size = l;
std::vector<int> index(l);
double d, G, H;
double Gmax_old = HUGE_VAL;
double Gmax_new, Gnorm1_new;
double Gnorm1_init = -1.0; // Gnorm1_init is initialized at the first iteration
std::vector<double> beta(l);
std::vector<double> QD(l);
double lambda[1], upper_bound[1];
lambda[0] = 0.5/C;
upper_bound[0] = HUGE_VAL;
// Initial beta can be set here. Note that
// -upper_bound <= beta[i] <= upper_bound
for(i=0; i<l; i++)
beta[i] = 0;
for(i=0; i<w_size; i++)
w[i] = 0;
for(i=0; i<l; i++){
QD[i] = 0;
feature_node *xi = x[i];
while(xi->index != -1){
double val = xi->value;
QD[i] += val*val;
w[xi->index-1] += beta[i]*val;
xi++;
}
index[i] = i;
}
while(iter < max_iter){
Gmax_new = 0;
Gnorm1_new = 0;
for(i=0; i<active_size; i++){
int j = i+rng.uniform(0,RAND_MAX)%(active_size-i);
swap(index[i], index[j]);
}
for(s=0; s<active_size; s++){
i = index[s];
G = -y[i] + lambda[GETI(i)]*beta[i];
H = QD[i] + lambda[GETI(i)];
feature_node *xi = x[i];
while(xi->index != -1){
int ind = xi->index-1;
double val = xi->value;
G += val*w[ind];
xi++;
}
double Gp = G+p;
double Gn = G-p;
double violation = 0;
if(beta[i] == 0){
if(Gp < 0)
violation = -Gp;
else if(Gn > 0)
violation = Gn;
else if(Gp>Gmax_old && Gn<-Gmax_old){
active_size--;
swap(index[s], index[active_size]);
s--;
continue;
}
}else if(beta[i] >= upper_bound[GETI(i)]){
if(Gp > 0)
violation = Gp;
else if(Gp < -Gmax_old){
active_size--;
swap(index[s], index[active_size]);
s--;
continue;
}
}else if(beta[i] <= -upper_bound[GETI(i)]){
if(Gn < 0)
violation = -Gn;
else if(Gn > Gmax_old){
active_size--;
swap(index[s], index[active_size]);
s--;
continue;
}
}else if(beta[i] > 0)
violation = fabs(Gp);
else
violation = fabs(Gn);
Gmax_new = max(Gmax_new, violation);
Gnorm1_new += violation;
// obtain Newton direction d
if(Gp < H*beta[i])
d = -Gp/H;
else if(Gn > H*beta[i])
d = -Gn/H;
else
d = -beta[i];
if(fabs(d) < 1.0e-12)
continue;
double beta_old = beta[i];
beta[i] = min(max(beta[i]+d, -upper_bound[GETI(i)]), upper_bound[GETI(i)]);
d = beta[i]-beta_old;
if(d != 0){
xi = x[i];
while(xi->index != -1){
w[xi->index-1] += d*xi->value;
xi++;
}
}
}// for s<active_size
if(iter == 0)
Gnorm1_init = Gnorm1_new;
iter++;
if(Gnorm1_new <= eps*Gnorm1_init){
if(active_size == l)
break;
else{
active_size = l;
Gmax_old = HUGE_VAL;
continue;
}
}
Gmax_old = Gmax_new;
} //while <max_iter
if(verbose) printf("optimization finished, #iter = %d\n", iter);
if(iter >= max_iter && verbose)
printf("WARNING: reaching max number of iterations\n");
// calculate objective value
double v = 0;
int nSV = 0;
for(i=0; i<w_size; i++)
v += w[i]*w[i];
v = 0.5*v;
for(i=0; i<l; i++){
v += p*fabs(beta[i]) - y[i]*beta[i] + 0.5*lambda[GETI(i)]*beta[i]*beta[i];
if(beta[i] != 0)
nSV++;
}
if(verbose) printf("Objective value = %lf\n", v);
if(verbose) printf("nSV = %d\n",nSV);
return Mat(Mat(w).t()).clone();
}//end
Mat FacemarkLBFImpl::Regressor::globalRegressionPredict(const Mat &lbf, int stage) {
const Mat_<double> &weight = (Mat_<double>)gl_regression_weights[stage];
Mat_<double> delta_shape(weight.rows / 2, 2);
const double *w_ptr = NULL;
const int *lbf_ptr = lbf.ptr<int>(0);
//#pragma omp parallel for num_threads(2) private(w_ptr)
for (int i = 0; i < delta_shape.rows; i++) {
w_ptr = weight.ptr<double>(2 * i);
double y = 0;
for (int j = 0; j < lbf.cols; j++) y += w_ptr[lbf_ptr[j]];
delta_shape(i, 0) = y;
w_ptr = weight.ptr<double>(2 * i + 1);
y = 0;
for (int j = 0; j < lbf.cols; j++) y += w_ptr[lbf_ptr[j]];
delta_shape(i, 1) = y;
}
return delta_shape;
} // Regressor::globalRegressionPredict
Mat FacemarkLBFImpl::Regressor::predict(Mat &img, BBox &bbox) {
Mat current_shape = bbox.reproject(mean_shape);
double scale;
Mat rotate;
Mat lbf_feat;
for (int k = 0; k < stages_n; k++) {
// generate lbf
lbf_feat = random_forests[k].generateLBF(img, current_shape, bbox, mean_shape);
// update current_shapes
Mat delta_shape = globalRegressionPredict(lbf_feat, k);
delta_shape = delta_shape.reshape(0, landmark_n);
calcSimilarityTransform(bbox.project(current_shape), mean_shape, scale, rotate);
current_shape = bbox.reproject(bbox.project(current_shape) + scale * delta_shape * rotate.t());
}
return current_shape;
} // Regressor::predict
void FacemarkLBFImpl::Regressor::write(FileStorage fs, Params config) {
fs << "stages_n" << config.stages_n;
fs << "tree_n" << config.tree_n;
fs << "tree_depth" << config.tree_depth;
fs << "n_landmarks" << config.n_landmarks;
fs << "regressor_meanshape" <<mean_shape;
// every stages
String x;
for (int k = 0; k < config.stages_n; k++) {
if(config.verbose) printf("Write %dth stage\n", k);
random_forests[k].write(fs,k);
x = cv::format("weights_%i",k);
fs << x << gl_regression_weights[k];
}
}
void FacemarkLBFImpl::Regressor::read(FileStorage fs, Params & config){
fs["stages_n"] >> config.stages_n;
fs["tree_n"] >> config.tree_n;
fs["tree_depth"] >> config.tree_depth;
fs["n_landmarks"] >> config.n_landmarks;
stages_n = config.stages_n;
landmark_n = config.n_landmarks;
initRegressor(config);
fs["regressor_meanshape"] >> mean_shape;
// every stages
String x;
for (int k = 0; k < stages_n; k++) {
random_forests[k].initForest(
config.n_landmarks,
config.tree_n,
config.tree_depth,
config.bagging_overlap,
config.feats_m,
config.radius_m,
config.verbose
);
random_forests[k].read(fs,k);
x = cv::format("weights_%i",k);
fs[x] >> gl_regression_weights[k];
}
}
#undef TIMER_BEGIN
#undef TIMER_NOW
#undef TIMER_END
#undef SIMILARITY_TRANSFORM
} /* namespace face */
} /* namespace cv */
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
This file was part of GSoC Project: Facemark API for OpenCV
Final report: https://gist.github.com/kurnianggoro/74de9121e122ad0bd825176751d47ecc
Student: Laksono Kurnianggoro
Mentor: Delia Passalacqua
*/
#include "test_precomp.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/face.hpp"
#include <vector>
#include <string>
using namespace std;
using namespace cv;
using namespace cv::face;
TEST(CV_Face_Facemark, test_utilities) {
string image_file = cvtest::findDataFile("face/david1.jpg", true);
string annotation_file = cvtest::findDataFile("face/david1.pts", true);
string cascade_filename =
cvtest::findDataFile("cascadeandhog/cascades/lbpcascade_frontalface.xml", true);
std::vector<Point2f> facial_points;
EXPECT_NO_THROW(loadFacePoints(annotation_file,facial_points));
Mat img = imread(image_file);
EXPECT_NO_THROW(drawFacemarks(img, facial_points, Scalar(0,0,255)));
CParams params(cascade_filename);
std::vector<Rect> faces;
EXPECT_TRUE(getFaces(img, faces, &params));
}
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
This file was part of GSoC Project: Facemark API for OpenCV
Final report: https://gist.github.com/kurnianggoro/74de9121e122ad0bd825176751d47ecc
Student: Laksono Kurnianggoro
Mentor: Delia Passalacqua
*/
/*Usage:
download the opencv_extra from https://github.com/opencv/opencv_extra
and then execute the following commands:
export OPENCV_TEST_DATA_PATH=/home/opencv/opencv_extra/testdata
<build_folder>/bin/opencv_test_face
*/
#include "test_precomp.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/face.hpp"
#include <vector>
#include <string>
using namespace std;
using namespace cv;
using namespace cv::face;
CascadeClassifier face_detector;
static bool customDetector( InputArray image, OutputArray ROIs, void * config = 0 ){
Mat gray;
std::vector<Rect> & faces = *(std::vector<Rect>*) ROIs.getObj();
faces.clear();
if(config!=0){
//do nothing
}
if(image.channels()>1){
cvtColor(image.getMat(),gray,CV_BGR2GRAY);
}else{
gray = image.getMat().clone();
}
equalizeHist( gray, gray );
face_detector.detectMultiScale( gray, faces, 1.4, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}
TEST(CV_Face_FacemarkAAM, can_create_default) {
FacemarkAAM::Params params;
Ptr<Facemark> facemark;
EXPECT_NO_THROW(facemark = FacemarkAAM::create(params));
EXPECT_FALSE(facemark.empty());
}
TEST(CV_Face_FacemarkAAM, can_set_custom_detector) {
string cascade_filename =
cvtest::findDataFile("cascadeandhog/cascades/lbpcascade_frontalface.xml", true);
EXPECT_TRUE(face_detector.load(cascade_filename));
Ptr<Facemark> facemark = FacemarkAAM::create();
EXPECT_TRUE(facemark->setFaceDetector(customDetector));
}
TEST(CV_Face_FacemarkAAM, test_workflow) {
string i1 = cvtest::findDataFile("face/david1.jpg", true);
string p1 = cvtest::findDataFile("face/david1.pts", true);
string i2 = cvtest::findDataFile("face/david2.jpg", true);
string p2 = cvtest::findDataFile("face/david2.pts", true);
std::vector<string> images_train;
images_train.push_back(i1);
images_train.push_back(i2);
std::vector<String> points_train;
points_train.push_back(p1);
points_train.push_back(p2);
string cascade_filename =
cvtest::findDataFile("cascadeandhog/cascades/lbpcascade_frontalface.xml", true);
FacemarkAAM::Params params;
params.n = 1;
params.m = 1;
params.verbose = false;
params.save_model = false;
Ptr<Facemark> facemark = FacemarkAAM::create(params);
Mat image;
std::vector<Point2f> landmarks;
for(size_t i=0;i<images_train.size();i++){
image = imread(images_train[i].c_str());
EXPECT_TRUE(loadFacePoints(points_train[i].c_str(),landmarks));
EXPECT_TRUE(landmarks.size()>0);
EXPECT_TRUE(facemark->addTrainingSample(image, landmarks));
}
EXPECT_NO_THROW(facemark->training());
/*------------ Fitting Part ---------------*/
facemark->setFaceDetector(customDetector);
string image_filename = cvtest::findDataFile("face/david1.jpg", true);
image = imread(image_filename.c_str());
EXPECT_TRUE(!image.empty());
std::vector<Rect> rects;
std::vector<std::vector<Point2f> > facial_points;
EXPECT_TRUE(facemark->getFaces(image, rects));
EXPECT_TRUE(rects.size()>0);
EXPECT_TRUE(facemark->fit(image, rects, facial_points));
EXPECT_TRUE(facial_points[0].size()>0);
/*------------ Test getData ---------------*/
FacemarkAAM::Data data;
EXPECT_TRUE(facemark->getData(&data));
EXPECT_TRUE(data.s0.size()>0);
}
/*
By downloading, copying, installing or using the software you agree to this
license. If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Copyright (C) 2013, OpenCV Foundation, all rights reserved.
Third party copyrights are property of their respective owners.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall copyright holders or contributors be liable for
any direct, indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
This file was part of GSoC Project: Facemark API for OpenCV
Final report: https://gist.github.com/kurnianggoro/74de9121e122ad0bd825176751d47ecc
Student: Laksono Kurnianggoro
Mentor: Delia Passalacqua
*/
/*Usage:
download the opencv_extra from https://github.com/opencv/opencv_extra
and then execute the following commands:
export OPENCV_TEST_DATA_PATH=/home/opencv/opencv_extra/testdata
<build_folder>/bin/opencv_test_face
*/
#include "test_precomp.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/face.hpp"
#include <vector>
#include <string>
using namespace std;
using namespace cv;
using namespace cv::face;
CascadeClassifier cascade_detector;
static bool myCustomDetector( InputArray image, OutputArray ROIs, void * config = 0 ){
Mat gray;
std::vector<Rect> & faces = *(std::vector<Rect>*) ROIs.getObj();
faces.clear();
if(config!=0){
//do nothing
}
if(image.channels()>1){
cvtColor(image.getMat(),gray,CV_BGR2GRAY);
}else{
gray = image.getMat().clone();
}
equalizeHist( gray, gray );
cascade_detector.detectMultiScale( gray, faces, 1.4, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}
TEST(CV_Face_FacemarkLBF, can_create_default) {
FacemarkLBF::Params params;
params.n_landmarks = 68;
Ptr<Facemark> facemark;
EXPECT_NO_THROW(facemark = FacemarkLBF::create(params));
EXPECT_FALSE(facemark.empty());
}
TEST(CV_Face_FacemarkLBF, can_set_custom_detector) {
string cascade_filename =
cvtest::findDataFile("cascadeandhog/cascades/lbpcascade_frontalface.xml", true);
EXPECT_TRUE(cascade_detector.load(cascade_filename));
Ptr<Facemark> facemark = FacemarkLBF::create();
EXPECT_TRUE(facemark->setFaceDetector(myCustomDetector));
}
TEST(CV_Face_FacemarkLBF, test_workflow) {
string i1 = cvtest::findDataFile("face/david1.jpg", true);
string p1 = cvtest::findDataFile("face/david1.pts", true);
string i2 = cvtest::findDataFile("face/david2.jpg", true);
string p2 = cvtest::findDataFile("face/david2.pts", true);
std::vector<string> images_train;
images_train.push_back(i1);
images_train.push_back(i2);
std::vector<String> points_train;
points_train.push_back(p1);
points_train.push_back(p2);
string cascade_filename =
cvtest::findDataFile("cascadeandhog/cascades/lbpcascade_frontalface.xml", true);
FacemarkLBF::Params params;
params.cascade_face = cascade_filename;
params.verbose = false;
params.save_model = false;
Ptr<Facemark> facemark = FacemarkLBF::create(params);
Mat image;
std::vector<Point2f> landmarks;
for(size_t i=0;i<images_train.size();i++){
image = imread(images_train[i].c_str());
EXPECT_TRUE(loadFacePoints(points_train[i].c_str(),landmarks));
EXPECT_TRUE(landmarks.size()>0);
EXPECT_TRUE(facemark->addTrainingSample(image, landmarks));
}
EXPECT_NO_THROW(facemark->training());
/*------------ Fitting Part ---------------*/
cascade_detector.load(cascade_filename);
facemark->setFaceDetector(myCustomDetector);
string image_filename = cvtest::findDataFile("face/david1.jpg", true);
image = imread(image_filename.c_str());
EXPECT_TRUE(!image.empty());
std::vector<Rect> rects;
std::vector<std::vector<Point2f> > facial_points;
EXPECT_TRUE(facemark->getFaces(image, rects));
EXPECT_TRUE(rects.size()>0);
EXPECT_TRUE(facemark->fit(image, rects, facial_points));
EXPECT_TRUE(facial_points[0].size()>0);
}
TEST(CV_Face_FacemarkLBF, get_data) {
Ptr<Facemark> facemark = FacemarkLBF::create();
EXPECT_TRUE(facemark->getData());
}
Using the FacemarkAAM {#tutorial_facemark_aam}
==========================================================
Goals
----
In this tutorial you will learn how to:
- create an instance of FacemarkAAM
- train the AAM model
- fit a face image using FacemarkAAM
Preparation
--------
Before you continue with this tutorial, you should download a facial landmark detection dataset.
We suggest the LFPW dataset, which can be retrieved from <https://ibug.doc.ic.ac.uk/download/annotations/lfpw.zip>.
Make sure that the annotation format is supported by the API; the contents of an annotation file should look like the following snippet:
@code
version: 1
n_points: 68
{
212.716603 499.771793
230.232816 566.290071
...
}
@endcode
The next step is to create two text files containing the lists of image files and annotation files, respectively. Make sure that the image and annotation entries in both files appear in matching order. Furthermore, it is advised to use absolute paths instead of relative ones.
Example of creating the file lists on a Linux machine:
@code
ls $PWD/trainset/*.jpg > images_train.txt
ls $PWD/trainset/*.pts > annotation_train.txt
@endcode
Example content of images_train.txt:
@code
/home/user/lfpw/trainset/100032540_1.jpg
/home/user/lfpw/trainset/100040721_1.jpg
/home/user/lfpw/trainset/100040721_2.jpg
/home/user/lfpw/trainset/1002681492_1.jpg
@endcode
Example content of annotation_train.txt:
@code
/home/user/lfpw/trainset/100032540_1.pts
/home/user/lfpw/trainset/100040721_1.pts
/home/user/lfpw/trainset/100040721_2.pts
/home/user/lfpw/trainset/1002681492_1.pts
@endcode
Optionally, you can create similar files for the test set.
In this tutorial, a pre-trained model is not provided due to its large file size (~500MB). By following this tutorial, you will be able to obtain your own trained model within a few minutes.
Working with the AAM Algorithm
--------
The full working code is available in the face/samples/facemark_demo_aam.cpp file. This tutorial explains its most important parts.
-# <B>Creating an instance of the AAM algorithm</B>
@snippet face/samples/facemark_demo_aam.cpp instance_creation
Firstly, a parameter instance for the AAM algorithm is created. In this case, we modify the default list of scaling factors. By default, the only scaling factor used is 1.0 (no scaling). Here we add two more scaling factors, which makes the instance train two additional models at scales 2 and 4 (images 2 and 4 times smaller, for faster fitting). However, make sure that the scaling factor is not too large, since it would shrink the image so much that it loses the information needed for landmark detection.
Alternatively, you can override the default scales in a similar way to this example:
@code
std::vector<float>scales;
scales.push_back(1.5);
scales.push_back(2.4);
FacemarkAAM::Params params;
params.scales = scales;
@endcode
-# <B>Loading the dataset</B>
@snippet face/samples/facemark_demo_aam.cpp load_dataset
The dataset file lists are loaded into the program; the samples from the dataset are added to the trainer one by one in the next step.
-# <B>Adding the samples to the trainer</B>
@snippet face/samples/facemark_demo_aam.cpp add_samples
The images from the dataset list are loaded one by one, together with their corresponding annotation data, and each image/annotation pair is added to the trainer, as sketched below.
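For reference, here is a minimal sketch of these two steps, assuming the file lists prepared in the previous section and the `facemark` instance created earlier:
@code{.cpp}
std::vector<String> images_train, landmarks_train;
loadDatasetList("images_train.txt", "annotation_train.txt", images_train, landmarks_train);
Mat image;
std::vector<Point2f> facial_points;
for(size_t i=0; i<images_train.size(); i++){
    image = imread(images_train[i]);           // load one training image
    loadFacePoints(landmarks_train[i], facial_points); // and its annotation
    facemark->addTrainingSample(image, facial_points);
}
@endcode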
-# <B>Training process</B>
@snippet face/samples/facemark_demo_aam.cpp training
The training process is called using a single line of code. Make sure that all the required training samples are already added to the trainer.
-# <B>Preparation for fitting</B>
First of all, you need to load the list of test files.
@snippet face/samples/facemark_demo_aam.cpp load_test_images
Since the AAM needs initialization parameters (rotation, translation, and scaling), you need to declare variables to store this information, which is obtained using a custom function. The implementation of the getInitialFitting() function in this example is not optimal, so you may want to create your own function.
The initialization is obtained by comparing the base shape of the trained model with the current face image. The rotation is the angular difference between the line formed by the two eyes in the input face image and the same line in the base shape, while the scaling is the ratio of the eye-to-eye distance in the input image to that in the base shape.
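For illustration only, given the two eye centers detected in the input image (here called `eye_l` and `eye_r`, hypothetical names) and the corresponding eye centers of the base shape (`base_l`, `base_r`), the rotation and scaling could be estimated as follows:
@code{.cpp}
Point2f d_img  = eye_r - eye_l;    // eye line in the input image
Point2f d_base = base_r - base_l;  // eye line in the base shape
float rotation = atan2(d_img.y, d_img.x) - atan2(d_base.y, d_base.x); // angle difference
float scaling  = std::sqrt(d_img.dot(d_img) / d_base.dot(d_base));    // eye-distance ratio
@endcode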
-# <B>Fitting process</B>
The fitting process is started by detecting the face in a given image.
@snippet face/samples/facemark_demo_aam.cpp detect_face
If at least one face is found, the next step is to compute the initialization parameters. Since the getInitialFitting() function is not optimal, it may fail to find a pair of eyes in a given face; such faces are filtered out, and each element of the `conf` vector then holds the initialization parameters of one retained face.
@snippet face/samples/facemark_demo_aam.cpp get_initialization
Among the fitting parameters stored in the `conf` vector, the last one represents the ID of the scaling factor to use during fitting. In this example the fitting uses the largest scaling factor (4), which is expected to have the fastest computation time of the available scales. If the ID is larger than the number of trained scales in the model, the model with the largest scale ID is used.
The fitting process itself is simple: pass the corresponding image, a vector of `cv::Rect` representing the ROIs of all faces in the image, a container for the landmark points (the `landmarks` variable), and the configuration variables.
@snippet face/samples/facemark_demo_aam.cpp fitting_process
After the fitting process is finished, you can visualize the result using the `drawFacemarks` function.
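Putting these pieces together, the fitting call and the visualization might look like the following sketch (variable names are illustrative; `conf` is the vector of initialization parameters computed above and `rects` holds the filtered face ROIs):
@code{.cpp}
std::vector<std::vector<Point2f> > landmarks;
facemark->fit(image, rects, landmarks, (void*)&conf);
for(size_t j=0; j<landmarks.size(); j++){
    drawFacemarks(image, landmarks[j], Scalar(0,0,255)); // draw the fitted points
}
@endcode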
Adding a new algorithm to the Facemark API {#tutorial_facemark_add_algorithm}
==========================================================
Goals
----
In this tutorial you will learn how to:
- integrate a new facial landmark detection algorithm into the Facemark API
- compile a specific contrib module
- use extra parameters in a function
Explanation
-----------
- **Add the class header**
The class header for a new algorithm should be added to a new file in include/opencv2/face.
Here is a template you can use to integrate a new algorithm; change FacemarkNEW to a representative name for the new algorithm and save the file under a matching filename.
@code{.cpp}
class CV_EXPORTS_W FacemarkNEW : public Facemark {
public:
struct CV_EXPORTS Config {
Config();
/*read only parameters - just for example*/
double detect_thresh; //!< detection confidence threshold
double sigma; //!< another parameter
void read(const FileNode& /*fn*/);
void write(FileStorage& /*fs*/) const;
};
/*Builder and destructor*/
static Ptr<FacemarkNEW> create(const FacemarkNEW::Config &conf = FacemarkNEW::Config() );
virtual ~FacemarkNEW(){};
};
@endcode
- **Add the implementation code**
Create a new file in the source folder with name representing the new algorithm.
Here is the template that you can use.
@code{.cpp}
#include "opencv2/face.hpp"
#include "precomp.hpp"
namespace cv
{
FacemarkNEW::Config::Config(){
detect_thresh = 0.5;
sigma=0.2;
}
void FacemarkNEW::Config::read( const cv::FileNode& fn ){
*this = FacemarkNEW::Config();
if (!fn["detect_thresh"].empty())
fn["detect_thresh"] >> detect_thresh;
if (!fn["sigma"].empty())
fn["sigma"] >> sigma;
}
void FacemarkNEW::Config::write( cv::FileStorage& fs ) const{
fs << "detect_thresh" << detect_thresh;
fs << "sigma" << sigma;
}
/*implementation of the algorithm is in this class*/
class FacemarkNEWImpl : public FacemarkNEW {
public:
FacemarkNEWImpl( const FacemarkNEW::Config &conf = FacemarkNEW::Config() );
void read( const FileNode& /*fn*/ );
void write( FileStorage& /*fs*/ ) const;
void loadModel(String filename);
bool setFaceDetector(bool(*f)(InputArray , OutputArray, void * extra_params));
bool getFaces( InputArray image , OutputArray faces, void * extra_params);
protected:
bool addTrainingSample(InputArray image, InputArray landmarks);
void training();
bool fit(InputArray image, InputArray faces, InputOutputArray landmarks, void * runtime_params);
Config config; // configurations
bool isSetDetector; // true once a custom face detector has been set
/*proxy to the user defined face detector function*/
bool(*faceDetector)(InputArray , OutputArray, void * );
}; // class
Ptr<FacemarkNEW> FacemarkNEW::create(const FacemarkNEW::Config &conf){
return Ptr<FacemarkNEWImpl>(new FacemarkNEWImpl(conf));
}
FacemarkNEWImpl::FacemarkNEWImpl( const FacemarkNEW::Config &conf ) :
config( conf )
{
isSetDetector = false; // no custom detector yet
// other initialization
}
bool FacemarkNEWImpl::addTrainingSample(InputArray image, InputArray landmarks){
// pre-process and save the new training sample
return true;
}
void FacemarkNEWImpl::training(){
printf("training\n");
}
bool FacemarkNEWImpl::fit(
InputArray image,
InputArray faces,
InputOutputArray landmarks,
void * runtime_params)
{
if(runtime_params!=0){
// do something based on the extra parameters
}
printf("fitting\n");
return true;
}
void FacemarkNEWImpl::read( const cv::FileNode& fn ){
config.read( fn );
}
void FacemarkNEWImpl::write( cv::FileStorage& fs ) const {
config.write( fs );
}
void FacemarkNEWImpl::loadModel(String filename){
// load the model
}
bool FacemarkNEWImpl::setFaceDetector(bool(*f)(InputArray , OutputArray, void * extra_params )){
faceDetector = f;
isSetDetector = true;
return true;
}
bool FacemarkNEWImpl::getFaces( InputArray image , OutputArray roi, void * extra_params){
if(!isSetDetector){
return false;
}
if(extra_params!=0){
//extract the extra parameters
}
std::vector<Rect> & faces = *(std::vector<Rect>*)roi.getObj();
faces.clear();
faceDetector(image.getMat(), faces, extra_params);
return true;
}
}
@endcode
- **Compiling the code**
Clear the build folder and then rebuild the entire library.
Note that you can deactivate the compilation of other contrib modules by passing the "-D BUILD_opencv_<MODULE_NAME>=OFF" flag to cmake.
After that, you can run make inside "<build_folder>/modules/face" to speed up the compilation.
Best Practice
-----------
- **Handling the extra parameters**
To handle extra parameters, a new struct should be created to hold all the required parameters.
Here is an example of a parameter container:
@code
struct CV_EXPORTS Params
{
Params( Mat rot = Mat::eye(2,2,CV_32F),
Point2f trans = Point2f(0.0,0.0),
float scaling = 1.0
);
Mat R;
Point2f t;
float scale;
};
@endcode
Here is a snippet to extract the extra parameters:
@code
if(runtime_params!=0){
std::vector<Params> params = *(std::vector<Params>*)runtime_params;
for(size_t i=0; i<params.size();i++){
fit(img, landmarks[i], params[i].R, params[i].t, params[i].scale);
}
}else{
// do something when no extra parameters are provided
}
@endcode
And here is an example of passing the extra parameters to the fit function:
@code
FacemarkAAM::Params * params = new FacemarkAAM::Params(R,T,scale);
facemark->fit(image, faces, landmarks, params);
@endcode
To understand this scheme, here is a simple example that you can compile and run to see how it works.
@code
struct Params{
int x,y;
Params(int _x, int _y);
};
Params::Params(int _x,int _y){
x = _x;
y = _y;
}
void test(int a, void * params=0){
printf("a:%i\n", a);
if(params!=0){
Params* p = (Params*)params; // cast the opaque pointer back to Params
printf("extra parameters:%i %i\n", p->x, p->y);
}
}
int main(){
Params* params = new Params(7,22);
test(99, params);
delete params;
return 0;
}
@endcode
- **Minimize the dependency**
It is highly recommended to keep the compiled code as small as possible. For this purpose, developers are encouraged to avoid heavy dependencies such as `imgcodecs` and `highgui`.
- **Documentation and examples**
Please update the documentation whenever needed and put example code for the new algorithm.
- **Test codes**
An algorithm should be accompanied by corresponding test code to ensure that it works in various environments (Linux, Windows64, Windows32, Android, etc.). Several basic tests should be performed, as demonstrated in the test/test_facemark_lbf.cpp file: creating an instance, adding training data, running the training process, loading a trained model, and performing the fitting to obtain facial landmarks. A minimal example is sketched below.
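For instance, a minimal instance-creation test for a hypothetical FacemarkNEW algorithm, following the pattern of the existing tests, could look like:
@code{.cpp}
TEST(CV_Face_FacemarkNEW, can_create_default) {
    Ptr<Facemark> facemark;
    EXPECT_NO_THROW(facemark = FacemarkNEW::create()); // construction must not throw
    EXPECT_FALSE(facemark.empty());                    // and must yield a valid object
}
@endcode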
- **Data organization**
It is advised to divide the data of a new algorithm into 3 parts:
@code
class CV_EXPORTS_W FacemarkNEW : public Facemark {
public:
struct CV_EXPORTS Params
{
// variables utilized as extra parameters
};
struct CV_EXPORTS Config
{
// variables used to configure the algorithm
};
struct CV_EXPORTS Model
{
// variables to store the information of the model
};
static Ptr<FacemarkNEW> create(const FacemarkNEW::Config &conf = FacemarkNEW::Config() );
virtual ~FacemarkNEW(){};
};
@endcode
Tutorial on Facial Landmark Detector API {#tutorial_table_of_content_facemark}
==========================================================
The facial landmark detector API is used to detect facial landmarks in an input image.
- @subpage tutorial_facemark_add_algorithm
*Compatibility:* \> OpenCV 3.0
*Author:* Laksono Kurnianggoro
Adding a new algorithm into the API.
- @subpage tutorial_facemark_usage
*Compatibility:* \> OpenCV 3.0
*Author:* Laksono Kurnianggoro
Tutorial on how to use the API.
- @subpage tutorial_facemark_aam
*Compatibility:* \> OpenCV 3.0
*Author:* Laksono Kurnianggoro
Tutorial on how to use the FacemarkAAM algorithm.
Using the Facemark API {#tutorial_facemark_usage}
==========================================================
Goals
----
This tutorial will help you to:
- Create a Facemark object.
- Set a user defined face detector for the facemark algorithm
- Train the algorithm.
- Use the trained model to detect the facial landmarks from a given image.
Preparation
---------
Before you continue with this tutorial, you should download a facial landmark detection dataset.
We suggest the helen dataset, which can be retrieved from <http://www.ifp.illinois.edu/~vuongle2/helen/> (caution: the algorithm requires around 9GB of RAM to train on this dataset).
Make sure that the annotation format is supported by the API; the contents of an annotation file should look like the following snippet:
@code
version: 1
n_points: 68
{
212.716603 499.771793
230.232816 566.290071
...
}
@endcode
The next step is to create two text files containing the lists of image files and annotation files, respectively. Make sure that the image and annotation entries in both files appear in matching order. Furthermore, it is advised to use absolute paths instead of relative ones.
Example of creating the file lists on a Linux machine:
@code
ls $PWD/trainset/*.jpg > images_train.txt
ls $PWD/trainset/*.pts > annotation_train.txt
@endcode
Example content of images_train.txt:
@code
/home/user/helen/trainset/100032540_1.jpg
/home/user/helen/trainset/100040721_1.jpg
/home/user/helen/trainset/100040721_2.jpg
/home/user/helen/trainset/1002681492_1.jpg
@endcode
Example content of annotation_train.txt:
@code
/home/user/helen/trainset/100032540_1.pts
/home/user/helen/trainset/100040721_1.pts
/home/user/helen/trainset/100040721_2.pts
/home/user/helen/trainset/1002681492_1.pts
@endcode
Creating the facemark object
---------
@code
/*create the facemark instance*/
FacemarkLBF::Params params;
params.model_filename = "helen.model"; // the trained model will be saved using this filename
Ptr<Facemark> facemark = FacemarkLBF::create(params);
@endcode
Set a custom face detector function
---------
Firstly, you need to create your own face detector function; you might also need to create a `struct` to hold custom parameters. Alternatively, you can hard-code these parameters within the `myDetector` function.
@code
struct Conf {
cv::String model_path;
double scaleFactor;
Conf(cv::String s, double d){
model_path = s;
scaleFactor = d;
};
};
bool myDetector( InputArray image, OutputArray roi, void * config ){
Mat gray;
std::vector<Rect> & faces = *(std::vector<Rect>*) roi.getObj();
faces.clear();
if(config!=0){
Conf* conf = (Conf*)config;
if(image.channels()>1){
cvtColor(image,gray,CV_BGR2GRAY);
}else{
gray = image.getMat().clone();
}
equalizeHist( gray, gray );
CascadeClassifier face_cascade(conf->model_path);
face_cascade.detectMultiScale( gray, faces, conf->scaleFactor, 2, CV_HAAR_SCALE_IMAGE, Size(30, 30) );
return true;
}else{
return false;
}
}
@endcode
The following snippet demonstrates how to set the custom detector on the facemark object and use it to detect faces. Keep in mind that some facemark objects might use the face detector during the training process.
@code
Conf* config = new Conf("../data/lbpcascade_frontalface.xml",1.4);
facemark->setFaceDetector(myDetector);
@endcode
Here is the snippet for detecting faces using the user-defined face detector function.
@code
Mat img = imread("../data/himym3.jpg");
std::vector<cv::Rect> faces;
facemark->getFaces(img, faces, config);
for(int j=0;j<faces.size();j++){
cv::rectangle(img, faces[j], cv::Scalar(255,0,255));
}
imshow("result", img);
waitKey(0);
@endcode
Training a facemark object
----
- First of all, you need to set the training parameters
@code
params.n_landmarks = 68; // number of landmark points
params.initShape_n = 10; // multiplier for data augmentation (initial shapes per sample)
params.stages_n=5; // number of refinement stages
params.tree_n=6; // number of trees in the model for each landmark point
params.tree_depth=5; // the depth of each decision tree
facemark = FacemarkLBF::create(params);
@endcode
- And then, you need to load the file list from the dataset that you have prepared.
@code
std::vector<String> images_train;
std::vector<String> landmarks_train;
loadDatasetList("images_train.txt","annotation_train.txt",images_train,landmarks_train);
@endcode
- The next step is to add training samples into the facemark object.
@code
Mat image;
std::vector<Point2f> facial_points;
for(size_t i=0;i<images_train.size();i++){
image = imread(images_train[i].c_str());
loadFacePoints(landmarks_train[i],facial_points);
facemark->addTrainingSample(image, facial_points);
}
@endcode
- Execute the training process
@code
/*train the Algorithm*/
facemark->training();
@endcode
Use the trained model to detect the facial landmarks from a given image.
-----
- First of all, load the trained model. You can also download a pre-trained model from <https://raw.githubusercontent.com/kurnianggoro/GSOC2017/master/data/lbfmodel.yaml>
@code
facemark->loadModel(params.model_filename);
@endcode
- Detect the faces
@code
facemark->getFaces(img, faces, config);
@endcode
- Perform the fitting process
@code
std::vector<std::vector<Point2f> > landmarks;
facemark->fit(img, faces, landmarks);
@endcode
- Display the result
@code
for(int j=0;j<faces.size();j++){
face::drawFacemarks(img, landmarks[j], Scalar(0,0,255));
}
imshow("result", img);
waitKey(0);
@endcode