Commit 61cd2e60 authored by Anh Nguyen

Removed sferes folder and added a script for fetching sferes

parent 72372a78
#!/bin/bash
path="./sferes"
if [ -d "${path}" ]; then
echo "Please remove the existing [${path}] folder and re-run this script."
exit 1
fi
# Download the version of Caffe that can be used for generating fooling images via EAs.
echo "Downloading Caffe ..."
wget https://github.com/Evolving-AI-Lab/fooling/archive/master.zip
echo "Extracting into ${path}"
unzip master.zip
mv ./fooling-master/sferes ./
# Clean up
rm -rf fooling-master master.zip
echo "Done."
\ No newline at end of file
*.pyc
build/
.waf-1.5.14*/
.Totti-*/
Totti-*/
.lock-wscript
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>sferes</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.cdt.managedbuilder.core.genmakebuilder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.eclipse.cdt.managedbuilder.core.ScannerConfigBuilder</name>
<triggers>full,incremental,</triggers>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.cdt.core.cnature</nature>
<nature>org.eclipse.cdt.core.ccnature</nature>
<nature>org.eclipse.cdt.managedbuilder.core.managedBuildNature</nature>
<nature>org.eclipse.cdt.managedbuilder.core.ScannerConfigNature</nature>
</natures>
<linkedResources>
<link>
<name>src</name>
<type>2</type>
<location>/home/anh/src/caffe/src</location>
</link>
</linkedResources>
</projectDescription>
sferes2
=======
Sferes2 is a high-performance, lightweight, generic C++ framework for evolutionary computation.
**If you use this software in an academic article, please cite:**
Mouret, J.-B. and Doncieux, S. (2010). SFERESv2: Evolvin' in the Multi-Core World. _Proc. of Congress on Evolutionary Computation (CEC)_ Pages 4079--4086.
The article is available here: http://www.isir.upmc.fr/files/2010ACTI1524.pdf
@INPROCEEDINGS{Mouret2010,
AUTHOR = {Mouret, J.-B. and Doncieux, S.},
TITLE = {{SFERES}v2: Evolvin' in the Multi-Core World},
YEAR = {2010},
BOOKTITLE = {Proc. of Congress on Evolutionary Computation (CEC)},
PAGES = {4079--4086}
}
Documentation (including instructions for compilation)
-------------
We are in the process of porting the documentation to the github wiki: https://github.com/jbmouret/sferes2/wiki
Optional modules
---------------
- evolvable neural networks: https://github.com/jbmouret/nn2
- khepera-like simulator: https://github.com/jbmouret/fastsim
Design
-----
The following choices were made in the initial design:
- use of modern C++ techniques (template-based programming) to employ object-oriented programming without the cost of virtual functions;
- use of Intel TBB to take full advantage of multi-core and SMP systems;
- use of Boost libraries where useful (shared_ptr, serialization, filesystem, test, ...);
- use of MPI to distribute the computational cost on clusters;
- a full set of unit tests;
- no configuration file: a fully optimized executable is built for each particular experiment.
Sferes2 is extended via modules and experiments.
Sferes2 works on most Unix systems (in particular, GNU/Linux and OSX). It successfully compiles with gcc, clang and icc.
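For example, a typical configure/build cycle boils down to a few waf commands. The sketch below is illustrative only: the include/library paths are placeholders, so see the wiki and the compile.sh script in this repository for the exact flags on your system.

    # configure once, pointing waf at your Boost and Eigen3 installations (paths are placeholders)
    ./waf configure --boost-include=/usr/include --boost-lib=/usr/lib --eigen3=/usr/include/eigen3
    # build the core framework
    ./waf build
    # build a specific experiment living in exp/<name>; the optimized executable ends up under build/
    ./waf --exp my_experiment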
Authors
-------
- Jean-Baptiste Mouret mouret@isir.upmc.fr: main author and maintainer
- Stephane Doncieux doncieux@isir.upmc.fr
- Paul Tonelli tonelli@isir.upmc.fr (documentation)
- Many members of ISIR (http://isir.upmc.fr)
Academic papers that used Sferes2:
-----------------------------------
*If you used Sferes2 in an academic paper, please send us an e-mail (mouret@isir.upmc.fr) so that we can add it here!*
(you can find a pdf for most of these publications on http://scholar.google.com).
### 2014
- Lesaint, F., Sigaud, O., Clark, J. J., Flagel, S. B., & Khamassi, M. (2014). Experimental predictions drawn from a computational model of sign-trackers and goal-trackers. Journal of Physiology-Paris.
- Lesaint, F., Sigaud, O., Flagel, S. B., Robinson, T. E., & Khamassi, M. (2014). Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations. PLoS computational biology, 10(2), e1003466.
- Shrouf, F., Ordieres-Meré, J., García-Sánchez, A., & Ortega-Mier, M. (2014). Optimizing the production scheduling of a single machine to minimize total energy consumption costs. Journal of Cleaner Production, 67, 197-207.
- Huizinga, J., Mouret, J. B., & Clune, J. (2014). Evolving Neural Networks That Are Both Modular and Regular: HyperNeat Plus the Connection Cost Technique. In Proceedings of GECCO (pp. 1-8).
- Li, J., Storie, J., & Clune, J. (2014). Encouraging Creative Thinking in Robots Improves Their Ability to Solve Challenging Problems. Proceedings of GECCO (pp 1-8)
- Tarapore, D. and Mouret, J.-B. (2014). Comparing the evolvability of generative encoding schemes.
Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, MIT Press, publisher. Pages 1-8.
### 2013
- Koos, S. and Cully, A. and Mouret, J.-B. (2013). Fast Damage Recovery in Robotics with the T-Resilience Algorithm. International Journal of Robotics Research. Vol 32 No 14 Pages 1700-1723.
- Tonelli, P. and Mouret, J.-B. (2013). On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks. PLoS One. Vol 8 No 11 Pages e79138
- Clune*, J. and Mouret, J.-B. and Lipson, H. (2013). The evolutionary origins of modularity. Proceedings of the Royal Society B. Vol 280 (J. Clune and J.-B. Mouret contributed equally to this work) Pages 20122863
- Koos, S. and Mouret, J.-B. and Doncieux, S. (2013). The Transferability Approach: Crossing the Reality Gap in Evolutionary Robotics. IEEE Transactions on Evolutionary Computation. Vol 17 No 1 Pages 122 - 145
- Doncieux, S. and Mouret, J.B. (2013). Behavioral Diversity with Multiple Behavioral Distances. Proc. of IEEE Congress on Evolutionary Computation, 2013 (CEC 2013). Pages 1-8
- Cully, A. and Mouret, J.-B. (2013). Behavioral Repertoire Learning in Robotics. Genetic and Evolutionary Computation Conference (GECCO). Pages 175-182.
- Doncieux, S. (2013). Transfer Learning for Direct Policy Search: A Reward Shaping Approach. Proceedings of ICDL-EpiRob conference. Pages 1-6.
### 2012
- Mouret, J.-B. and Doncieux, S. (2012). Encouraging Behavioral Diversity in Evolutionary Robotics: an Empirical Study. Evolutionary Computation. Vol 20 No 1 Pages 91-133.
- Ollion, Charles and Doncieux, Stéphane (2012). Towards Behavioral Consistency in Neuroevolution. From Animals to Animats: Proceedings of the 12th International Conference on Adaptive Behaviour (SAB 2012), Springer, publisher. Pages 1-10.
- Ollion, C. and Pinville, T. and Doncieux, S. (2012). With a little help from selection pressures: evolution of memory in robot controllers. Proc. Alife XIII. Pages 1-8.
### 2011
- Rubrecht, S. and Singla, E. and Padois, V. and Bidaud, P. and de Broissia, M. (2011). Evolutionary design of a robotic manipulator for a highly constrained environment. Studies in Computational Intelligence, New Horizons in Evolutionary Robotics, Springer, publisher. Vol 341 Pages 109-121.
- Doncieux, S. and Hamdaoui, M. (2011). Evolutionary Algorithms to Analyse and Design a Controller for a Flapping Wings Aircraft. New Horizons in Evolutionary Robotics Extended Contributions from the 2009 EvoDeRob Workshop, Springer, publisher. Pages 67--83.
- Mouret, J.-B. (2011). Novelty-based Multiobjectivization. New Horizons in Evolutionary Robotics: Extended Contributions from the 2009 EvoDeRob Workshop, Springer, publisher. Pages 139--154.
- Pinville, T. and Koos, S. and Mouret, J-B. and Doncieux, S. (2011). How to Promote Generalisation in Evolutionary Robotics: the ProGAb Approach. GECCO'11: Proceedings of the 13th annual conference on Genetic and evolutionary computation ACM, publisher . Pages 259--266.
- Koos, S. and Mouret, J-B. (2011). Online Discovery of Locomotion Modes for Wheel-Legged Hybrid Robots: a Transferability-based Approach. Proceedings of CLAWAR, World Scientific Publishing Co., publisher. Pages 70-77.
- Tonelli, P. and Mouret, J.-B. (2011). On the Relationships between Synaptic Plasticity and Generative Systems. Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation. Pages 1531--1538. (Best paper of the Generative and Developmental Systems (GDS) track).
- Terekhov, A.V. and Mouret, J.-B. and Grand, C. (2011). Stochastic optimization of a chain sliding mode controller for the mobile robot maneuvering. Proceedings of IEEE / IROS Int. Conf. on Robots and Intelligent Systems. Pages 4360 - 4365
### 2010
- Mouret, J.-B. and Doncieux, S. and Girard, B. (2010). Importing the Computational Neuroscience Toolbox into Neuro-Evolution---Application to Basal Ganglia. GECCO'10: Proceedings of the 12th annual conference on Genetic and evolutionary computation ACM, publisher . Pages 587--594.
- Koos, S. and Mouret, J.-B. and Doncieux, S. (2010). Crossing the Reality Gap in Evolutionary Robotics by Promoting Transferable Controllers. GECCO'10: Proceedings of the 12th annual conference on Genetic and evolutionary computation ACM, publisher . Pages 119--126.
- Doncieux, S. and Mouret, J.-B. (2010). Behavioral diversity measures for Evolutionary Robotics. WCCI 2010 IEEE World Congress on Computational Intelligence, Congress on Evolutionary Computation (CEC). Pages 1303--1310.
- Terekhov, A.V. and Mouret, J.-B. and Grand, C. (2010). Stochastic optimization of a neural network-based controller for aggressive maneuvers on loose surfaces. Proceedings of IEEE / IROS Int. Conf. on Robots and Intelligent Systems. Pages 4782 - 4787
- Terekhov, A.V and Mouret, J.-B. and Grand, C (2010). Stochastic multi-objective optimization for aggressive maneuver trajectory planning on loose surface.
Proceedings of IFAC: the 7th Symposium on Intelligent Autonomous Vehicles. Pages 1-6
- Liénard, J. and Guillot, A. and Girard, B. (2010). Multi-Objective Evolutionary Algorithms to Investigate Neurocomputational Issues : The Case Study of Basal Ganglia Models. From animals to animats 11, Springer, publisher. Vol 6226 Pages 597--606
### 2009
- Koos, S. and Mouret, J.-B. and Doncieux, S. (2009). Automatic system identification based on coevolution of models and tests. IEEE Congress on Evolutionary Computation, 2009 (CEC 2009). Pages 560--567
- Mouret, J.-B. and Doncieux, S. (2009). Evolving modular neural-networks through exaptation. IEEE Congress on Evolutionary Computation, 2009 (CEC 2009). Pages 1570--1577. (Best student paper award)
- Mouret, J.-B. and Doncieux, S. (2009). Overcoming the bootstrap problem in evolutionary robotics using behavioral diversity. IEEE Congress on Evolutionary Computation, 2009 (CEC 2009). Pages 1161 - 1168
- Mouret, J.-B. and Doncieux, S. (2009). Using Behavioral Exploration Objectives to Solve Deceptive Problems in Neuro-evolution. GECCO'09: Proceedings of the 11th annual conference on Genetic and evolutionary computation , ACM, publisher. Pages 627--634.
#!/bin/bash
home=$(echo ~)
quit=0
# Remove the build folder
rm -rf ./build
echo "Build folder removed."
# Detect the build environment: local machine or the Moran cluster
if [ "$home" == "/home/anh" ]
then
echo "Configuring sferes for local.."
echo "..."
./waf clean
./waf distclean
#./waf configure --boost-include=/home/anh/src/sferes/include --boost-lib=/home/anh/src/sferes/lib --eigen3=/home/anh/src/sferes/include --mpi=/home/anh/openmpi
./waf configure --boost-include=/home/anh/src/sferes/include --boost-lib=/home/anh/src/sferes/lib --eigen3=/home/anh/src/sferes/include
quit=1
elif [ "$home" == "/home/anguyen8" ]
then
echo "Configuring sferes for Moran.."
echo "..."
./waf clean
./waf distclean
# TBB
# ./waf configure --boost-include=/project/RIISVis/anguyen8/sferes/include/ --boost-libs=/project/RIISVis/anguyen8/sferes/lib/ --eigen3=/home/anguyen8/local/include --mpi=/apps/OPENMPI/gnu/4.8.2/1.6.5 --tbb=/home/anguyen8/sferes --libs=/home/anguyen8/local/lib
# MPI (No TBB)
./waf configure --boost-include=/project/RIISVis/anguyen8/sferes/include/ --boost-libs=/project/RIISVis/anguyen8/sferes/lib/ --eigen3=/home/anguyen8/local/include --mpi=/apps/OPENMPI/gnu/4.8.2/1.6.5 --libs=/home/anguyen8/local/lib
quit=1
else
echo "Unknown environment. Building stopped."
fi
if [ "$quit" -eq "1" ]
then
echo "Building sferes.."
echo "..."
echo "..."
./waf build
echo "Building exp/images.."
echo "..."
echo "..."
./waf --exp images
fi
#! /usr/bin/env python
# encoding: utf-8
# JB Mouret - 2009
"""
Quick n dirty eigen3 detection
"""
import os, glob, types
import Options, Configure
def detect_eigen3(conf):
env = conf.env
opt = Options.options
conf.env['LIB_EIGEN3'] = ''
conf.env['EIGEN3_FOUND'] = False
if Options.options.no_eigen3:
return 0
if Options.options.eigen3:
conf.env['CPPPATH_EIGEN3'] = [Options.options.eigen3]
conf.env['LIBPATH_EIGEN3'] = [Options.options.eigen3]
else:
conf.env['CPPPATH_EIGEN3'] = ['/usr/include/eigen3', '/usr/local/include/eigen3', '/usr/include', '/usr/local/include']
conf.env['LIBPATH_EIGEN3'] = ['/usr/lib', '/usr/local/lib']
res = Configure.find_file('Eigen/Core', conf.env['CPPPATH_EIGEN3'])
conf.check_message('header','Eigen/Core', (res != '') , res)
if (res == '') :
return 0
conf.env['EIGEN3_FOUND'] = True
return 1
def detect(conf):
return detect_eigen3(conf)
def set_options(opt):
opt.add_option('--eigen3', type='string', help='path to eigen3', dest='eigen3')
opt.add_option('--no-eigen3', type='string', help='disable eigen3', dest='no_eigen3')
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/ea/rank_simple.hpp>
#include <sferes/eval/eval.hpp>
#include <sferes/stat/best_fit.hpp>
#include <sferes/stat/mean_fit.hpp>
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
using namespace sferes;
using namespace sferes::gen::evo_float;
struct Params
{
struct evo_float
{
// we choose the polynomial mutation type
SFERES_CONST mutation_t mutation_type = polynomial;
// we choose the SBX cross-over type (simulated binary cross-over)
SFERES_CONST cross_over_t cross_over_type = sbx;
// the mutation rate of the real-valued vector
SFERES_CONST float mutation_rate = 0.1f;
// the cross rate of the real-valued vector
SFERES_CONST float cross_rate = 0.5f;
// a parameter of the polynomial mutation
SFERES_CONST float eta_m = 15.0f;
// a parameter of the polynomial cross-over
SFERES_CONST float eta_c = 10.0f;
};
struct pop
{
// size of the population
SFERES_CONST unsigned size = 200;
// number of generations
SFERES_CONST unsigned nb_gen = 2000;
// how often the result file should be written (here, every 5
// generations)
SFERES_CONST int dump_period = 5;
// how many individuals should be created during the random
// generation process?
SFERES_CONST int initial_aleat = 1;
// used by RankSimple to select the pressure
SFERES_CONST float coeff = 1.1f;
// the number of individuals that are kept from one generation to
// another (elitism)
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
// minimum value of the parameters
SFERES_CONST float min = -10.0f;
// maximum value
SFERES_CONST float max = 10.0f;
};
};
SFERES_FITNESS(FitTest, sferes::fit::Fitness)
{
public:
// indiv will have the type defined in the main (phen_t)
template<typename Indiv>
void eval(const Indiv& ind)
{
float v = 0;
for (unsigned i = 0; i < ind.size(); ++i)
{
float p = ind.data(i);
v += p * p * p * p;
}
this->_value = -v;
}
};
int main(int argc, char **argv)
{
// Our fitness is the class FitTest (see above), that we will call
// fit_t. Params is the set of parameters (struct Params) defined in
// this file.
typedef FitTest<Params> fit_t;
// We define the genotype. Here we choose EvoFloat (real
// numbers). We evolve 10 real numbers, with the params defined in
// Params (cf the beginning of this file)
typedef gen::EvoFloat<10, Params> gen_t;
// This genotype should be simply transformed into a vector of
// parameters (phen::Parameters). The genotype could also have been
// transformed into a shape, a neural network... The phenotype needs
// to know which fitness to use; we pass fit_t.
typedef phen::Parameters<gen_t, fit_t, Params> phen_t;
// The evaluator is in charge of distributing the evaluation of the
// population. It can be simple eval::Eval (nothing special),
// parallel (for multicore machines, eval::Parallel) or distributed
// (for clusters, eval::Mpi).
typedef eval::Eval<Params> eval_t;
// Statistics gather data about the evolutionary process (mean
// fitness, Pareto front, ...). Since they can also store the best
// individuals, they are the container of our results. We can add as
// many statistics as required thanks to the boost::fusion::vector.
typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
// Modifiers are functors that are run once all individuals have
// been evaluated. Their typical use is to add some evolutionary
// pressures towards diversity (e.g. fitness sharing). Here we don't
// use this feature. As a consequence we use a "dummy" modifier that
// does nothing.
typedef modif::Dummy<> modifier_t;
// We can finally put everything together. RankSimple is the
// evolutionary algorithm. It is parametrized by the phenotype, the
// evaluator, the statistics list, the modifier and the general params.
typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
// We now have a special class for our experiment: ea_t. The next
// line instantiates an object of this class
ea_t ea;
// we can now process the command line options and run the
// evolutionary algorithm (if a --load argument is passed, the file
// is loaded; otherwise, the algorithm is launched).
run_ea(argc, argv, ea);
//
return 0;
}
{
"machines" :
{
"localhost" : 3,
"fortaleza" : 2
},
"nb_runs": 3,
"exp" : "examples/ex_ea",
"dir" : "res/ex1",
"debug" : 0
}
\ No newline at end of file
#include <iostream>
#ifdef MPI_ENABLED
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/ea/rank_simple.hpp>
#include <sferes/eval/eval.hpp>
#include <sferes/stat/best_fit.hpp>
#include <sferes/stat/mean_fit.hpp>
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
#include <boost/program_options.hpp>
#include <sferes/eval/mpi.hpp>
using namespace sferes;
using namespace sferes::gen::evo_float;
struct Params
{
struct evo_float
{
SFERES_CONST float cross_rate = 0.5f;
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 10.0f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
};
struct pop
{
SFERES_CONST unsigned size = 200;
SFERES_CONST unsigned nb_gen = 40;
SFERES_CONST int dump_period = 5;
SFERES_CONST int initial_aleat = 1;
SFERES_CONST float coeff = 1.1f;
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
SFERES_CONST float min = -10.0f;
SFERES_CONST float max = 10.0f;
};
};
SFERES_FITNESS(FitTest, sferes::fit::Fitness)
{
public:
FitTest()
{}
template<typename Indiv>
void eval(const Indiv& ind)
{
float v = 0;
for (unsigned i = 0; i < ind.size(); ++i)
{
float p = ind.data(i);
v += p * p * p * p;
}
// slow down to simulate a slow fitness
usleep(1e4);
this->_value = -v;
}
};
int main(int argc, char **argv)
{
dbg::out(dbg::info)<<"running ex_ea ... try --help for options (verbose)"<<std::endl;
std::cout << "To run this example, you need to use mpirun" << std::endl;
std::cout << "mpirun -x LD_LIBRARY_PATH=/home/creadapt/lib -np 50 -machinefile machines.pinfo build/debug/examples/ex_ea_mpi" << std::endl;
typedef gen::EvoFloat<10, Params> gen_t;
typedef phen::Parameters<gen_t, FitTest<Params>, Params> phen_t;
typedef eval::Mpi<Params> eval_t;
typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
typedef modif::Dummy<> modifier_t;
typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
ea_t ea;
run_ea(argc, argv, ea);
std::cout<<"==> best fitness ="<<ea.stat<0>().best()->fit().value()<<std::endl;
// std::cout<<"==> mean fitness ="<<ea.stat<1>().mean()<<std::endl;
return 0;
}
#else
#warning MPI is disabled, ex_ea_mpi is not compiled
int main()
{
std::cerr<<"MPI is disabled"<<std::endl;
return 0;
}
#endif
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/ea/eps_moea.hpp>
#include <sferes/eval/eval.hpp>
#include <sferes/stat/pareto_front.hpp>
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
#include <boost/program_options.hpp>
using namespace sferes;
using namespace sferes::gen::evo_float;
struct Params
{
struct evo_float
{
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.5f;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 10.0f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
};
struct pop
{
SFERES_CONST unsigned size = 200;
SFERES_CONST int dump_period = 50;
SFERES_ARRAY(float, eps, 0.0075f, 0.0075f);
SFERES_ARRAY(float, min_fit, 0.0f, 0.0f);
SFERES_CONST size_t grain = size / 4;
SFERES_CONST unsigned nb_gen = 2000;
};
struct parameters
{
SFERES_CONST float min = 0.0f;
SFERES_CONST float max = 1.0f;
};
};
template<typename Indiv>
float _g(const Indiv &ind)
{
float g = 0.0f;
assert(ind.size() == 30);
for (size_t i = 1; i < 30; ++i)
g += ind.data(i);
g = 9.0f * g / 29.0f;
g += 1.0f;
return g;
}
SFERES_FITNESS(FitZDT2, sferes::fit::Fitness)
{
public:
FitZDT2() {}
template<typename Indiv>
void eval(Indiv& ind)
{
this->_objs.resize(2);
float f1 = ind.data(0);
float g = _g(ind);
float h = 1.0f - pow((f1 / g), 2.0);
float f2 = g * h;
this->_objs[0] = -f1;
this->_objs[1] = -f2;
}
};
int main(int argc, char **argv)
{
std::cout<<"running "<<argv[0]<<" ... try --help for options (verbose)"<<std::endl;
typedef gen::EvoFloat<30, Params> gen_t;
typedef phen::Parameters<gen_t, FitZDT2<Params>, Params> phen_t;
typedef eval::Eval<Params> eval_t;
typedef boost::fusion::vector<stat::ParetoFront<phen_t, Params> > stat_t;
typedef modif::Dummy<> modifier_t;
typedef ea::EpsMOEA<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
ea_t ea;
run_ea(argc, argv, ea);
return 0;
}
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/ea/nsga2.hpp>
#include <sferes/eval/eval.hpp>
#include <sferes/stat/pareto_front.hpp>
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
#include <boost/program_options.hpp>
using namespace sferes;
using namespace sferes::gen::evo_float;
struct Params
{
struct evo_float
{
SFERES_CONST float cross_rate = 0.5f;
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 10.0f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
};
struct pop
{
SFERES_CONST unsigned size = 300;
SFERES_CONST unsigned nb_gen = 500;
SFERES_CONST int dump_period = 50;
SFERES_CONST int initial_aleat = 1;
};
struct parameters
{
SFERES_CONST float min = 0.0f;
SFERES_CONST float max = 1.0f;
};
};
template<typename Indiv>
float _g(const Indiv &ind)
{
float g = 0.0f;
assert(ind.size() == 30);
for (size_t i = 1; i < 30; ++i)
g += ind.data(i);
g = 9.0f * g / 29.0f;
g += 1.0f;
return g;
}
SFERES_FITNESS(FitZDT2, sferes::fit::Fitness)
{
public:
FitZDT2() {}
template<typename Indiv>
void eval(Indiv& ind)
{
this->_objs.resize(2);
float f1 = ind.data(0);
float g = _g(ind);
float h = 1.0f - pow((f1 / g), 2.0);
float f2 = g * h;
this->_objs[0] = -f1;
this->_objs[1] = -f2;
}
};
int main(int argc, char **argv)
{
std::cout<<"running "<<argv[0]<<" ... try --help for options (verbose)"<<std::endl;
typedef gen::EvoFloat<30, Params> gen_t;
typedef phen::Parameters<gen_t, FitZDT2<Params>, Params> phen_t;
typedef eval::Eval<Params> eval_t;
typedef boost::fusion::vector<stat::ParetoFront<phen_t, Params> > stat_t;
typedef modif::Dummy<> modifier_t;
typedef ea::Nsga2<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
ea_t ea;
run_ea(argc, argv, ea);
return 0;
}
{
"email" : "mouret@isir.upmc.fr",
"wall_time" : "24:00:00",
"nb_runs": 3,
"bin_dir": "/home/mouret/svn/sferes2/trunk/build/default/examples/",
"res_dir": "/home/mouret/svn/sferes2/trunk/res/",
"exps" : ["ex_ea"]
}
#! /usr/bin/env python
def build(bld):
return None
# ex_ea
#obj = bld.new_task_gen('cxx', 'program')
#obj.source = 'ex_ea.cpp'
#obj.includes = '../'
#obj.target = 'ex_ea'
#obj.uselib_local = 'sferes2'
#obj.uselib = 'TBB BOOST BOOST_UNIT_TEST_FRAMEWORK EIGEN3'
## ex_ea
#obj = bld.new_task_gen('cxx', 'program')
#obj.source = 'ex_ea_mpi.cpp'
#obj.includes = '../'
#obj.target = 'ex_ea_mpi'
#obj.uselib_local = 'sferes2'
#obj.uselib = 'TBB BOOST BOOST_UNIT_TEST_FRAMEWORK EIGEN3'
## ex_nsga2
#obj = bld.new_task_gen('cxx', 'program')
#obj.source = 'ex_nsga2.cpp'
#obj.includes = '../'
#obj.target = 'ex_nsga2'
#obj.uselib_local = 'sferes2'
#obj.uselib = 'TBB BOOST BOOST_UNIT_TEST_FRAMEWORK EIGEN3'
## ex_eps_moea
#obj = bld.new_task_gen('cxx', 'program')
#obj.source = 'ex_eps_moea.cpp'
#obj.includes = '../'
#obj.target = 'ex_eps_moea'
#obj.uselib_local = 'sferes2'
#obj.uselib = 'TBB BOOST BOOST_UNIT_TEST_FRAMEWORK EIGEN3'
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/ea/nsga2.hpp>
#include <sferes/eval/eval.hpp>
#include <sferes/stat/pareto_front.hpp>
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
#include <boost/program_options.hpp>
using namespace sferes;
using namespace sferes::gen::evo_float;
struct Params
{
struct evo_float
{
SFERES_CONST float cross_rate = 0.1f;
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 10.0f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
};
struct pop
{
SFERES_CONST unsigned size = 300;
SFERES_CONST unsigned nb_gen = 500;
SFERES_CONST int dump_period = 50;
SFERES_CONST int initial_aleat = 1;
};
struct parameters
{
SFERES_CONST float min = 0.0f;
SFERES_CONST float max = 1.0f;
};
};
template<typename Indiv>
float _g(const Indiv &ind)
{
float g = 0.0f;
assert(ind.size() == 30);
for (size_t i = 1; i < 30; ++i)
g += ind.data(i);
g = 9.0f * g / 29.0f;
g += 1.0f;
return g;
}
SFERES_FITNESS(FitZDT2, sferes::fit::Fitness)
{
public:
FitZDT2() {}
template<typename Indiv>
void eval(Indiv& ind)
{
this->_objs.resize(2);
float f1 = ind.data(0);
float g = _g(ind);
float h = 1.0f - pow((f1 / g), 2.0);
float f2 = g * h;
this->_objs[0] = -f1;
this->_objs[1] = -f2;
}
};
int main(int argc, char **argv)
{
std::cout<<"running "<<argv[0]<<" ... try --help for options (verbose)"<<std::endl;
typedef gen::EvoFloat<30, Params> gen_t;
typedef phen::Parameters<gen_t, FitZDT2<Params>, Params> phen_t;
typedef eval::Eval<Params> eval_t;
typedef boost::fusion::vector<stat::ParetoFront<phen_t, Params> > stat_t;
typedef modif::Dummy<> modifier_t;
typedef ea::Nsga2<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
ea_t ea;
run_ea(argc, argv, ea);
return 0;
}
#! /usr/bin/env python
def build(bld):
obj = bld.new_task_gen('cxx', 'program')
obj.source = 'example.cpp'
obj.includes = '. ../../'
obj.uselib_local = 'sferes2'
obj.uselib = ''
obj.target = 'example'
obj.uselib_local = 'sferes2'
#!/bin/bash
home=$(echo ~)
echo "You are building from: $home"
# Detect the build environment: local machine or the Moran cluster
if [ "$home" == "/home/anh" ]
then
echo "Enabled local settings.."
cp ./wscript.local ./wscript
elif [ "$home" == "/home/anguyen8" ]
then
echo "Enabled Moran settings.."
cp ./wscript.moran ./wscript
else
echo "Unknown environment. Building stopped."
fi
/*
* continue_run.hpp
*
* Created on: Aug 14, 2014
* Author: joost
*/
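//
// Illustrative usage sketch (assumes an EA type `ea_t` and an instance `ea`, as in the experiments
// in this repository, and that a --continue <gen_file> command-line option has been registered):
//   sferes::cont::Continuator<ea_t, Params> continuator;
//   if (continuator.enabled())
//     continuator.run_with_current_population(ea); // resumes from the generation encoded in the file name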
#ifndef CONTINUE_RUN_HPP_
#define CONTINUE_RUN_HPP_
#include <boost/fusion/container.hpp>
#include <boost/fusion/include/vector.hpp>
#include <sferes/dbg/dbg.hpp>
#include "global_options.hpp"
#include <exp/images/stat/stat_map_image.hpp>
namespace sferes
{
namespace cont
{
template<typename EAType, typename Params>
class Continuator
{
public:
typedef std::vector<typename EAType::indiv_t> pop_t;
bool enabled()
{
return options::vm.count("continue");
}
pop_t getPopulationFromFile(EAType& ea)
{
ea.load(options::vm["continue"].as<std::string>());
return boost::fusion::at_c<Params::cont::getPopIndex>(ea.stat()).getPopulation();
}
pop_t getPopulationFromFile(EAType& ea, const std::string& path_gen_file)
{
ea.load(path_gen_file);
return boost::fusion::at_c<Params::cont::getPopIndex>(ea.stat()).getPopulation();
}
void run_with_current_population(EAType& ea, const std::string filename)
{
// Read the generation number from the gen file name, e.g. gen_450
int start = 0;
std::string gen_prefix("gen_");
std::size_t pos = filename.rfind(gen_prefix) + gen_prefix.size();
std::string gen_number = filename.substr(pos);
std::istringstream ss(gen_number);
ss >> start;
start++;
dbg::out(dbg::info, "continue") << "File name: " << filename << " number start: " << pos << " gen number: " << gen_number << " result: " << start << std::endl;
// Similar to the run() function in <sferes/ea/ea.hpp>
for (int _gen = start; _gen < Params::pop::nb_gen; ++_gen)
{
ea.setGen(_gen);
ea.epoch();
ea.update_stats();
if (_gen % Params::pop::dump_period == 0)
{
ea.write();
}
}
std::cout << "Finished all the runs.\n";
exit(0);
}
void run_with_current_population(EAType& ea)
{
const std::string filename = options::vm["continue"].as<std::string>();
run_with_current_population(ea, filename);
}
};
}
}
#endif /* CONTINUE_RUN_HPP_ */
/*
* global_options.hpp
*
* Created on: Aug 14, 2014
* Author: Joost Huizinga
*
* Header file created to allow for custom command-line options to be added to an experiment.
* Requires you to replace run_ea with options::run_ea for this to work.
* Options can be added by:
* options::add()("option_name","description");
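*
* Minimal usage sketch (illustrative; "my-option" and the EA object `ea` are placeholders,
* cf. the experiments that pass an extra options_description to options::run_ea):
*   boost::program_options::options_description add_opts;
*   add_opts.add_options()("my-option", boost::program_options::value<int>(), "an example option");
*   options::run_ea(argc, argv, ea, add_opts, false);
*   if (options::vm.count("my-option"))
*     std::cout << options::vm["my-option"].as<int>() << std::endl;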
*/
#ifndef GLOBAL_OPTIONS_HPP_
#define GLOBAL_OPTIONS_HPP_
#include <boost/program_options.hpp>
#include <boost/archive/xml_iarchive.hpp>
#include <boost/foreach.hpp>
#include <sferes/eval/parallel.hpp>
//#include <sferes/dbg/dbg.hpp>
namespace sferes
{
namespace options
{
boost::program_options::variables_map vm;
template<typename Ea>
static void run_ea ( int argc, char **argv, Ea& ea,
const boost::program_options::options_description& add_opts =
boost::program_options::options_description(),
bool init_rand = true )
{
namespace po = boost::program_options;
std::cout << "sferes2 version: " << VERSION << std::endl;
if (init_rand)
{
time_t t = time(0) + ::getpid();
std::cout << "seed: " << t << std::endl;
srand(t);
}
po::options_description desc("Allowed sferes2 options");
desc.add(add_opts);
desc.add_options()("help,h", "produce help message")("stat,s",
po::value<int>(), "statistic number")("out,o",
po::value<std::string>(), "output file")("number,n", po::value<int>(),
"number in stat")("load,l", po::value<std::string>(),
"load a result file")("verbose,v",
po::value<std::vector<std::string> >()->multitoken(),
"verbose output, available default streams : all, ea, fit, phen, trace");
// po::variables_map vm;
po::store(po::parse_command_line(argc, argv, desc), vm);
po::notify(vm);
if (vm.count("help"))
{
std::cout << desc << std::endl;
return;
}
if (vm.count("verbose"))
{
dbg::init();
std::vector < std::string > streams = vm["verbose"].as<
std::vector<std::string> >();
attach_ostream(dbg::warning, std::cout);
attach_ostream(dbg::error, std::cerr);
attach_ostream(dbg::info, std::cout);
bool all = std::find(streams.begin(), streams.end(), "all")
!= streams.end();
bool trace = std::find(streams.begin(), streams.end(), "trace")
!= streams.end();
if (all)
{
streams.push_back("ea");
streams.push_back("fit");
streams.push_back("phen");
streams.push_back("eval");
}
BOOST_FOREACH(const std::string& s, streams){
dbg::enable(dbg::all, s.c_str(), true);
dbg::attach_ostream(dbg::info, s.c_str(), std::cout);
if (trace)
dbg::attach_ostream(dbg::tracing, s.c_str(), std::cout);
}
if (trace)
attach_ostream(dbg::tracing, std::cout);
}
parallel::init();
if (vm.count("load"))
{
ea.load(vm["load"].as<std::string>());
if (!vm.count("out"))
{
std::cerr << "You must specifiy an out file" << std::endl;
return;
}
else
{
int stat = 0;
int n = 0;
if (vm.count("stat"))
stat = vm["stat"].as<int>();
if (vm.count("number"))
n = vm["number"].as<int>();
std::ofstream ofs(vm["out"].as<std::string>().c_str());
ea.show_stat(stat, ofs, n);
}
}
else
ea.run();
}
}
}
#endif /* GLOBAL_OPTIONS_HPP_ */
#ifndef DEEP_LEARNING_IMAGES_HPP
#define DEEP_LEARNING_IMAGES_HPP
#include "settings.h"
#include <sferes/phen/parameters.hpp>
using namespace sferes;
// Parameters required by Caffe separated from those introduced by Sferes
struct ParamsCaffe
{
struct image
{
// Size of the square image 256x256
SFERES_CONST int size = 256;
SFERES_CONST int crop_size = 227;
SFERES_CONST bool use_crops = true;
SFERES_CONST bool color = true; // true: color, false: grayscale images
// GPU configurations
SFERES_CONST bool use_gpu = false;
// The GPU on Moran can only handle a maximum of 512 images per batch.
SFERES_CONST int batch = 1;
SFERES_CONST int num_categories = 1000; // ILSVRC 2012 ImageNet has 1000 categories
static int category_id;
SFERES_CONST bool record_lineage = false; // Flag to save the parent's assigned class
};
};
#endif /* DEEP_LEARNING_IMAGES_HPP */
#include "dl_images.hpp"
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/eval/eval.hpp>
#include "stat/best_fit_map_image.hpp"
#include "stat/stat_map_image.hpp"
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
// Evolutionary algorithms --------------------------------
#include "fit/fit_map_deep_learning.hpp"
#include <modules/nn2/gen_dnn.hpp>
#include <modules/nn2/gen_hyper_nn.hpp>
#include "phen/phen_color_image.hpp"
#include <glog/logging.h>
// Caffe -------------------------------------------------
#include <modules/map_elite/map_elite.hpp>
#include "eval/mpi_parallel.hpp" // MPI
#include "continue_run/continue_run.hpp" // MPI
using namespace sferes;
using namespace sferes::gen::dnn;
using namespace sferes::gen::evo_float;
struct Params
{
struct cont
{
static const int getPopIndex = 0;
};
struct log
{
SFERES_CONST bool best_image = false;
};
struct ea
{
SFERES_CONST size_t res_x = 1; // 256;
SFERES_CONST size_t res_y = 1000; // 256;
};
struct dnn
{
SFERES_CONST size_t nb_inputs = 4;
SFERES_CONST size_t nb_outputs = 3; // Red, Green, Blue
SFERES_CONST float m_rate_add_conn = 0.5f;
SFERES_CONST float m_rate_del_conn = 0.3f;
SFERES_CONST float m_rate_change_conn = 0.5f;
SFERES_CONST float m_rate_add_neuron = 0.5f;
SFERES_CONST float m_rate_del_neuron = 0.2f;
SFERES_CONST init_t init = ff;
};
struct evo_float
{
// we choose the polynomial mutation type
SFERES_CONST mutation_t mutation_type = polynomial;
// we choose the SBX cross-over type (simulated binary cross-over)
SFERES_CONST cross_over_t cross_over_type = sbx;
// the mutation rate of the real-valued vector
SFERES_CONST float mutation_rate = 0.1f;
// the cross rate of the real-valued vector
SFERES_CONST float cross_rate = 0.5f;
// a parameter of the polynomial mutation
SFERES_CONST float eta_m = 15.0f;
// a parameter of the polynomial cross-over
SFERES_CONST float eta_c = 10.0f;
};
struct pop
{
//number of initial random points
static const size_t init_size = 400; // 1000
// size of the population
SFERES_CONST unsigned size = 400; //200;
// number of generations
SFERES_CONST unsigned nb_gen = 5001; //10,000;
// how often the result file should be written (here, every 5
// generations)
static int dump_period;// 5;
// how many individuals should be created during the random
// generation process?
SFERES_CONST int initial_aleat = 1;
// used by RankSimple to select the pressure
SFERES_CONST float coeff = 1.1f;
// the number of individuals that are kept from one generation to
// another (elitism)
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
// minimum value of the parameters
SFERES_CONST float min = -10.0f;
// maximum value
SFERES_CONST float max = 10.0f;
};
struct cppn
{
// params of the CPPN
struct sampled
{
SFERES_ARRAY(float, values, nn::cppn::sine, nn::cppn::sigmoid, nn::cppn::gaussian, nn::cppn::linear);
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.25f;
SFERES_CONST bool ordered = false;
};
struct evo_float
{
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.1f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 15.0f;
};
};
// Caffe model settings for this experiment (ImageNet reference network)
struct image : ParamsCaffe::image
{
static const std::string model_definition;
static const std::string pretrained_model;
};
};
// Initialize the parameter files for Caffe network.
#ifdef LOCAL_RUN
const std::string Params::image::model_definition = "/home/anh/src/model/imagenet_deploy_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/home/anh/src/model/caffe_reference_imagenet_model";
#else
const std::string Params::image::model_definition = "/project/EvolvingAI/anguyen8/model/imagenet_deploy_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/project/EvolvingAI/anguyen8/model/caffe_reference_imagenet_model";
#endif
int Params::pop::dump_period = 1000;
int main(int argc, char **argv)
{
// Disable GLOG output from experiment and also Caffe
// Comment out for debugging
google::InitGoogleLogging("");
google::SetStderrLogging(3);
// Our fitness is the class FitMapDeepLearning (see fit/fit_map_deep_learning.hpp), which we will call
// fit_t. Params is the set of parameters (struct Params) defined in
// this file.
typedef sferes::fit::FitMapDeepLearning<Params> fit_t;
// We define the genotype. Here the genotype is a CPPN (gen::HyperNn) whose
// connection weights are single EvoFloat values, with the params defined in
// Params (cf the beginning of this file)
//typedef gen::EvoFloat<10, Params> gen_t;
typedef phen::Parameters<gen::EvoFloat<1, Params>, fit::FitDummy<>, Params> weight_t;
typedef gen::HyperNn<weight_t, Params> cppn_t;
// This genotype should be simply transformed into a vector of
// parameters (phen::Parameters). The genotype could also have been
// transformed into a shape, a neural network... The phenotype needs
// to know which fitness to use; we pass fit_t.
typedef phen::ColorImage<cppn_t, fit_t, Params> phen_t;
// The evaluator is in charge of distributing the evaluation of the
// population. It can be simple eval::Eval (nothing special),
// parallel (for multicore machines, eval::Parallel) or distributed
// (for clusters, eval::Mpi).
// typedef eval::Eval<Params> eval_t;
typedef eval::MpiParallel<Params> eval_t; // TBB
// Statistics gather data about the evolutionary process (mean
// fitness, Pareto front, ...). Since they can also store the best
// individuals, they are the container of our results. We can add as
// many statistics as required thanks to the boost::fusion::vector.
// typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
typedef boost::fusion::vector<stat::MapImage<phen_t, Params>, stat::BestFitMapImage<phen_t, Params> > stat_t;
// Modifiers are functors that are run once all individuals have
// been evaluated. Their typical use is to add some evolutionary
// pressures towards diversity (e.g. fitness sharing). Here we don't
// use this feature. As a consequence we use a "dummy" modifier that
// does nothing.
typedef modif::Dummy<> modifier_t;
// We can finally put everything together. RankSimple is the
// evolutionary algorithm. It is parametrized by the phenotype, the
// evaluator, the statistics list, the modifier and the general params.
// typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
typedef ea::MapElite<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
// We now have a special class for our experiment: ea_t. The next
// line instantiates an object of this class
ea_t ea;
// we can now process the command line options and run the
// evolutionary algorithm (if a --load argument is passed, the file
// is loaded; otherwise, the algorithm is launched).
if (argc > 1) // if a number is provided on the command line
{
int randomSeed = atoi(argv[1]);
printf("randomSeed:%i\n", randomSeed);
srand(randomSeed); //set it as the random seed
boost::program_options::options_description add_opts =
boost::program_options::options_description();
shared_ptr<boost::program_options::option_description> opt (new boost::program_options::option_description(
"continue,t", boost::program_options::value<std::string>(),
"continue from the loaded file starting from the generation provided"
));
add_opts.add(opt);
options::run_ea(argc, argv, ea, add_opts, false);
}
else
{
run_ea(argc, argv, ea);
}
return 0;
}
#include "dl_images.hpp"
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include "gen/evo_float_image.hpp"
#include <sferes/eval/eval.hpp>
#include "stat/best_fit_map_image.hpp"
#include "stat/stat_map_image.hpp"
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
// Evolutionary algorithms --------------------------------
#include "fit/fit_map_deep_learning.hpp"
#include <modules/nn2/gen_dnn.hpp>
#include <modules/nn2/phen_dnn.hpp>
#include <modules/nn2/gen_hyper_nn.hpp>
#include "phen/phen_image_direct.hpp"
#include <glog/logging.h>
// Caffe -------------------------------------------------
#include <modules/map_elite/map_elite.hpp>
#include "eval/mpi_parallel.hpp" // MPI
#include "continue_run/continue_run.hpp" // MPI
using namespace sferes;
using namespace sferes::gen::dnn;
using namespace sferes::gen::evo_float_image;
struct Params
{
struct cont
{
static const int getPopIndex = 0;
};
struct log
{
SFERES_CONST bool best_image = false;
};
struct ea
{
SFERES_CONST size_t res_x = 1; // 256;
SFERES_CONST size_t res_y = 1000; // 256;
};
struct evo_float_image
{
// we choose the polynomial mutation type
SFERES_CONST mutation_t mutation_type = polynomial;
// we choose the SBX cross-over type (simulated binary cross-over)
SFERES_CONST cross_over_t cross_over_type = sbx;
// the mutation rate of the real-valued vector
static float mutation_rate;
// the cross rate of the real-valued vector
SFERES_CONST float cross_rate = 0.5f;
// a parameter of the polynomial mutation
SFERES_CONST float eta_m = 15.0f;
// a parameter of the polynomial cross-over
SFERES_CONST float eta_c = 10.0f;
};
struct pop
{
//number of initial random points
SFERES_CONST size_t init_size = 200; // 1000
// size of the population
SFERES_CONST unsigned size = 200; //200;
// number of generations
SFERES_CONST unsigned nb_gen = 5010; //10,000;
// how often the result file should be written (here, every 5
// generations)
static int dump_period;// 5;
// how many individuals should be created during the random
// generation process?
SFERES_CONST int initial_aleat = 1;
// used by RankSimple to select the pressure
SFERES_CONST float coeff = 1.1f;
// the number of individuals that are kept from one generation to
// another (elitism)
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
// minimum value of the parameters
SFERES_CONST float min = -10.0f;
// maximum value
SFERES_CONST float max = 10.0f;
};
// Caffe model settings for this experiment (ImageNet reference network)
struct image : ParamsCaffe::image
{
static const std::string model_definition;
static const std::string pretrained_model;
};
};
// Initialize the parameter files for Caffe network.
#ifdef LOCAL_RUN
const std::string Params::image::model_definition = "/home/anh/src/model/imagenet_deploy_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/home/anh/src/model/caffe_reference_imagenet_model";
#else
const std::string Params::image::model_definition = "/project/EvolvingAI/anguyen8/model/imagenet_deploy_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/project/EvolvingAI/anguyen8/model/caffe_reference_imagenet_model";
#endif
int Params::pop::dump_period = 1000;
float Params::evo_float_image::mutation_rate = 0.1f;
int main(int argc, char **argv)
{
// Disable GLOG output from experiment and also Caffe
// Comment out for debugging
google::InitGoogleLogging("");
google::SetStderrLogging(3);
// Our fitness is the class FitMapDeepLearning (see fit/fit_map_deep_learning.hpp), which we will call
// fit_t. Params is the set of parameters (struct Params) defined in
// this file.
typedef sferes::fit::FitMapDeepLearning<Params> fit_t;
// We define the genotype. Here we choose EvoFloatImage (real numbers),
// one value per color channel of each pixel of the evolved image, with the params defined in
// Params (cf the beginning of this file)
typedef gen::EvoFloatImage<Params::image::size * Params::image::size * 3, Params> gen_t;
// This genotype should be simply transformed into a vector of
// parameters (phen::Parameters). The genotype could also have been
// transformed into a shape, a neural network... The phenotype needs
// to know which fitness to use; we pass fit_t.
typedef phen::ImageDirect<gen_t, fit_t, Params> phen_t;
// The evaluator is in charge of distributing the evaluation of the
// population. It can be simple eval::Eval (nothing special),
// parallel (for multicore machines, eval::Parallel) or distributed
// (for clusters, eval::Mpi).
// typedef eval::Eval<Params> eval_t;
typedef eval::MpiParallel<Params> eval_t; // TBB
// Statistics gather data about the evolutionary process (mean
// fitness, Pareto front, ...). Since they can also store the best
// individuals, they are the container of our results. We can add as
// many statistics as required thanks to the boost::fusion::vector.
// typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
typedef boost::fusion::vector<stat::MapImage<phen_t, Params>, stat::BestFitMapImage<phen_t, Params> > stat_t;
// Modifiers are functors that are run once all individuals have
// been evaluated. Their typical use is to add some evolutionary
// pressures towards diversity (e.g. fitness sharing). Here we don't
// use this feature. As a consequence we use a "dummy" modifier that
// does nothing.
typedef modif::Dummy<> modifier_t;
// We can finally put everything together. RankSimple is the
// evolutionary algorithm. It is parametrized by the phenotype, the
// evaluator, the statistics list, the modifier and the general params.
// typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
typedef ea::MapElite<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
// We now have a special class for our experiment: ea_t. The next
// line instantiates an object of this class
ea_t ea;
// we can now process the command line options and run the
// evolutionary algorithm (if a --load argument is passed, the file
// is loaded; otherwise, the algorithm is launched).
if (argc > 1) // if a number is provided on the command line
{
int randomSeed = atoi(argv[1]);
printf("randomSeed:%i\n", randomSeed);
srand(randomSeed); //set it as the random seed
boost::program_options::options_description add_opts =
boost::program_options::options_description();
shared_ptr<boost::program_options::option_description> opt (new boost::program_options::option_description(
"continue,t", boost::program_options::value<std::string>(),
"continue from the loaded file starting from the generation provided"
));
add_opts.add(opt);
options::run_ea(argc, argv, ea, add_opts, false);
}
else
{
run_ea(argc, argv, ea);
}
return 0;
}
#include "dl_images.hpp"
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/eval/eval.hpp>
#include "stat/best_fit_map_image.hpp"
#include "stat/stat_map_image.hpp"
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
// Evolutionary algorithms --------------------------------
#include "fit/fit_map_deep_learning.hpp"
#include <modules/nn2/gen_dnn.hpp>
#include <modules/nn2/gen_hyper_nn.hpp>
#include "phen/phen_grayscale_image.hpp"
#include <glog/logging.h>
// Caffe -------------------------------------------------
#include <modules/map_elite/map_elite.hpp>
#include "eval/mpi_parallel.hpp" // MPI
#include "continue_run/continue_run.hpp" // MPI
using namespace sferes;
using namespace sferes::gen::dnn;
using namespace sferes::gen::evo_float;
struct Params
{
struct cont
{
static const int getPopIndex = 0;
};
struct log
{
SFERES_CONST bool best_image = false;
};
struct ea
{
SFERES_CONST size_t res_x = 1; // 256;
SFERES_CONST size_t res_y = 10; // 256;
};
struct dnn
{
SFERES_CONST size_t nb_inputs = 4;
SFERES_CONST size_t nb_outputs = 1; // a single grayscale intensity
SFERES_CONST float m_rate_add_conn = 0.5f;
SFERES_CONST float m_rate_del_conn = 0.3f;
SFERES_CONST float m_rate_change_conn = 0.5f;
SFERES_CONST float m_rate_add_neuron = 0.5f;
SFERES_CONST float m_rate_del_neuron = 0.2f;
SFERES_CONST init_t init = ff;
};
struct evo_float
{
// we choose the polynomial mutation type
SFERES_CONST mutation_t mutation_type = polynomial;
// we choose the SBX cross-over type (simulated binary cross-over)
SFERES_CONST cross_over_t cross_over_type = sbx;
// the mutation rate of the real-valued vector
SFERES_CONST float mutation_rate = 0.1f;
// the cross rate of the real-valued vector
SFERES_CONST float cross_rate = 0.5f;
// a parameter of the polynomial mutation
SFERES_CONST float eta_m = 15.0f;
// a parameter of the polynomial cross-over
SFERES_CONST float eta_c = 10.0f;
};
struct pop
{
//number of initial random points
static const size_t init_size = 200; // 1000
// size of the population
SFERES_CONST unsigned size = 200; //200;
// number of generations
SFERES_CONST unsigned nb_gen = 1001; //10,000;
// how often the result file should be written (here, every 5
// generations)
static int dump_period;// 5;
// how many individuals should be created during the random
// generation process?
SFERES_CONST int initial_aleat = 1;
// used by RankSimple to select the pressure
SFERES_CONST float coeff = 1.1f;
// the number of individuals that are kept from one generation to
// another (elitism)
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
// minimum value of the parameters
SFERES_CONST float min = -10.0f;
// maximum value
SFERES_CONST float max = 10.0f;
};
struct cppn
{
// params of the CPPN
struct sampled
{
SFERES_ARRAY(float, values, nn::cppn::sine, nn::cppn::sigmoid, nn::cppn::gaussian, nn::cppn::linear);
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.25f;
SFERES_CONST bool ordered = false;
};
struct evo_float
{
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.1f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 15.0f;
};
};
// Specific settings for the MNIST database of grayscale images
struct image : ParamsCaffe::image
{
// Size of the square image 28x28
SFERES_CONST int size = 28;
SFERES_CONST bool use_crops = false;
SFERES_CONST bool color = false; // Grayscale
SFERES_CONST int num_categories = 10; // MNIST has 10 categories
static const std::string model_definition;
static const std::string pretrained_model;
SFERES_CONST bool record_lineage = true;
};
};
// Initialize the parameter files for Caffe network.
#ifdef LOCAL_RUN
const std::string Params::image::model_definition = "/home/anh/src/model/lenet_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/home/anh/src/model/lenet_iter_10000";
#else
const std::string Params::image::model_definition = "/project/EvolvingAI/anguyen8/model/lenet_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/project/EvolvingAI/anguyen8/model/lenet_iter_10000";
#endif
int Params::pop::dump_period = 100;
int main(int argc, char **argv)
{
// Disable GLOG output from experiment and also Caffe
// Comment out for debugging
google::InitGoogleLogging("");
google::SetStderrLogging(3);
// Our fitness is the class FitMapDeepLearning (see fit/fit_map_deep_learning.hpp), which we will call
// fit_t. Params is the set of parameters (struct Params) defined in
// this file.
typedef sferes::fit::FitMapDeepLearning<Params> fit_t;
// We define the genotype. Here the genotype is a CPPN (gen::HyperNn) whose
// connection weights are single EvoFloat values, with the params defined in
// Params (cf the beginning of this file)
//typedef gen::EvoFloat<10, Params> gen_t;
typedef phen::Parameters<gen::EvoFloat<1, Params>, fit::FitDummy<>, Params> weight_t;
typedef gen::HyperNn<weight_t, Params> cppn_t;
// This genotype should be simply transformed into a vector of
// parameters (phen::Parameters). The genotype could also have been
// transformed into a shape, a neural network... The phenotype needs
// to know which fitness to use; we pass fit_t.
typedef phen::GrayscaleImage<cppn_t, fit_t, Params> phen_t;
// The evaluator is in charge of distributing the evaluation of the
// population. It can be simple eval::Eval (nothing special),
// parallel (for multicore machines, eval::Parallel) or distributed
// (for clusters, eval::Mpi).
// typedef eval::Eval<Params> eval_t;
typedef eval::MpiParallel<Params> eval_t; // TBB
// Statistics gather data about the evolutionary process (mean
// fitness, Pareto front, ...). Since they can also store the best
// individuals, they are the container of our results. We can add as
// many statistics as required thanks to the boost::fusion::vector.
// typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
typedef boost::fusion::vector<stat::MapImage<phen_t, Params>, stat::BestFitMapImage<phen_t, Params> > stat_t;
// Modifiers are functors that are run once all individuals have
// been evaluated. Their typical use is to add some evolutionary
// pressures towards diversity (e.g. fitness sharing). Here we don't
// use this feature. As a consequence we use a "dummy" modifier that
// does nothing.
typedef modif::Dummy<> modifier_t;
// We can finally put everything together. RankSimple is the
// evolutionary algorithm. It is parametrized by the phenotype, the
// evaluator, the statistics list, the modifier and the general params.
// typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
typedef ea::MapElite<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
// We now have a special class for our experiment: ea_t. The next
// line instantiates an object of this class
ea_t ea;
// we can now process the command line options and run the
// evolutionary algorithm (if a --load argument is passed, the file
// is loaded; otherwise, the algorithm is launched).
if (argc > 1) // if a number is provided on the command line
{
int randomSeed = atoi(argv[1]);
printf("randomSeed:%i\n", randomSeed);
srand(randomSeed); //set it as the random seed
boost::program_options::options_description add_opts =
boost::program_options::options_description();
shared_ptr<boost::program_options::option_description> opt (new boost::program_options::option_description(
"continue,t", boost::program_options::value<std::string>(),
"continue from the loaded file starting from the generation provided"
));
add_opts.add(opt);
options::run_ea(argc, argv, ea, add_opts, false);
}
else
{
run_ea(argc, argv, ea);
}
return 0;
}
#include "dl_images.hpp"
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include "gen/evo_float_image.hpp"
#include <sferes/eval/eval.hpp>
#include "stat/best_fit_map_image.hpp"
#include "stat/stat_map_image.hpp"
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
// Evolutionary algorithms --------------------------------
#include "fit/fit_map_deep_learning.hpp"
#include <modules/nn2/gen_dnn.hpp>
#include <modules/nn2/gen_hyper_nn.hpp>
#include "phen/phen_grayscale_image_direct.hpp"
#include <glog/logging.h>
// Caffe -------------------------------------------------
#include <modules/map_elite/map_elite.hpp>
#include "eval/mpi_parallel.hpp" // MPI
#include "continue_run/continue_run.hpp" // MPI
using namespace sferes;
using namespace sferes::gen::dnn;
using namespace sferes::gen::evo_float_image;
struct Params
{
struct cont
{
static const int getPopIndex = 0;
};
struct log
{
SFERES_CONST bool best_image = false;
};
struct ea
{
SFERES_CONST size_t res_x = 1; // 256;
SFERES_CONST size_t res_y = 10; // 256;
};
struct evo_float_image
{
// we choose the polynomial mutation type
SFERES_CONST mutation_t mutation_type = polynomial;
// we choose the SBX cross-over type (simulated binary cross-over)
SFERES_CONST cross_over_t cross_over_type = sbx;
// the mutation rate of the real-valued vector
SFERES_CONST float mutation_rate = 0.1f;
// the cross rate of the real-valued vector
SFERES_CONST float cross_rate = 0.5f;
// a parameter of the polynomial mutation
SFERES_CONST float eta_m = 15.0f;
// a parameter of the polynomial cross-over
SFERES_CONST float eta_c = 10.0f;
};
struct pop
{
//number of initial random points
static const size_t init_size = 200; // 1000
// size of the population
SFERES_CONST unsigned size = 200; //200;
// number of generations
SFERES_CONST unsigned nb_gen = 1010; //10,000;
// how often the result file should be written (see the definition
// of dump_period below the struct)
static int dump_period;// 5;
// how many individuals should be created during the random
// generation process?
SFERES_CONST int initial_aleat = 1;
// used by RankSimple to set the selection pressure
SFERES_CONST float coeff = 1.1f;
// the proportion of individuals that are kept from one generation to
// the next (elitism)
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
// minimum value of parameters
SFERES_CONST float min = -10.0f;
// maximum value
SFERES_CONST float max = 10.0f;
};
// Specific settings for the MNIST database of grayscale images
struct image : ParamsCaffe::image
{
// Size of the square image (28x28 for MNIST)
SFERES_CONST int size = 28;
SFERES_CONST bool use_crops = false;
SFERES_CONST bool color = false; // Grayscale
SFERES_CONST int num_categories = 10; // MNIST has 10 categories
static const std::string model_definition;
static const std::string pretrained_model;
};
};
// Initialize the parameter files for Caffe network.
#ifdef LOCAL_RUN
const std::string Params::image::model_definition = "/home/anh/src/model/lenet_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/home/anh/src/model/lenet_iter_10000";
#else
const std::string Params::image::model_definition = "/project/EvolvingAI/anguyen8/model/lenet_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/project/EvolvingAI/anguyen8/model/lenet_iter_10000";
#endif
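// LOCAL_RUN is expected to be defined at compile time (e.g. via a -DLOCAL_RUN
// build flag) to select the workstation paths instead of the cluster paths.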
int Params::pop::dump_period = 10;
int main(int argc, char **argv)
{
// Disable GLOG output from experiment and also Caffe
// Comment out for debugging
google::InitGoogleLogging("");
google::SetStderrLogging(3);
// Our fitness is the class FitMapDeepLearning, which we will call
// fit_t. Params is the set of parameters (struct Params) defined in
// this file.
typedef sferes::fit::FitMapDeepLearning<Params> fit_t;
// We define the genotype. Here we choose EvoFloatImage (real
// numbers). We evolve one value per pixel, i.e. size*size floats, with
// the params defined in Params (cf. the beginning of this file)
typedef gen::EvoFloatImage<Params::image::size * Params::image::size, Params> gen_t;
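// With Params::image::size = 28, the genotype is a flat vector of
// 28*28 = 784 floats, one value per pixel of the evolved grayscale image.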
// This genotype is transformed into a grayscale image phenotype
// (phen::GrayscaleImageDirect). The phenotype needs to know which
// fitness to use; we pass fit_t.
typedef phen::GrayscaleImageDirect<gen_t, fit_t, Params> phen_t;
// The evaluator is in charge of distributing the evaluation of the
// population. It can be simple eval::Eval (nothing special),
// parallel (for multicore machines, eval::Parallel) or distributed
// (for clusters, eval::Mpi).
// typedef eval::Eval<Params> eval_t;
typedef eval::MpiParallel<Params> eval_t; // MPI
// Statistics gather data about the evolutionary process (mean
// fitness, Pareto front, ...). Since they can also store the best
// individuals, they are the container of our results. We can add as
// many statistics as required thanks to the boost::fusion::vector.
// typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
typedef boost::fusion::vector<stat::MapImage<phen_t, Params>, stat::BestFitMapImage<phen_t, Params> > stat_t;
// Modifiers are functors that are run once all individuals have
// been evaluated. Their typical use is to add some evolutionary
// pressures towards diversity (e.g. fitness sharing). Here we don't
// use this feature. As a consequence we use a "dummy" modifier that
// does nothing.
typedef modif::Dummy<> modifier_t;
// We can finally put everything together. The evolutionary
// algorithm (here, MapElite) is parametrized by the phenotype, the
// evaluator, the statistics list, the modifier and the general params.
// typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
typedef ea::MapElite<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
// We now have a special class for our experiment: ea_t. The next
// line instantiates an object of this class
ea_t ea;
// we can now process the command line options and run the
// evolutionary algorithm (if a --load argument is passed, the file
// is loaded; otherwise, the algorithm is launched).
if (argc > 1) // if a number is provided on the command line
{
int randomSeed = atoi(argv[1]);
printf("randomSeed:%i\n", randomSeed);
srand(randomSeed); //set it as the random seed
boost::program_options::options_description add_opts =
boost::program_options::options_description();
shared_ptr<boost::program_options::option_description> opt (new boost::program_options::option_description(
"continue,t", boost::program_options::value<std::string>(),
"continue from the loaded file starting from the generation provided"
));
add_opts.add(opt);
options::run_ea(argc, argv, ea, add_opts, false);
}
else
{
run_ea(argc, argv, ea);
}
return 0;
}
#include "dl_images.hpp"
#include <iostream>
#include <sferes/phen/parameters.hpp>
#include <sferes/gen/evo_float.hpp>
#include <sferes/eval/eval.hpp>
#include "stat/best_fit_map_image.hpp"
#include "stat/stat_map_image.hpp"
#include <sferes/modif/dummy.hpp>
#include <sferes/run.hpp>
// Evolutionary algorithms --------------------------------
#include "fit/fit_map_deep_learning.hpp"
#include <modules/nn2/gen_dnn.hpp>
#include <modules/nn2/gen_hyper_nn.hpp>
#include "phen/phen_color_image.hpp"
#include <glog/logging.h>
// Caffe -------------------------------------------------
#include <modules/map_elite/map_elite.hpp>
#include "eval/mpi_parallel.hpp" // MPI
#include "continue_run/continue_run.hpp" // MPI
using namespace sferes;
using namespace sferes::gen::dnn;
using namespace sferes::gen::evo_float;
struct Params
{
struct cont
{
static const int getPopIndex = 0;
};
struct log
{
SFERES_CONST bool best_image = false;
};
struct ea
{
SFERES_CONST size_t res_x = 1; // 256;
SFERES_CONST size_t res_y = 10; // 256;
};
struct dnn
{
SFERES_CONST size_t nb_inputs = 4;
SFERES_CONST size_t nb_outputs = 3; // Red, Green, Blue
SFERES_CONST float m_rate_add_conn = 0.5f;
SFERES_CONST float m_rate_del_conn = 0.3f;
SFERES_CONST float m_rate_change_conn = 0.5f;
SFERES_CONST float m_rate_add_neuron = 0.5f;
SFERES_CONST float m_rate_del_neuron = 0.2f;
SFERES_CONST init_t init = ff;
};
struct evo_float
{
// we choose the polynomial mutation type
SFERES_CONST mutation_t mutation_type = polynomial;
// we choose the polynomial cross-over type
SFERES_CONST cross_over_t cross_over_type = sbx;
// the mutation rate of the real-valued vector
SFERES_CONST float mutation_rate = 0.1f;
// the cross rate of the real-valued vector
SFERES_CONST float cross_rate = 0.5f;
// a parameter of the polynomial mutation
SFERES_CONST float eta_m = 15.0f;
// a parameter of the polynomial cross-over
SFERES_CONST float eta_c = 10.0f;
};
struct pop
{
//number of initial random points
static const size_t init_size = 10; // 1000
// size of the population
SFERES_CONST unsigned size = 10; //200;
// number of generations
SFERES_CONST unsigned nb_gen = 10; //10,000;
// how often the result file should be written (see the definition
// of dump_period below the struct)
static int dump_period;// 5;
// how many individuals should be created during the random
// generation process?
SFERES_CONST int initial_aleat = 1;
// used by RankSimple to set the selection pressure
SFERES_CONST float coeff = 1.1f;
// the proportion of individuals that are kept from one generation to
// the next (elitism)
SFERES_CONST float keep_rate = 0.6f;
};
struct parameters
{
// minimum value of parameters
SFERES_CONST float min = -10.0f;
// maximum value
SFERES_CONST float max = 10.0f;
};
struct cppn
{
// params of the CPPN
struct sampled
{
SFERES_ARRAY(float, values, nn::cppn::sine, nn::cppn::sigmoid, nn::cppn::gaussian, nn::cppn::linear);
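// Candidate transfer functions for the CPPN nodes (sine, sigmoid, Gaussian,
// linear): the usual CPPN repertoire for producing regular, patterned images.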
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.25f;
SFERES_CONST bool ordered = false;
};
struct evo_float
{
SFERES_CONST float mutation_rate = 0.1f;
SFERES_CONST float cross_rate = 0.1f;
SFERES_CONST mutation_t mutation_type = polynomial;
SFERES_CONST cross_over_t cross_over_type = sbx;
SFERES_CONST float eta_m = 15.0f;
SFERES_CONST float eta_c = 15.0f;
};
};
// Specific settings for the MNIST database of grayscale images
struct image : ParamsCaffe::image
{
static const std::string model_definition;
static const std::string pretrained_model;
SFERES_CONST bool record_lineage = false;
SFERES_CONST int size = 28;
SFERES_CONST bool use_crops = false;
SFERES_CONST int num_categories = 10; // number of output categories (MNIST: 10; ILSVRC 2012 ImageNet: 1000)
};
};
// Initialize the parameter files for Caffe network.
#ifdef LOCAL_RUN
const std::string Params::image::model_definition = "/home/anh/src/model/imagenet_deploy_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/home/anh/src/model/caffe_reference_imagenet_model";
#else
const std::string Params::image::model_definition = "/project/EvolvingAI/anguyen8/model/imagenet_deploy_image_memory_data.prototxt";
const std::string Params::image::pretrained_model = "/project/EvolvingAI/anguyen8/model/caffe_reference_imagenet_model";
#endif
int Params::pop::dump_period = 1;
int main(int argc, char **argv)
{
// Disable GLOG output from experiment and also Caffe
// Comment out for debugging
google::InitGoogleLogging("");
google::SetStderrLogging(3);
// Our fitness is the class FitMapDeepLearning, which we will call
// fit_t. Params is the set of parameters (struct Params) defined in
// this file.
typedef sferes::fit::FitMapDeepLearning<Params> fit_t;
// We define the genotype. Here we use a CPPN (gen::HyperNn) whose
// connection weights are single EvoFloat parameters: an indirect
// encoding that generates the image from pixel coordinates
//typedef gen::EvoFloat<10, Params> gen_t;
typedef phen::Parameters<gen::EvoFloat<1, Params>, fit::FitDummy<>, Params> weight_t;
typedef gen::HyperNn<weight_t, Params> cppn_t;
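// Indirect encoding: the evolved CPPN maps per-pixel inputs
// (Params::dnn::nb_inputs = 4, presumably x, y, distance from centre and a
// bias) to Params::dnn::nb_outputs = 3 values interpreted as R, G and B.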
// This genotype is transformed into a color image phenotype
// (phen::ColorImage). The phenotype needs to know which fitness to
// use; we pass fit_t.
typedef phen::ColorImage<cppn_t, fit_t, Params> phen_t;
// The evaluator is in charge of distributing the evaluation of the
// population. It can be simple eval::Eval (nothing special),
// parallel (for multicore machines, eval::Parallel) or distributed
// (for clusters, eval::Mpi).
// typedef eval::Eval<Params> eval_t;
typedef eval::MpiParallel<Params> eval_t; // MPI
// Statistics gather data about the evolutionary process (mean
// fitness, Pareto front, ...). Since they can also store the best
// individuals, they are the container of our results. We can add as
// many statistics as required thanks to the boost::fusion::vector.
// typedef boost::fusion::vector<stat::BestFit<phen_t, Params>, stat::MeanFit<Params> > stat_t;
typedef boost::fusion::vector<stat::MapImage<phen_t, Params>, stat::BestFitMapImage<phen_t, Params> > stat_t;
// Modifiers are functors that are run once all individuals have
// been evaluated. Their typical use is to add some evolutionary
// pressures towards diversity (e.g. fitness sharing). Here we don't
// use this feature. As a consequence we use a "dummy" modifier that
// does nothing.
typedef modif::Dummy<> modifier_t;
// We can finally put everything together. The evolutionary
// algorithm (here, MapElite) is parametrized by the phenotype, the
// evaluator, the statistics list, the modifier and the general params.
// typedef ea::RankSimple<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
typedef ea::MapElite<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;
// We now have a special class for our experiment: ea_t. The next
// line instantiates an object of this class
ea_t ea;
// we can now process the command line options and run the
// evolutionary algorithm (if a --load argument is passed, the file
// is loaded; otherwise, the algorithm is launched).
if (argc > 1) // if a number is provided on the command line
{
int randomSeed = atoi(argv[1]);
printf("randomSeed:%i\n", randomSeed);
srand(randomSeed); //set it as the random seed
boost::program_options::options_description add_opts =
boost::program_options::options_description();
shared_ptr<boost::program_options::option_description> opt (new boost::program_options::option_description(
"continue,t", boost::program_options::value<std::string>(),
"continue from the loaded file starting from the generation provided"
));
add_opts.add(opt);
options::run_ea(argc, argv, ea, add_opts, false);
}
else
{
run_ea(argc, argv, ea);
}
return 0;
}
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef EA_CUSTOM_HPP_
#define EA_CUSTOM_HPP_
#include <iostream>
#include <vector>
#include <fstream>
#include <boost/algorithm/string.hpp>
#include <sferes/ea/ea.hpp>
namespace sferes {
namespace ea {
SFERES_EA(EaCustom, Ea) {
protected:
std::string _gen_file_path;
public:
EaCustom () : _gen_file_path("")
{
this->_make_res_dir();
}
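// _make_res_dir is overridden so that every run writes into one fixed
// folder; if that folder already contains gen_<N> dumps, the path of the
// newest one is remembered in _gen_file_path so that the run can be resumed.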
void _make_res_dir()
{
if (Params::pop::dump_period == -1)
{
return;
}
// Delete the unused result folder created by the base Ea class
std::string to_delete = misc::hostname() + "_" + misc::date() + "_" + misc::getpid();
if (boost::filesystem::is_directory(to_delete) && boost::filesystem::is_empty(to_delete))
{
boost::filesystem::remove(to_delete);
}
// Check if such a folder already exists
this->_res_dir = "mmm"; // Use a single folder regardless of which platform the program is running on
boost::filesystem::path my_path(this->_res_dir);
// Create a new folder if it doesn't exist
if (!boost::filesystem::exists(boost::filesystem::status(my_path)))
{
boost::filesystem::create_directory(my_path);
}
// Otherwise, resume the experiment from the existing folder
else
{
std::vector<std::string> gens;
// Track the highest generation number found so far
int max = 0;
// Find a gen file
for(boost::filesystem::directory_entry& entry : boost::make_iterator_range(boost::filesystem::directory_iterator(my_path), {}))
{
// Find out if '/gen_' exists in the filename
std::string e = entry.path().string();
std::string prefix = this->_res_dir + "/gen_";
size_t found = e.find(prefix);
if (found != std::string::npos)
{
// Extract out the generation number
std::string number = std::string(e).replace(found, prefix.length(), "");
// Remove double quotes
// number = boost::replace_all_copy(number, "\"", "");.string()
int gen = boost::lexical_cast<int>(number);
if (gen > max)
{
max = gen;
_gen_file_path = e;
}
} // end if
} // end for-loop
// Start run from that gen file
// _continue_run = boost::filesystem::current_path().string() + "/" + _continue_run;
std::cout << "[A]: " << _gen_file_path << "\n";
}
}
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef RANK_SIMPLE_HPP_
#define RANK_SIMPLE_HPP_
#include <algorithm>
#include <boost/foreach.hpp>
#include <sferes/stc.hpp>
#include "ea_custom.hpp"
#include <sferes/fit/fitness.hpp>
#include <exp/images/continue_run/continue_run.hpp>
namespace sferes {
namespace ea {
SFERES_EA(RankSimple, EaCustom) {
public:
typedef boost::shared_ptr<Phen> indiv_t;
typedef std::vector<indiv_t> raw_pop_t;
typedef typename std::vector<indiv_t> pop_t;
typedef RankSimple<Phen, Eval, Stat, FitModifier, Params, Exact> this_t;
SFERES_CONST unsigned nb_keep = (unsigned)(Params::pop::keep_rate * Params::pop::size);
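// nb_keep = keep_rate * size individuals survive each generation; the
// remaining slots are refilled with mutated offspring in epoch().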
void random_pop()
{
sferes::cont::Continuator<this_t, Params> continuator;
// Continuing a run manually from command line or continuing a run automatically if the job was pre-empted
bool continue_run = continuator.enabled() || this->_gen_file_path != "";
if(continue_run)
{
// Load the population file
raw_pop_t raw_pop;
if (this->_gen_file_path == "")
{
raw_pop = continuator.getPopulationFromFile(*this);
}
else
{
raw_pop = continuator.getPopulationFromFile(*this, this->_gen_file_path);
}
// Get the size of the loaded population to continue with
const size_t init_size = raw_pop.size();
// Resize the current population archive
this->_pop.resize(init_size);
// Add loaded individuals to the new population
int i = 0;
BOOST_FOREACH(boost::shared_ptr<Phen>&indiv, this->_pop)
{
indiv = boost::shared_ptr<Phen>(new Phen(*raw_pop[i]));
++i;
}
}
else
{
// Original Map-Elites code
// Initialize a random population
this->_pop.resize(Params::pop::size * Params::pop::initial_aleat);
BOOST_FOREACH(boost::shared_ptr<Phen>& indiv, this->_pop)
{
indiv = boost::shared_ptr<Phen>(new Phen());
indiv->random();
}
}
// Evaluate the initialized population
this->_eval.eval(this->_pop, 0, this->_pop.size());
this->apply_modifier();
std::partial_sort(this->_pop.begin(), this->_pop.begin() + Params::pop::size,
this->_pop.end(), fit::compare());
this->_pop.resize(Params::pop::size);
// Continue a run from a specific generation
if(continue_run)
{
if (this->_gen_file_path == "")
{
continuator.run_with_current_population(*this);
}
else
{
continuator.run_with_current_population(*this, this->_gen_file_path);
}
}
}
//ADDED
void setGen(size_t gen)
{
this->_gen = gen;
}
//ADDED END
void epoch()
{
assert(this->_pop.size());
for (unsigned i = nb_keep; i < this->_pop.size(); i += 2) {
unsigned r1 = _random_rank();
unsigned r2 = _random_rank();
boost::shared_ptr<Phen> i1, i2;
this->_pop[r1]->cross(this->_pop[r2], i1, i2);
i1->mutate();
i2->mutate();
this->_pop[i] = i1;
this->_pop[i + 1] = i2;
}
#ifndef EA_EVAL_ALL
this->_eval.eval(this->_pop, nb_keep, Params::pop::size);
#else
this->_eval.eval(this->_pop, 0, Params::pop::size);
#endif
this->apply_modifier();
std::partial_sort(this->_pop.begin(), this->_pop.begin() + nb_keep,
this->_pop.end(), fit::compare());
dbg::out(dbg::info, "ea")<<"best fitness: " << this->_pop[0]->fit().value() << std::endl;
}
protected:
unsigned _random_rank() {
static float kappa = pow(Params::pop::coeff, nb_keep + 1.0f) - 1.0f;
static float facteur = nb_keep / ::log(kappa + 1);
return (unsigned) (this->_pop.size() - facteur * log(misc::rand<float>(1) * kappa + 1));
}
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef BATCH_EVAL_MPI_PARALLEL_HPP_
#define BATCH_EVAL_MPI_PARALLEL_HPP_
#include <sferes/parallel.hpp>
#include <boost/mpi.hpp>
#include "tbb_parallel_eval.hpp"
#include <cmath>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
//#ifndef BOOST_MPI_HAS_NOARG_INITIALIZATION
//#error MPI need arguments (we require a full MPI2 implementation)
//#endif
#define MPI_INFO dbg::out(dbg::info, "mpi")<<"["<<_world->rank()<<"] "
namespace sferes {
namespace eval {
SFERES_CLASS(BatchMpiTBBParallel) {
public:
BatchMpiTBBParallel()
{
static char* argv[] = {(char*)"sferes2", 0x0};
char** argv2 = (char**) malloc(sizeof(char*) * 2);
int argc = 1;
argv2[0] = argv[0];
argv2[1] = argv[1];
using namespace boost;
dbg::out(dbg::info, "mpi")<<"Initializing MPI..."<<std::endl;
_env = shared_ptr<mpi::environment>(new mpi::environment(argc, argv2, true));
dbg::out(dbg::info, "mpi")<<"MPI initialized"<<std::endl;
_world = shared_ptr<mpi::communicator>(new mpi::communicator());
MPI_INFO << "communicator initialized"<<std::endl;
// Disable dumping out results for slave processes.
if (_world->rank() > 0)
{
Params::pop::dump_period = -1;
}
}
template<typename Phen>
void eval(std::vector<boost::shared_ptr<Phen> >& pop,
size_t begin, size_t end) {
dbg::trace("mpi", DBG_HERE);
// Develop phenotypes in parallel
// Each MPI process develops one phenotype
if (_world->rank() == 0)
_master_develop(pop, begin, end);
else
_slave_develop<Phen>();
// Make sure the processes have finished developing phenotypes.
// Evaluate phenotypes in parallel, in batches of Params::image::batch
// (256 in our runs). Caffe's GPU mode supports a maximum of 512 images
// per batch; there is no hard limit for CPU, so we tune the batch size
// empirically.
if (_world->rank() == 0)
{
_master_eval(pop, begin, end);
}
}
~BatchMpiTBBParallel()
{
MPI_INFO << "Finalizing MPI..."<<std::endl;
std::string s("bye");
if (_world->rank() == 0)
for (size_t i = 1; i < _world->size(); ++i)
_world->send(i, _env->max_tag(), s);
_finalize();
}
protected:
void _finalize()
{
_world = boost::shared_ptr<boost::mpi::communicator>();
dbg::out(dbg::info, "mpi")<<"MPI world destroyed"<<std::endl;
_env = boost::shared_ptr<boost::mpi::environment>();
dbg::out(dbg::info, "mpi")<<"environment destroyed"<<std::endl;
}
template<typename Phen>
void _master_develop(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace("mpi", DBG_HERE);
size_t current = begin;
std::vector<bool> developed(pop.size());
std::fill(developed.begin(), developed.end(), false);
// first round
for (size_t i = 1; i < _world->size() && current < end; ++i) {
MPI_INFO << "[master] [send-init...] ->" <<i<<" [indiv="<<current<<"]"<<std::endl;
_world->send(i, current, pop[current]->gen());
MPI_INFO << "[master] [send-init ok] ->" <<i<<" [indiv="<<current<<"]"<<std::endl;
++current;
}
// send a new indiv each time we receive a developed phenotype
while (current < end) {
boost::mpi::status s = _recv(developed, pop);
MPI_INFO << "[master] [send...] ->" <<s.source()<<" [indiv="<<current<<"]"<<std::endl;
_world->send(s.source(), current, pop[current]->gen());
MPI_INFO << "[master] [send ok] ->" <<s.source()<<" [indiv="<<current<<"]"<<std::endl;
++current;
}
//join
bool done = true;
do {
dbg::out(dbg::info, "mpi")<<"joining..."<<std::endl;
done = true;
for (size_t i = begin; i < end; ++i)
if (!developed[i]) {
_recv(developed, pop);
done = false;
}
} while (!done);
}
template<typename Phen>
boost::mpi::status _recv(std::vector<bool>& developed,
std::vector<boost::shared_ptr<Phen> >& pop)
{
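// The MPI message tag carries the population index of the individual, so
// the master knows which slot of 'pop' the developed phenotype belongs to.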
dbg::trace("mpi", DBG_HERE);
using namespace boost::mpi;
status s = _world->probe();
MPI_INFO << "[rcv...]" << getpid() << " tag=" << s.tag() << std::endl;
//_world->recv(s.source(), s.tag(), pop[s.tag()]->fit());
// Receive the whole developed phenotype from slave processes
Phen p;
_world->recv(s.source(), s.tag(), p);
// Assign the developed phenotype back to the current population for further evaluation
pop[s.tag()]->image() = p.image();
MPI_INFO << "[rcv ok]" << " tag=" << s.tag() << std::endl;
developed[s.tag()] = true;
return s;
}
template<typename Phen>
void _slave_develop()
{
dbg::trace("mpi", DBG_HERE);
while(true) {
Phen p;
boost::mpi::status s = _world->probe();
if (s.tag() == _env->max_tag()) {
MPI_INFO << "[slave] Quit requested" << std::endl;
MPI_Finalize();
exit(0);
} else {
MPI_INFO <<"[slave] [rcv...] [" << getpid()<< "]" << std::endl;
_world->recv(0, s.tag(), p.gen());
MPI_INFO <<"[slave] [rcv ok] " << " tag="<<s.tag()<<std::endl;
p.develop();
MPI_INFO <<"[slave] [send...]"<<" tag=" << s.tag()<<std::endl;
//_world->send(0, s.tag(), p.fit()); // Send only the fitness back to master process
// Send the whole phenotype back to master process
_world->send(0, s.tag(), p);
MPI_INFO <<"[slave] [send ok]"<<" tag=" << s.tag()<<std::endl;
}
}
}
// ----------------------------------------------------------------------------------
template<typename Phen>
void _master_eval(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace trace("eval", DBG_HERE);
assert(pop.size());
assert(begin < pop.size());
assert(end <= pop.size());
// Number of eval iterations
const size_t count = end - begin;
LOG(INFO) << "Total: " << count << " | Batch: " << Params::image::batch << "\n";
// Evaluate phenotypes in parallel using TBB.
parallel::init();
parallel::p_for(
parallel::range_t(begin, end, Params::image::batch),
sferes::eval::parallel_tbb_eval<Phen>(pop, Params::image::model_definition, Params::image::pretrained_model));
// The barrier is implicitly set here after the for-loop in TBB.
}
boost::shared_ptr<boost::mpi::environment> _env;
boost::shared_ptr<boost::mpi::communicator> _world;
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef BATCH_EVAL_MPI_TBB_PARALLEL_HPP_
#define BATCH_EVAL_MPI_TBB_PARALLEL_HPP_
#include <sferes/parallel.hpp>
#include <boost/mpi.hpp>
#include "tbb_parallel_eval.hpp"
#include <cmath>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
//#ifndef BOOST_MPI_HAS_NOARG_INITIALIZATION
//#error MPI need arguments (we require a full MPI2 implementation)
//#endif
#define MPI_INFO dbg::out(dbg::info, "mpi")<<"["<<_world->rank()<<"] "
namespace sferes {
namespace eval {
SFERES_CLASS(BatchMpiParallel) {
public:
BatchMpiParallel()
{
static char* argv[] = {(char*)"sferes2", 0x0};
char** argv2 = (char**) malloc(sizeof(char*) * 2);
int argc = 1;
argv2[0] = argv[0];
argv2[1] = argv[1];
using namespace boost;
dbg::out(dbg::info, "mpi")<<"Initializing MPI..."<<std::endl;
_env = shared_ptr<mpi::environment>(new mpi::environment(argc, argv2, true));
dbg::out(dbg::info, "mpi")<<"MPI initialized"<<std::endl;
_world = shared_ptr<mpi::communicator>(new mpi::communicator());
MPI_INFO << "communicator initialized"<<std::endl;
// Disable dumping out results for slave processes.
if (_world->rank() > 0)
{
Params::pop::dump_period = -1;
}
}
template<typename Phen>
void eval(std::vector<boost::shared_ptr<Phen> >& pop,
size_t begin, size_t end) {
dbg::trace("mpi", DBG_HERE);
// Develop phenotypes in parallel
// Each MPI process develops one phenotype
if (_world->rank() == 0)
_master_develop(pop, begin, end);
else
_slave_develop<Phen>();
// Make sure the processes have finished developing phenotypes.
// Evaluate phenotypes in parallel, in batches of Params::image::batch
// (256 in our runs). Caffe's GPU mode supports a maximum of 512 images
// per batch; there is no hard limit for CPU, so we tune the batch size
// empirically.
if (_world->rank() == 0)
{
_master_eval(pop, begin, end);
}
}
~BatchMpiParallel()
{
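// Shutdown protocol: the master sends a message tagged _env->max_tag() to
// every slave; _slave_develop() treats this tag as a quit request and
// calls MPI_Finalize().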
MPI_INFO << "Finalizing MPI..."<<std::endl;
std::string s("bye");
if (_world->rank() == 0)
for (size_t i = 1; i < _world->size(); ++i)
_world->send(i, _env->max_tag(), s);
_finalize();
}
protected:
void _finalize()
{
_world = boost::shared_ptr<boost::mpi::communicator>();
dbg::out(dbg::info, "mpi")<<"MPI world destroyed"<<std::endl;
_env = boost::shared_ptr<boost::mpi::environment>();
dbg::out(dbg::info, "mpi")<<"environment destroyed"<<std::endl;
}
template<typename Phen>
void _master_develop(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace("mpi", DBG_HERE);
size_t current = begin;
std::vector<bool> developed(pop.size());
std::fill(developed.begin(), developed.end(), false);
// first round
for (size_t i = 1; i < _world->size() && current < end; ++i) {
MPI_INFO << "[master] [send-init...] ->" <<i<<" [indiv="<<current<<"]"<<std::endl;
_world->send(i, current, pop[current]->gen());
MPI_INFO << "[master] [send-init ok] ->" <<i<<" [indiv="<<current<<"]"<<std::endl;
++current;
}
// send a new indiv each time we receive a developed phenotype
while (current < end) {
boost::mpi::status s = _recv(developed, pop);
MPI_INFO << "[master] [send...] ->" <<s.source()<<" [indiv="<<current<<"]"<<std::endl;
_world->send(s.source(), current, pop[current]->gen());
MPI_INFO << "[master] [send ok] ->" <<s.source()<<" [indiv="<<current<<"]"<<std::endl;
++current;
}
//join
bool done = true;
do {
dbg::out(dbg::info, "mpi")<<"joining..."<<std::endl;
done = true;
for (size_t i = begin; i < end; ++i)
if (!developed[i]) {
_recv(developed, pop);
done = false;
}
} while (!done);
}
template<typename Phen>
boost::mpi::status _recv(std::vector<bool>& developed,
std::vector<boost::shared_ptr<Phen> >& pop)
{
dbg::trace("mpi", DBG_HERE);
using namespace boost::mpi;
status s = _world->probe();
MPI_INFO << "[rcv...]" << getpid() << " tag=" << s.tag() << std::endl;
//_world->recv(s.source(), s.tag(), pop[s.tag()]->fit());
// Receive the whole developed phenotype from slave processes
Phen p;
_world->recv(s.source(), s.tag(), p);
// Assign the developed phenotype back to the current population for further evaluation
pop[s.tag()]->image() = p.image();
MPI_INFO << "[rcv ok]" << " tag=" << s.tag() << std::endl;
developed[s.tag()] = true;
return s;
}
template<typename Phen>
void _slave_develop()
{
dbg::trace("mpi", DBG_HERE);
while(true) {
Phen p;
boost::mpi::status s = _world->probe();
if (s.tag() == _env->max_tag()) {
MPI_INFO << "[slave] Quit requested" << std::endl;
MPI_Finalize();
exit(0);
} else {
MPI_INFO <<"[slave] [rcv...] [" << getpid()<< "]" << std::endl;
_world->recv(0, s.tag(), p.gen());
MPI_INFO <<"[slave] [rcv ok] " << " tag="<<s.tag()<<std::endl;
p.develop();
MPI_INFO <<"[slave] [send...]"<<" tag=" << s.tag()<<std::endl;
//_world->send(0, s.tag(), p.fit()); // Send only the fitness back to master process
// Send the whole phenotype back to master process
_world->send(0, s.tag(), p);
MPI_INFO <<"[slave] [send ok]"<<" tag=" << s.tag()<<std::endl;
}
}
}
// ----------------------------------------------------------------------------------
template<typename Phen>
void _master_eval(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace trace("eval", DBG_HERE);
assert(pop.size());
assert(begin < pop.size());
assert(end <= pop.size());
// Number of eval iterations
const size_t count = end - begin;
LOG(INFO) << "Total: " << count << " | Batch: " << Params::image::batch << "\n";
// Evaluate phenotypes in parallel using TBB.
parallel::init();
parallel::p_for(
parallel::range_t(begin, end, Params::image::batch),
sferes::eval::parallel_tbb_eval<Phen>(pop, Params::image::model_definition, Params::image::pretrained_model));
// The barrier is implicitly set here after the for-loop in TBB.
}
boost::shared_ptr<boost::mpi::environment> _env;
boost::shared_ptr<boost::mpi::communicator> _world;
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef EVAL_CUDA_PARALLEL_HPP_
#define EVAL_CUDA_PARALLEL_HPP_
#include <sferes/parallel.hpp>
#include <cmath>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdexcept>
namespace sferes {
namespace caffe
{
/**
* Using a shared_ptr to hold a pointer to a statically allocated object.
* http://www.boost.org/doc/libs/1_55_0/libs/smart_ptr/sp_techniques.html#static
*/
struct null_deleter
{
void operator()(void const *) const
{
}
};
class CaffeFactory
{
private:
static bool initialized;
static Net<float>* _net_1;
static Net<float>* _net_2;
static int _status;
public:
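// getCaffe() alternates between two statically allocated nets (_status
// toggles between 1 and 2), so two TBB tasks can run batches concurrently;
// the null_deleter prevents the returned shared_ptr from deleting them.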
static shared_ptr<Net<float> > getCaffe(const std::string model_definition, const std::string pretrained_model)
{
if (!initialized)
{
// Initialize Caffe net 1
_net_1 = new Net<float>(model_definition);
// Get the trained model
_net_1->CopyTrainedLayersFrom(pretrained_model);
// Initialize Caffe net 2
_net_2 = new Net<float>(model_definition);
// Get the trained model
_net_2->CopyTrainedLayersFrom(pretrained_model);
initialized = true;
}
if (_status == 1)
{
_status = 2;
shared_ptr<Net<float> > c(_net_1, null_deleter());
return c;
}
else
{
_status = 1;
shared_ptr<Net<float> > c(_net_2, null_deleter());
return c;
}
}
CaffeFactory()
{
initialized = false;
_status = 1;
}
};
}
namespace eval {
/**
* Develop phenotypes in parallel using TBB.
*/
template<typename Phen>
struct _parallel_develop {
typedef std::vector<boost::shared_ptr<Phen> > pop_t;
pop_t _pop;
~_parallel_develop() { }
_parallel_develop(pop_t& pop) : _pop(pop) {}
_parallel_develop(const _parallel_develop& ev) : _pop(ev._pop) {}
void operator() (const parallel::range_t& r) const {
for (size_t i = r.begin(); i != r.end(); ++i) {
assert(i < _pop.size());
_pop[i]->develop();
}
}
};
SFERES_CLASS(CudaParallel)
{
private:
/**
* Evaluate one batch of developed phenotypes through Caffe (one TBB task per batch).
*/
template<typename Phen>
struct _parallel_cuda_eval {
typedef std::vector<boost::shared_ptr<Phen> > pop_t;
pop_t _pop;
~_parallel_cuda_eval() { }
_parallel_cuda_eval(pop_t& pop) : _pop(pop) {}
_parallel_cuda_eval(const _parallel_cuda_eval& ev) : _pop(ev._pop) {}
void operator() (const parallel::range_t& r) const
{
size_t begin = r.begin();
size_t end = r.end();
LOG(INFO) << "Begin: " << begin << " --> " << end << "\n";
dbg::trace trace("eval_cuda", DBG_HERE);
assert(_pop.size());
assert(begin < _pop.size());
assert(end <= _pop.size());
// The algorithm works as follows:
// Send the individuals to Caffe first
// Get back a list of results
// Assign the results to individuals
// Construct a list of images to be in the batch
std::vector<cv::Mat> images(0);
for (size_t i = begin; i < end; ++i)
{
cv::Mat output;
_pop[i]->imageBGR(output);
images.push_back( output ); // Add to a list of images
}
// Initialize Caffe net
shared_ptr<Net<float> > caffe_test_net = sferes::caffe::CaffeFactory::getCaffe(
Params::image::model_definition,
Params::image::pretrained_model
);
// shared_ptr<Net<float> > caffe_test_net =
// boost::shared_ptr<Net<float> >(new Net<float>(Params::image::model_definition));
//
// // Get the trained model
// caffe_test_net->CopyTrainedLayersFrom(Params::image::pretrained_model);
// Run ForwardPrefilled
float loss; // const vector<Blob<float>*>& result = caffe_test_net.ForwardPrefilled(&loss);
// Number of eval iterations
const size_t num_images = end - begin;
// Add images and labels manually to the ImageDataLayer
// vector<cv::Mat> images(num_images, image);
vector<int> labels(num_images, 0);
const shared_ptr<ImageDataLayer<float> > image_data_layer =
boost::static_pointer_cast<ImageDataLayer<float> >(
caffe_test_net->layer_by_name("data"));
image_data_layer->AddImagesAndLabels(images, labels);
// Classify this batch of images
const vector<Blob<float>*>& result = caffe_test_net->ForwardPrefilled(&loss);
// Get the output of the top (softmax/argmax) layer
const float* argmaxs = result[1]->cpu_data();
// Get back a list of results
LOG(INFO) << "Number of results: " << result[1]->num() << "\n";
// Assign the results to individuals
for(int i = 0; i < num_images * 2; i += 2)
{
LOG(INFO)<< " Image: "<< i/2 + 1 << " class:" << argmaxs[i] << " : " << argmaxs[i+1] << "\n";
int pop_index = begin + i/2; // Index of individual in the batch
// Set the fitness of this individual
_pop[pop_index]->fit().setFitness((float) argmaxs[i+1]);
// For Map-Elite, set the cell description
_pop[pop_index]->fit().set_desc(0, argmaxs[i]);
}
}
};
public:
template<typename Phen>
void eval(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace trace("eval", DBG_HERE);
assert(pop.size());
assert(begin < pop.size());
assert(end <= pop.size());
// Develop phenotypes in parallel using TBB.
// The barrier is implicitly set here after the for-loop in TBB.
//parallel::init();
// We have only 2 GPUs per node
//tbb::task_scheduler_init init1(4);
parallel::p_for(parallel::range_t(begin, end),
_parallel_develop<Phen>(pop));
// Number of eval iterations
const size_t count = end - begin;
LOG(INFO) << "Size: " << count << " vs " << Params::image::batch << "\n";
// Load balancing
// We have only 2 GPUs per node
//tbb::task_scheduler_init init2(2);
parallel::p_for(
parallel::range_t(begin, end, Params::image::batch),
_parallel_cuda_eval<Phen>(pop));
}
};
}
}
bool sferes::caffe::CaffeFactory::initialized;
int sferes::caffe::CaffeFactory::_status;
Net<float>* sferes::caffe::CaffeFactory::_net_1;
Net<float>* sferes::caffe::CaffeFactory::_net_2;
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef EVAL_MPI_PARALLEL_HPP_
#define EVAL_MPI_PARALLEL_HPP_
#include <sferes/parallel.hpp>
#include <boost/mpi.hpp>
//#ifndef BOOST_MPI_HAS_NOARG_INITIALIZATION
//#error MPI need arguments (we require a full MPI2 implementation)
//#endif
#define MPI_INFO dbg::out(dbg::info, "mpi")<<"["<<_world->rank()<<"] "
namespace sferes {
namespace eval {
SFERES_CLASS(MpiParallel) {
public:
MpiParallel() {
static char* argv[] = {(char*)"sferes2", 0x0};
char** argv2 = (char**) malloc(sizeof(char*) * 2);
int argc = 1;
argv2[0] = argv[0];
argv2[1] = argv[1];
using namespace boost;
dbg::out(dbg::info, "mpi")<<"Initializing MPI..."<<std::endl;
_env = shared_ptr<mpi::environment>(new mpi::environment(argc, argv2, true));
dbg::out(dbg::info, "mpi")<<"MPI initialized"<<std::endl;
_world = shared_ptr<mpi::communicator>(new mpi::communicator());
MPI_INFO << "communicator initialized"<<std::endl;
// Disable dumping out results for slave processes.
if (_world->rank() > 0)
{
Params::pop::dump_period = -1;
}
}
template<typename Phen>
void eval(std::vector<boost::shared_ptr<Phen> >& pop,
size_t begin, size_t end) {
dbg::trace("mpi", DBG_HERE);
if (_world->rank() == 0)
_master_loop(pop, begin, end);
else
_slave_loop<Phen>();
}
~MpiParallel()
{
MPI_INFO << "Finalizing MPI..."<<std::endl;
std::string s("bye");
if (_world->rank() == 0)
for (size_t i = 1; i < _world->size(); ++i)
_world->send(i, _env->max_tag(), s);
_finalize();
}
protected:
void _finalize()
{
_world = boost::shared_ptr<boost::mpi::communicator>();
dbg::out(dbg::info, "mpi")<<"MPI world destroyed"<<std::endl;
_env = boost::shared_ptr<boost::mpi::environment>();
dbg::out(dbg::info, "mpi")<<"environment destroyed"<<std::endl;
}
template<typename Phen>
void _master_loop(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace("mpi", DBG_HERE);
size_t current = begin;
std::vector<bool> evaluated(pop.size());
std::fill(evaluated.begin(), evaluated.end(), false);
// first round
for (size_t i = 1; i < _world->size() && current < end; ++i) {
MPI_INFO << "[master] [send-init...] ->" <<i<<" [indiv="<<current<<"]"<<std::endl;
_world->send(i, current, pop[current]->gen());
MPI_INFO << "[master] [send-init ok] ->" <<i<<" [indiv="<<current<<"]"<<std::endl;
++current;
}
// send a new indiv each time we receive a fitness
while (current < end) {
boost::mpi::status s = _recv(evaluated, pop);
MPI_INFO << "[master] [send...] ->" <<s.source()<<" [indiv="<<current<<"]"<<std::endl;
_world->send(s.source(), current, pop[current]->gen());
MPI_INFO << "[master] [send ok] ->" <<s.source()<<" [indiv="<<current<<"]"<<std::endl;
++current;
}
//join
bool done = true;
do {
dbg::out(dbg::info, "mpi")<<"joining..."<<std::endl;
done = true;
for (size_t i = begin; i < end; ++i)
if (!evaluated[i]) {
_recv(evaluated, pop);
done = false;
}
} while (!done);
}
template<typename Phen>
boost::mpi::status _recv(std::vector<bool>& evaluated,
std::vector<boost::shared_ptr<Phen> >& pop)
{
dbg::trace("mpi", DBG_HERE);
using namespace boost::mpi;
status s = _world->probe();
MPI_INFO << "[rcv...]" << getpid() << " tag=" << s.tag() << std::endl;
//_world->recv(s.source(), s.tag(), pop[s.tag()]->fit());
// Receive the whole developed phenotype from slave processes
Phen p;
_world->recv(s.source(), s.tag(), p);
// Assign the developed data back to the current population for further evaluation
pop[s.tag()]->fit() = p.fit();
pop[s.tag()]->image() = p.image();
MPI_INFO << "[rcv ok]" << " tag=" << s.tag() << std::endl;
evaluated[s.tag()] = true;
return s;
}
template<typename Phen>
void _slave_loop()
{
dbg::trace("mpi", DBG_HERE);
while(true) {
Phen p;
boost::mpi::status s = _world->probe();
if (s.tag() == _env->max_tag()) {
MPI_INFO << "[slave] Quit requested" << std::endl;
MPI_Finalize();
exit(0);
} else {
MPI_INFO <<"[slave] [rcv...] [" << getpid()<< "]" << std::endl;
_world->recv(0, s.tag(), p.gen());
MPI_INFO <<"[slave] [rcv ok] " << " tag="<<s.tag()<<std::endl;
p.develop();
p.fit().eval(p);
MPI_INFO <<"[slave] [send...]"<<" tag=" << s.tag()<<std::endl;
//_world->send(0, s.tag(), p.fit()); // Send only the fitness back to master process
// Send the whole phenotype back to master process
_world->send(0, s.tag(), p);
MPI_INFO <<"[slave] [send ok]"<<" tag=" << s.tag()<<std::endl;
}
}
}
boost::shared_ptr<boost::mpi::environment> _env;
boost::shared_ptr<boost::mpi::communicator> _world;
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef EVAL_TBB_PARALLEL_HPP_
#define EVAL_TBB_PARALLEL_HPP_
#include <sferes/parallel.hpp>
#include <cmath>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "tbb_parallel_develop.hpp"
#include "tbb_parallel_eval.hpp"
#include <stdexcept>
namespace sferes {
namespace eval {
SFERES_CLASS(TBBParallel)
{
public:
template<typename Phen>
void eval(std::vector<boost::shared_ptr<Phen> >& pop, size_t begin, size_t end)
{
dbg::trace trace("eval", DBG_HERE);
assert(pop.size());
assert(begin < pop.size());
assert(end <= pop.size());
// Develop phenotypes in parallel using TBB.
// The barrier is implicitly set here after the for-loop in TBB.
parallel::init();
parallel::p_for(parallel::range_t(begin, end),
sferes::eval::parallel_develop<Phen>(pop));
// Number of eval iterations
const size_t count = end - begin;
LOG(INFO) << "Size: " << count << " vs " << Params::image::batch << "\n";
// Evaluate phenotypes in parallel using TBB.
parallel::p_for(
parallel::range_t(begin, end, Params::image::batch),
sferes::eval::parallel_tbb_eval<Phen>(pop, Params::image::model_definition, Params::image::pretrained_model));
}
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef TBB_PARALLEL_DEVELOP_HPP_
#define TBB_PARALLEL_DEVELOP_HPP_
#include <sferes/parallel.hpp>
namespace sferes {
namespace eval {
/**
* Develop phenotypes in parallel using TBB.
*/
template<typename Phen>
struct parallel_develop {
typedef std::vector<boost::shared_ptr<Phen> > pop_t;
pop_t _pop;
~parallel_develop() { }
parallel_develop(pop_t& pop) : _pop(pop) {}
parallel_develop(const parallel_develop& ev) : _pop(ev._pop) {}
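// TBB may copy this functor across worker threads; copying the vector of
// shared_ptr shares the underlying individuals instead of cloning them.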
void operator() (const parallel::range_t& r) const {
for (size_t i = r.begin(); i != r.end(); ++i) {
assert(i < _pop.size());
_pop[i]->develop();
}
}
};
}
}
#endif
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef EVAL_TBB_PARALLEL_EVAL_HPP_
#define EVAL_TBB_PARALLEL_EVAL_HPP_
#include <sferes/parallel.hpp>
#include <cmath>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "tbb_parallel_develop.hpp"
#include <stdexcept>
namespace sferes {
namespace eval {
/**
* Evaluate phenotypes in parallel using TBB: each range of individuals is sent as one batch through a Caffe network.
*/
template<typename Phen>
struct parallel_tbb_eval {
typedef std::vector<boost::shared_ptr<Phen> > pop_t;
pop_t _pop;
std::string _model_definition;
std::string _pretrained_model;
~parallel_tbb_eval() { }
parallel_tbb_eval(pop_t& pop, const std::string& model_definition, const std::string& pretrained_model) :
_pop(pop),
_model_definition(model_definition),
_pretrained_model(pretrained_model)
{
}
parallel_tbb_eval(const parallel_tbb_eval& ev) :
_pop(ev._pop),
_model_definition(ev._model_definition),
_pretrained_model(ev._pretrained_model)
{
}
void operator() (const parallel::range_t& r) const
{
size_t begin = r.begin();
size_t end = r.end();
LOG(INFO) << "Begin: " << begin << " --> " << end << "\n";
dbg::trace trace("eval_cuda", DBG_HERE);
assert(_pop.size());
assert(begin < _pop.size());
assert(end <= _pop.size());
// The algorithm works as follows: send the individuals' images to Caffe as one
// batch, get back the list of results, and assign each result to its individual.
// Construct the list of images for this batch
std::vector<cv::Mat> images(0);
for (size_t i = begin; i < end; ++i)
{
cv::Mat output;
_pop[i]->imageBGR(output);
images.push_back( output ); // Add to a list of images
}
// Initialize Caffe net
shared_ptr<Net<float> > caffe_test_net =
boost::shared_ptr<Net<float> >(new Net<float>(_model_definition));
// Get the trained model
caffe_test_net->CopyTrainedLayersFrom(_pretrained_model);
// Run ForwardPrefilled
float loss;
// Number of images in this range
const size_t num_images = end - begin;
// Add images and labels manually to the ImageDataLayer
vector<int> labels(num_images, 0);
const shared_ptr<ImageDataLayer<float> > image_data_layer =
boost::static_pointer_cast<ImageDataLayer<float> >(
caffe_test_net->layer_by_name("data"));
image_data_layer->AddImagesAndLabels(images, labels);
// Classify this batch of images (batch size = num_images)
const vector<Blob<float>*>& result = caffe_test_net->ForwardPrefilled(&loss);
// Get the output of the top layer: (class, probability) pairs, two floats per image
const float* argmaxs = result[1]->cpu_data();
// Get back a list of results
LOG(INFO) << "Number of results: " << result[1]->num() << "\n";
// Assign the results to individuals
for (size_t i = 0; i < num_images * 2; i += 2)
{
LOG(INFO) << " Image: " << i/2 + 1 << " class: " << argmaxs[i] << " : " << argmaxs[i+1] << "\n";
size_t pop_index = begin + i/2; // Index of this individual in the population
// Set the fitness of this individual
_pop[pop_index]->fit().setFitness((float) argmaxs[i+1]);
// For Map-Elite, set the cell description
_pop[pop_index]->fit().set_desc(0, argmaxs[i]);
}
}
};
}
}
#endif
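A minimal sketch of how this batched evaluator might be driven over a population; the model paths are placeholders, the header name is assumed to be tbb_parallel_eval.hpp, and the parallel helpers are the same ones assumed above:
// Sketch: one Caffe forward pass per TBB range, so the grain size effectively sets the batch size.
#include <sferes/parallel.hpp>
#include "tbb_parallel_eval.hpp"
template<typename Phen>
void evaluate_all(std::vector<boost::shared_ptr<Phen> >& pop)
{
sferes::parallel::init();
sferes::eval::parallel_tbb_eval<Phen> body(pop,
"/path/to/deploy.prototxt", // placeholder model definition
"/path/to/weights.caffemodel"); // placeholder pretrained model
sferes::parallel::p_for(sferes::parallel::range_t(0, pop.size()), body);
}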
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef FIT_DEEP_LEARNING_HPP
#define FIT_DEEP_LEARNING_HPP
#include <sferes/fit/fitness.hpp>
// Caffe -------------------------------------------------
#include <cuda_runtime.h>
#include <cstring>
#include <cstdlib>
#include <vector>
#include <stdio.h>
#include <caffe/caffe.hpp>
#include <caffe/vision_layers.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
// Caffe -------------------------------------------------
using namespace caffe;
#define FIT_DEEP_LEARNING(Name) SFERES_FITNESS(Name, sferes::fit::Fitness)
namespace sferes
{
namespace fit
{
SFERES_FITNESS(FitDeepLearning, sferes::fit::Fitness)
{
protected:
/**
* Crop an image based on the coordinates and the size of the crop.
*/
static cv::Mat crop(const cv::Mat& image,
const size_t x, const size_t y, const size_t width, const size_t height, const size_t offset, const bool flip = false)
{
// Set up a rectangle defining the region of interest
cv::Rect myROI(x, y, width, height); // (x, y) is the top-left corner
// Crop the full image to the region contained by the rectangle myROI
// Note that this doesn't copy the data
cv::Mat croppedImage = image(myROI);
// Create a white background image of Params::image::size x Params::image::size (e.g., 256x256)
cv::Mat background(Params::image::size, Params::image::size, CV_8UC3, cv::Scalar(255, 255, 255));
// The crop size (e.g., 227) is odd while the image size (e.g., 256) is even, so
// offset/2 truncates; the flipped crop is shifted by one pixel to keep the
// mirrored crops aligned with their originals.
int left = offset/2;
if (flip)
{
left++;
}
// Because Caffe expects full-size (e.g., 256x256) images, paste the crop back onto the white background.
croppedImage.copyTo(background(cv::Rect(left, offset/2, width, height)));
return background;
}
/**
* Create ten crops (4 corners + 1 center, each with its horizontal mirror),
* following Krizhevsky et al. (2012).
* The 10 crops are appended to the given list.
*/
static void _createTenCrops(const cv::Mat& image, vector<cv::Mat>& list)
{
// Offset
const int crop_size = Params::image::crop_size;
const int offset = Params::image::size - crop_size;
// 1. Top-left
{
cv::Mat cropped = crop(image, 0, 0, crop_size, crop_size, offset);
// Add a crop to list
list.push_back(cropped);
cv::Mat flipped;
cv::flip(crop(image, 0, 0, crop_size, crop_size, offset, true), flipped, 1);
// Add a flipped crop to list
list.push_back(flipped);
}
// 2. Top-Right
{
cv::Mat cropped = crop(image, offset, 0, crop_size, crop_size, offset);
// Add a crop to list
list.push_back(cropped);
cv::Mat flipped;
cv::flip(crop(image, offset, 0, crop_size, crop_size, offset, true), flipped, 1);
// Add a flipped crop to list
list.push_back(flipped);
}
// 3. Bottom-left
{
cv::Mat cropped = crop(image, 0, offset, crop_size, crop_size, offset);
// Add a crop to list
list.push_back(cropped);
cv::Mat flipped;
cv::flip(crop(image, 0, offset, crop_size, crop_size, offset, true), flipped, 1);
// Add a flipped crop to list
list.push_back(flipped);
}
// 4. Bottom-right
{
cv::Mat cropped = crop(image, offset, offset, crop_size, crop_size, offset);
// Add a crop to list
list.push_back(cropped);
cv::Mat flipped;
cv::flip(crop(image, offset, offset, crop_size, crop_size, offset, true), flipped, 1);
// Add a flipped crop to list
list.push_back(flipped);
}
// 5. Center and its mirror
{
cv::Mat cropped = crop(image, offset/2, offset/2, crop_size, crop_size, offset);
// Add a crop to list
list.push_back(cropped);
cv::Mat flipped;
cv::flip(crop(image, offset/2, offset/2, crop_size, crop_size, offset, true), flipped, 1);
// Add a flipped crop to list
list.push_back(flipped);
}
}
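// Worked example of the crop geometry above, assuming size = 256 and crop_size = 227:
// offset = 256 - 227 = 29, so the four corner crops start at (0,0), (29,0), (0,29)
// and (29,29), and the center crop at (offset/2, offset/2) = (14,14); every crop is
// then pasted back onto a 256x256 white canvas (shifted by one pixel for the
// mirrored versions) before being handed to Caffe.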
private:
/**
* Evaluate the given image and return its predicted probability for the given category.
*/
float _getProbability(const cv::Mat& image, const int category)
{
this->initCaffeNet(); //Initialize caffe
// Initialize test network
shared_ptr<Net<float> > caffe_test_net = shared_ptr<Net<float> >( new Net<float>(Params::image::model_definition));
// Get the trained model
caffe_test_net->CopyTrainedLayersFrom(Params::image::pretrained_model);
// Run ForwardPrefilled
float loss;
// Add images and labels manually to the ImageDataLayer
vector<int> labels(10, 0);
vector<cv::Mat> images;
// Add images to the list
if (Params::image::use_crops)
{
// Ten crops have been stored in the vector
_createTenCrops(image, images);
}
else
{
images.push_back(image);
}
// Classify images
const shared_ptr<ImageDataLayer<float> > image_data_layer =
boost::static_pointer_cast<ImageDataLayer<float> >(
caffe_test_net->layer_by_name("data"));
image_data_layer->AddImagesAndLabels(images, labels);
const vector<Blob<float>*>& result = caffe_test_net->ForwardPrefilled(&loss);
// Get the output of the top softmax layer
const float* softmax = result[1]->cpu_data();
// If using 10 crops, average the predictions over the 10 crops
if (Params::image::use_crops)
{
vector<double> values;
// Average the predictions of evaluating 10 crops
for(int i = 0; i < Params::image::num_categories; ++i)
{
boost::accumulators::accumulator_set<double, boost::accumulators::stats<boost::accumulators::tag::mean> > avg;
for(int j = 0; j < 10 * Params::image::num_categories; j += Params::image::num_categories)
{
avg(softmax[i + j]);
}
double mean = boost::accumulators::mean(avg);
values.push_back(mean);
}
return values[category];
}
// If using only one crop
else
{
return softmax[category];
}
}
public:
// Indiv will have the type defined in the main (phen_t)
template<typename Indiv>
void eval(const Indiv& ind)
{
// Convert image to BGR before evaluating
cv::Mat output;
// Convert HLS into BGR because OpenCV and Caffe work in the BGR color space
cv::cvtColor(ind.image(), output, CV_HLS2BGR);
// Evolve images to be classified as the target category (Params::image::category_id)
this->_value = _getProbability(output, Params::image::category_id);
}
// Set the fitness directly (used by the batched TBB evaluator)
void setFitness(float value)
{
this->_value = value;
}
void initCaffeNet()
{
// Set test phase
Caffe::set_phase(Caffe::TEST);
if (Params::image::use_gpu)
{
// Set GPU mode
Caffe::set_mode(Caffe::GPU);
}
else
{
// Set CPU mode
Caffe::set_mode(Caffe::CPU);
}
}
};
}
}
#endif
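FitDeepLearning reads its settings from a compile-time Params struct supplied by the experiment. The sketch below lists the Params::image fields referenced in the code above; the field names come from the code, but every value is illustrative only, and the string members would still need to be defined in the experiment's .cpp:
// Sketch of the Params::image fields used by FitDeepLearning (illustrative values, not the authors' settings).
struct Params
{
struct image
{
static const int size = 256; // canvas size pasted around each crop
static const int crop_size = 227; // network input size (AlexNet-style)
static const int num_categories = 1000; // e.g., ImageNet classes
static const int category_id = 0; // target class id (placeholder)
static const bool use_crops = true; // average the 10 crops
static const bool use_gpu = true; // Caffe GPU vs. CPU mode
static const bool color = true; // HLS genome converted to BGR before evaluation
static const std::string model_definition; // path to the deploy .prototxt, defined elsewhere
static const std::string pretrained_model; // path to the .caffemodel weights, defined elsewhere
};
};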
//| This file is a part of the sferes2 framework.
//| Copyright 2009, ISIR / Universite Pierre et Marie Curie (UPMC)
//| Main contributor(s): Jean-Baptiste Mouret, mouret@isir.fr
//|
//| This software is a computer program whose purpose is to facilitate
//| experiments in evolutionary computation and evolutionary robotics.
//|
//| This software is governed by the CeCILL license under French law
//| and abiding by the rules of distribution of free software. You
//| can use, modify and/ or redistribute the software under the terms
//| of the CeCILL license as circulated by CEA, CNRS and INRIA at the
//| following URL "http://www.cecill.info".
//|
//| As a counterpart to the access to the source code and rights to
//| copy, modify and redistribute granted by the license, users are
//| provided only with a limited warranty and the software's author,
//| the holder of the economic rights, and the successive licensors
//| have only limited liability.
//|
//| In this respect, the user's attention is drawn to the risks
//| associated with loading, using, modifying and/or developing or
//| reproducing the software by the user in light of its specific
//| status of free software, that may mean that it is complicated to
//| manipulate, and that also therefore means that it is reserved for
//| developers and experienced professionals having in-depth computer
//| knowledge. Users are therefore encouraged to load and test the
//| software's suitability as regards their requirements in conditions
//| enabling the security of their systems and/or data to be ensured
//| and, more generally, to use and operate it in the same conditions
//| as regards security.
//|
//| The fact that you are presently reading this means that you have
//| had knowledge of the CeCILL license and that you accept its terms.
#ifndef FIT_MAP_DEEP_LEARNING_HPP
#define FIT_MAP_DEEP_LEARNING_HPP
#include "fit_deep_learning.hpp"
#include <modules/map_elite/fit_map.hpp>
#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics/stats.hpp>
// Headers specifics to the computations we need
#include <boost/accumulators/statistics/mean.hpp>
#include <boost/accumulators/statistics/max.hpp>
#define FIT_MAP_DEEP_LEARNING(Name) SFERES_FITNESS(Name, sferes::fit::FitDeepLearning)
namespace sferes
{
namespace fit
{
SFERES_FITNESS(FitMapDeepLearning, sferes::fit::FitDeepLearning)
{
/*
private:
struct ArgMax
{
unsigned int category;
float probability;
};
ArgMax getMaxProbability(const cv::Mat& image)
{
this->initCaffeNet(); //Initialize caffe
// Initialize test network
shared_ptr<Net<float> > caffe_test_net = shared_ptr<Net<float> >( new Net<float>(Params::image::model_definition));
// Get the trained model
caffe_test_net->CopyTrainedLayersFrom(Params::image::pretrained_model);
// Run ForwardPrefilled
float loss; // const vector<Blob<float>*>& result = caffe_test_net.ForwardPrefilled(&loss);
// Add images and labels manually to the ImageDataLayer
vector<cv::Mat> images(1, image);
vector<int> labels(1, 0);
const shared_ptr<ImageDataLayer<float> > image_data_layer =
boost::static_pointer_cast<ImageDataLayer<float> >(
caffe_test_net->layer_by_name("data"));
image_data_layer->AddImagesAndLabels(images, labels);
vector<Blob<float>* > dummy_bottom_vec;
const vector<Blob<float>*>& result = caffe_test_net->Forward(dummy_bottom_vec, &loss);
// Get the highest layer of Softmax
const float* argmax = result[1]->cpu_data();
ArgMax m;
m.category = (int) argmax[0]; // Category
m.probability = (float) argmax[1]; // Probability
return m;
}
*/
private:
void _setProbabilityList(const cv::Mat& image)
{
this->initCaffeNet(); //Initialize caffe
// Initialize test network
shared_ptr<Net<float> > caffe_test_net = shared_ptr<Net<float> >( new Net<float>(Params::image::model_definition));
// Get the trained model
caffe_test_net->CopyTrainedLayersFrom(Params::image::pretrained_model);
// Run ForwardPrefilled
float loss;
// Add images and labels manually to the ImageDataLayer
vector<int> labels(10, 0);
vector<cv::Mat> images;
// Add images to the list
if (Params::image::use_crops)
{
// Ten crops have been stored in the vector
this->_createTenCrops(image, images);
}
else
{
images.push_back(image);
}
// Classify images
const shared_ptr<ImageDataLayer<float> > image_data_layer =
boost::static_pointer_cast<ImageDataLayer<float> >(
caffe_test_net->layer_by_name("data"));
image_data_layer->AddImagesAndLabels(images, labels);
const vector<Blob<float>*>& result = caffe_test_net->ForwardPrefilled(&loss);
// Get the output of the top softmax layer
const float* softmax = result[1]->cpu_data();
boost::accumulators::accumulator_set<double, boost::accumulators::stats<boost::accumulators::tag::max> > max;
// Clear the probability list in case eval() is called twice
_prob.clear();
// If using 10 crops, average the predictions over the 10 crops
if (Params::image::use_crops)
{
// Average the predictions of evaluating 10 crops
for(int i = 0; i < Params::image::num_categories; ++i)
{
boost::accumulators::accumulator_set<double, boost::accumulators::stats<boost::accumulators::tag::mean> > avg;
for(int j = 0; j < 10 * Params::image::num_categories; j += Params::image::num_categories)
{
avg(softmax[i + j]);
}
double mean = boost::accumulators::mean(avg);
// Store one probability per category (Params::image::num_categories in total)
_prob.push_back(mean);
max(mean); // Feed the mean to the max accumulator for computing the fitness later
}
}
else
{
for(int i = 0; i < Params::image::num_categories; ++i)
{
float v = softmax[i];
// Store one probability per category (Params::image::num_categories in total)
_prob.push_back(v);
max(v); // Feed the value to the max accumulator for computing the fitness later
}
}
float max_prob = boost::accumulators::max(max);
// Set the fitness
this->_value = max_prob;
}
public:
FitMapDeepLearning() : _prob(Params::image::num_categories) { }
const std::vector<float>& desc() const { return _prob; }
// Indiv will have the type defined in the main (phen_t)
template<typename Indiv>
void eval(const Indiv& ind)
{
if (Params::image::color)
{
// Convert image to BGR before evaluating
cv::Mat output;
// Convert HLS into BGR because OpenCV and Caffe work in the BGR color space
cv::cvtColor(ind.image(), output, CV_HLS2BGR);
// Fill the list of per-category probabilities
_setProbabilityList(output);
}
else // Grayscale
{
// Fill the list of per-category probabilities
_setProbabilityList(ind.image());
}
}
float value(int category) const
{
assert(category < _prob.size());
return _prob[category];
}
float value() const
{
return this->_value;
}
template<class Archive>
void serialize(Archive & ar, const unsigned int version) {
sferes::fit::Fitness<Params, typename stc::FindExact<FitMapDeepLearning<Params, Exact>, Exact>::ret>::serialize(ar, version);
ar & BOOST_SERIALIZATION_NVP(_prob);
}
protected:
std::vector<float> _prob; // List of probabilities
};
}
}
#endif
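The 10-crop averaging above depends on the flattened layout of the softmax blob: the probabilities for crop j occupy indices [j*num_categories, (j+1)*num_categories). A tiny standalone illustration of the same stride arithmetic, with made-up numbers (3 categories, 2 crops):
// Standalone illustration of the crop-averaging index arithmetic used in _setProbabilityList.
#include <cstdio>
int main()
{
const int num_categories = 3; // toy value; ImageNet would use 1000
const int num_crops = 2; // toy value; the fitness uses 10 crops
// Flattened softmax output: crop 0 first, then crop 1, num_categories entries each.
const float softmax[num_crops * num_categories] = {
0.2f, 0.7f, 0.1f, // crop 0
0.4f, 0.5f, 0.1f }; // crop 1
for (int i = 0; i < num_categories; ++i)
{
float sum = 0.f;
for (int j = 0; j < num_crops * num_categories; j += num_categories)
sum += softmax[i + j]; // same stride pattern as the accumulator loop above
std::printf("category %d: mean = %.2f\n", i, sum / num_crops); // prints 0.30, 0.60, 0.10
}
return 0;
}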
/home/anh/workspace/sferes/exp/images/imagenet/hen_256.png 1
\ No newline at end of file
/*
* settings.h
*
* Created on: Jul 16, 2014
* Author: anh
*/
#ifndef SETTINGS_H_
#define SETTINGS_H_
//#define LOCAL_RUN
//#define NB_THREADS 16
#endif /* SETTINGS_H_ */