// main.c
/*************************************************************************
TRIBES, a fully adaptive particle swarm optimiser
-------------------
last update : 2005-01-12
email : Maurice.Clerc@WriteMe.com
Home page : http://www.mauriceclerc.net
***************************************************************************/
/*
Have fun, and keep me posted if you do something interesting with it,
or if you find a bug (I'm pretty sure there are some!).
*/
/***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************/
/* Recent updates
2005-01
2004-11 Specific binary strategies (and examples of binary problems).
Some of them with a very simple "taboo" rule.
2004-07
- added a pseudo-gradient method to define the best informer
- rewritten the "remove worst particle" part. There was a minor bug.
The algorithm is so robust that most of the time it did not matter,
but sometimes it did. So, overall, performance is now a bit better.
- added link_reorg. Not very convincing, though.
- you can now use discrete variables whose values can't be simply
computed by using min, max and granularity. The acceptable values are read
on the file discrete.txt
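As an illustration of how such an explicit list of acceptable values can be used, here is a minimal sketch (function and parameter names are mine, not the ones in the TRIBES sources) of projecting a coordinate onto the nearest acceptable value:

```c
#include <math.h>

// Hypothetical sketch: project a coordinate onto the nearest value of an
// explicit list of acceptable values (as read, say, from discrete.txt).
// Names are illustrative, not those used in the TRIBES code.
double nearest_discrete(double x, const double *values, int n)
{
    double best = values[0];
    for (int i = 1; i < n; i++)
        if (fabs(values[i] - x) < fabs(best - x))
            best = values[i];
    return best;
}
```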
*/
/* TO DO
- adding something so that you can cope with more complicated search spaces
(not only hyperparallelepipeds). WORK STILL IN PROGRESS.
The perfect way, for a (semi) continuous search space would be by homeomorphism.
Quite difficult, though.
*/
/*
************** TRIBES: An adaptive parameter free Particle Swarm Optimiser ******************************
This version does not really need any parameters at all, except, of course, the ones
defining the problem (file problems.txt):
- function
- dimension
- xmin, xmax, granularity (for the hyperparallelepipedic search space)
- target
- admissible error
- if all components of the solution have to be different (useful for some combinatorial problems)
Granularity. If you _know_ that the solution, on some dimensions, is at an "integer" point, or even a rational one,
you can give this information.
For example, if for a given dimension the solution is an integer, then you can set granularity to 1 for this dimension:
the process may be faster (example: functions 4 and 6, or 1 and 2 if S is an integer).
You also have to set granularity to 1 for some "combinatorial" problems like Knapsack, Fifty-fifty or Magic Square.
You can try to optimize some simple (and less simple) functions, for example:
1) find DD numbers that sum to S.
2) find DD numbers whose product is S
3) Sphere. Find DD numbers whose sum of squares is S
4) Rosenbrock (Banana)
5) Clerc's f1 (Alpine)
6) Griewank
7) Rastrigin
8) Fifty-fifty Problem (in particular with granul=1)
9) Ackley
10) Foxholes 2D.
etc.
(see also the file functions.txt).
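Two of the objectives above can be written down in a few lines (names and signatures are illustrative, not those of the TRIBES sources, where functions are selected by number):

```c
// Minimal sketches of two benchmark objectives from the list above.
double sphere(const double *x, int D) // 3) sum of squares (here with S=0)
{
    double f = 0.0;
    for (int d = 0; d < D; d++)
        f += x[d] * x[d];
    return f;
}

double rosenbrock(const double *x, int D) // 4) "Banana", minimum 0 at (1,...,1)
{
    double f = 0.0;
    for (int d = 0; d < D - 1; d++)
    {
        double a = x[d + 1] - x[d] * x[d];
        double b = 1.0 - x[d];
        f += 100.0 * a * a + b * b;
    }
    return f;
}
```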
--------
TSP
There is now a Traveling Salesman Problem option (30). Not that good, but it works for small
graphs (typically <20 nodes). The best way seems to be also the simplest:
just set granularity to 1 and activate the all_different option.
However, some features (in particular initialization and local search) are specific (see TSP.c).
Of course, you need a file describing the graph (full matrix TSPLIB format).
I also tried a continuous relaxation of the problem in another version, but it does not give good results.
Note that PSO for TSP may work quite well, but only by adding some local search methods that are not in Tribes.
---------
QAP
As TSP is a particular case of QAP (Quadratic Assignment Problem), I added this option (32), which works
almost exactly like TSP.
---------
Multiobjective (see, for example, 33 etc.)
Using Tribes for multiobjective problems is very easy, for each run gives a different solution,
thanks to the random initialization of the first swarm (usually just one particle, at the very beginning).
The trick is to be able to compare two positions, no matter whether one or several functions are taken into account.
This is done in better_than().
Some points:
- set accuracy to 0.
- it never really "converges", so the stop criterion is the max number of evaluations you give, Max_Eval.
- I have added a module that "cleans up" the result file (run.txt), keeping only the
non-dominated solutions (in file run_cleaned.txt).
In practice, it is better to run many times (say 500) with a small Max_Eval (say 20),
and then to clean up the result.
Note 1
Sometimes you may prefer to use only positive functions.
Then just add big enough constants. See 34 (Schaffer) for example.
Note 2
The multiobjective approach is a nice way to take complicated constraints into account.
See, for example, the Pressure Vessel example (code function 52).
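The comparison that makes all this work can be sketched as a standard Pareto-dominance test (this is my guess at the spirit of better_than(), not the actual code):

```c
// Hedged sketch of a Pareto-dominance test, in the spirit of better_than():
// for minimisation, A dominates B if A is no worse on every objective and
// strictly better on at least one. Names are illustrative.
int dominates(const double *fA, const double *fB, int n_obj)
{
    int strictly_better = 0;
    for (int k = 0; k < n_obj; k++)
    {
        if (fA[k] > fB[k])
            return 0; // worse on one objective: no dominance
        if (fA[k] < fB[k])
            strictly_better = 1;
    }
    return strictly_better;
}
```

Cleaning up run.txt then amounts to keeping every solution that no other solution dominates.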
--------
You can add your own function (see myfunction.c and functions.txt).
Just for fun, using function 2 (DD-product), you can then factorize a (small!) integer.
Note:
For test purposes (cf. file problems.txt), there _are_ some parameters, in particular:
no_begin:
you may choose the "best" neighbour at random, or with or without the pseudo-gradient method
adapt:
If adapt=n, it means you want to use a swarm of constant size n
(and the neighbourhood of each particle is the whole swarm).
For classical test functions, and with n of about 20, this version is often better
than the ones that don't use hyperspheres.
If adapt=0, it means you want to use the complete adaptive method.
Not always better than the previous one, but you don't even have to "guess"
what the right swarm size is.
Strategies:
There is no reason to adopt the same strategy for a "good" particle and for a "bad" one.
These parameters define which one to use in which case.
Note, though, that strategy 1 is a compromise for any case.
non_unif coefficients:
Using these coefficients, distributions in hyperspheres can be non uniform
(Gaussian, for example).
In practice, uniform distribution seems OK.
Fuzzification:
You may use fuzzy decision rules and/or fuzzy distributions.
It doesn't seem to be very useful.
----------------------------------------------
Equation
--------
For each particle, we have

x(t+1) = x(t) + w(x(t), x(t-1), p_i(t), p_g(t))

with
x(t)   := position at time t
x(t-1) := position at time t-1
p_i(t) := previous best position of the particle (at time t)
p_g(t) := previous best position found so far in the i-group (informer group) of the particle (at time t),
including the particle itself
w(x1,x2,x3,x4) := a vectorial function of four positions
Unlike "classical" PSOs, there is no explicit velocity,
and w does not depend at all on more or less arbitrary "weights",
cognitive or social coefficients (often called alpha, phi_1, phi_2).
A promising area is defined using hyperspheres (cf. move_particle).
Download
--------
Well, you have it, haven't you?
For more information about PSO
http://www.mauriceclerc.net, Math stuff for PSO
And for even more information, see the Particle Swarm Central http://www.particleswarm.net
Compiling the program
-------------------
It is written in ANSI C, so you shouldn't have any problem.
Using the program
-----------------
See the file problems.txt.
"Problems" describes the problem(s) (function, dimension, search space, etc.).
*/
/* INFORMAL DESCRIPTION
Common part
-----------
At the very beginning, there are n particles {0,1,...,n-1},
n=3 (or even 1) for the complete adaptive version. So we have at the same time
- the swarm,
- a tribe T0,
- three i-groups I0, I1, I2 (informer groups).
Each particle belongs to the tribe T0
For each particle, its i-group contains:
i) the particle itself
ii) the best particle of the tribe
At each time step each particle "moves" towards a more promising area,
using its knowledge (x,p_i,p_g).
- let H_i be the hypersphere radius=norm(p_i-p_g), center p_i
- let H_g be the hypersphere radius=norm(p_i-p_g), center p_g
- a point pp_i is randomly chosen in H_i
(WARNING. It is not that easy, see rand_in_hypersphere)
- a point pp_g is randomly chosen in H_g
- x(t+1) is a weighted combination of pp_i and pp_g,
according to the f values on p_i and p_g.
Note: if you don't use adaptive version, it means you have a global PSO
(the neighbourhood of each particle is the whole swarm).
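The weighted-combination step above can be sketched like this (a minimal, assumption-laden version; the real logic is in move_particle(), and positive, to-be-minimised f values are assumed here):

```c
// Hedged sketch of the recombination x(t+1) = weighted mix of pp_i and pp_g.
// Assumes positive objective values and minimisation: the point whose
// attractor has the smaller f gets the larger weight. Names are illustrative.
void combine_move(double *x_next, const double *pp_i, const double *pp_g,
                  double f_pi, double f_pg, int D)
{
    double w_i = f_pg / (f_pi + f_pg); // small f_pi => large weight on pp_i
    double w_g = f_pi / (f_pi + f_pg);
    for (int d = 0; d < D; d++)
        x_next[d] = w_i * pp_i[d] + w_g * pp_g[d];
}
```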
Adaptive part
-------------
From time to time social adaptation is performed:
- for each tribe, one counts how many particles have improved their best performance (n).
If n=0, the tribe is said to be "bad";
if n >= size(tribe)/2, the tribe is said to be "good".
- if there is at least one bad tribe, a new tribe is generated
- for each bad tribe:
- the best particle is found and "generates" a new particle
in the new tribe, for the moment purely at random,
- this new particle is added to the i-group of this best particle
- for each particle of the new tribe, the i-group contains
i) the particle itself
ii) the best particle of the new tribe,
iii) the particle that has generated it (symmetry. Note that thanks to the i-group
concept, you could build non symmetrical relationships)
- for each "good" tribe
- the worst particle is removed (if it is bad)
- in all i-groups it is replaced by the best one of the tribe (general case)
or by the best particle of the i-group of the removed particle
(mono-particle tribe case)
- its i-group (except, of course, the removed particle itself) is merged into that of the particle
that replaces it.
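The good/bad rule above can be written down directly (a sketch under the stated rule only; types and names are illustrative, and the actual structures in def_struct.c are certainly different):

```c
// Sketch of the tribe-quality rule: n = number of particles that improved
// their best performance since the last adaptation. n == 0 => "bad" tribe,
// n >= size/2 => "good" tribe; the in-between case is labelled neutral here.
enum tribe_status { TRIBE_BAD, TRIBE_NEUTRAL, TRIBE_GOOD };

enum tribe_status tribe_quality(const int *improved, int size)
{
    int n = 0;
    for (int i = 0; i < size; i++)
        if (improved[i])
            n++;
    if (n == 0)
        return TRIBE_BAD;
    if (2 * n >= size) // equivalent to n >= size/2, without integer division
        return TRIBE_GOOD;
    return TRIBE_NEUTRAL;
}
```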
Example
-------
Very beginning
T0 = {0,1,2}
i-group table
particle i-group
0 {0,1,2}
1 {0,1,2}
2 {0,1,2}
========
Adaptation 1, T0 is "bad", best particle 1.
A new tribe T1 is generated.
3 is generated by 1 in the new tribe T1
T0 = {0,1,2}
T1 = {3}
i-group table
--------------
particle i-group
0 {0,1,2}
1 {0,1,2,3}
2 {0,1,2}
3 {1,3}
=========
Adaptation 2. T0 is good, best particle 0, worst 1, T1 is bad, best particle 3.
T2 is generated.
1 is removed (and replaced by 0 for the i-groups)
4 is generated by 0 into T2.
T0 = {0,2}
T1 = {3}
T2 = {4}
i-group table
--------------
particle i-group
0 {0,2,3}
2 {0,2}
3 {0,3,4} (1 has been replaced by 0)
4 {3,4}
==========
Adaptation 3. T0 is bad (best particle 2, worst particle 0),T1 is good, T2 is bad.
T3 is generated
0 is removed (replaced by 2 for i-groups)
6 is generated by 2 in T3
8 is generated by 4 in T3
T0 = {2,5}
T1 = {3}
T2 = {4}
T3 = {6,8}
i-group table
--------------
particle i-group
2 {2,5,6}
3 {3,7}
4 {4,8}
5 {2,5}
6 {2,6,8}
7 {3,7}
8 {4,8}
etc.
*/
// DEPENDING on how you build the application you may or may not need to "include"
// the following files
#include "def_struct.c"
// External subroutines
#include "movpeaks_mc.c"
#include "read_display_save.c"
#include "tools.c" // Some mathematical tools like alea(), max(), min(), regranul() etc.
#include "extra_tools.c" // Some "informative" tools, not strictly necessary, like distance(), energy() etc.
#include "myfunction.c" // Write your own objective function in this file.
#include "myconstrain.c" // Write your own constraint function g(position)>=0
#include "apple_trees.c" // For "apple trees" example
#include "TSP.c"
#include "QAP.c"
#include "MINLP.c"
#include "annxor.c"
#include "annparity.c"
#include "anncolorcube.c"
#include "annpima.c"
#include "annsinsimp.c"
#include "annservo.c"
//=========================================================================================
int main(int argc, char *argv[])
{
int bidon;
int i,j;
int level;
float Max_Eval;
int nb_pb;
struct position result;
double t;
E=exp(1);
pi=(double)2*acos(0);
two_pi=2*pi;
printf("\n t %f",20.0); // Test of code generation. Should print 20.000000
// Initialize function names. Just for display
if ((f_functs = fopen("functions.txt","r")) == NULL)
{
fprintf(stderr,"Can't open the file functions.txt\n");
exit(1);
}
i=1;
for (;;)
{
fscanf(f_functs,"%i %s\n",&bidon,&functions[i]);
if (bidon == -1) break; // -1 is the sentinel ending the list
// printf("\n %i %s",bidon,functions[i]);
i=i+1;
}
printf("\n-----------------------------------\n");
fclose (f_functs);
//=================== Other files (keep them open during the whole process)
f_problem=fopen("problems.txt","r");
if (f_problem == NULL)
{
fprintf(stderr,"Can't open the file problems.txt\n");
exit(1);
}
f_discrete=fopen("discrete.txt","r");
if (f_discrete == NULL)
{
fprintf(stderr,"Can't open the file discrete.txt\n");
exit(1);
}
printf("\n Open files");
f_run=fopen("run.txt","w");
f_run_c=fopen("run_c.txt","w");
f_swarm=fopen("swarm.txt","w");
f_synth=fopen("synth.txt","w");
f_trace=fopen("trace.txt","w");
f_energy=fopen("energy.txt","w");
f_trace_run=fopen("trace_run.txt","w");
//-----------------
// Read strategies (for tests; will be hard-coded later). See move_particle()
for (i=0;i<2;i++)
{
for (j=0;j<Max_status;j++)
fscanf(f_problem, "%f",&strategies[i][j]);
}
fscanf(f_problem,"%i",&confin_interv); // If <0, no interval (boundary) confinement
fscanf(f_problem,"%i",&circular_hood); // Option. Usually equal to 0.
// If >0, indicates the size of the circular neighbourhood
// This option is just to rapidly compare with parametric PSO
if (circular_hood<0)
{
if (circular_hood<-1)
{
circular_hood=-circular_hood;
rand_hood=circular_hood;
}
else
{
//rand_hood=K_optim(problem[level].init_size,problem[level].eps);
// Just for info. : init_size and eps are not yet known
}
}
else
{
rand_hood=0;