An algorithm for global optimization inspired by collective animal behavior
Subject classification: this is a biology resource.
By Erik Cuevas, Mauricio González, Daniel Zaldivar and Marco Pérez-Cisneros
Introduction
Global Optimization (GO) has yielded remarkable applications in many areas of science, engineering, economics and others through mathematical modelling [1]. In general, the goal is to find a global optimum of an objective function defined over a given search space. Global optimization algorithms are usually broadly divided into deterministic and metaheuristic [2]. Since deterministic methods only provide a theoretical guarantee of locating a local minimum of the objective function, they often face great difficulties in solving global optimization problems [3]. On the other hand, metaheuristic methods are usually faster in locating a global optimum [4]. They adapt readily to black-box formulations and extremely ill-behaved functions, whereas deterministic methods usually require some theoretical assumptions about the problem formulation and its analytical properties (such as Lipschitz continuity) [5].
Several metaheuristic algorithms have been developed by combining rules and randomness so as to mimic natural phenomena.
Many studies have drawn inspiration from animal behavior to develop optimization techniques. For instance, the Particle Swarm Optimization (PSO) algorithm models the social behavior of bird flocking or fish schooling [6]. PSO consists of a swarm of particles which move towards the best positions seen so far within a searchable space of possible solutions. Another behavior-inspired approach is the Ant Colony Optimization (ACO) algorithm proposed by Dorigo et al. [7], which simulates the behavior of real ant colonies. The main features of the ACO algorithm are distributed computation, positive feedback and a constructive greedy search. Recently, a new metaheuristic approach based on animal hunting behavior has been proposed in [8]. That algorithm considers hunters as search positions and prey as potential solutions.
Just recently, the concept of individual-organization [9,10] has been widely referenced to understand the collective behavior of animals. The central principle of individual-organization is that simple repeated interactions between individuals can produce complex behavioral patterns at the group level [9,11,12]. Such inspiration comes from behavioral patterns previously observed in several animal groups. Examples include ant pheromone trail networks, the aggregation of cockroaches and the migration of fish schools, all of which can be accurately described in terms of individuals following simple sets of rules [13]. Some examples of these rules [12,14] are keeping the current position (or location) for the best individuals, local attraction or repulsion, random movements and competition for the space within a determined distance.
On the other hand, new studies [15-17] have also shown the existence of collective memory in animal groups. The presence of such memory establishes that the previous history of the group structure influences the collective behavior exhibited in future stages. According to this principle, it is possible to model complex collective behaviors by using simple individual rules and configuring a general memory.
In this paper, a metaheuristic algorithm for global optimization called Collective Animal Behavior (CAB) is introduced. In this algorithm, the searcher agents emulate a group of animals that interact with each other based on simple behavioral rules which are modeled as mathematical operators. Such operations are applied to each agent considering that the complete group has a memory storing the best positions seen so far, using a competition principle. The proposed approach has been compared to other well-known optimization methods. The results confirm the high performance of the proposed method in solving various benchmark functions.
Biological fundamentals
The remarkable collective behavior of organisms such as swarming ants, schooling fish and flocking birds has long captivated the attention of naturalists and scientists. Despite a long history of scientific research, the relationship between individuals and group-level properties has only recently begun to be deciphered [18].
Grouping individuals often have to make rapid decisions about where to move or what behavior to perform in uncertain and dangerous environments. However, each individual typically has only a relatively local sensing ability [19]. Groups are, therefore, often composed of individuals that differ with respect to their informational status, and individuals are usually not aware of the informational state of others [20], such as whether they are knowledgeable about a pertinent resource or about a threat.
Animal groups are based on a hierarchical structure [21] which ranks individuals according to a fitness principle called dominance [22]: the ascendancy of some individuals within a group that arises when competition for resources leads to confrontation. Several studies [23,24] have found that such behavior leads to more stable groups with better cohesion among individuals.
Recent studies have begun to elucidate how repeated interactions among grouping animals scale to collective behavior. They have remarkably revealed that collective decision-making mechanisms across a wide range of animal group types, from insects to birds (and even among humans in certain circumstances), seem to share similar functional characteristics [9,13,25]. Furthermore, at a certain level of description, collective decision-making by organisms shares essential common features such as a general memory. Although some differences may arise, there are good reasons to increase communication between researchers working in collective animal behavior and those involved in cognitive science [12].
Despite the variety of behaviors and motions of animal groups, it is possible that many of the different collective behavioral patterns are generated by simple rules followed by individual group members. Several models have been developed; one of them, known as the self-propelled particle (SPP) model, attempts to capture the collective behavior of animal groups in terms of interactions between group members which follow a diffusion process [26-29].
The dynamical spatial structure of an animal group can be explained in terms of its history [24]. Despite this fact, the majority of studies have failed to consider the existence of memory in behavioral models. However, recent research [15,30] has also shown the existence of collective memory in animal groups. The presence of such memory establishes that the previous history of the group structure influences the collective behavior exhibited in future stages. Such memory can contain the location of special group members (the dominant individuals) or the averaged movements produced by the group.
According to these new developments, it is possible to model complex collective behaviors by using simple individual rules and setting a general memory. In this work, the behavioral model of animal groups inspires the definition of novel evolutionary operators which outline the CAB algorithm. A memory is incorporated to store best animal positions (best solutions) considering a competition-dominance mechanism.
Collective Animal Behavior Algorithm (CAB)
The CAB algorithm assumes the existence of a set of operations that resembles the interaction rules that model the collective animal behavior. In the approach, each solution within the search space represents an animal position. The "fitness value" refers to the animal dominance with respect to the group. The complete process mimics the collective animal behavior.
The approach in this paper implements a memory for storing best solutions (animal positions), mimicking the aforementioned biological process. Such memory is divided into two different elements: one for maintaining the best locations found at each generation (Mg) and the other for storing the best historical positions found during the complete evolutionary process (Mh).
Description of the CAB algorithm
Following other metaheuristic approaches, the CAB algorithm is an iterative process that starts by initializing the population randomly (randomly generated solutions or animal positions). Then, the following four operations are applied until a termination criterion is met (i.e. the iteration number NI is reached):
- Keep the position of the best individuals.
- Move from or to nearby neighbors (local attraction and repulsion).
- Move randomly.
- Compete for the space within a determined distance (update the memory).
Initializing the population
The algorithm begins by initializing a set A of Np animal positions (A = {a1, a2, ..., aNp}). Each animal position ai is a D-dimensional vector containing the parameter values to be optimized. Such values are randomly and uniformly distributed between the pre-specified lower initial parameter bound $a_j^{low}$ and the upper initial parameter bound $a_j^{high}$:
$a_{j,i} = a_j^{low} + \text{rand}(0,1) \cdot (a_j^{high} - a_j^{low});$ (1)
$j = 1, 2, \dots, D; \quad i = 1, 2, \dots, N_p.$
with j and i being the parameter and individual indexes, respectively. Hence, $a_{j,i}$ is the j-th parameter of the i-th individual.
All the initial positions A are sorted according to the fitness function (dominance) to form a new individual set X = {x1, x2, ..., xNp}, so that the best B positions can be chosen and stored in the memories Mg and Mh. Both memories are allowed to share the same information only at this initial stage.
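A minimal sketch of this initialization stage, assuming Python/NumPy; the helper names (initialize_population, fitness) and the sphere objective used for the example are illustrative assumptions, not part of the original formulation:

 import numpy as np

 def initialize_population(Np, D, a_low, a_high, rng):
     # Eq. (1): draw Np animal positions uniformly within the per-parameter bounds.
     return a_low + rng.random((Np, D)) * (a_high - a_low)

 # Illustrative usage: sort by fitness (dominance) and seed both memories.
 rng = np.random.default_rng(0)
 Np, D, B = 50, 2, 10
 a_low, a_high = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
 fitness = lambda x: np.sum(x**2)            # example objective (minimization)

 A = initialize_population(Np, D, a_low, a_high, rng)
 X = A[np.argsort([fitness(a) for a in A])]  # sorted set X (best first)
 Mg = X[:B].copy()                           # best positions of the current generation
 Mh = Mg.copy()                              # historical memory (identical only at this stage)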
Keep the position of the best individuals
Analogous to the biological metaphor, this behavioral rule, typical of animal groups, is implemented as an evolutionary operation in our approach. In this operation, the first B elements ({a1, a2, ..., aB}) of the new animal position set A are generated. Such positions are computed from the values contained in the historical memory Mh, considering a slight random perturbation around them. This operation can be modeled as follows:
$a_l = m_h^l + v;$ (2)
where $l \in \{1, 2, \dots, B\}$, $m_h^l$ represents the l-th element of the historical memory Mh, and $v$ is a random vector of small enough length.
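A sketch of this operator under the same NumPy assumptions; here the "small enough" perturbation v is drawn from a zero-mean Gaussian whose scale sigma is a user-chosen assumption:

 import numpy as np

 def keep_best_positions(Mh, sigma, rng):
     # Eq. (2): a_l = m_h^l + v, with v a small random perturbation around
     # each element of the historical memory Mh.
     v = rng.normal(0.0, sigma, size=Mh.shape)
     return Mh + v

These B perturbed memory elements become the first B entries of the new position set A.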
Move from or to nearby neighbors
From the biological inspiration, animals experience a random local attraction or repulsion according to an internal motivation. Therefore, we have implemented new evolutionary operators that mimic this biological pattern. For this operation, a uniform random number rm is generated within the range [0,1]. If rm is less than a threshold H, the individual's position is moved (attracted or repelled) with respect to the nearest best historical position within the group (i.e. the nearest position in Mh); otherwise, it is moved with respect to the nearest best location within the group for the current generation (i.e. the nearest position in Mg). Therefore, this operation can be modeled as follows:
$a_i = x_i \pm r \cdot (m_h^{nearest} - x_i)$ with probability H
$a_i = x_i \pm r \cdot (m_g^{nearest} - x_i)$ with probability 1-H (3)
where $i \in \{B+1, B+2, \dots, N_p\}$, $m_h^{nearest}$ and $m_g^{nearest}$ represent the elements of Mh and Mg nearest to $x_i$, and r is a random number.
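One possible reading of Eq. (3) in the same NumPy setting; the Euclidean nearest-neighbor helper and the random choice of the ± sign are assumptions, since the text leaves both the distance metric and the sign selection open:

 import numpy as np

 def nearest(memory, x):
     # Return the memory element closest (Euclidean distance) to position x.
     return memory[np.argmin(np.linalg.norm(memory - x, axis=1))]

 def attract_or_repel(x, Mh, Mg, H, rng):
     # Eq. (3): with probability H move x with respect to the nearest element
     # of Mh, otherwise with respect to the nearest element of Mg.
     target = nearest(Mh, x) if rng.random() < H else nearest(Mg, x)
     sign = 1.0 if rng.random() < 0.5 else -1.0   # the +/- choice of Eq. (3)
     r = rng.random()
     return x + sign * r * (target - x)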
Move randomly
Following the biological model, under some probability P, one animal randomly changes its position. Such behavioral rule is implemented according to the following expression:
$a_i = r$ with probability P
$a_i = x_i$ with probability 1-P (4)
with $i \in \{B+1, B+2, \dots, N_p\}$ and $r$ a random vector defined within the search space. This operator is similar to re-initializing the individual at a random position, as is done by Eq. (1).
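A sketch of the random-movement rule, reusing the bounds of Eq. (1); as before, Python/NumPy is an assumption:

 import numpy as np

 def random_move(x, P, a_low, a_high, rng):
     # Eq. (4): with probability P replace x by a fresh uniformly random
     # position inside the search space; otherwise keep x unchanged.
     if rng.random() < P:
         return a_low + rng.random(len(a_low)) * (a_high - a_low)
     return x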
Compete for the space within a determined distance (update the memory)
Once the operations to keep the position of the best individuals, to move from or to nearby neighbors and to move randomly have been applied to all Np animal positions, generating Np new positions, it is necessary to update the memory Mh.
In order to update the memory Mh, the concept of dominance is used. Animals that interact within a group maintain a minimum distance among them. Such distance ρ depends on how aggressively the animal behaves [22,30]. Hence, when two animals confront each other inside such a distance, the most dominant individual prevails while the other withdraws.
In the proposed algorithm, the historical memory Mh is updated considering the following procedure:
- The elements of Mh and Mg are merged into MU (MU = Mh ∪ Mg).
- Each element $m_U^i$ of the memory MU is compared pair-wise with the remaining memory elements ($\{m_U^1, m_U^2, \dots, m_U^{2B-1}\}$). If the distance between two elements is less than ρ, the element with the better fitness value prevails while the other is removed.
- From the elements of MU resulting from step 2, the B best values are selected to build the new Mh.
Unsuitable values of ρ yield a lower convergence rate, a longer computational time, a larger number of function evaluations, convergence to a local optimum or an unreliable solution. The value of ρ is computed according to the following equation:
$\rho = \dfrac{\prod_{j=1}^{D} \left(a_j^{high} - a_j^{low}\right)}{10 \cdot D}$ (5)
where $a_j^{high}$ and $a_j^{low}$ represent the pre-specified upper and lower bounds of the j-th parameter, respectively, in a D-dimensional space.
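The ρ of Eq. (5) and the dominance-based update of Mh might be sketched as follows; the pairwise elimination order and the use of Euclidean distance are implementation assumptions, the text only requiring that the fitter of two elements closer than ρ survives:

 import numpy as np

 def rho(a_low, a_high):
     # Eq. (5): minimum "confrontation" distance derived from the search-space bounds.
     D = len(a_low)
     return np.prod(a_high - a_low) / (10.0 * D)

 def update_memory(Mh, Mg, fitness, a_low, a_high, B):
     # Merge Mh and Mg, let the fitter of any pair closer than rho prevail,
     # then keep the (at most) B best survivors as the new Mh.
     MU = np.vstack([Mh, Mg])
     f = np.array([fitness(m) for m in MU])      # assuming minimization
     dist = rho(a_low, a_high)
     alive = np.ones(len(MU), dtype=bool)
     for i in range(len(MU)):
         if not alive[i]:
             continue
         for j in range(i + 1, len(MU)):
             if alive[j] and np.linalg.norm(MU[i] - MU[j]) < dist:
                 loser = i if f[i] > f[j] else j  # less dominant element withdraws
                 alive[loser] = False
                 if loser == i:
                     break
     survivors = MU[alive]
     order = np.argsort(f[alive])
     return survivors[order[:B]]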
Computational procedure
The computational procedure for the proposed algorithm can be summarized as follows:
Step 1: Set the parameters Np, B, H, P and NI.
Step 2: Randomly generate the position set A = {a1, a2, ..., aNp} using Eq. (1).
Step 3: Sort A according to the objective function (dominance) to build X={x1,x2,...,xNp}.
Step 4: Choose the first B positions of X and store them into the memory Mg.
Step 5: Update Mh according to the memory-update procedure described above (during the first iteration: Mh = Mg).
Step 6: Generate the first B positions of the new solution set A ({a1, a2, ..., aB}). Such positions correspond to the elements of Mh with a slight random perturbation around them: $a_l = m_h^l + v$, with $v$ a random vector of small enough length (Eq. (2)).
Step 7: Generate the rest of the A elements using the attraction, repulsion and random movements.
 for i = B+1 : Np
     if (r1 < 1-P) then
         attraction or repulsion movement:
         if (r2 < H) then
             ai = xi ± r·(mh_nearest - xi)
         else
             ai = xi ± r·(mg_nearest - xi)
     else
         random movement:
         ai = r
 end for
where r1, r2 and r are uniform random numbers drawn from rand(0,1).
Step 8: If the number of iterations NI has been reached, the process is finished; otherwise, go back to Step 3. The best value in Mh represents the global solution of the optimization problem.
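Putting the steps together, a compact sketch of the whole loop under the same assumptions as the previous snippets (it reuses initialize_population, keep_best_positions, attract_or_repel and update_memory defined above; the default parameter values are illustrative only):

 import numpy as np

 def cab(fitness, a_low, a_high, Np=50, B=10, H=0.8, P=0.1, NI=300,
         sigma=1e-2, seed=0):
     # Sketch of Steps 1-8 of the CAB computational procedure.
     rng = np.random.default_rng(seed)
     A = initialize_population(Np, len(a_low), a_low, a_high, rng)      # Step 2
     Mh = None
     for _ in range(NI):                                                # Steps 3-8
         X = A[np.argsort([fitness(a) for a in A])]                     # Step 3
         Mg = X[:B].copy()                                              # Step 4
         Mh = Mg.copy() if Mh is None else update_memory(Mh, Mg, fitness,
                                                         a_low, a_high, B)  # Step 5
         A = X.copy()
         nb = len(Mh)
         A[:nb] = keep_best_positions(Mh, sigma, rng)                   # Step 6
         for i in range(nb, Np):                                        # Step 7
             if rng.random() < 1 - P:
                 A[i] = attract_or_repel(X[i], Mh, Mg, H, rng)
             else:
                 A[i] = a_low + rng.random(len(a_low)) * (a_high - a_low)
     return Mh[0]                                                       # best solution found

 # Illustrative call on a simple objective:
 # best = cab(lambda x: np.sum(x**2), np.array([-5.0, -5.0]), np.array([5.0, 5.0]))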
References
[1] Pardalos, P. M., Romeijn, H. E., Tuy, H. Recent developments and trends in global optimization. Journal of Computational and Applied Mathematics 124 (2000), 209–228.
[2] Floudas, C., Akrotirianakis, I., Caratzoulas, S., Meyer, C., Kallrath, J. Global optimization in the 21st century: Advances and challenges. Computers & Chemical Engineering, 29(6), (2005), 1185-1202.
[3] Ying, J., Ke-Cun, Z., Shao-Jian, Q. A deterministic global optimization algorithm. Applied Mathematics and Computation, 185(1), (2007), 382-387.
[4] Georgieva, A., Jordanov, I., Global optimization based on novel heuristics, low-discrepancy sequences and genetic algorithms. European Journal of Operational Research, 196, (2009), 413–422.
[5] Lera, D., Sergeyev, Ya. Lipschitz and Hölder global optimization using space-filling curves. Applied Numerical Mathematics, 60(1-2), (2010), 115-129.
[6] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[7] Dorigo, M., Maniezzo, V., Colorni, A., 1991. Positive feedback as a search strategy. Technical Report No. 91-016, Politecnico di Milano.
[8] Oftadeh, R., Mahjoob, M.J., Shariatpanahi M. A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Computers and Mathematics with Applications 60 (2010) 2087-2098.
[9] Sumper D. The principles of collective animal behaviour. Philos Trans R Soc Lond B Biol Sci. 361(1465), 2006, 5–22.
[10] Petit, O., Bon, R. Decision-making processes: The case of collective movements. Behavioural Processes 84 (2010) 635–647.
[11] Kolpas, A., Moehlis, J., Frewen, T., Kevrekidis, I. Coarse analysis of collective motion with different communication mechanisms. Mathematical Biosciences 214 (2008) 49–57.
[12] Couzin, I. Collective cognition in animal groups. Trends in Cognitive Sciences 13(1), (2008) 36–43.
[13] Couzin, I.D. and Krause, J. Self-organization and collective behavior in vertebrates. Adv. Stud. Behav. 32, (2003). 1–75.
[14] Bode, N, Franks, D., Wood, A. Making noise: Emergent stochasticity in collective motion. Journal of Theoretical Biology 267, (2010), 292–299.
[15] Couzin, I.D., Krause, J., James, R., Ruxton, G.D., Franks, N.R. Collective memory and spatial sorting in animal groups. J. Theor. Biol. 218, (2002), 1–11.
[16] Couzin, I.D. Collective minds. Nature 445, (2007), 715-728.
[17] Bazazi, S., Buhl, J., Hale, J.J., Anstey, M.L., Sword, G.A., Simpson, S.J., Couzin, I.D. Collective motion and cannibalism in locust migratory bands. Curr. Biol. 18, (2008) 735–739.
[18] Bode, N., Wood, A., Franks, D. The impact of social networks on animal collective motion. Animal Behaviour 82(1), (2011), 29-38.
[19] Lemasson, B., Anderson, J., Goodwin, R. Collective motion in animal groups from a neurobiological perspective: The adaptive benefits of dynamic sensory loads and selective attention. Journal of Theoretical Biology, 261(4), (2009), 501-510.
[20] Bourjade, M., Thierry, B., Maumy, M., Petit, O. Decision-making processes in the collective movements of Przewalski horses families Equus ferus Przewalskii: influences of the environment. Ethology 115, (2009), 321–330.
[21] Banga, A., Deshpande, S., Sumanab, A., Gadagkar, R. Choosing an appropriate index to construct dominance hierarchies in animal societies: a comparison of three indices. Animal Behaviour 79(3), (2010), 631-636.
[22] Hsu, Y., Earley, R., Wolf, L. Modulation of aggressive behaviour by fighting experience: mechanisms and contest outcomes. Biological Reviews 81(1), (2006), 33–74.
[23] Broom, M., Koenig, A., Borries, C. Variation in dominance hierarchies among group-living animals: modeling stability and the likelihood of coalitions. Behav. Ecol., 20, 2009, 844 - 855.
[24] Bayly, K. L., Evans, C. S. & Taylor, A. Measuring social structure: a comparison of eight dominance indices. Behavioural Processes, 73, (2006). 1–12.
[25] Conradt, L. and Roper, T.J. Consensus decision-making in animals. Trends Ecol. Evol. 20, (2005), 449–456.
[26] Okubo A. Dynamical aspects of animal grouping. Adv. Biophys. 22, (1986), 1–94.
[27] Reynolds C.W. Flocks, herds and schools: a distributed behavioural model. Comp. Graph.21, (1987), 25–33.
[28] Gueron S, Levin S.A, Rubenstein D.I. The dynamics of mammalian herds: from individual to aggregations. J. Theor. Biol.182, (1996), 85–98.
[29] Czirok A, Vicsek T. Collective behavior of interacting self-propelled particles. Physica A. 281, (2000), 17–29.
[30] Ballerini, M. Interaction ruling collective animal behavior depends on topological rather than metric distance: evidence from a field study. Proc. Natl. Acad. Sci. U. S. A. 105, (2008), 1232–1237.