On the Performance of Swarm Intelligence Optimization Algorithms for Phase Stability and Liquid-Liquid and Vapor-Liquid Equilibrium Calculations

This study introduces new soft computing optimization techniques for performing phase stability analysis and phase equilibrium calculations in both reactive and non-reactive systems. In particular, the performance of several swarm intelligence optimization methods is compared and discussed based on both reliability and computational efficiency, using practical stopping criteria for these applied thermodynamic calculations. These algorithms are: the Intelligent Firefly Algorithm (IFA), Cuckoo Search (CS), Artificial Bee Colony (ABC) and the Bat Algorithm (BA). It is important to note that no attempts have been reported in the literature to evaluate their performance in solving phase and chemical equilibrium problems. Results indicate that CS is the most reliable technique across the different problems tested, while requiring computational effort similar to the other methods. In summary, this study provides new results and insights into the capabilities and limitations of bio-inspired optimization methods for performing applied thermodynamic calculations.


Introduction
Soft computing techniques are popular and reliable numerical tools for solving real-world optimization problems, especially those involved in engineering applications. In particular, nature-inspired algorithms are a branch of soft computing that imitates processes found in nature. Nature-inspired computation can be classified into six categories [1]: swarm intelligence, natural evolution, biological neural networks, molecular biology, the immune system and biological cells. To date, several nature-inspired algorithms have been developed for solving difficult non-convex and multivariable optimization problems. In particular, the sophisticated decision-making processes that swarms of living organisms exhibit have inspired several of these meta-heuristics. Examples of these swarm intelligence optimization techniques are based on the decision-making processes of fireflies, ants, bees or birds. In general, the bio-inspired methods are quite simple to implement and use. They do not require any assumptions about, or transformation of, the original optimization problem, do not require good starting points, can easily move out of local minima on their path to the global minimum, and can be applied with any model (i.e., a black-box model), yet provide a high probabilistic convergence to the global optimum. They can often locate the global optimum in modest computational time compared to deterministic optimization methods [2]. Therefore, these techniques are advantageous compared to traditional local gradient-based and global deterministic optimization techniques.
Recently, swarm intelligence optimization methods have been introduced for solving challenging global optimization problems involved in the thermodynamic modeling of phase equilibrium for chemical engineering applications [3][4][5][6][7][8]. In particular, the calculation of phase and chemical equilibrium is an essential component of all process simulators in chemical engineering. The prediction of the phase behavior of a mixture involves the solution of two main thermodynamic problems: phase stability (PS) analysis and phase equilibrium calculations (PEC). PS problems involve determining whether a system will remain in one phase at the given conditions or split into two or more phases. This type of problem usually precedes the PEC problem, which involves determining the number, type and composition of the phases at equilibrium at the given operating conditions. Note that a reactive phase equilibrium calculation (RPEC), or chemical equilibrium calculation, is performed if any reaction is possible in the system under study. During the analysis of a chemical engineering process, PS, PEC and/or RPEC problems usually need to be solved numerous times. Solving these types of thermodynamic problems involves the use of global optimization methods. In particular, PS analysis requires the minimization of the tangent plane distance function (TPDF), while the Gibbs free energy function needs to be minimized for PEC and RPEC subject to the corresponding constraints [9]. For these thermodynamic problems, finding a local minimum is not sufficient; the global minimum must be identified to determine the correct thermodynamic condition.
In general, the high non-linearity of thermodynamic models, the non-convexity of the objective functions, and the presence of a trivial solution in the search space make PEC, RPEC and PS problems difficult to solve. Moreover, these thermodynamic problems may have local optima whose values are very close to the global optimum value, which makes it challenging to find the global optimum. Hence, PS, PEC and RPEC problems require a reliable and efficient global optimization algorithm. To date, no optimization method has proven fully effective for performing these thermodynamic calculations. Current methods for phase equilibrium modeling have their own deficiencies and sometimes fail to find the correct solutions for difficult problems, such as the calculation of simultaneous phase and chemical equilibrium for systems containing many components near the critical point of the mixture and the phase boundaries [10]. Indeed, novel processes in the chemical industry handle complex mixtures and severe operating conditions, or even incorporate combined unit operations (e.g., reactive distillation or extractive distillation). Wrong estimation of the thermodynamic state may have negative impacts on the design, analysis and operation of such novel processes. Therefore, the search for better methods and techniques to solve these often-difficult thermodynamic problems is still ongoing, and new optimization algorithms should be developed and/or analyzed.
Swarm intelligence optimization methods have been insufficiently studied in chemical engineering applications, including the thermodynamic modeling of phase equilibrium. In particular, this work evaluates a set of promising bio-inspired optimization algorithms for PEC, RPEC and PS problems involving multiple components, multiple phases and popular thermodynamic models. These algorithms are: the Intelligent Firefly Algorithm (IFA), Cuckoo Search (CS), Artificial Bee Colony (ABC) and the Bat Algorithm (BA). The performance of these swarm intelligence stochastic global optimization algorithms has not been comparatively studied before for phase stability and equilibrium problems. In this work, they are compared and discussed based on both reliability and computational efficiency using practical stopping criteria. In summary, this study provides new results and insights into the capabilities and limitations of bio-inspired optimization methods for performing applied thermodynamic calculations. The remainder of this paper is organized as follows. The four algorithms (i.e., IFA, CS, ABC and BA) are presented in Section 2. A brief description of the PEC, PS and RPEC problems is given in Section 3. The implementation of the four algorithms is covered in Section 4, and Section 5 presents the results and discusses the performance of the bio-inspired methods on the selected thermodynamic problems. Finally, the conclusions of this work are summarized in Section 6.

Description of Bio-Inspired Optimization Algorithms used for thermodynamic calculations
In this study, the global optimization problem to be solved is defined as the minimization of an objective function f(X) with respect to D decision variables: X = (X_1, X_2, …, X_d, …, X_D). Each variable X_d is bounded below and above by X_d^min and X_d^max, respectively. This optimization problem can be subject to constraints depending on the type of thermodynamic calculation (i.e., PS, PEC or RPEC). These constraints are presented with the thermodynamic problems in the following section.
Four different stochastic global optimization techniques, the Intelligent Firefly Algorithm (IFA), Cuckoo Search (CS), Artificial Bee Colony (ABC) and the Bat Algorithm (BA), were evaluated for the phase stability and equilibrium calculations in this study. These methods were selected because they are relatively new, with improved global optimization features. It is important to note that no attempts have been reported in the literature to evaluate their performance in solving phase and chemical equilibrium problems, and it is expected that their performance could be superior to other stochastic methods. Each of these methods is briefly described in the following subsections. More details of these stochastic optimization methods can be found in the cited references.

Intelligent Firefly Algorithm
The Firefly Algorithm (FA) is a nature-inspired meta-heuristic stochastic global optimization method developed by Yang [11]. It is a relatively new method that is gaining popularity for finding the global minimum in diverse applications. It was rigorously evaluated by Gandomi et al. [12], and has recently been used to solve the flow shop scheduling problem [13], financial portfolio optimization [14], and phase and chemical equilibrium problems [5]. The FA imitates the mechanism of firefly communication via luminescent flashes. In the FA, the two important issues are the variation of light intensity and the formulation of attractiveness. The brightness of a firefly is determined by the landscape of the objective function. Attractiveness is proportional to brightness and, thus, for any two flashing fireflies, the less bright one moves towards the brighter one.
In this algorithm, the attractiveness of a firefly is determined by its brightness, which is associated with the objective function. The brightness of a firefly at a particular location x was chosen as

I(x) = f(x)  (1)

The attractiveness is judged by the other fireflies; thus, it was made to vary with the distance between firefly i and firefly j, and with the degree of absorption of light in the medium between the two fireflies. Thus, the attractiveness is given by

β(r) = β_min + (β_0 − β_min) exp(−γ r²)  (2)

The distance between any two fireflies i and j at x_i and x_j is the Cartesian distance

r_ij = ||x_i − x_j|| = sqrt( Σ_{d=1}^{D} (x_{i,d} − x_{j,d})² )  (3)

The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

x_i = x_i + β(r_ij) (x_j − x_i) + α ε_i  (4)

The second term is due to the attraction, while the third term ε_i is a vector of random numbers drawn from a uniform distribution in the range [-0.5, 0.5].
In the original FA, the move of Eq. (4) is determined mainly by the attractiveness of the other fireflies, and the attractiveness is a strong function of the distance between the fireflies. Thus, a firefly can be attracted to another firefly merely because it is close, which may take it away from the global minimum. The fireflies are ranked according to their brightness, i.e., according to the values of the objective function at their respective locations. However, this ranking, which is a valuable piece of information per se, is not utilized in the move equation. A firefly is pulled towards every other firefly, as each of them contributes to the move by its attractiveness. This behavior may delay the collective move towards the global minimum. The idea behind the Intelligent Firefly Algorithm (IFA) is to make use of the ranking information such that every firefly is moved by the attractiveness of only a fraction of the fireflies, and not by all of them [4]. This fraction represents a top portion of the fireflies based on their rank. Thus, a firefly acts intelligently by basing its move on the top-ranking fireflies only, and not merely on attractiveness.
A simplified algorithm for the IFA technique is presented in Fig. 1. The new parameter φ is the fraction of the fireflies utilized in the determination of the move; the original firefly algorithm is recovered by setting φ to 1. This parameter is used as the upper limit for the index j in the inner loop, so each firefly is moved by the top φ fraction of the fireflies only. The strength of FA is that the location of the best firefly does not dictate the direction of the search, so the fireflies are not easily trapped in a local minimum. However, the search for the global minimum then requires additional computational effort, as many fireflies wander around uninteresting areas. With the intelligent firefly modification, the right value of the parameter φ can maintain the advantage of not being trapped in a local minimum while speeding up the search for the global minimum. The parameters of the original FA were kept constant in all experiments for IFA: we used a value of 1 for β_0, 0.2 for β_min and 1 for γ, while α was made to decrease with the iteration number k, according to a decay formula adapted from Yang [10] in which the value of the parameter b was taken equal to 5. Thus, the randomness is decreased gradually as the optima are approached.
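To make the ranking-based move concrete, the sketch below is a minimal Python illustration of one IFA sweep (the paper's implementation was in MATLAB; the sphere test function, the fixed α, and the swarm size are illustrative assumptions, not the paper's settings, while β_0 = 1, β_min = 0.2 and γ = 1 match the values quoted above):

```python
import math
import random

def ifa_move(fireflies, f, phi=0.5, alpha=0.1, beta0=1.0, beta_min=0.2, gamma=1.0):
    """One IFA sweep: each firefly is moved only by the top-phi fraction of
    fireflies (ranked by brightness; for minimization, lower f is brighter)."""
    ranked = sorted(fireflies, key=f)                # brightest first
    top = ranked[: max(1, int(phi * len(ranked)))]   # only top-phi attract
    new_swarm = []
    for x in fireflies:
        x = list(x)
        for xj in top:
            if f(xj) < f(x):                         # move only toward brighter
                r2 = sum((a - b) ** 2 for a, b in zip(x, xj))
                beta = beta_min + (beta0 - beta_min) * math.exp(-gamma * r2)
                x = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                     for a, b in zip(x, xj)]         # Eq. (4) with Eq. (2)
        new_swarm.append(x)
    return new_swarm

# toy run on the sphere function
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
swarm = [[random.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(15)]
for _ in range(200):
    swarm = ifa_move(swarm, sphere, phi=0.3)
best = min(swarm, key=sphere)
```

Setting `phi=1.0` recovers the original FA move, in which every brighter firefly contributes to the displacement.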

Artificial Bee Colony Algorithm
Artificial Bee Colony (ABC) is a meta-heuristic optimization algorithm based on the foraging behavior of bee swarms [15]. In the ABC algorithm, the colony of artificial bees contains three groups: employed bees (typically 50% of the initial population), onlookers (typically 50% of the initial population) and scouts (typically one scout bee). The quality of a food source is the amount of nectar it retains, as a metaphor for the value of the objective function. For every food source, there is only one employed bee. The search carried out by the bees can be summarized as follows: (1) employed bees determine a food source within the neighborhood of a food source in their memory; (2) employed bees share their information with onlookers, and then onlookers select one of the food sources; (3) onlookers select a food source within the neighborhood of the food sources chosen by themselves, thus performing probabilistically guided exploitation; and (4) an employed bee converts to a scout when a food source is abandoned after its exhaustion, as judged by the failure of a prescribed number of inner iterations (parameter limit) to improve it. In step 1, the employed bees search their neighborhood as guided by the following formula

v_ij = x_ij + Φ_ij (x_ij − x_kj)  (6)

where v_ij is the position of the new food source in the neighborhood of x_ij, k is a randomly chosen solution different from i, and Φ_ij is a random number in the range [-1, 1]. The onlookers, in step 3, apply a greedy selection criterion that is proportional to the fitness value of the searched sources. In step 4, abandoned sources are randomly replaced using the following formula

x_ij = x_j^min + φ_ij (x_j^max − x_j^min)  (7)

where φ_ij is a random number in the range [0, 1]. Pseudo code of the ABC algorithm is given in Fig. 2. The ABC algorithm, its variants and its hybrids have shown success in a number of applications in electrical, mechanical, civil, electronics, software and control engineering. Interested readers can consult [16] for a comprehensive survey of ABC and its applications.
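As an illustration of Eqs. (6) and (7), the following Python sketch exercises the employed-bee and scout mechanics on a toy problem (the onlooker phase with probabilistic selection is omitted for brevity, and the test function, bounds and parameter values are assumptions for illustration only):

```python
import random

def neighbour(x, sources):
    """Eq. (6): perturb one randomly chosen coordinate of source x using a
    second, randomly chosen source k (Phi uniform in [-1, 1])."""
    k = random.choice([s for s in sources if s is not x])
    j = random.randrange(len(x))
    v = list(x)
    v[j] = x[j] + random.uniform(-1.0, 1.0) * (x[j] - k[j])
    return v

def scout(lo, hi):
    """Eq. (7): replace an abandoned source with a fresh random point."""
    return [l + random.random() * (h - l) for l, h in zip(lo, hi)]

random.seed(0)
lo, hi = [-5.0, -5.0], [5.0, 5.0]
f = lambda x: sum(v * v for v in x)
sources = [scout(lo, hi) for _ in range(10)]
trials = [0] * len(sources)
limit = 20                                  # abandonment parameter 'limit'
fbest = min(f(s) for s in sources)
for _ in range(500):
    for i in range(len(sources)):
        v = neighbour(sources[i], sources)
        if f(v) < f(sources[i]):            # greedy selection
            sources[i], trials[i] = v, 0
        else:
            trials[i] += 1
        fbest = min(fbest, f(sources[i]))
        if trials[i] > limit:               # exhausted -> becomes a scout
            sources[i], trials[i] = scout(lo, hi), 0
```

Note that the best-so-far value is stored separately, since a scout reset can discard an exhausted but high-quality source.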

Cuckoo Search Algorithm
Cuckoo Search (CS) is a meta-heuristic optimization algorithm based on the breeding behavior of certain cuckoo species, which lay their eggs in the nests of other host birds. If a host bird discovers the alien eggs, it will either throw them away or abandon the nest and build a new one elsewhere. As explained by Yang and Deb [16], CS is based on three rules: (1) each cuckoo lays one egg at a time and dumps it in a randomly chosen nest; (2) the best nests with high-quality eggs will carry over to the next generations; and (3) the number of available host nests is fixed, and a fraction p_a of the n nests is replaced by new nests (with new random solutions). Each egg represents a solution, and each nest can hold a single egg only. The aim is to use the new and potentially better solutions to replace the worse solutions in the nests. When generating a new solution for cuckoo i, a Lévy flight is performed: a stochastic random-walk equation represents the search of each cuckoo, whose next location depends on its current location and the transition probability. The step length is randomly drawn from a Lévy distribution, which has an infinite variance and an infinite mean. Some of the new solutions should be generated by a Lévy walk around the best solution obtained so far, which speeds up the local search; others should be generated by far-field randomization to avoid being trapped in a local optimum. The pseudo code of CS is depicted in Fig. 3.
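The three rules above can be sketched as follows. This is a hedged Python illustration, using Mantegna's algorithm for the Lévy step (a common choice, not specified by the paper); the test function, step size and all parameter values are illustrative assumptions:

```python
import math
import random

def levy_step(beta=1.5):
    """Heavy-tailed Levy step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, lo, hi, n=15, pa=0.25, iters=300, step=0.05):
    rand_nest = lambda: [l + random.random() * (h - l) for l, h in zip(lo, hi)]
    nests = sorted((rand_nest() for _ in range(n)), key=f)
    best = nests[0]
    for _ in range(iters):
        for i in range(n):
            # rule 1: a new egg via a Levy flight biased around the best nest
            cand = [x + step * levy_step() * (x - b)
                    for x, b in zip(nests[i], best)]
            j = random.randrange(n)          # dump it in a random nest
            if f(cand) < f(nests[j]):
                nests[j] = cand
        # rule 3: abandon a fraction pa of the worst nests (far-field moves)
        nests.sort(key=f)
        for i in range(int(n * (1 - pa)), n):
            nests[i] = rand_nest()
        nests.sort(key=f)
        if f(nests[0]) < f(best):            # rule 2: keep the best nest
            best = nests[0]
    return best

random.seed(2)
best = cuckoo_search(lambda x: sum(v * v for v in x), [-5.0, -5.0], [5.0, 5.0])
```

The random replacement of the worst nests is what keeps the search exploring far from the incumbent best solution.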

Bat Algorithm
Bats have an advanced capability of echolocation, which is a type of sonar used to detect prey and avoid obstacles. They emit a loud sound pulse and listen for the echo that bounces back from the surrounding objects. Some types of bats use the time delay between the emission and detection of the echo, the time difference between their two ears, and the loudness variations of the echoes to build up a three-dimensional picture of the surroundings [17]. They can detect the distance and orientation of the target, the type of prey, and even the moving speed of the prey, such as small insects. Indeed, they seem able to discriminate targets by the variations of the Doppler effect induced by the wing-flutter rates of the target insects. Such echolocation behavior of bats was formulated in such a way that it can be associated with the objective function to be optimized [18]. In short, BA uses the following approximate or idealized rules:

1. All bats use echolocation to sense distance, and they also 'know' the difference between food/prey and background barriers in some magical way;
2. Bats fly randomly with velocity v_i at position x_i with a fixed frequency f_min, varying wavelength λ and loudness A_0 to search for prey. They can automatically adjust the wavelength (or frequency) of their emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their target;
3. The loudness is assumed to vary from a large (positive) A_0 to a minimum constant value A_min.
The frequency f in a range [f_min, f_max] corresponds to a range of wavelengths [λ_min, λ_max]. In the actual implementation, the detectable range (or the largest wavelength) should be chosen such that it is comparable to the size of the domain of interest, and then toned down to smaller ranges. The BA has recently demonstrated its ability to solve tough optimization problems, such as continuous optimization for engineering design, combinatorial optimization and scheduling, parameter estimation, image processing and data mining [20]. The pseudocode in Fig. 4 shows the basic steps of the Bat Algorithm.
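The idealized rules above translate into simple frequency, velocity and position updates. The following is a minimal Python sketch with fixed loudness and pulse rate (the full BA varies them over the iterations; the test function, frequency range and all other parameter values here are assumptions for illustration):

```python
import random

def bat_step(x, v, best, fmin=0.0, fmax=2.0):
    """Basic BA update: draw a frequency, update the velocity using the
    distance to the current best bat, then move the bat."""
    freq = fmin + (fmax - fmin) * random.random()
    v = [vi + (xi - bi) * freq for vi, xi, bi in zip(v, x, best)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v

random.seed(3)
obj = lambda x: sum(t * t for t in x)
bats = [[random.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(20)]
vels = [[0.0, 0.0] for _ in bats]
loudness, pulse_rate = 0.9, 0.5        # fixed A and r for simplicity
best = min(bats, key=obj)
for _ in range(300):
    for i in range(len(bats)):
        x, v = bat_step(bats[i], vels[i], best)
        if random.random() > pulse_rate:
            # local random walk around the current best solution
            x = [b + 0.01 * (random.random() - 0.5) for b in best]
        # accept only improving moves, with probability tied to loudness
        if obj(x) < obj(bats[i]) and random.random() < loudness:
            bats[i], vels[i] = x, v
        if obj(bats[i]) < obj(best):
            best = list(bats[i])
```

The local walk around the best bat plays the role of the pulse-rate-controlled local search described above.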

Problems
A brief description of the global optimization problems, including the objective function, decision variables and constraints for the PEC, PS and RPEC problems, is given in the following subsections. It is convenient to remark that we have used the nomenclature and mathematical formulations reported in [9] for describing these thermodynamic problems.

Phase Stability Analysis Problems
Solving the PS problem is usually the starting point for phase equilibrium calculations. The theory used to solve this problem states that a phase is stable if the tangent plane generated at the feed (or initial) composition lies below the molar Gibbs energy surface for all compositions. One common implementation of the tangent plane criterion is to minimize the tangent plane distance function (TPDF), defined as the vertical distance between the molar Gibbs energy surface and the tangent plane at the given phase composition [21]. TPDF is given by

TPDF(y) = Σ_{i=1}^{c} y_i ( μ_i|_y − μ_i|_z )  (8)

where μ_i|_y and μ_i|_z are the chemical potentials of component i calculated at compositions y and z, respectively. For stability analysis of a phase/mixture of composition z, TPDF must be globally minimized with respect to the composition of a trial phase y. If the global minimum value of TPDF is negative, the phase is not stable at the given conditions, and phase split calculations are necessary to identify the compositions of each phase. The decision variables for minimizing TPDF in phase stability problems are the mole fractions y_i for i = 1, 2, …, c, each in the range [0, 1], and the constraint is that these mole fractions sum to 1. According to [9], the constrained global optimization of TPDF can be transformed into an unconstrained problem by using decision variables β_i ∈ [0, 1] instead of y_i as follows

y_i = β_i / Σ_{j=1}^{c} β_j,  i = 1, …, c  (9)

More details on the PS problem formulation can be found in [19], and the characteristics of the PS problems used in this study are summarized in Table 1.
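As a concrete illustration of the β-transformation, the sketch below evaluates the TPDF for an ideal-solution model, for which the composition-dependent part of μ_i/RT is ln y_i, so TPDF reduces to Σ y_i (ln y_i − ln z_i) ≥ 0 and the global minimum is 0 at y = z (ideal mixtures are always stable). A crude grid search over β stands in for the swarm methods, and the feed composition is an illustrative assumption:

```python
import math

def tpdf_ideal(beta, z):
    """TPDF for an ideal solution, using the beta -> y transformation
    y_i = beta_i / sum(beta) that removes the summation constraint."""
    s = sum(beta)
    y = [b / s for b in beta]
    return sum(yi * (math.log(yi) - math.log(zi)) for yi, zi in zip(y, z))

z = [0.3, 0.5, 0.2]                     # feed composition under test
# crude grid search over beta in (0, 1)^3 as a stand-in for a swarm method
grid = [i / 10 for i in range(1, 10)]
best_b, best_v = None, float("inf")
for b1 in grid:
    for b2 in grid:
        for b3 in grid:
            v = tpdf_ideal([b1, b2, b3], z)
            if v < best_v:
                best_b, best_v = [b1, b2, b3], v
```

The minimum found is (essentially) zero at β ∝ z, i.e., the trial composition coincides with the feed, as the tangent plane criterion predicts for this stable mixture.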

Phase Equilibrium Calculation Problems
A mixture of substances at a given temperature T, pressure P and total molar amount may separate into two or more phases. The composition of each substance is uniform within a phase but may differ significantly between phases at equilibrium. If there is no reaction between the different substances, then it is a phase equilibrium problem. Classical thermodynamics indicates that minimization of the Gibbs free energy is a natural approach for calculating the equilibrium state of a mixture [9]. The mathematical formulation involves the minimization of the Gibbs free energy subject to mass balance equality constraints and bounds that limit the range of the decision variables. In a non-reactive system with c components and π phases, the objective function for PEC is

g = Σ_{j=1}^{π} Σ_{i=1}^{c} n_ij ln( x_ij γ_ij )  (for activity coefficient models), or
g = Σ_{j=1}^{π} Σ_{i=1}^{c} n_ij ln( x_ij φ̂_ij / φ_i )  (for equation-of-state models)  (12)

where n_ij, x_ij, γ_ij, φ̂_ij and φ_i are the moles, mole fraction, activity coefficient and fugacity coefficient of component i in phase j, and the fugacity coefficient of the pure component, respectively. Thermodynamic function (12) must be minimized with respect to n_ij taking into account the following mass balance constraints:

Σ_{j=1}^{π} n_ij = z_i n_F,  i = 1, …, c  (13)

where z_i is the mole fraction of component i in the feed and n_F is the total moles in the feed. To perform unconstrained minimization of the Gibbs energy function, one can use new variables instead of n_ij as decision variables [9]. For multi-phase non-reactive systems, new variables β_ij ∈ (0, 1) are defined and employed as decision variables through the following expressions

n_i1 = β_i1 z_i n_F,  i = 1, …, c  (14)
n_ij = β_ij ( z_i n_F − Σ_{k=1}^{j−1} n_ik ),  i = 1, …, c;  j = 2, …, π − 1  (15)
n_iπ = z_i n_F − Σ_{k=1}^{π−1} n_ik,  i = 1, …, c  (16)

For Gibbs energy minimization, the number of decision variables β_ij is c(π − 1) for non-reactive systems. The details of the PEC problems used in this study are also given in Table 1. Most reported studies of PEC problems assume that the number and type of phases are known; such problems are also known as phase split calculations. In this study too, the same assumption is made, and the problems tested are simply referred to as PEC problems.
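The sequential allocation of moles implied by the β_ij decision variables can be sketched as follows. This Python illustration (a sketch of the variable transformation, not the authors' MATLAB code) shows that the mass balances Σ_j n_ij = z_i n_F hold by construction for any β_ij ∈ (0, 1):

```python
def moles_from_beta(beta, z, nF=1.0):
    """Map decision variables beta_ij in (0, 1) to phase mole numbers n_ij
    that satisfy the mass balances sum_j n_ij = z_i * nF by construction:
    each phase takes a beta-fraction of the remaining moles, and the last
    phase takes whatever is left."""
    c = len(z)
    pi = len(beta[0]) + 1                # beta has c rows, (pi - 1) columns
    n = [[0.0] * pi for _ in range(c)]
    for i in range(c):
        remaining = z[i] * nF            # moles of component i not yet assigned
        for j in range(pi - 1):
            n[i][j] = beta[i][j] * remaining
            remaining -= n[i][j]
        n[i][pi - 1] = remaining         # last phase takes the rest
    return n

# two components, two phases -> c * (pi - 1) = 2 decision variables
z = [0.4, 0.6]
n = moles_from_beta([[0.25], [0.75]], z)
```

Because the balances hold identically, the swarm methods can search freely over the box (0, 1)^{c(π−1)} without any equality constraints.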

Reactive Phase Equilibrium Calculation Problems
In RPEC problems, also known as chemical equilibrium problems, reactions increase the complexity and dimensionality of the phase equilibrium problem, and phase split calculations in reactive systems are therefore more challenging due to the non-linear interactions among phases and reactions. The phase distribution and composition at equilibrium of a reactive mixture are determined by the global minimization of the Gibbs free energy with respect to the mole numbers of the components in each phase, subject to element/mass balances and chemical equilibrium constraints [9]. In this study, we have used a constrained Gibbs free energy minimization approach for performing RPEC. Specifically, for a system with c components and π phases subject to r independent chemical reactions, the objective function for RPEC is defined as [3,9]

F = g − ln K_eq N^{-1} n_ref  (18)

where g is given by Eq. (12), ln K_eq is a row vector of logarithms of the chemical equilibrium constants of the r independent reactions, N is an invertible, square matrix formed from the stoichiometric coefficients of a set of reference components chosen from the r reactions, and n_ref is a column vector of moles of each of the reference components. This objective function is defined using reaction equilibrium constants, and it must be globally minimized subject to the following mass balance restrictions [2]

Σ_{j=1}^{π} n_ij = n_i,F + ν_i N^{-1} ( n_ref − n_ref,F ),  i = 1, …, c  (19)

where n_i,F is the initial moles of component i in the feed, ν_i is the row vector of stoichiometric coefficients of component i in the r reactions, and n_ref,F contains the feed moles of the reference components. These mass balance equations are rearranged to reduce the number of decision variables in the optimization problem and to eliminate the equality constraints:

n_iπ = n_i,F + ν_i N^{-1} ( n_ref − n_ref,F ) − Σ_{j=1}^{π−1} n_ij,  i = 1, …, c − r  (20)

Using Eq. (20), the decision variables for RPEC are c(π − 1) + r mole numbers n_ij. The global optimization problem can then be solved by minimizing Eq. (18) with these decision variables, while the remaining c − r mole numbers n_iπ are determined from Eq. (20), subject to the inequality constraints n_iπ > 0.
The penalty function method is used to solve the constrained Gibbs free energy minimization in reactive systems, and the optimization problem is defined as [9]

F_p = F,  if n_iπ > 0 for i = 1, …, c − r  (21)
F_p = F + p,  otherwise

where p is a penalty term whose value is positive and increases with the number and magnitude of the constraint violations; here n_iπ is obtained from Eq. (20), and n_unf is the number of infeasible mole numbers (i.e., n_iπ < 0 for i = 1, …, c − r). The details of the RPEC problems are shown in Table 2.
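A minimal sketch of the penalty wrapper is given below. Since the exact expression for p used in [9] is not reproduced here, the penalty weight `w` and the particular form of p (counting the violations and summing their magnitudes) are assumptions chosen only to illustrate the mechanism:

```python
def penalized(F, n_dep, w=1.0e6):
    """Penalty-function wrapper for the reactive Gibbs minimization: if any
    dependent mole number n_i,pi computed from the mass balances is negative,
    add a positive penalty that grows with the number (n_unf) and magnitude
    of the violations. The weight w is an assumed, illustrative value."""
    infeasible = [n for n in n_dep if n < 0.0]
    if not infeasible:
        return F                         # feasible point: F_p = F
    n_unf = len(infeasible)
    return F + w * (n_unf + sum(abs(n) for n in infeasible))
```

In practice, the swarm methods then minimize `penalized(F(beta), n_dep(beta))` over the box-constrained decision variables, so infeasible candidates are strongly discouraged without ever being rejected outright.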

Implementation of the Bio-Inspired Optimization Algorithms
In this study, all the bio-inspired optimization algorithms and thermodynamic models were coded in MATLAB®. The parameters of the bio-inspired algorithms were tuned for each type of thermodynamic calculation and then fixed for all problems tested in order to compare the robustness of the algorithms; see Table 3 for their values. Further, NP = 10D was used for all the optimization methods. Altogether, we studied 24 problems consisting of 8 PEC, 8 PS and 8 RPEC problems. All these problems are multimodal, with the number of decision variables ranging from 2 to 10. Each thermodynamic problem was solved 100 times independently with a different random number seed for robust performance analysis. The performance of the bio-inspired algorithms was compared based on the success rate (SR) and the average number of function evaluations (for both global and local searches) over all 100 runs (NFE), for two stopping criteria: SC-1, based on the maximum number of iterations, and SC-2, based on the maximum number of iterations without improvement in the best objective function value (SC_max). These stopping conditions have been employed in previous studies on bio-inspired computation for phase equilibrium modeling [4][5][6][7][8] and correspond to common convergence criteria for solving real-world optimization problems. Note that NFE is a good indicator of computational efficiency since function evaluation involves extensive computations in application problems. Further, it is independent of the computer and software platform used, and so it is useful for comparisons by other researchers. On the other hand, SR is the number of times, out of 100 runs, that the algorithm located the global optimum to the specified accuracy. A run/trial is considered successful if the best objective function value obtained after the local optimization is within 1.0E-5 of the known global optimum. Also, the global success rate (GSR) of the different algorithms is reported for all the problems and is defined as

GSR = ( Σ_{i=1}^{np} SR_i ) / np  (22)

where np is the number of problems and SR_i is the individual success rate for each problem. At the end of each run of each stochastic algorithm, a local optimizer was used to continue the search in order to find the global optimum precisely and efficiently. This is also done at the end of different iteration levels for performance analysis; however, the global search in the subsequent iterations is not affected by this. Since all algorithms were implemented in MATLAB®, sequential quadratic programming (SQP) was chosen as the local optimizer. The best solution at the end of the stochastic algorithm was used as the initial guess for SQP, which is likely to locate the global optimum if the initial guess is in the global optimum region. In the small number of cases where the local optimizer diverged to a larger value of the objective function, the output of the stochastic algorithm was retained. All computations were performed on a 64-bit HP Pavilion dv6 Notebook computer with an Intel Core i7-2630QM processor, 2.00 GHz and 4 GB of RAM, which can complete 1344 MFlops (million floating-point operations) for the LINPACK benchmark program that uses the MATLAB "backslash" operator to solve for a matrix of order 500.
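The SR and GSR metrics can be computed as in the following sketch (the tolerance matches the 1.0E-5 success criterion stated above; the run counts and toy objective values are illustrative):

```python
def success_rate(final_values, global_min, tol=1.0e-5):
    """SR: number of runs whose best objective value lies within tol of the
    known global optimum."""
    return sum(1 for v in final_values if abs(v - global_min) <= tol)

def global_success_rate(sr_per_problem, runs=100):
    """GSR: mean success rate over all problems, as a percentage."""
    np_ = len(sr_per_problem)
    return 100.0 * sum(sr_per_problem) / (np_ * runs)

# toy example: two problems, three runs each (the paper uses 100 runs)
srs = [success_rate([0.0, 1.0e-6, 0.3], 0.0),   # 2 of 3 runs succeed
       success_rate([2.0, 0.0, 0.0], 0.0)]      # 2 of 3 runs succeed
gsr = global_success_rate(srs, runs=3)
```

With 2 successes out of 3 runs on each of the two toy problems, the GSR is 66.7%.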

Performance of Bio-inspired Algorithms on PS problems
On the PS problems, similar tests using the four bio-inspired algorithms, in addition to FA, were performed. Results were collected at different iteration levels, starting from the 10-iteration level, after local optimization at each of these levels. As expected, the GSR of ABC, CS, FA, IFA and BA for all PS problems using SC-1 improves with an increasing number of iterations (Fig. 5a). The highest GSR was 99.25%, obtained by the CS algorithm without the local optimization. The selected PS problems were somewhat difficult to optimize, which is reflected in the relatively low GSR without the local optimizer. At 10 and 25 iterations, BA obtained the best GSR, but from 50 to 750 iterations, CS obtained the best GSR. At the termination of the iterations, CS obtained the highest GSR. The reliability of IFA and CS did not improve much beyond the 750th iteration; however, the reliability of FA, BA and ABC kept improving until the end.
Figure 5b shows the improvements in GSR with the use of a local optimization technique at the end of the stochastic techniques. The best GSR, which was obtained by CS, did not increase at all with the use of local optimization. CS was slightly more reliable than ABC at higher iterations, while the opposite was true at lower iterations. Another interesting observation from Fig. 5b is that the GSR of CS, FA and IFA using local optimization decreased with iterations to reach a minimum, and then increased again. Since the PS problems are particularly difficult, local optimization techniques may diverge even from an improved initial point obtained by further iterations of the stochastic method. Only at higher iterations was the normal increase in GSR with increasing iterations recovered for these three algorithms.
In stochastic global optimization, it is necessary to use a suitable stopping criterion so that the optimization algorithm stops at the right time, incurring the least computational cost without compromising the reliability of finding the global optimum. Results on the effect of stopping criterion SC-2 with SC_max = 10, 25 and 50 on ABC, CS, FA, IFA and BA for all PS problems are presented in Fig. 6; the GSR and NFE reported in this figure are for the stochastic methods followed by local optimization. They show that, in general, the reliability of an algorithm and its NFE increase with increasing SC_max. However, the reliability obtained using SC-1 is always higher than that obtained with SC-2, as shown in Fig. 6a, which summarizes the GSR of the five techniques with the three SC-2 stopping criteria compared with SC-1 = 1500 for all PS problems. Fig. 6b shows the NFE of the five techniques with the four stopping criteria for all PS problems. FA, IFA and BA were almost insensitive to the increase in SC_max, while ABC and CS showed improved reliability with increased SC_max. For all five methods, SC-1 required much higher NFE than all the SC-2 stopping criteria, up to 8 times more. Hence, for bio-inspired algorithms that show comparable reliability under both stopping criteria, such as ABC and CS, it is advisable to use SC-2, which offers much higher efficiency at the expense of only slightly diminished reliability.
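The SC-2 stopping logic can be sketched independently of any particular swarm method. In the Python illustration below, `step` is a stand-in for one iteration of a stochastic optimizer, and the deterministic toy step is only for demonstration:

```python
def run_with_sc2(step, f, x0, sc_max=25, max_iter=1500):
    """SC-2: stop once the best objective value fails to improve for sc_max
    consecutive iterations, with max_iter (an SC-1-style cap) as a safety net.
    `step` is any routine proposing a new candidate from the current best."""
    best, fbest = x0, f(x0)
    stall = iters = 0
    while stall < sc_max and iters < max_iter:
        cand = step(best)
        iters += 1
        if f(cand) < fbest:              # any improvement resets the counter
            best, fbest = cand, f(cand)
            stall = 0
        else:
            stall += 1
    return best, fbest, iters

# deterministic toy: the search improves for 7 iterations, then stalls
best, fbest, iters = run_with_sc2(lambda x: max(x - 1, 3), lambda x: x, 10,
                                  sc_max=5)
```

Here the run terminates after 7 improving iterations plus sc_max = 5 stalled ones, illustrating how SC-2 cuts off the unproductive tail of a run well before the SC-1 cap.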
Problems 6, 7 and 8 were identified as challenging, since at least one of the methods failed to achieve a 50% success rate even with the subsequent local optimization. In general, stochastic optimization methods provide only a probabilistic guarantee of locating the global optimum, and their numerical convergence proofs usually state that the global optimum will be identified in infinite time with probability 1 [9]. So, better performance of the stochastic methods is expected if more iterations and/or a larger population size are used.

Performance of Bio-inspired Algorithms on PEC problems
GSR values for all PEC problems by the ABC, CS, FA, IFA, and BA algorithms with NP of 10D using SC-1 are presented in Fig. 7. As expected, GSR improves with increasing number of iterations, particularly at lower iteration levels. After 250 iterations, GSR barely improves for CS, whereas it keeps improving for the other bio-inspired methods. In general, further iterations without improvement in the results are a waste of computational resources. For example, for bio-inspired optimization alone, the GSR of CS is 98% at 250 iterations; it increases to 99.125% at 500 iterations and stays approximately the same up to 1500 iterations. Results in Fig. 7a show that BA has higher reliability at low NFE than the other algorithms for PEC problems when global stochastic optimization alone is used. When the performance of the five methods with local optimization at the end of the global search is compared, CS gives the highest reliability, with GSR close to 100% at 1500 iterations.
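The two-stage scheme compared here, a global stochastic search whose best point is then refined by a deterministic local optimizer, can be sketched as follows. Both stages are stand-ins chosen for illustration: a crude random global phase and a shrinking-step coordinate search in place of whichever local optimizer the study actually used.

```python
import random

def coordinate_polish(f, x0, step=0.1, shrink=0.5, tol=1e-8):
    """Toy local refinement: axis-wise pattern search with step shrinking."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink          # no axis move helped: refine the step
    return x, fx

def global_then_local(f, bounds, n_samples=200, seed=1):
    """Stage 1: crude random global search; stage 2: local polish."""
    rng = random.Random(seed)
    best = min(([rng.uniform(lo, hi) for lo, hi in bounds]
                for _ in range(n_samples)), key=f)
    return coordinate_polish(f, best)
```

The division of labor mirrors the results above: the global phase only needs to land in the basin of the global optimum, after which the cheap local phase delivers the remaining accuracy.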
The effect of stopping criterion SC-2 on the ABC, CS, FA, IFA, and BA algorithms has also been studied on PEC problems. Figure 8 summarizes the GSR and NFE of the five algorithms with the four stopping criteria. The same conclusion of higher reliability with higher SCmax holds. It can be observed in Fig. 8a that the use of SC-2 gives lower GSR than SC-1 for all five methods. The NFE values in Fig. 8b show that CS uses the most NFE to terminate the global search under SC-2 compared to the other algorithms. In general, SC-2 requires significantly fewer NFE than SC-1, which confirms the need for a good termination criterion. In particular, with SCmax = 50, the SR obtained by ABC, CS, and BA is only marginally smaller than that obtained with SC-1 but uses far fewer NFE (see Fig. 8). BA achieved comparable reliability to ABC, FA, and IFA with fewer NFE when SC-2 was used. When SC-1 was used, CS achieved better reliability with almost identical NFE compared to the other four methods.

Performance of Bio-inspired Algorithms on RPEC problems
The GSR of the ABC, CS, FA, IFA, and BA algorithms for all RPEC problems using SC-1, without subsequent local optimization, is reported in Fig. 9a. GSR generally improves with increasing number of iterations for these problems as well. The highest GSR, 100%, was obtained by CS. CS consistently achieved the best GSR at all iteration levels, approaching 100% at 500 iterations; although its GSR remained very low up to 100 iterations, it then climbed rapidly, reaching 100% at 1000 iterations. On the other hand, FA and IFA obtained better GSR only at higher iterations. In short, when the bio-inspired optimization methods are compared without subsequent local optimization, CS is unmatched by the other four methods.
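The strong global exploration of CS is commonly attributed to its Lévy-flight steps. As an illustration (not the implementation evaluated in this study), a Lévy-distributed step length can be generated with the Mantegna scheme widely used in Cuckoo Search codes, here with the conventional exponent β = 1.5:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-distributed step length via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)       # heavy-tail numerator sample
    v = rng.gauss(0.0, 1.0)           # denominator sample
    return u / abs(v) ** (1 / beta)   # occasionally very large steps
```

A typical CS position update then takes the form x' = x + alpha * levy_step() * (x - x_best); the occasional long jumps help the swarm escape local minima, consistent with the reliability observed here.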
When the results of the stochastic global optimization after 1500 iterations followed by local optimization are compared (Fig. 9b), CS and IFA come out on top in terms of reliability, with a GSR of 100% compared to 99% for ABC, 96% for FA and 77.875% for BA. The performance of ABC, CS, and IFA was very good since all three reached very close to their final GSR after only 500 iterations for CS and ABC, and 750 iterations for IFA, with no significant improvement in subsequent iterations. In short, CS is the most reliable and effective at finding the global optimum in a relatively small number of iterations.
Results on the effect of the stopping criterion SC-2 with SCmax = 6D, 12D and 24D on the bio-inspired optimization algorithms for RPEC problems are summarized in Fig. 10. Note that the SCmax values used for each RPEC problem were those used by Bonilla-Petriciolet et al. [3] so that their results and the present ones can be compared. The higher the SCmax, the better the reliability of the algorithm, and the use of SC-2 gives substantially inferior GSR compared to SC-1 (Fig. 10a), especially for FA and IFA. For ABC, CS, and BA, the use of SC-2 brings only slightly inferior reliability compared to SC-1 but with higher efficiency. The same cannot be said for FA and IFA, for which SC-1 gave the highest reliability (96 and 100%, respectively), as shown in Fig. 10a.

Comparison with the reported performance of other stochastic methods
Recently, Zhang et al. [5] reported the performance of unified bare-bones particle swarm optimization (UBBPSO), integrated differential evolution (IDE) and IDE without tabu list and radius (IDE_N). They also compared UBBPSO, IDE and IDE_N with other published results, such as classical PSO with a quasi-Newton method (PSO-CQN), classical PSO with the Nelder-Mead simplex method (PSO-CNM), simulated annealing (SA), the genetic algorithm (GA), and differential evolution with tabu list (DETL). All these stochastic algorithms were run 100 times independently, and at the end of every run a deterministic local optimizer was activated. Zhang et al. [5] reported that IDE gave the best performance across the entire spectrum of problems. Hence, it is sufficient to compare the performance of the five bio-inspired algorithms with IDE for the three categories of problems, with the different stopping criteria.
Figure 11 shows the average GSR of ABC, CS, FA, IFA, and BA for the 24 problems compared with the average GSR of IDE at different iterations. ABC shows the best convergence rate, as its average GSR reaches about 80.9% after only 50 iterations. The reliability of IDE is superior to that of CS and slightly inferior to that of ABC at 50 iterations. At larger iterations, the GSR of CS was the highest, reaching 99.5% at the 1500th iteration compared to 92.8% for IDE. Fig. 12 shows the average GSR of ABC, CS, FA, IFA, and BA for the 24 problems compared with that of IDE when SC-2 was used; IDE is superior to the other five algorithms in terms of reliability, as shown in Fig. 12a.

Conclusions
In this study, five swarm intelligence stochastic global optimization algorithms, namely ABC, CS, FA, IFA and BA, have been evaluated for solving the challenging phase stability, and phase and chemical equilibrium problems. Performance at different iteration levels and the effect of the stopping criterion have also been analyzed. CS was found to be the most reliable technique across the different problems tried, while requiring computational effort similar to that of the other methods.
The stopping criterion SC-1 gives in general better reliability than SC-2, at the expense of computational resources, and the use of SCmax can significantly reduce the computational effort for solving PEC, RPEC and PS problems without significantly affecting the reliability of the stochastic algorithms studied. Comparison of the performance of ABC, CS, FA, IFA and BA with the results in Zhang et al. [5] shows that IDE is generally outperformed by CS.

Table 3
Fig. 5 Global Success Rate (GSR) versus Iterations for PS problems using ABC, CS, FA, IFA, and BA with SC-1: (a) bio-inspired method only and (b) bio-inspired method combined with local optimization

if f(v_ij) < f(x_ij) then do greedy selection; else count_i = count_i + 1; end if
Produce a new solution v_ij from the selected bee
Evaluate solutions v_ij and x_ij
if f(v_ij) < f(x_ij) then do greedy selection; else count_i = count_i + 1; end if
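The greedy selection and trial-counter logic from the ABC listing above can be written out as a minimal runnable sketch; the flat-list food-source representation and the function signature are assumptions for illustration only.

```python
import random

def abc_greedy_step(f, x, x_k, count, rng=random):
    """One ABC-style update of food source x: perturb a random dimension
    toward/away from another source x_k, keep the better of x and the
    candidate v (greedy selection), and track the trial counter count."""
    j = rng.randrange(len(x))                 # dimension to perturb
    v = list(x)
    v[j] = x[j] + rng.uniform(-1.0, 1.0) * (x[j] - x_k[j])
    if f(v) < f(x):                           # greedy selection
        return v, 0                           # improvement: reset counter
    return x, count + 1                       # else increment trial counter
```

Sources whose counter exceeds a limit are abandoned and replaced by scout bees, which is how ABC escapes exhausted regions of the search space.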

Table 2
Details of RPEC problems studied

Table 4
Success rate (SR) and number of function evaluations (NFE) of ABC, CS, FA, IFA and BA for PS problems using SCmax with NP of 10D

Table 5
Success rate (SR) and number of function evaluations (NFE) of CMA-ES, SCE and FA for PEC problems using SCmax with NP of 10D