
1 Introduction

Optimization aims at obtaining optimal solutions to a problem from a set of feasible solutions based on one or several criteria. Optimization techniques cover broad application areas in business, finance, service, industry, engineering, and computer science. For example, portfolio optimization is the problem of selecting the best portfolio (asset distribution) with objectives such as maximizing expected return and minimizing costs such as financial risk. Constraints, if any, help reduce the search space of feasible solutions. The global optimal solution, if found, is the best solution to the problem. However, suboptimal solutions can sometimes also be accepted as practically optimal solutions to the problem (Parsopoulos & Vrahatis, 2010).

Swarm intelligence (SI) is a distributed, intelligent computing mechanism for solving optimization problems. SI takes its inspiration from the flocking of birds and the swarming and herding phenomena in vertebrates. SI is sometimes considered a part of evolutionary computing, as it shares many similarities with it. SI starts with a population of individuals, each of which tries to find the optimal solution. Solutions are shared among individuals, and each individual then improves itself based on the information gathered from the others. The most crucial SI property is that all individuals work in a coordinated way without the presence of a coordinator.

PSO is a population-based SI algorithm developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling (Eberhart & Kennedy, 1995). Since its inception, it has attracted many researchers to solve optimization problems in different domains. Initially, PSO could handle only real-valued problems; it was later extended to cover both binary and discrete problems (Eberhart & Shi, 2004).

PSO is a meta-heuristic algorithm that operates on a population of random solutions (particles). Each particle flies through the search space with a dynamically adjusted velocity and position according to its own and its companions' historical behaviors. The particles move toward optimal positions guided by an objective function.

PSO is the most popular among evolutionary algorithms (AlRashidi & El-Hawary, 2008; Eberhart & Shi, 2004; Pradeepkumar & Ravi, 2014, 2017; Ravi, Pradeepkumar, & Deb, 2017) because it is:

  1. Very intuitive and flexible.

  2. Less sensitive to the nature of the objective function.

  3. Able to handle objective functions of a stochastic nature.

  4. Derivative-free.

  5. Easy to comprehend and implement.

  6. Dependent on fewer user-defined parameters to tweak.

  7. Not dependent on a good initial solution to start its iteration process.

However, PSO also has disadvantages:

    • Unlike the dynamic programming approach, it does not always guarantee the optimal solution to the problem; instead, it yields a near-optimal solution.

    • It converges slowly in the refined search stage (weak local search ability).

Owing to these advantages, PSO has been used in various domains involving optimization problems, such as antennas, biomedicine, communication networks, clustering and classification, combinatorial optimization, control, design, distribution networks, electronics and electromagnetics, engines and motors, entertainment, fault diagnosis and recovery, finance, fuzzy and neuro-fuzzy systems, graphics and visualization, image and video analysis, metallurgy, modelling, neural networks, prediction and forecasting, power systems and plants, robotics, scheduling, security and military, sensor networks, and signal processing (AlRashidi & El-Hawary, 2008; Poli, 2008; Poli, Kennedy, & Blackwell, 2007; Pradeepkumar & Ravi, 2018).

2 Background

The particle swarm concept originated with the effort of Reeves (1983), who came up with the idea of particles. These particles are considered independent entities that work in harmony to achieve an objective. Reynolds (1987) then added communication between particles' social behaviors with the help of a flocking algorithm, whereby each particle adheres to the flocking rules. Later, Nowak, Szamrej, and Latané (1990) helped explain the principles underlying how particles are affected by the social environment. In addition, Heppner and Grenander (1990) introduced a roost concept, i.e., the flock aims for some roosting area. In these systems, the particles are autonomous, but a class of rules regulates their movements. These observations on the collective behaviors of social animals led to implementing this model to solve different optimization problems.

3 The Basic PSO Algorithm

The PSO technique encompasses the following features. PSO is a metaheuristic because it makes few or no assumptions about the optimization problem. It can search very large spaces of candidate solutions in a distributed manner. PSO exhibits SI in its optimization process. It mainly follows the five fundamental principles observed in SI-based algorithms, which Millonas (1993) stated are followed by the particles while communicating with fellow particles in the swarm.

In the procedure of basic PSO and its variants, a population of particles in the n-dimensional search space is initialized randomly. Each particle represents a possible solution. Let Xi = (Xi,1, Xi,2, …, Xi,d, …, Xi,n) be a vector denoting the position and Vi = (Vi,1, Vi,2, …, Vi,d, …, Vi,n) be a vector denoting the velocity of particle i, for i = 1, 2, …, Np, where Np is the swarm size. A particle's position and velocity are updated dynamically until optimal values are obtained. The basic PSO procedure is depicted by a flowchart (see Fig. 6.1) and described in Algorithm 6.1, and the notations used in the algorithm are presented in Table 6.1.
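The velocity and position update rules referenced later as Eqs. (6.1) and (6.2) are the canonical basic-PSO updates of Eberhart and Kennedy (1995); restated here in the chapter's notation (a reconstruction consistent with Eqs. (6.4) and (6.5)):

$$ {V}_{i,d}\left(t+1\right)={V}_{i,d}(t)+{C}_1{r}_{1d}\left({P}_{i,d}-{X}_{i,d}\right)+{C}_2{r}_{2d}\left({P}_{gd}-{X}_{i,d}\right) $$
(6.1)
$$ {X}_{i,d}\left(t+1\right)={X}_{i,d}(t)+{V}_{i,d}\left(t+1\right) $$
(6.2)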

Fig. 6.1
Flowchart of the particle swarm optimization procedure

Table 6.1 Notations and their interpretation

It is worth noting that the update equations, Eqs. (6.1) and (6.2), are stochastic. As the velocities are updated dynamically, they may become too high, leading particles out of control. Therefore, the maximum velocity Vmax (Eberhart, Shi, & Kennedy, 2001), as in Eq. (6.3), helps restrict the uncontrolled movement of particles in the search space.

A parameter, namely, the inertia weight ω, as in Eq. (6.4) (Shi & Eberhart, 1998a, 1999), helps adjust the trade-off between the explorative and exploitative capabilities of PSO. The larger the inertia weight, the stronger PSO's exploration capability, and vice versa. In addition, Clerc and Kennedy (2002) introduced the constriction factor χ, as in Eq. (6.5), which ensures convergence and improves the convergence rate of PSO.

4 Parameter Selection

Shi and Eberhart (1998b), Rezaee Jordehi and Jasni (2013), and Wang, Tan, and Liu (2018) surveyed various parameter selection methods found in the literature. Table 6.2 shows the PSO parameters, the purpose of each parameter, and possible values or selection methods for each. Careful parameter selection helps achieve the best output from PSO. Furthermore, sensitivity analysis (Bartz-Beielstein, Parsopoulos, & Vrahatis, 2002), regression trees (Bartz-Beielstein, Parsopoulos, Vegt, & Vrahatis, 2004), and statistics (Bartz-Beielstein, Parsopoulos, & Vrahatis, 2004) can help select the optimal parameters of the PSO algorithm so that it can solve practical problems better.

Table 6.2 Parameter selection for PSO
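As a rough illustration of the kinds of settings Table 6.2 compiles, the following Python dictionary collects parameter values commonly recommended in the PSO literature. The specific numbers are illustrative assumptions, not values prescribed by this chapter:

    # Commonly cited PSO parameter choices (illustrative, not Table 6.2 verbatim).
    pso_params = {
        "swarm_size": 30,        # typical swarms use 20-50 particles
        "inertia_weight": 0.9,   # often decreased linearly from 0.9 to 0.4 over a run
        "c1": 2.0,               # cognitive acceleration coefficient C1
        "c2": 2.0,               # social acceleration coefficient C2
        "v_max_fraction": 0.2,   # Vmax as a fraction of each dimension's range, per Eq. (6.3)
        "constriction": 0.729,   # chi of Clerc and Kennedy (2002), per Eq. (6.5)
        "max_iterations": 1000,  # or stop when fitness improvement stalls
    }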

Algorithm 6.1: Particle Swarm Optimization

Pseudocode (23 lines). Input: X[][], the position matrix; V[][], the velocity matrix; f(·), the objective function. Output: Pg, the global best particle.
$$ \mathrm{If}\;\left|{V}_{i,d}\right|>{V}_{\mathrm{max}},\;\mathrm{then}\;{V}_{i,d}=\operatorname{sign}\left({V}_{i,d}\right){V}_{\mathrm{max}} $$
(6.3)
$$ {V}_{i,d}\left(t+1\right)=\omega {V}_{i,d}(t)+{C}_1{r}_{1d}\left({P}_{i,d}-{X}_{i,d}\right)+{C}_2{r}_{2d}\left({P}_{gd}-{X}_{i,d}\right) $$
(6.4)
$$ {V}_{i,d}\left(t+1\right)=\chi \left({V}_{i,d}(t)+{C}_1{r}_{1d}\left({P}_{i,d}-{X}_{i,d}\right)+{C}_2{r}_{2d}\left({P}_{gd}-{X}_{i,d}\right)\right) $$
(6.5)
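To make Algorithm 6.1 concrete, the following Python sketch implements basic PSO with the velocity clamping of Eq. (6.3) and the inertia-weighted update of Eq. (6.4). It is a minimal illustration under assumed defaults (function and parameter names are this sketch's, not the chapter's exact code):

    import numpy as np

    def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=2.0, c2=2.0):
        """Minimize f over a box-bounded search space (a sketch of Algorithm 6.1)."""
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        dim = lo.size
        v_max = 0.2 * (hi - lo)                        # velocity cap used by Eq. (6.3)
        X = lo + np.random.rand(n_particles, dim) * (hi - lo)  # random initial positions
        V = np.zeros((n_particles, dim))               # initial velocities
        P = X.copy()                                   # personal best positions P_i
        p_val = np.apply_along_axis(f, 1, X)           # personal best fitness values
        g = P[p_val.argmin()].copy()                   # global best position P_g
        for _ in range(n_iters):
            r1 = np.random.rand(n_particles, dim)      # stochastic terms r_1d, r_2d
            r2 = np.random.rand(n_particles, dim)
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)  # Eq. (6.4)
            V = np.clip(V, -v_max, v_max)              # Eq. (6.3): clamp |V_id| to Vmax
            X = np.clip(X + V, lo, hi)                 # Eq. (6.2)-style position update
            vals = np.apply_along_axis(f, 1, X)
            better = vals < p_val                      # particles that improved
            P[better], p_val[better] = X[better], vals[better]
            g = P[p_val.argmin()].copy()               # refresh global best
        return g, f(g)

    # Usage: minimize the 5-dimensional sphere function.
    best_x, best_f = pso(lambda x: np.sum(x**2), (np.full(5, -10), np.full(5, 10)))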

5 PSO in Finance

PSO has been applied to solve various optimization problems in finance. This section presents three such popular applications:

5.1 Financial Market Prediction

The goal of financial market prediction problems such as FOREX rate prediction, stock market prediction, and commodity price prediction is to obtain accurate predictions to make the right decisions. One such hybrid approach using PSO was proposed by Pradeepkumar and Ravi (2014). In this approach, an artificial neural network (ANN) is used to obtain predictions. A PSO-based regression model of the errors is then used to fine-tune the predictions obtained by the ANN. PSO, minimizing the mean squared error (MSE), is used to obtain the optimal coefficients of the error regression function. The authors concluded that the proposed hybrid outperformed the standalone approaches.
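As a hedged sketch of how such an error-correction step might be set up (the exact regression form used by Pradeepkumar and Ravi (2014) may differ), PSO can search for the coefficients of a linear model of the ANN's lagged errors by minimizing MSE, reusing the pso() sketch given after Algorithm 6.1; all names and the lag order are illustrative assumptions:

    import numpy as np

    def make_lags(e, p=3):
        # Build a design matrix of p lagged errors (the lag order p is an assumption).
        X = np.column_stack([e[i:len(e) - p + i] for i in range(p)])
        return X, e[p:]

    def mse_of_coeffs(beta, X, y):
        # Linear error model y_hat = beta[0] + X @ beta[1:]; PSO minimizes this MSE.
        return np.mean((y - (beta[0] + X @ beta[1:])) ** 2)

    # With `errors` holding the ANN residuals and pso() from the earlier sketch:
    # X, y = make_lags(errors)
    # k = X.shape[1] + 1
    # beta, _ = pso(lambda b: mse_of_coeffs(b, X, y), (np.full(k, -5.0), np.full(k, 5.0)))
    # The fitted error model's output is then added to the ANN prediction to fine-tune it.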

Ravi et al. (2017) extended the aforementioned approach using multi-objective PSO (MOPSO) in place of PSO. The two objectives of MOPSO are the minimization of MSE and the maximization of Dstat (the directional change statistic). The authors concluded that MOPSO yielded better regression coefficients than PSO.

5.2 Volatility Forecasting

The goal of the volatility forecasting problem is to obtain accurate forecasts to assist various financial stakeholders. Pradeepkumar and Ravi (2017) presented a PSO-trained quantile regression neural network (QRNN), namely PSOQRNN, to forecast the volatility of financial markets. In this approach, the weights of the QRNN are obtained using PSO. The authors concluded that PSO helped the QRNN obtain more accurate volatility forecasts than the standalone QRNN and other similar volatility forecasting approaches.
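For intuition, the objective a PSO-trained quantile regression model minimizes is the quantile (pinball) loss. The sketch below shows only this loss; the QRNN architecture and weight encoding of Pradeepkumar and Ravi (2017) are not reproduced here:

    import numpy as np

    def pinball_loss(y_true, y_pred, tau=0.5):
        # Quantile (pinball) loss at quantile level tau; PSO would minimize this
        # over the QRNN weight vector (an illustrative fitness, not the authors' code).
        u = y_true - y_pred
        return np.mean(np.maximum(tau * u, (tau - 1.0) * u))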

5.3 Portfolio Optimization

The goal of portfolio optimization is to build the best investment portfolio from a defined set of assets. Let us assume that we have selected N financial assets we want to invest in. They can be stocks, funds, bonds, ETFs, etc. Each of these has a history of (daily, monthly, etc.) returns, where a return is the relative price difference from one period to the next.

Kunwar Madan (https://github.com/KunwarMadan/Optimal-Financial-Portfolio-Selection), in this context, presented an example of portfolio selection using PSO and a genetic algorithm (GA) in Python. The author solved a 470-dimensional problem in which 470 stocks were considered for the portfolio. In this 470-dimensional search space, PSO and GA are applied to find the optimal combination of weights allocating capital across all stocks, using the Sharpe ratio. The author found that the GA results after 2000 iterations were not even close to the PSO results after 250 iterations, and concluded that PSO is better suited than GA for solving the portfolio optimization problem. The same conclusion was reached by Chen and Zhu (2010). PSO has also been applied to constructing an optimal risky portfolio (Cura, 2009; Dashti, Farjami, Vedadi, & Anisseh, 2007; Kendall & Su, 2005; Mercangoz, 2019) and to solving the constrained portfolio selection problem (Chen, Zhang, Cai, & Xu, 2006; Cui, Cheng, & Bai, 2014; Zhu, Wang, Wang, & Chen, 2011).
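A minimal sketch of the fitness such studies optimize: PSO searches over portfolio weights, which are normalized to sum to one, and maximizes the Sharpe ratio by minimizing its negative. Variable names and the zero risk-free rate are assumptions here, not details taken from the cited repository:

    import numpy as np

    def neg_sharpe(w, mean_returns, cov_returns, rf=0.0):
        # Negative Sharpe ratio; minimizing it maximizes risk-adjusted return.
        w = np.abs(w) / np.abs(w).sum()      # normalize to a long-only portfolio
        ret = w @ mean_returns               # expected portfolio return
        vol = np.sqrt(w @ cov_returns @ w)   # portfolio volatility (std. deviation)
        return -(ret - rf) / vol

    # With `returns` as a (T, N) matrix of historical asset returns and the pso()
    # sketch given after Algorithm 6.1:
    # mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
    # N = returns.shape[1]
    # w_opt, _ = pso(lambda w: neg_sharpe(w, mu, cov), (np.zeros(N), np.ones(N)))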

6 Variants of PSO for Portfolio Optimization

Table 6.3 presents various PSO variants proposed for solving the portfolio optimization problem. In each case, the authors concluded that the proposed variant outperformed the basic PSO described above as well as other PSO variants.

Table 6.3 Variants of PSO for portfolio optimization

7 Conclusion

Portfolio optimization aims at building the best investment portfolio according to a defined set of assets and constraints. PSO is good at obtaining a near-optimal solution from the search space of feasible solutions. The literature gives readers a basis to regard PSO and its variants as a good fit for achieving the portfolio optimization problem's objective. This chapter provided descriptions of basic PSO, its parameter selection methods, and its variants. Readers may also refer to Yarpiz (2020) for solving the portfolio optimization problem using various classic and other algorithms such as the imperialist competitive algorithm (ICA), the non-dominated sorting genetic algorithm II (NSGA-II), and the strength Pareto evolutionary algorithm 2 (SPEA2).