Particle Swarm Optimization (PSO) is a metaheuristic algorithm for optimization problems, first introduced in 1995 by James Kennedy and Russell Eberhart [1]. The algorithm is based on the concept of social behavior: particles (potential solutions) move towards the optimal solution through interactions with other particles in the search space. PSO has been widely used in various fields, including engineering, science, and finance, owing to its simplicity, robustness, and efficiency. Despite this success, PSO suffers from several limitations. One of the main limitations is its slow convergence rate, which can be attributed to premature convergence [2] of the particles towards local optima. This issue can be addressed by introducing efficient improvement techniques into PSO. Several enhancements have been proposed to improve the convergence rate of the PSO algorithm; they are outlined below.

Firstly, the inertia weight technique was suggested by Yuhui Shi and Russell Eberhart [3]. The inertia weight technique is a well-known approach for enhancing the convergence speed of PSO. The inertia weight controls the movement of particles in the search space; the idea is to maintain a balance between exploration and exploitation. The inertia weight is updated at each iteration according to a predefined formula, which controls the speed and direction of particle movement. Various update formulas have been proposed, such as linear, nonlinear, and adaptive ones; the choice of formula depends on the optimization problem and the PSO parameters.

Second, the concept of a mutation operator was proposed [4]. A mutation operator is a powerful tool for enhancing the diversity of the PSO population: it randomly modifies the position of a particle to generate a new solution in the search space. This operation can prevent premature convergence by introducing new candidate solutions that may lead to better ones. The mutation operator can be applied at different stages of the PSO algorithm, for example before or after the velocity update.

Third, the opposition-based learning technique was suggested [5]. Opposition-based learning (OBL) uses the opposite of the current best solution to generate new solutions. The idea behind OBL is that the opposite of the best solution may represent a good direction for exploration in the search space. OBL can improve the diversity and convergence speed of PSO by generating new solutions that differ from those in the current population.

Fourth, hybridization with other metaheuristics has been proposed [6]. Hybridization is a common approach for improving the efficiency of PSO: the idea is to combine the strengths of different metaheuristics to overcome their individual weaknesses. For example, PSO can be combined with genetic algorithms (GA), simulated annealing (SA), or ant colony optimization (ACO). The hybrid approach can enhance the exploration and exploitation capabilities of PSO, leading to better solutions in less time.

Fifth, dynamic parameter tuning was presented [6]. The PSO parameters, such as the swarm size, maximum velocity, and acceleration coefficients, significantly affect the algorithm's performance. Dynamic parameter tuning adjusts these parameters based on the search history during the optimization process, adapting them to the problem characteristics and the search progress in order to improve convergence speed and solution quality. In summary, efficient improvement techniques can enhance the convergence speed and solution quality of PSO. The approaches discussed above [6], including the inertia weight technique, mutation operator, opposition-based learning, hybridization with other metaheuristics, and dynamic parameter tuning, can be used individually or in combination to address the limitations of PSO.

Tareq M. Shami and a team of researchers [7] conducted a comprehensive survey on PSO. The survey discusses techniques such as varying the inertia weight and hybridization, as described above, and notes that the ease with which PSO can be hybridized with other optimization algorithms has contributed to its popularity. Another technique described in the survey is velocity clamping [7], introduced by Eberhart and Kennedy, in which bounds are set on the particle velocities in every dimension. A further approach discussed in the survey [7] is varying the controlling parameters: for example, time-varying inertia weight techniques, in which the inertia weight changes throughout the optimization process, or acceleration-coefficient techniques, in which the two constant controlling parameters of PSO other than the inertia weight are chosen in different ways to yield optimal solutions while evading premature convergence. Many other approaches have been discussed, both in the survey and elsewhere. The choice of approach depends on the problem characteristics and the available computational resources; however, most of these approaches yield problem-dependent solution methods.

In this paper, we propose a new approach that replaces the pseudo-random numbers used in this method with quasi-random numbers [8, 9, 10] such as the Halton and Sobol sequences, while maintaining the k-dimensional uniformity of these quasi-random numbers. This not only provides an approach that generalizes to any kind of optimization problem, but it can also be used in conjunction with the earlier enhancement techniques, such as the inertia weight technique, mutation operator, opposition-based learning, hybridization with other metaheuristics, and dynamic parameter tuning.

In this research, two enhanced versions of PSO (one using Sobol numbers and the other using Halton numbers) were proposed with the intention of speeding up the convergence of the standard PSO algorithm. To test the efficiency improvement of the two proposed enhancements, we recorded the number of iterations taken to reach the optimum of the well-known cigar, ellipsoid, and paraboloid benchmark functions, along with the number of iterations taken to obtain an optimal path for the famous Travelling Salesman Problem (TSP). We then calculated the improvement, in terms of both the optimum of the objective function and the number of iterations needed to reach the global optimum, for the PSO enhanced with Sobol samplings and the PSO enhanced with Halton samplings, relative to the standard PSO, which uses Monte Carlo (pseudo-random) samplings. The results for every benchmark function and for the TSP unanimously show an efficiency improvement due to the use of the Sobol and Halton sequences. Additionally, we observed that the improvement due to the Sobol and Halton sequences grows with the number of decision variables in the optimization problem. In conclusion, both enhancements of the standard PSO presented in this research, one utilizing Sobol numbers and the other utilizing Halton numbers, consistently show an efficiency improvement and a better optimum, meaning that they successfully increase the speed of convergence of the standard PSO algorithm.
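To make the update rules discussed above concrete, the following is a minimal sketch of a standard PSO iteration with a linearly decreasing inertia weight and velocity clamping, applied to the paraboloid (sphere) benchmark mentioned above. The parameter values (inertia weight decreasing from 0.9 to 0.4, acceleration coefficients c1 = c2 = 2.0, the velocity bound, and the search bounds) are common illustrative choices, not the settings used in this research.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Paraboloid benchmark: f(x) = sum of x_i^2, global minimum 0 at the origin.
    return np.sum(x**2, axis=-1)

n, d = 30, 2                          # swarm size, number of decision variables
lo, hi = -5.0, 5.0                    # search-space bounds
pos = rng.uniform(lo, hi, (n, d))     # particle positions
vel = np.zeros((n, d))                # particle velocities
pbest = pos.copy()                    # personal best positions
pbest_val = sphere(pbest)
gbest = pbest[np.argmin(pbest_val)].copy()   # global best position

for t in range(200):
    w = 0.9 - 0.5 * t / 199           # linearly decreasing inertia weight, 0.9 -> 0.4
    r1 = rng.random((n, d))
    r2 = rng.random((n, d))
    # Velocity update: inertia term + cognitive term + social term.
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    vel = np.clip(vel, -0.5, 0.5)     # velocity clamping in every dimension
    pos = np.clip(pos + vel, lo, hi)  # position update, kept inside the bounds
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

On this smooth, unimodal benchmark the swarm contracts onto the origin; the same loop is the point where the random draws r1 and r2 can be replaced by other sampling schemes.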
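The mutation and opposition-based learning ideas described above can be sketched as small helpers that act on a swarm independently of the main loop. The random-reset mutation shown here is one hypothetical choice of mutation operator (many variants exist); the opposition step reflects each coordinate through the centre of the search interval, so the opposite of x in [lo, hi] is lo + hi - x.

```python
import numpy as np

def mutate(pos, rate, lo, hi, rng):
    # Random-reset mutation: each coordinate is replaced by a fresh uniform
    # sample with probability `rate`, injecting diversity into the swarm.
    mask = rng.random(pos.shape) < rate
    return np.where(mask, rng.uniform(lo, hi, pos.shape), pos)

def opposite(pos, lo, hi):
    # Opposition-based learning: reflect each coordinate through the centre
    # of the search interval [lo, hi].
    return lo + hi - pos

rng = np.random.default_rng(1)
swarm = rng.uniform(0.0, 5.0, (4, 2))
mutated = mutate(swarm, 0.2, 0.0, 5.0, rng)
opposed = opposite(swarm, 0.0, 5.0)
```

In practice the opposed (or mutated) points are evaluated alongside the originals and the better of each pair is kept, which is how OBL steers exploration towards promising regions.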
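As an illustration of the quasi-random sampling idea, the Halton sequence can be generated in a few lines via the radical-inverse (van der Corput) construction, using one prime base per dimension; a Sobol generator is more involved and would typically come from a library such as scipy.stats.qmc. This is a sketch of the general construction, not the generator used in this research.

```python
def van_der_corput(i, base):
    # Radical inverse of the integer i: mirror the base-`base` digits of i
    # about the radix point to obtain a point in [0, 1).
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(n, dim):
    # First n points of the dim-dimensional Halton sequence; a distinct
    # prime base per dimension preserves multi-dimensional uniformity.
    primes = [2, 3, 5, 7, 11, 13]        # enough bases for this sketch
    assert dim <= len(primes)
    return [[van_der_corput(i, primes[j]) for j in range(dim)]
            for i in range(1, n + 1)]

points = halton(4, 2)
# First 2-D Halton points: (1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9)
```

Such low-discrepancy points fill the unit hypercube far more evenly than pseudo-random draws of the same length, which is the property the proposed enhancement exploits when replacing the uniform samples inside PSO.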