1 Introduction

To be competitive in the global market, manufacturing companies must develop products of superior quality at minimal cost. Producing such goods in today's market requires the simultaneous consideration of design and production processes, and tolerance design is critical in this regard. Tolerance design is the process of allocating tolerances to individual components or subassemblies so that the final assembly tolerance is met. To produce a high-quality product at a low cost, the tolerances must be established so that the desired function is achieved with the least amount of machining. In practice, tolerances are typically set as an informal compromise between usefulness and production expense. The tolerance design optimization problems of a piston-cylinder assembly and a punch-and-die assembly are used for this study.

Machining costs, product quality, and the cost of quality loss are all affected by the tolerance requirements placed on the dimensions of manufactured items. Tolerance specification is based on qualitative estimates of tolerance cost, allowing the component tolerances of a mechanical assembly to be specified for the lowest practical production cost. Assembly tolerances are frequently dictated by performance requirements, whereas component tolerances are determined by the capabilities of the manufacturing process. The most prevalent problem engineering designers face when defining tolerances is distributing a defined assembly tolerance among the assembly's components.

The tolerance of an assembly may be distributed uniformly across all of its constituent parts. However, depending on the conditions, the complexity of the product, or the production process, each component tolerance may carry a different manufacturing cost. Component tolerances can therefore be assigned to reduce manufacturing cost by defining an objective function over the component dimensions. Improper tolerance specification can also lead to poor product performance and loss of market share: tight tolerances raise process costs, whereas loose tolerances can result in more waste and assembly problems. Several cost-tolerance models have been published in peer-reviewed journals, and a good machining-cost function can be used to distribute the optimal tolerances, which should be as low as feasible. Traditionally, tolerance allocation has been done on the basis of the designer's expertise, handbooks, and guidelines; as a result, assembly quality cannot be assured, and production cost may be higher than necessary. Optimization is essential to attain the aforementioned goals. The Particle Swarm Optimization (PSO) approach and the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) are presented in this paper for handling the single- and multi-objective, constrained, nonlinear programming problems, and a systematic optimization strategy for the tolerance allocation problem has been created using PSO and NSGA-II.

2 Literature review

A review of current studies reveals many tolerance synthesis mechanisms. They can be separated into classic optimization approaches, such as statistical methods, the complex method, and stochastic integer programming, and non-traditional strategies, such as genetic algorithms (GA), simulated annealing (SA), and fuzzy neural learning.

The overwhelming majority of publications on optimization-based tolerance synthesis have made use of cost-tolerance models published in peer-reviewed journals. Dong et al. [1] established unique tolerance synthesis models based on the cost of manufacturing to a given tolerance. Siddall [2] modified the fundamental design optimization problem to include the optimum allocation of manufacturing tolerances when the nominal ranges of the design variables are employed. As defined by Lee and Woo [3], tolerance synthesis is a probabilistic optimization technique in which a random variable is associated with a dimension and with the tolerance on that dimension. As the complexity of mechanical components increases, the above optimization-based tolerance synthesis procedures become untenable. Gadallah and ElMaraghy [1] were among the first to use quality engineering parameter design methodologies to tackle the problem of tolerance optimization. The notion of a quality loss function is another application of quality engineering; several studies in the literature, including Bhoi et al. [4, 5], Bho et al. [6], and Beng et al. [7], employ the loss function concept to solve tolerance scheduling problems.

Numerous attempts to deal with the impracticality of optimization-based tolerance provision have resulted in a variety of innovative solutions based on comparatively recent procedures such as genetic algorithms, neural networks, evolutionary algorithms, and fuzzy logic. To allocate tolerances, Kopardekar [7] employed a neural network; backpropagation is used to train the network that produces part tolerances, to test how effectively it manages machine capability and production issues such as a mean shift. Ji et al. [8], Ta-Cheng Chen et al. [9], and Hupinet et al. [10] employ fuzzy logic and simulated annealing. Ji et al. [11] and NoorulHaq et al. [6] applied the genetic approach with the assistance of PSO and NSGA-II.

3 Materials and functional methods

3.1 Assembly of the piston and cylinder

The piston-cylinder bore assembly was suggested by Al-Ansary and Deiab [12]. Its dimensions are presented in Fig. 1.

Fig. 1
figure 1

Piston-cylinder bore assembly [8]

The piston diameter (dp) is 50.8 mm, the cylinder bore diameter (dc) is 50.856 mm, and the clearance is 0.056 ± 0.025 mm. To complete the piston-cylinder bore assembly, the following eight machining operations are planned in order: on the piston, rough turn, finish turn, rough grind, and finish grind; on the cylinder bore, drill, bore, semi-finish bore, and grind. The principal machining limits for the piston, in millimeters, are 0.005 ≤ t1 ≤ 0.02, 0.002 ≤ t2 ≤ 0.012, 0.0005 ≤ t3 ≤ 0.003, and 0.0002 ≤ t4 ≤ 0.001; for the cylinder bore they are 0.007 ≤ t5 ≤ 0.02, 0.003 ≤ t6 ≤ 0.012, 0.0006 ≤ t7 ≤ 0.005, and 0.0003 ≤ t8 ≤ 0.002.

This work finds the best machining tolerance allocation for the piston and cylinder with respect to the clearance between them. The piston has a design tolerance t1d, and the cylinder bore has a design tolerance t2d. The machining tolerance limits are t1i for the piston (i = 1, 2, 3, 4) and t2i for the cylinder bore (i = 1, 2, 3, 4), corresponding to the four production operations on each part.

3.2 Objective function

The objective function considered in this work is the minimization of machining cost. The total machining cost (Cm) is expressed as,

$$ C_{m} = F_{1}(t_{1}) + F_{2}(t_{2}) + F_{3}(t_{3}) + F_{4}(t_{4}) + F_{5}(t_{5}) + F_{6}(t_{6}) + F_{7}(t_{7}) + F_{8}(t_{8}) $$
(1)

The exponential cost-tolerance model F(t) is used in this work to find the machining cost of the piston-cylinder bore assembly,

$$ F(t) = \frac{a_{0}}{e^{\,a_{1}(t - a_{2})}} + a_{3} $$
(2)

Subject to,

(i) The design constraint on the clearance is,

t1d + t2d ≤ 0.001

where t1d = t4 (piston) and t2d = t8 (cylinder bore)

(ii) For the piston, the constraints on the machining limits are,

t1 + t2 ≤ 0.02, t2 + t3 ≤ 0.005, t3 + t4 ≤ 0.0018

(iii) For the cylinder bore, the constraints on the machining limits are,

t5 + t6 ≤ 0.02, t6 + t7 ≤ 0.005, t7 + t8 ≤ 0.0018

where a0 through a3 are the parameters of each cost-tolerance equation, computed from the assessment data and given in Table 1.

Table 1 Cost-tolerance parameters for the eight machining operations of the piston-cylinder bore assembly [4]
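The cost model of Eqs. (1)–(2) and the constraints above can be sketched in code. The (a0, a1, a2, a3) values below are placeholders, not the actual Table 1 parameters, which are not reproduced here.

```python
import math

def f(t, a0, a1, a2, a3):
    """Exponential cost-tolerance model of Eq. (2): F(t) = a0 / e^(a1*(t - a2)) + a3."""
    return a0 / math.exp(a1 * (t - a2)) + a3

# Hypothetical (a0, a1, a2, a3) parameters for the eight operations;
# the actual values come from Table 1.
params = [(1.0, 50.0, 0.0, 0.5)] * 8

def total_machining_cost(tols):
    """Eq. (1): Cm is the sum of the eight per-operation costs F_i(t_i)."""
    return sum(f(t, *p) for t, p in zip(tols, params))

def is_feasible(t):
    """Constraints (i)-(iii) plus the per-operation machining limits of Sect. 3.1."""
    lo = [0.005, 0.002, 0.0005, 0.0002, 0.007, 0.003, 0.0006, 0.0003]
    hi = [0.02, 0.012, 0.003, 0.001, 0.02, 0.012, 0.005, 0.002]
    bounds_ok = all(l <= ti <= h for ti, l, h in zip(t, lo, hi))
    piston_ok = t[0] + t[1] <= 0.02 and t[1] + t[2] <= 0.005 and t[2] + t[3] <= 0.0018
    bore_ok = t[4] + t[5] <= 0.02 and t[5] + t[6] <= 0.005 and t[6] + t[7] <= 0.0018
    clearance_ok = t[3] + t[7] <= 0.001  # (i): t1d + t2d <= 0.001
    return bounds_ok and piston_ok and bore_ok and clearance_ok

t = [0.01, 0.003, 0.0015, 0.0003, 0.01, 0.003, 0.0012, 0.0005]
print(is_feasible(t), round(total_machining_cost(t), 4))
```

Any optimizer (PSO, NSGA-II) then searches the eight-dimensional box for the feasible point minimizing `total_machining_cost`.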

3.3 Punch and die assembly

The problem considered is an assembly consisting of an upper punch, a lower punch, and a die. These components are presented in Fig. 2.

Fig. 2
figure 2

Punch and die assembly

The punch diameter is 7.5 mm, and the die diameter is 7.55 mm; the clearance between punch and die is 0.05 ± 0.025 mm. Three machining processes are involved in manufacturing the punch and two in manufacturing the die. The machining process plan for the punch is turning, profile grinding, and finally polishing; for the die it is wire cutting and polishing.

  • The ranges in millimeters of the machining tolerances for the punch are 0.0075 ≤ t1 < 0.045, 0.003 ≤ t2 < 0.0075, and 0.002 ≤ t3 < 0.0045

  • The ranges in millimeters of the machining tolerances for the die are 0.001 ≤ t4 ≤ 0.0075 and 0.002 ≤ t5 ≤ 0.045

3.4 Decision variables and constraints

There is just one resultant dimension in this case: the clearance between punch and die, with the punch and die dimensions making up the dimensional chain. The precision with which the punch and die parts are machined determines the clearance between them. For the punch-die assembly, seven design variables are taken into account for optimal machining tolerance allocation.

The design tolerance parameter for the punch is tp and for the die is td. The machining tolerance parameters for the punch are t1, t2, and t3, and for the die, t4 and t5.

The designer establishes the tolerance on the resultant dimension and the machining limits, which are determined by the overall punch and die diameter design.

(i) The design tolerances must not exceed the clearance tolerance, which is specified by,

tp + td ≤ 0.005

(ii) The design tolerance for a particular component feature is equal to the final machining limit for that feature and is expressed as,

tp = t3 for the punch and td = t5 for the die

(iii) The constraints on the machining tolerances for the punch are, t1 + t2 ≤ 0.04, t2 + t3 ≤ 0.007

(iv) The constraint on the machining tolerances for the die is, t4 + t5 ≤ 0.008
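The decision variables and constraints above can be collected into a single feasibility check; this is a minimal sketch, and the sample tolerance vector passed in at the end is illustrative only.

```python
def punch_die_feasible(t1, t2, t3, t4, t5):
    """Check punch-and-die machining tolerances against the constraints of
    Sect. 3.4; the design tolerances follow from the final operations:
    tp = t3 (punch) and td = t5 (die)."""
    in_range = (0.0075 <= t1 < 0.045 and 0.003 <= t2 < 0.0075
                and 0.002 <= t3 < 0.0045 and 0.001 <= t4 <= 0.0075
                and 0.002 <= t5 <= 0.045)
    tp, td = t3, t5                                  # constraint (ii)
    clearance_ok = tp + td <= 0.005                  # constraint (i)
    punch_ok = t1 + t2 <= 0.04 and t2 + t3 <= 0.007  # constraint (iii)
    die_ok = t4 + t5 <= 0.008                        # constraint (iv)
    return in_range and clearance_ok and punch_ok and die_ok

print(punch_die_feasible(0.02, 0.004, 0.0025, 0.003, 0.0025))
```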

4 Material combination

The material combination used by the manufacturer for the punch is given by (percentage):

figure a

4.1 Objective function

The major goal of the problem is to decrease the overall cost of machining the upper/lower punch and die while satisfying the product's functional requirements. The total cost is the sum of the machining and quality-loss costs.

To successfully machine a component, three machining processes are used for the punch and two for the die. A combined reciprocal-powers and exponential model is used to establish the relationship between tolerance and cost for each machining operation. In earlier studies, only the machining cost was evaluated to obtain the best tolerance allocation. Scrap or rework costs, however, are incurred when manufactured components fail to meet specifications, so the total cost must include machining as well as rework/scrap charges. The cost of rework/scrap is determined by a quality loss function, which states that the greater the deviation from the nominal, the greater the quality loss sustained by the customer.

4.2 Development of tolerance allocation model

The machining cost M(t) is added to the quality loss cost (QLC) to create the tolerance allocation model. Using combined reciprocal powers and an exponential equation, a nonlinear, constrained, multi-objective tolerance allocation model was built. The machining cost is calculated using the combined reciprocal-powers and exponential model, whose parameters were found using SPSS version 8. According to Wu et al. [13], the combined reciprocal-powers and exponential model has smaller modelling errors when compared against empirical production data. The unequal curvature of the empirical production data necessitates two distinct basic functions to produce a good fit in both the flat and the rapidly ascending regions: the exponential function accounts for the flat region, whereas the reciprocal-power function captures the fast-climbing region. The Taguchi loss function is used to calculate the quality loss; it defines the rework or scrap cost, stating that the greater the deviation from the nominal, the greater the quality loss suffered by the customer. Each component's tolerance design can then be optimized for the least machining cost and quality loss. The multi-objective tolerance allocation model, subject to the manufacturing constraints, is

$$ {\text{Min }}\left\{ {{\text{M}}\left( {\text{t}} \right) \, + {\text{ QLC}}} \right\} $$
(3)

The machining cost for the upper punch and die assembly is:

$$ \min \left\{ \left( P_{1}^{\mathrm{turning}} + P_{2}^{\mathrm{profile\ grinding}} + P_{3}^{\mathrm{polishing}} + P_{4}^{\mathrm{wire\ cutting}} + P_{5}^{\mathrm{polishing}} \right) + \mathrm{QLC} \right\} $$
(4)

where

Machining cost of turning for punch is:

(5)

where A is the consumer quality loss, ti the component tolerance, T the one-sided tolerance stack-up limit, Ai, Bi, Ci, Di, Ei the model parameters (i = 1, 2, ...) shown in Tables 2 and 3, and l the number of component tolerances.
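The total-cost objective of Eq. (4) can be sketched as follows. Since Eq. (5) and the parameters of Tables 2–3 are not reproduced here, the functional arrangement of the combined reciprocal-powers-and-exponential model and all numerical values below are assumptions for illustration only.

```python
import math

def machining_cost(t, A, B, C, D, E):
    # Combined reciprocal-powers-and-exponential cost-tolerance model.
    # The exact arrangement used in the paper is not reproduced, so this
    # form is an assumption: M(t) = A + B / t**C + D * exp(-E * t)
    return A + B / t**C + D * math.exp(-E * t)

def quality_loss(tols, A_loss=15.0, T=0.005):
    # Taguchi-style quadratic loss: A_loss is the customer loss at the
    # one-sided stack-up limit T (both values hypothetical here).
    return A_loss * (sum(tols) / T) ** 2

# Hypothetical (A, B, C, D, E) parameters for the five operations
# (turning, profile grinding, polishing, wire cutting, polishing).
params = [(1.0, 0.02, 0.5, 2.0, 100.0)] * 5
tols = [0.02, 0.004, 0.0025, 0.003, 0.0025]

total = sum(machining_cost(t, *p) for t, p in zip(tols, params)) + quality_loss(tols)
print(round(total, 3))
```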

Table 2 Tolerance parameter of the three machining processes for punch
Table 3 Tolerance parameter of the two machining processes for die

5 Optimization process

5.1 Particle swarm optimization method

Particle Swarm Optimization (PSO) is a stochastic optimization approach created by Eberhart and Kennedy [14], motivated by the social behaviour of bird flocking and fish schooling. The particle swarm model was invented as a simple simulation of a social system: the initial intention was to produce a graphical rendition of a flock of birds or a school of fish. The particle swarm model, however, turned out to be an excellent optimizer.

PSO is analogous to the flocking behaviour of birds. Consider the following example: a swarm of birds is searching for food in an unknown area containing just one piece of food. The birds do not know where the food is, but with each iteration they know how far away it is. So how do they find the food? The most successful strategy is to follow the bird that is closest to the food. PSO applies this scenario to optimization problems.

In PSO, each solution is a "bird" in the search space, referred to as a "particle." All particles have fitness values, evaluated by the fitness function to be optimized, and velocities, which direct their flight. The particles traverse the problem space by following the current optimal particles. PSO begins with a collection of random particles (solutions) and then searches for an optimal solution through successive generations. Each particle tracks the best position it has visited so far, the "personal best" or "pbest." A further "best" value tracked by the particle swarm optimizer is the best value achieved so far by any particle in the population; this is referred to as the "global best," abbreviated "gbest."

$$ v[\,] = \omega \times v[\,] + C_{1} \times \mathrm{rand}(\,) \times \left( \mathrm{pbest}[\,] - \mathrm{present}[\,] \right) + C_{2} \times \mathrm{rand}(\,) \times \left( \mathrm{gbest}[\,] - \mathrm{present}[\,] \right) $$
(6)
$$ \mathrm{present}[\,] = \mathrm{present}[\,] + v[\,] $$
(7)

where v[] denotes the particle velocity, present[] the current particle (solution), pbest[] the best solution found so far by the particle, and gbest[] the global best defined above. rand() is a random number between 0 and 1, ω is the inertia weight, commonly 0.8 or 0.9, and C1, C2 are learning factors, generally C1 = C2 = 2.
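The velocity and position updates of Eqs. (6) and (7) can be sketched as a single step; the sample vectors at the end are illustrative only.

```python
import random

def pso_step(present, velocity, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One PSO update: Eq. (6) for the velocity, Eq. (7) for the position."""
    new_v = [w * v
             + c1 * random.random() * (pb - x)
             + c2 * random.random() * (gb - x)
             for x, v, pb, gb in zip(present, velocity, pbest, gbest)]
    new_x = [x + v for x, v in zip(present, new_v)]
    return new_x, new_v

x, v = [0.01, 0.005], [0.0, 0.0]
x, v = pso_step(x, v, pbest=[0.008, 0.004], gbest=[0.007, 0.003])
print(x)
```

Note that when a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, which is consistent with Eqs. (6)–(7).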

5.2 Particle swarm optimization algorithm

The following is the process used in most evolutionary techniques:

i. Create a random population and compute each individual's fitness value, which is proportional to its distance from the optimum. ii. Reproduce the population based on fitness. iii. If the stopping requirements have been met, halt; otherwise, go back to (ii).

From this procedure we may deduce that PSO and GA have a lot in common. Both start with a randomly created population that is evaluated using fitness values; both refresh the population and search for the optimum with random techniques; and neither guarantees success.

PSO, on the other hand, lacks genetic operators such as crossover and mutation. Particles update themselves through their internal velocity, and they also have a memory, which is important to the algorithm.

5.3 Implementation of particles swarm optimization

i. Form the initial population at random and develop a fitness score for each individual, proportional to its distance from the optimum. ii. Reproduce the population based on fitness levels. iii. Stop once the requirements have been met; otherwise, go back to (ii). For the piston-cylinder assembly, the best preceding point, that is, the position corresponding to the best function value of the ith particle, is stored as pbest (pi) = (pi1, pi2, ..., pi8); for the punch-and-die assembly, pbest (pi) = (pi1, pi2, ..., pi5). The position change (velocity) of the ith particle is vi = (vi1, vi2, ..., vi8) for the piston-cylinder assembly and vi = (vi1, vi2, ..., vi5) for the punch-and-die assembly. Equations 6 and 7 are used to update the particles, where i = 1, 2, ..., N and N is the population size. At each iteration, Eq. 6 is used to compute the ith particle's new velocity, while Eq. 7 applies the new velocity to its present position. Each particle's performance is evaluated using a fitness or objective function [15,16,17,18,19,20,21,22,23].

5.4 Parameters used for piston-cylinder assembly

  • The number of population/particles (N) = 100 to 500

  • The quantity of iterations = 50 to 500

  • Dimension of particles = 8

  • Learning factors

  • C1= 2

  • C2 = 2 and Inertia weight factor (ω) = 0.9
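The full procedure of Sects. 5.2–5.4 can be sketched as a minimal PSO loop. The bounds are the piston/bore machining limits from Sect. 3.1, but the objective here is a toy sphere function standing in for the constrained machining cost of Eq. (1); the helper name `pso` is ours, not the paper's.

```python
import random

def pso(objective, bounds, n_particles=100, iters=50, w=0.9, c1=2.0, c2=2.0):
    """Minimal PSO loop with the Sect. 5.4 settings (w = 0.9, C1 = C2 = 2)."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # best position per particle
    pbest_f = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Eq. (6): velocity update; Eq. (7): position update (clamped).
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = objective(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

# Toy demonstration over the eight piston/bore tolerance ranges.
bounds = [(0.005, 0.02), (0.002, 0.012), (0.0005, 0.003), (0.0002, 0.001),
          (0.007, 0.02), (0.003, 0.012), (0.0006, 0.005), (0.0003, 0.002)]
best, cost = pso(lambda t: sum(x * x for x in t), bounds)
print(len(best), cost)
```

For the actual problem, the lambda would be replaced by the penalized machining-cost objective so that infeasible particles are driven back into the constraint region.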

5.5 Parameters used for punch and die assembly

  • The number of population/particles (N) = 100

  • The number of iterations = 100

  • Dimension of particles = 5

  • Learning factors

  • C1= 2

  • C2 = 2

  • Inertia weight factor (ω) = 0.9

5.6 NSGA-II: elitist non-dominated sorting genetic algorithm

The NSGA-II procedure was created by Kalyanmoy Deb [24]. NSGA-II differs from the original Non-dominated Sorting Genetic Algorithm (NSGA) implementation in several respects:

The elite-preserving approach used by NSGA-II ensures that previously identified good solutions are preserved. Its non-dominated sorting method is fast, and the algorithm does not need several user-adjustable parameters, making it user-independent. It begins with a random parent population, which is sorted by non-domination; a novel bookkeeping mechanism lowers the computational complexity of this sort to O(N²). A fitness score is assigned to each solution based on its level of non-dominance (1 is the best level), so fitness is to be minimized. Binary tournament selection, recombination, and mutation are used to produce a child population Qo of size N. In each subsequent generation, the following approach is used. First, a combined population Ri = Pi ∪ Qi is formed; this encourages elitism by allowing parent solutions to compete with the entire child population. Ri contains 2N individuals. The population Ri is then sorted by non-domination, and the new parent population Pi+1 is generated by adding solutions front by front, starting from the first front, until the population size reaches or exceeds N. The solutions of the last acceptable front are sorted using a crowded-comparison criterion, and the first N points are chosen. Since we need a wide variety of solutions, we utilize the partial order relation ≥n, as illustrated below.

$$ i \geq_{n} j \quad \text{if } \left( i_{\mathrm{rank}} < j_{\mathrm{rank}} \right) \text{ or } \left( \left( i_{\mathrm{rank}} = j_{\mathrm{rank}} \right) \text{ and } \left( i_{\mathrm{distance}} > j_{\mathrm{distance}} \right) \right) $$
(8)

In other words, between two solutions with different non-domination ranks, we choose the point with the lower rank. If both points are on the same front, we choose the one in the less densely populated region (i.e., with the larger crowding distance); when deciding which solutions to keep from Ri, less dense portions of the search space are thus given greater weight. This creates the population Pi+1, which, being of size N, is then used for selection, crossover, and mutation to produce a new population Qi+1 of size N. Binary tournament selection is still employed, but with the crowded-comparison operator ≥n as the criterion. The technique outlined above is repeated for a given number of generations [20,21,22,23].
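The crowded-comparison rule of Eq. (8) can be sketched directly; representing each solution as a dict with `rank` and `distance` keys is a simplified stand-in for the NSGA-II bookkeeping.

```python
def crowded_compare(i, j):
    """Crowded-comparison operator of Eq. (8): prefer the lower
    non-domination rank; break ties by the larger crowding distance."""
    if i["rank"] != j["rank"]:
        return i if i["rank"] < j["rank"] else j
    return i if i["distance"] > j["distance"] else j

a = {"rank": 1, "distance": 0.2}
b = {"rank": 2, "distance": 0.9}
c = {"rank": 1, "distance": 0.5}
print(crowded_compare(a, b) is a, crowded_compare(a, c) is c)  # → True True
```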

As can be seen from the preceding explanation, NSGA-II employs (i) a faster non-dominated sorting methodology, (ii) an elitist approach, and (iii) no niching parameter. The use of the crowded-comparison principle in tournament selection and population reduction promotes diversity in the results. On a variety of challenging test problems, NSGA-II has been demonstrated to outperform other existing elitist multi-objective EAs. Figure 3 depicts the proposed NSGA-II procedure for finding the optimum solution.

Fig. 3
figure 3

A flow chart depicting the optimization process utilizing the proposed NSGA-II approach
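The fast non-dominated sorting step described above can be sketched as follows for a minimization problem; the four sample objective points at the end are illustrative only.

```python
def fast_non_dominated_sort(points):
    """Fast non-dominated sorting as used by NSGA-II (minimization):
    returns fronts as lists of indices, front 0 being non-dominated."""
    n = len(points)
    dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and p != q
    S = [[] for _ in range(n)]   # indices each solution i dominates
    counts = [0] * n             # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                S[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 5), (2, 2), (4, 1), (3, 3)]
print(fast_non_dominated_sort(pts))  # → [[0, 1, 2], [3]]
```

Solution 3 at (3, 3) is dominated by solution 1 at (2, 2), so it falls to the second front, while the other three are mutually non-dominated.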

6 Results and discussion

6.1 Piston cylinder assembly

The PSO algorithm was run with particle counts of 100 to 500, iteration counts of 50 to 500, an inertia weight factor of 0.9, and learning factors C1 = C2 = 2. The results of PSO for various combinations of particles and iterations were tested and are shown in Table 4. Per Al-Ansary et al. [12], the GA procedure requires 160-bit binary numbers and a total of 10,000 evaluations (100 samples and 100 generations), for which the machining cost obtained is $66.91; with the same total of 10,000 evaluations (50 iterations and 200 particles), PSO obtains a cost of $65.33 (Table 4).

Table 4 Total machining cost for different iterations and particles size by using PSO

In addition, the GA technique needs three operators (reproduction, crossover, and mutation), whereas the PSO procedure necessitates only one (velocity updating). Similarly, the NSGA-II algorithm was run with a population size of 100, a crossover probability of 0.7, a mutation probability of 0.2155, a mutation parameter of 10, and 100 generations. The NSGA-II results for various random seed values were tested, and the optimal value is shown in Table 5. NSGA-II surpasses the GA algorithm because it uses a faster non-dominated sorting technique, is based on an elitist strategy, and does not need any niching parameter.

Table 5 Optimum machining tolerance (mm) and total machining cost ($) for PSO compared with other solutions

From Table 4 it can be observed that the machining cost obtained by PSO is lower than that of the complex method, SA, GA, and even NSGA-II, whose best result is $66.756866. The best result obtained by PSO, given in Table 4, is $64.872 (400 iterations and 400 particles). The optimal machining tolerance values of the piston-cylinder bore assembly for all techniques are given in Table 5; the optimum machining tolerances are within the specified limits and also satisfy the constraints. Figures 4 and 5 show the solution histories of the PSO and NSGA-II techniques, and PSO is observed to converge earlier than NSGA-II. Although data on computational time are not available, it is likely lower for PSO than for GA and NSGA-II.

Fig. 4
figure 4

Solution history for PSO technique

Fig. 5
figure 5

Solution history for NSGA II technique

6.2 Punch and die assembly

The results of PSO and NSGA-II are tabulated for analysis. The machining cost obtained with PSO and NSGA-II is Rs. 371.15 and Rs. 367.874115, respectively. Tables 6 and 7 show the tolerances achieved with PSO and NSGA-II, respectively, along with the findings for the remaining two cases (A = 10 and 20). In all three scenarios, NSGA-II outperforms PSO and produces the lowest machining cost with the best machining tolerances. Figures 6 and 7 show the solution histories for PSO and NSGA-II, respectively.

Table 6 Optimum tolerance with total cost using PSO
Table 7 Optimum tolerance with total cost using NSGA II
Fig. 6
figure 6

Solution history for PSO technique

Fig. 7
figure 7

Solution history for NSGA II technique (punch and die assembly)

7 Conclusions

The assignment of tolerances is the assembly's most crucial and difficult task, as the component's functionality is determined by the tolerance design. Tighter tolerances yield higher product quality but higher manufacturing cost, whereas wider tolerances lower production cost but reduce product quality. The allocation of tolerances was optimized to improve product performance while reducing machining cost. As explained above, NSGA-II uses (i) a faster non-dominated sorting method, (ii) an elitist approach, and (iii) no niching parameter. Using a crowded-comparison criterion in tournament selection and population reduction promotes diversity in the results, and NSGA-II has been shown to perform better than other elitist multi-objective EAs on a variety of challenging test problems. Figure 3 shows the proposed NSGA-II procedure for identifying the best solution. The suggested approach, which makes use of PSO and NSGA-II, significantly reduces computing time and machining expense.