Developing a Generalized Regression Forecasting Network for the Prediction of Human Body Dimensions

Abstract: With the increasing demand for intelligent custom clothing, the development of highly accurate human body dimension prediction tools using artificial neural network technology has become essential to ensuring high-quality, fashionable, and personalized clothing. Although support vector regression (SVR) networks have demonstrated state-of-the-art (SOTA) performance, they still fall short on prediction accuracy and computational efficiency. We propose a novel generalized regression forecasting network (GRFN) that incorporates kernel ridge regression (KRR) within a multi-strategy multi-subswarm particle swarm optimizer (MMPSO)-SVR nonlinear regression model and applies a residual correction prediction mechanism to enhance prediction accuracy for body dimensions. Importantly, the predictions are generated using only a few basic body size parameters from small-batch samples. The KRR model is employed for preliminary residual sequence prediction, and the MMPSO component optimizes the SVR parameters to ensure superior correction of nonlinear relations and noisy data, thereby yielding more accurate residual correction predictions. The GRFN hybrid model is superior to SOTA SVR models, improving root mean square performance by 91.73-97.12% with a remarkably low mean square error of 0.0054 ± 0.07. This advancement sets the stage for marketable intelligent apparel design tools for the fast fashion industry.


Introduction
With the many recent improvements in global living standards, human clothing requirements now prioritize diversity and personalization with small-batch, made-to-measure fashion solutions. As such, body dimension and posture measurement prediction supported by artificial neural networks (ANNs) has become a crucial human-machine engineering field. From these tools, new computer-aided ergonomic garment design tools can be paired with three-dimensional (3D) non-contact body scanning and virtual fitting techniques that avoid the use of various calipers, tape measures, and Martin-style anthropometric measuring tools [1]. These anthropometric methods utilize stature percentiles for body segments, which can lead to large errors in practice. Hence, the fashion industry demands more cost-effective, accurate, and efficient non-percentile anthropometric methods that cater to clothing and human-centered product designs [2].
In recent years, research on predicting human body dimensions and shapes using artificial intelligence technologies has been widely conducted, mainly involving prediction methods such as multiple linear regression (MLR) [3-5], back-propagation (BP) ANNs [6-9], radial basis function (RBF) ANNs [2,10,11], and support vector regression (SVR) [12-14]. Su et al. [3] used an MLR model that incorporates easily measurable features, including the thickness/width parameters of cross-sections, with the objective of constructing additional lower-body characteristics. Galada and Baytar [4] used the U.S. size database to train a new lasso regression method that establishes relationships between key predictor variables related to crotch length to improve the fit of bifurcated garments. Chan et al. [5] utilized 3D anthropometric data to forecast the relevant parameters in shirt pattern design by deploying an ANN with a linear regression model to reveal the relationship between patterns and body measurements. However, as the nonlinearity in the relationships between different body parts becomes more pronounced, the MLR model struggles to capture this characteristic accurately. Liu et al. [6] provided a BP ANN that uses anthropometric data to predict human body dimensions, providing robust support for custom clothing. Furthermore, Liu et al. [7] introduced digital clothing pressure from virtual try-ons as input parameters and employed a BPNN model to predict the fit of clothing as tight, well-fitted, or loose. This research expanded the potential applications of BPNN methods in the field of clothing design. However, due to the random selection of initial weights and thresholds in a BPNN, the network may become stuck in local minima, thereby diminishing its fitting performance. To overcome this challenge, Cheng et al. [8] employed a genetic algorithm (GA) to implement a GA-BP-K-means model to cluster data and enhance body shape prediction. This method uses a genetic algorithm to optimize the initial parameters of the BPNN. Similarly, Cheng et al. [9] addressed the issue of predicting underwear pressure using an improved GA combined with a BPNN model. The improved GA accelerated the BP neural network's convergence and enhanced underwear pressure prediction accuracy. In addition to BPNNs, other ANN models, such as the radial basis function neural network (RBF-NN), are also often used to estimate body sizes. For example, Liu et al. [10] trained a model on clothing knowledge to extract key human body feature parameters using factor analysis with a combined RBF-NN and linear regression method. This enabled the prediction of detailed human body dimensions from a small number of feature parameters. Wang et al. [11] employed an RBF-NN to estimate complex parameters to improve the comfort and adaptability needed for sportswear. The generalized regression neural network of Wang et al. [2] utilizes a particular RBF-NN and demonstrated a high degree of accuracy in predicting 76 specific human body parameters without the need for manual measurements or 3D body scanning. This model has shown remarkable predictive capabilities, in line with the growing trend of predicting human body dimensions rather than relying on direct measurements.
Support vector regression networks comprise SOTA ANN models, but they still fall short in prediction accuracy and computational efficiency. Nevertheless, SVR models are the most suitable for nonlinear regression problems in which only small-batch samples are available, and they are robust to outliers. Li and Jing [12] used an SVR network to construct a regression model relating two-dimensional width and depth features to the corresponding circumference sizes of three important measurement parts (i.e., bust, waist, and hip) of young female samples. However, this method relied on cross-sectional data of the human body and cannot avoid the need for 3D scanning. Rativa et al. [13] demonstrated the possibility of superior performance over traditional linear regression methods by employing an SVR model with a Gaussian kernel for the estimation of height and weight using anthropometric data alone. While the results of this method were said to be insensitive to race and gender, in practical application there may be other unaccounted factors that challenge the model's robustness. Li et al. [14] introduced a data-driven model based on particle swarm optimization (PSO) to optimize the least-squares support vector machine (LSSVM) algorithm. This model was applied to solve the problems of garment style recognition and size prediction for pattern making, using tailoring experience as the training basis to improve prediction accuracy. However, the ambiguity of the relationships between the variables made the aforementioned single direct prediction models unsuitable for the fashion industry, given the diversity of body shapes and the need for precise body dimensions. Although the prediction results exhibit a certain level of accuracy, there is ample room for improvement. Furthermore, anthropometry has found extensive applications in the field of intelligent clothing design. For instance, Wang et al. [15] employed fuzzy logic and genetic algorithms to generate initial clothing patterns. They utilized SVR to learn the quantitative relationships between clothing structural lines, control points, and pattern parameters. These relationships were then used to predict and adjust the pattern parameters, achieving pattern adaptability. Liu et al. [16] proposed a machine learning framework that combines hybrid feature selection and a Bayesian search to estimate missing 3D body measurements, addressing the challenge of incomplete data in 3D body scanning. The study found that this approach leverages hybrid feature selection and the Bayesian search to enhance the performance of random forest (RF) and XGBoost models, particularly in filling in missing data, where RF outperforms XGBoost. Wang et al. [17] introduced an approach that utilizes multiple machine learning frameworks, including an RBF-NN, a GA, a probabilistic neural network (PNN), and SVR, for interactive personalized clothing design. This method enhanced the capability of personalized clothing design by estimating body dimensions, generating customized design solutions, quantifying consumer preferences, predicting clothing fit, and self-adjusting design parameters.
The global optimization of parameters can improve prediction accuracy and generalizability, and modern meta-heuristic swarm intelligence optimization algorithms can simulate individual and group behaviors suitable for complex optimization problems. They also converge rapidly with few parameters and are easily implemented [18]. However, multimodal swarm intelligence algorithms cannot overcome optimization problems caused by imbalances between global exploration and local exploitation that deprive late iterations of the required data diversity [19]. The literature provides a wealth of related PSO improvements via parameter weight adjustments (e.g., linear decay [20], chaotic dynamism [21], and S-shaped decay [22]), population topologies (e.g., static neighborhoods [23], dynamic neighborhoods [24], and hierarchical-structure particle subswarms [25]), and evolutionary learning strategies (e.g., the comprehensive learning strategy [26], the generalized opposition-based learning strategy [27], the orthogonal learning strategy [28], and the dimensional learning strategy [29]). As stated by the no-free-lunch theorem [30], no single method can be universally regarded as superior to all others for every task. However, owing to the discrepancies between global exploration and local exploitation, multi-swarm techniques can be used to maintain the population diversity needed to facilitate information flows within subpopulations. Hence, heterogeneous multi-subpopulation techniques have become effective in enhancing PSO algorithms [31].
In this study, we propose a novel generalized regression forecasting network (GRFN) that combines kernel ridge regression (KRR) prediction with a new multi-strategy, multi-subswarm PSO (MMPSO)-SVR model to considerably reduce large errors in human body dimension prediction. The resulting highly accurate, small-batch, data-driven human body dimension prediction scheme requires neither stature-percentile anthropometric measurements nor 3D body reconstruction. Our hybrid model needs only a few basic body size parameters to obtain detailed body dimensions. For our experiment, we used the processes of Liu et al. [6] to collect lower-body data from 106 women and applied principal factor analysis so that the KRR-based regression model could establish a multivariate nonlinear correlation map of the parameters. This generates preliminary prediction results as a residual sequence that can be used to fit the data linearly. To deal with nonlinear and noisy data, our model takes the residual of the predicted KRR output as input. A residual correction prediction mechanism is employed to improve the fit and predictive performance of our hybrid model, addressing the following aspects: improving prediction accuracy, accounting for unmodeled factors and making adjustments, correcting model bias, and enhancing model robustness. For further optimization, we apply teaching and co-optimization to the MMPSO algorithm, which divides the population into three role-based subgroups: teachers, students, and independent learners. The ability of the MMPSO model to search for the global optimum primarily depends on the diversity of the population. The SVR then balances exploration and exploitation using a diversity-focused multi-strategy search to avoid local optima. Finally, the estimated values of the predicted body parameters are obtained by combining the KRR results with the corrected MMPSO-SVR residuals. Thus, our hybrid model provides verifiable human body dimension predictions that can work with small-batch samples.
In summary, this study makes the following contributions: (1) Our GRFN utilizes a novel KRR-based MMPSO-SVR nonlinear regression network to achieve residual correction prediction. The proposed hybrid model applies a direct approach that utilizes a few basic body measurements to predict other, more detailed human body dimensions from small-batch samples. The results clearly validate the proposed model, as it outperforms SOTA SVR models in terms of both prediction accuracy and reliability. (2) Our MMPSO model adopts a teaching-learning co-optimization scheme in which a teacher subgroup performs enhanced self-learning searches and a student subgroup constructs learning exemplars under the guidance of the teacher subgroup via subswarms. The independent subgroup performs self-perceptive search behaviors, and the MMPSO algorithm enhances population diversity while preventing subswarms from becoming trapped in local optima. Competitive results are achieved in optimization convergence and stability on most classical benchmarks. (3) Our GRFN utilizes just a few basic body size parameters as input, obtains detailed human body dimensions, and establishes correlations between key body size parameters and clothing pattern sample sizes. This model eliminates the need for anthropometric measurements and 3D body reconstruction, making it more accurate and more easily implemented than existing regression models. Consequently, it offers a novel and efficient solution for clothing and human-centered product design.
The rest of this paper is arranged as follows: In Section 2, the research methods are elaborated. Section 3 further describes the GRFN model design alongside the MMPSO algorithm. The performance of the proposed hybrid KRR-based MMPSO-SVR model is reported in Section 4, and a discussion and conclusions are presented in Section 5, along with directions for future research.

KRR
The KRR model employs a kernel function to nonlinearly transform the original sample data and map it into a high-dimensional space [32,33]. By weighting the regularization term with the L2 norm, the model effectively prevents overfitting and enhances its generalizability. For the training set {(x_i, y_i)}_{i=1}^{n}, x_i and y_i are the i-th observation and response variables, respectively. The number of observations is denoted as n, and the number of independent variables as p. The loss function is expressed as follows:

L(f) = Σ_{i=1}^{n} (y_i − f(x_i))² + λ‖f‖₂²,

where ‖·‖₂ is the L2 norm in the function space, λ is the regularization parameter, and κ(·,·) is the kernel function. Given an n×n kernel matrix, K_N, the KRR estimate is given as follows:

ŷ(x) = κ(x)ᵀ (K_N + λI)⁻¹ y,

where x represents the provided sample, and ŷ stands for the predicted value. We use cross-validation to calculate the minimum value of the Bayesian information criterion (BIC) on the training set to select the best model λ value.
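For concreteness, the closed-form KRR estimate can be computed directly in a few lines. The following is a minimal sketch; the function names, the Gaussian kernel choice, and the hyperparameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit_predict(X_train, y_train, X_test, lam=0.1, gamma=1.0):
    # Closed-form KRR estimate: y_hat(x) = k(x)^T (K_N + lam * I)^(-1) y
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha
```

With a small λ the model nearly interpolates the training data; larger λ trades fit for smoothness, which is the overfitting control described above.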

PSO
Particles can be divided into whole-swarm or nearest-neighbor types, corresponding to the global and local versions of the algorithm (i.e., GPSO and LPSO, respectively). In the GPSO [20] algorithm, each particle i has a position x_i = (x_{i1}, …, x_{iD}) and a velocity v_i = (v_{i1}, …, v_{iD}). In this context, variable D denotes the number of dimensions, d = 1, 2, …, D, and the search zone consists of a population of size N. The formulas for updating velocity and position are implemented as follows:

v_{id}(t+1) = w·v_{id}(t) + c₁r₁(pb_{id}(t) − x_{id}(t)) + c₂r₂(gb_d(t) − x_{id}(t)),
x_{id}(t+1) = x_{id}(t) + v_{id}(t+1),

where v_{id} represents the velocity of the particle at the i-th index in the d-th dimension, and t and t+1 represent the iteration numbers of the previous and current rounds, respectively. Symbol w denotes the inertia weight associated with the memory part of the particle, c₁ represents the cognitive acceleration factor, c₂ corresponds to the social acceleration factor, r₁ and r₂ are random numbers drawn uniformly from [0, 1], pb_{id} is the personal historical best position, and gb_d is the global best position of the swarm.
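The velocity and position updates above can be sketched as a minimal global-best PSO loop. This is an illustrative sketch of plain GPSO, not the paper's MMPSO; the parameter values and the `gpso` helper name are our assumptions:

```python
import numpy as np

def gpso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
         lb=-5.0, ub=5.0, seed=0):
    # Minimal global-best PSO implementing the velocity/position updates above.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))     # positions
    v = np.zeros((n_particles, dim))                # velocities
    pb, pb_f = x.copy(), np.array([fitness(p) for p in x])
    gb = pb[pb_f.argmin()].copy()                   # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([fitness(p) for p in x])
        better = f < pb_f                           # update personal bests
        pb[better], pb_f[better] = x[better], f[better]
        gb = pb[pb_f.argmin()].copy()
    return gb, float(pb_f.min())
```

On a simple sphere function, `gpso(lambda p: float((p ** 2).sum()), dim=3)` converges quickly toward the origin.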

SVR
An SVM uses few-shot learning to support structural risk minimization and to solve classification and regression problems [34]. SVMs can be divided into classification and regression variants, where SVR is widely used to handle few-shot and nonlinear regression tasks. The sample x is mapped by φ(x) within the decision function

f(x) = wᵀφ(x) + b, (8)

and the empirical risk is calculated using the ε-insensitive loss function. A regularization term is used to enhance the optimization of structural risk and achieve comprehensive regression. When sample data fall within the ε-insensitive interval band, their loss is zero. Variables ξ_i and ξ_i* are slack variables in this context: ξ_i represents the training error exceeding the threshold +ε, whereas ξ_i* represents a training error below the threshold −ε. Finding the hyper-plane parameters w and b is a convex quadratic programming problem:

min_{w,b,ξ,ξ*} (1/2)‖w‖² + C·Σ_{i=1}^{m}(ξ_i + ξ_i*), (9)
s.t. y_i − wᵀφ(x_i) − b ≤ ε + ξ_i, wᵀφ(x_i) + b − y_i ≤ ε + ξ_i*, ξ_i, ξ_i* ≥ 0, i = 1, …, m, (10)

where ε denotes the maximum deviation, and C is the trade-off factor between model complexity and error. By incorporating Lagrange multipliers α_i and α_i*, the optimization problem of Equations (9) and (10) is transformed into a dual optimization problem, and an optimal hyper-plane regression decision function is obtained using the KKT conditions. Here, κ(x_i, x_j) = exp(−g‖x_i − x_j‖²) is the Gaussian kernel function, and g stands for the kernel coefficient.
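As a hedged illustration, scikit-learn's `SVR` exposes the C, g (as `gamma`), and ε parameters discussed above; the toy data and hyperparameter values below are ours, not from the paper:

```python
import numpy as np
from sklearn.svm import SVR

# epsilon-SVR with a Gaussian (RBF) kernel: C trades off flatness vs. errors,
# gamma is the kernel coefficient g, and epsilon is the width of the
# insensitive tube inside which errors incur zero loss.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (80, 1))
y = np.sinc(X[:, 0]) + 0.01 * rng.standard_normal(80)

model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01).fit(X, y)
pred = model.predict(X)
```

Only the samples lying outside the ε-tube become support vectors, which is why SVR remains compact and robust on small-batch data.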

General Scheme
The architecture of the generalized regression forecasting network (GRFN) is visualized in Figure 1. To improve accuracy and efficiency, we establish a nonlinear correlation between easily measured body parameters and other detailed dimensions that are normally difficult to integrate into customized clothing design. First, we extract the principal factors by preprocessing Liu et al.'s human body dimension dataset, consisting of the data of 106 young women [6]. To prepare the model, the anthropometric dataset is preprocessed and normalized for input to the KRR regression model. An appropriate λ value is selected by 10-fold cross-validation, a multivariate regression is constructed, and the residual sequence between the predicted and ground-truth data is calculated. These residual and measured sequences are input to the SVR model, where rough C and g hyperparameters are obtained using the grid search method, and the best pair (C_best, g_best) is obtained using the MMPSO-SVR model. The KRR-based regression estimates are combined with the predicted residual correction values to obtain the dimensional parameters of the human body. Finally, detailed body dimensions are obtained from a few basic body size parameters input to our GRFN model. An interactive intelligent pattern-making system is employed to generate 2D patterns, and garments are simulated on a virtual 3D avatar for clothing display.

Hybrid Prediction Process
The hybrid prediction process is illustrated in Figure 2. The details are elaborated in the following subsections.
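The overall residual-correction flow can be sketched end-to-end as follows. This is a simplified stand-in: synthetic data replace the anthropometric dataset, and a plain cross-validated grid search replaces the MMPSO optimizer for brevity:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (100, 3))                   # stand-ins for basic body measures
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) * X[:, 2]   # stand-in detailed dimension
X_tr, X_te, y_tr, y_te = X[:80], X[80:], y[:80], y[80:]

# Stage 1: preliminary KRR prediction.
krr = KernelRidge(kernel="rbf", alpha=0.01, gamma=1.0).fit(X_tr, y_tr)
resid = y_tr - krr.predict(X_tr)                  # residual sequence e_i = y_i - y_hat_i

# Stage 2: SVR residual model; (C, g) tuned by grid search standing in for MMPSO.
grid = {"C": [1, 10, 100], "gamma": [0.1, 1.0, 10.0]}
svr = GridSearchCV(SVR(kernel="rbf", epsilon=0.001), grid, cv=5).fit(X_tr, resid)

# Stage 3: combine the preliminary prediction with the residual correction.
y_hat = krr.predict(X_te) + svr.predict(X_te)
```

The key design choice is that the second model learns only what the first model got wrong, so even a modest SVR can remove systematic bias from the KRR stage.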

Step 1: Data Preprocessing
The data are subjected to min-max normalization, resulting in a range of [0, 1]:

x_i' = (x_i − x_min) / (x_max − x_min),

where x_i denotes the original data, and x_min and x_max are the minimum and maximum values of the sequence, respectively. Then, the length and perimeter factors with large contribution rates (i.e., height, waist circumference, hip circumference, waist circumference height, and hip circumference height) are used as key feature inputs and randomly assigned to the training and testing sets.
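The normalization step is a one-liner; this sketch assumes a plain one-dimensional measurement sequence:

```python
import numpy as np

def min_max_normalize(x):
    # x' = (x - x_min) / (x_max - x_min), mapping the sequence onto [0, 1].
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

For example, heights of 160, 170, and 180 cm map to 0.0, 0.5, and 1.0.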

Step 2: KRR Regression Modeling
The minimum BIC value is calculated from the training data to select the best λ value [35]. The ridge regression model is then used to obtain the predicted value ŷ_i of the training sequence data, and the residual value is computed as

e_i = y_i − ŷ_i,

where the i-th residual value is represented by e_i, and y_i corresponds to the measured value. The residual sequence data pairs {(y_i, e_i)} are then constructed as input to the MMPSO-SVR residual prediction model.

Step 3: Calculation of the SVR Kernel Function
The parameters of the SVR kernel function are estimated to obtain approximate values. The fitness function is then defined as follows:

fit = (1/n)·Σ_{i=1}^{n} (e_i − ê_i)², (14)

where n is the number of samples, and ê_i is the predicted residual value after applying the grid search method to the i-th sample. The grid search method searches a large range that is enlarged and reduced twice to obtain the optimized penalty factor and RBF kernel function parameter pair (C, g).
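The enlarge-then-reduce grid search can be sketched as a coarse logarithmic pass followed by a refined pass around the best (C, g) pair. The helper name, the grid ranges, and the use of scikit-learn's `GridSearchCV` (with MSE scoring to match Equation (14)) are our illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def coarse_to_fine_search(X, e, cv=5):
    # Coarse pass over wide logarithmic ranges for (C, g), then a refined pass
    # centered on the best coarse pair; scoring matches the MSE fitness above.
    coarse = {"C": np.logspace(-2, 4, 7), "gamma": np.logspace(-4, 2, 7)}
    best = GridSearchCV(SVR(kernel="rbf"), coarse, cv=cv,
                        scoring="neg_mean_squared_error").fit(X, e).best_params_
    fine = {"C": best["C"] * np.logspace(-0.5, 0.5, 5),
            "gamma": best["gamma"] * np.logspace(-0.5, 0.5, 5)}
    return GridSearchCV(SVR(kernel="rbf"), fine, cv=cv,
                        scoring="neg_mean_squared_error").fit(X, e).best_params_
```

In the paper's pipeline, the pair found this way only seeds the search; the MMPSO optimizer then refines (C, g) further.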

Step 5: Residual Correction Combination Prediction
By combining ŷ_i and ê_i, the estimated body size parameter is obtained:

y_i* = ŷ_i + ê_i.

Thus, body dimensions for pattern making can be predicted with an accuracy sufficient for garment production.

Sine Chaotic Opposition-Based Learning Population Initialization
Initial population diversity is crucial to improving search zone coverage, optimization precision, and convergence speed. Chaotic sequences are characterized by their randomness, ergodicity, and regularity. Currently, the most common chaotic mappings [36] include the tent, logistic, and sine types. Sine chaos is a self-mapping mode characterized by infinite folding. During optimization, opposition-based learning (OBL) can quickly find an approximate reverse solution. Accordingly, an elitist greedy strategy is adopted to select the optimal population, effectively improving the quality of solutions in the search zone. Therefore, sine chaos is employed to generate an initial population with the greatest diversity. The one-dimensional sine chaotic map is expressed as follows:

z_{k+1} = sin(2 / z_k), z_k ∈ [−1, 1], z_k ≠ 0.

To avoid the generation of fixed and zero points in the closed interval [−1, 1], the initial value should not be set to zero. The sine chaotic sequence is converted into a set of D pseudo-random numbers z_{i,j} and inversely mapped into the variables x_{i,j}:

x_{i,j} = lb_j + (ub_j − lb_j)·|z_{i,j}|, (17)

where [lb_j, ub_j] is the dynamic search zone boundary along dimension j, and z_{i,j} represents the j-th dimensional component of the i-th pseudo-random number. Next, the OBL strategy [27] is utilized to generate an opposite-based population, X̄, whose individual terms x̄_{i,j} are defined as

x̄_{i,j} = δ·(lb_j + ub_j) − x_{i,j},

where the generalized coefficient, denoted as δ, is sampled from a uniform distribution. Finally, employing the elitist greedy strategy, the reversed population X̄ and the initial population after sine mapping, X, are merged into a new population, X_new = {X ∪ X̄}. From this, the N particles with the most suitable fitness scores are chosen as X. In this initialization learning environment, the heterogeneous search behavior of multi-subpopulation diversity is utilized. According to an ascending sort of the fitness value difference, D_f, the population particles are grouped into elite (i.e., teacher subgroup X_ST), ordinary (i.e., independent learner subgroup X_SI), and poor (i.e., student subgroup X_SS) categories, with a swarm ratio of r = 30% and 1 − r = 70%.
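The initialization can be sketched as follows. This is a minimal illustration under stated assumptions: the `sine_obl_init` helper name is ours, the chaotic map is iterated a fixed number of times, and a simple sphere function stands in for the real fitness when ranking the merged population:

```python
import numpy as np

def sine_obl_init(n, dim, lb, ub, seed=0):
    # Sine chaotic sequence z_{k+1} = sin(2 / z_k) on [-1, 1] (zero excluded),
    # inversely mapped into [lb, ub]; an opposition-based population is then
    # generated and the best n of the merged 2n candidates are kept
    # (elitist greedy selection).
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.1, 1.0, (n, dim))        # nonzero initial values
    for _ in range(10):                        # iterate the chaotic map
        z = np.sin(2.0 / z)
    x = lb + (ub - lb) * np.abs(z)             # inverse mapping into the bounds
    delta = rng.random((n, 1))                 # generalized OBL coefficient
    x_opp = np.clip(delta * (lb + ub) - x, lb, ub)
    merged = np.vstack([x, x_opp])
    fit = (merged ** 2).sum(axis=1)            # placeholder fitness (sphere)
    return merged[np.argsort(fit)[:n]]
```

Sorting the selected particles by fitness and slicing the array at the 30%/70% boundaries then yields the teacher, independent learner, and student subgroups.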

Teaching and Learning Co-Optimization Strategy
Inspired by the literature [31], the teaching and learning co-optimization strategy for the teacher and student subpopulations includes two stages: global and local tuning. During global tuning, the teacher subpopulation uses a ring-topological comprehensive learning strategy (CLS) to search based on self-cognition and social learning behaviors, obtaining information from neighbors to enhance search efficiency and stability and to accelerate convergence. Social learning consists of group induction guidance and ring-neighborhood particle learning. Here, the inertia weight w₁ decreases linearly from 0.9 to 0.4, the first acceleration factor c₁ decreases linearly from 2.5 to 2.0, the second acceleration factor c₂ increases linearly from 0.5 to 2.0, and the third acceleration factor is the social learning tendency coefficient. These terms are used to improve the performance of particles and enhance their self-search. To strike a balance between exploring and exploiting particles across different dimensions, the CLS defines an individual learning probability, pc, for each particle, generating a random number in [0, 1] for each dimension; a random number performs better than a fixed learning probability when solving multi-peak functions. The CLS adopts an aging strategy to regenerate the historical optimal solution, pb_{fi(d)}, of a particle's nearest neighbors when the number of stagnant neighbors exceeds a threshold of seven. During global tuning, a subgroup stagnation state-checking mechanism detects whether the teacher and student subpopulations are stagnant: the fitness deviation of the optimal solution between generations t and t − 1 is calculated for each subgroup. If the deviation shows no improvement over a fixed number of iterations, the subgroup is identified as having entered stagnation, and the local tuning operation is employed to reinvigorate the search capability and counteract stagnation.
During local tuning, particle disturbance update and refresh compensation operations are used. Inspired by the literature [38], during the particle disturbance update, a search operator guided by the historical extremum of the induction group is used to remove bias from the search and improve population diversity, where η₁ and η₂ are random numbers in the ranges [−1, 1] and [0, 1.5], respectively, k is a random index in [1, N], and k ≠ i. The new search operator guides balanced exploration and exploitation by inducing historical extrema in the group. During the particle disturbance update operation, an individual particle adopts an optimal greedy strategy: if the fitness value of the new position surpasses the initial fitness value, the position is accepted and the state counter is activated; otherwise, the particle is not updated. After many iterations, the positions of the new particles may still not meet the requirements of optimization, leaving the subgroup vulnerable to converging on a local optimum. Therefore, a refresh compensation operation is used to reverse subgroup stagnation and improve quality. Two particle historical extrema are used in an arithmetic crossover operation to generate the new position. The refresh compensation operation performs a random search and shares individual historical extrema. The advantages of individual candidate solutions are related to the selected particle vector, X_i(t), and the difference vector, pb_{k1} − pb_{k2}, which act as chemotaxis operations. During the early progress of our algorithm, the step size of the difference vector is usually larger than it will be later, which is beneficial for initially expanding the search zone; the vector then shrinks over time to improve the local development ability. Generally, the step size of the difference vector is continuously adjusted to accommodate the various stages and attain superior optimization.
To enhance the search efficiency of the student subgroup, whose particles are generally far from the optimal area initially, the students study under the guidance of the teacher subgroup. During global tuning, the student subgroup employs a search behavior based on social self-cognition, where λ_CT is the self-cognition tendency coefficient, λ_ST is the social learning tendency coefficient, and ĩ ~ U(0, 1) represents a random number drawn from a uniform distribution over the closed interval [0, 1]. The student subgroup adopts two-stage tuning to enhance its local optimization ability, with which both the teacher and student subgroups can fully exploit their advantages and jointly promote the exploitation and convergence accuracy of the overall algorithm. Simultaneously, to hasten student subgroup convergence, the students generate new learning exemplars under the guidance of the teacher, and they are sorted based on the fitness function [39]. If the fitness value of a student, y_j, surpasses that of the teacher-group particle, x_i, the velocity update of y_j is distributed and updated around its original position, with coefficients of 1.2 and 2.0, respectively.

Multi-Swarm Dynamic Regroup Strategy
The independent learner group employs a dynamic regrouping strategy for multiple subswarms to generate learning exemplars. First, the dynamic regroup learning strategy of multiple subswarms is used to maintain population diversity. In each iteration, the velocity of particle i within each dynamic subgroup is updated using Equations (29) and (30) to enhance global exploration performance, where w₂ is a self-adjusting inertia weight that controls the search range and improves global search capability, c₄ and c₅ are constants set to 1.49445, and λ_{ST,i} represents the coefficient of the social learning tendency of the independent learner subpopulation as perceived by particle p_i. When a uniform random number is below 0.5, the term represents the social cognitive ability perceived by p_i; otherwise, there is no social component. Simultaneously, a nonuniform Gaussian mutation operator is applied as an update perturbation to augment the subpopulation's global search performance during early iterations and improve its local exploitation ability in later iterations.
σ²(t) = σ₀²·exp(−t / t_max),

where σ₀² is the initial variance and σ²(t) denotes the particle's mutation amplitude in the t-th iteration; the decaying variance adaptively adjusts the mutation step length. For the global optimal solution, a fine search is implemented based on t-distribution mutation and reverse learning using a beetle antennae search (BAS) [40]. In the refined search phase, if the globally optimal particle becomes trapped in a local optimum, we employ the opposition-based learning method to obtain a reverse solution, thus broadening the search range of the globally optimal particle. This OBL strategy is integrated into the MMPSO algorithm, where r denotes a uniformly distributed random number within the interval [0, 1], and b₁ denotes the coefficient controlling information exchange [41]. The t-distribution is a continuous probability distribution that amalgamates features of both the Gaussian and Cauchy distributions. A degree-of-freedom parameter governs its behavior, enabling a balance between global and local searches: larger values yield higher peaks, approaching a Gaussian shape with improved local search capability, whereas smaller values yield flatter shapes, approaching the Cauchy distribution and improving global exploration capability. During the fine search, the global optimal solution is further optimized by introducing a t-distribution mutation operator, where δ represents the particle rotation direction with a value of +1 or −1, dir is the normalized direction vector, and TD(t) is the degree of freedom of the t-distribution mutation operator. Like the BAS method, which simulates a beetle using its two antennae to sense the direction of the most intense food smell and thereby decide whether to move left or right, the current globally optimal particle determines its distance to the optimal solution based on its left or right orientation, and the particle rotation direction δ is calculated accordingly. When a random number falls below a threshold, the OBL strategy is employed to introduce disturbances into the update process of the global optimal solution; otherwise, the t-distribution mutation strategy, which incorporates the BAS, is employed to update the target position. Additionally, an elitist greedy approach is employed to determine whether the target position should be updated.
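The nonuniform Gaussian mutation can be sketched as a perturbation whose variance decays with the iteration count. The exponential schedule below is an assumption consistent with the described behavior (wide early exploration, narrow late exploitation), not necessarily the paper's exact formula:

```python
import numpy as np

def gaussian_mutation(x, t, t_max, sigma0=1.0, rng=None):
    # Nonuniform Gaussian perturbation with decaying variance:
    # sigma^2(t) = sigma0^2 * exp(-t / t_max), so early iterations explore
    # widely while later iterations make small local refinements.
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = sigma0 ** 2 * np.exp(-t / t_max)
    return x + rng.normal(0.0, np.sqrt(sigma2), size=np.shape(x))
```

At t = 0 the perturbation standard deviation equals sigma0; by t = t_max it has shrunk by a factor of exp(−0.5) ≈ 0.61.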
X i perform nonuniform Gaussian mutation using Equations ( 31)-( 33 Then, the particle disturbance update and refresh compensation operations in the local tuning phase are performed, and the time computational complexity of the worst-case scenario is represented as () 1 2N .The independent learner subgroups alternately exe- cute dynamic multi-swarm reorganization learning strategies, and their time computational complexity is () 2  ND .The student subgroup learns under the supervision of the teacher subgroup, and the global tuning phase performs a self-cognitive or social cognitive search.The local tuning phase then performs a particle disturbance update and refreshes the compensation operations.In the event of the most unfavorable circumstances, the time computational complexity becomes () 33 +2  N D N .During the fine-search phase of the global optimal solution, the time computational complexity associated with the selection of the dimensional vector for updating the global optimal solution is () D .Consequently, when the stopping criterion for the MMPSO algorithm is characterized by a fixed number of Iterations, denoted as T , the total time computational complexity of the MMPSO algorithm can be expressed as ()  T N D .
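The refined-search step described above can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: the sphere fitness function, the search bounds, and the degree-of-freedom schedule (df growing with the iteration count) are hypothetical stand-ins, while the OBL reflection, the t-distribution perturbation, and the elitist greedy acceptance follow the strategy described in the text.

```python
import numpy as np

def sphere(x):
    """Toy fitness function (minimization); a stand-in for the real objective."""
    return float(np.sum(x ** 2))

def refine_global_best(gbest, fitness, lb, ub, t, stagnated, rng):
    """One refined-search step on the global best particle.

    If the global best has stagnated, apply opposition-based learning
    (OBL) by reflecting it inside the search bounds; otherwise apply a
    t-distribution mutation whose degrees of freedom grow with the
    iteration count, shifting from Cauchy-like global moves toward
    Gaussian-like local moves. An elitist greedy rule keeps the
    candidate only if it improves the fitness.
    """
    if stagnated:
        # OBL: reverse solution inside the bounds [lb, ub].
        candidate = lb + ub - gbest
    else:
        df = 1 + t  # illustrative schedule: heavier tails early, lighter later
        candidate = gbest + gbest * rng.standard_t(df, size=gbest.shape)
    candidate = np.clip(candidate, lb, ub)
    # Elitist greedy selection: never accept a worse position.
    return candidate if fitness(candidate) < fitness(gbest) else gbest
```

Because the greedy rule only ever accepts improvements, the fitness of the global best is non-increasing across iterations.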

Experimental Results and Discussion
In this section, two experiments were conducted to assess model performance on human body dimensions. Experiment 1 used the classical CEC2005 test suite to validate the MMPSO, and Experiment 2 validated the model's human body size prediction on real-world small-batch samples. All experiments were implemented on a 64-bit operating system with a Core i7-6500U CPU (2.5 GHz main frequency), 16 GB of memory, and the MATLAB 2018b programming/runtime environment. Prior to the commencement of this study, we addressed ethical compliance issues and made the relevant disclosures. Notably, all participants provided informed consent, allowing their measurement data to be obtained in an automated manner using a 3D body scanner. The final measurement dataset exclusively provided our measurement testing data, and all data items were completely anonymized; hence, no personally identifiable participant information was retained. Furthermore, our reference few-shot anthropometric database is publicly available, as explained in Appendix 1 of reference [43]. We utilized this open-source anthropometric database to validate the efficacy of our hybrid model. Table 1 displays descriptive statistics of the body dimensions, including the mean, median, and other measures of central tendency.
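For reference, the descriptive measures reported in Table 1 can be computed as follows. The sample values used in the test are hypothetical; the sum-mean (SM) definition follows the note under Table 1 (the average of the upper quartile, the lower quartile, and the overall mean).

```python
import pandas as pd

def describe_dimension(values):
    """Descriptive statistics for one body-dimension column, following
    the measures reported in Table 1: mean, median, standard deviation
    (SD), coefficient of variation (CV = SD / mean), and the sum mean
    (SM), i.e., the average of the upper quartile, the lower quartile,
    and the overall mean."""
    s = pd.Series(values, dtype=float)
    mean, sd = s.mean(), s.std(ddof=1)
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    return {
        "mean": mean,
        "median": s.median(),
        "SD": sd,
        "CV": sd / mean,
        "SM": (q1 + q3 + mean) / 3.0,
    }
```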

Factor Analysis
We applied factor analysis with varimax (maximum-variance) orthogonal rotation to extract two principal factors from the 13 observation items and eliminate multicollinearity among the variables. The rotated component matrix is presented in Table 2, which lists the scores of each sample and factor. Based on the total variance explanation provided in Table 3, the height factor and the perimeter (circumference) factor were found to represent most of the information on lower body size.
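A two-factor varimax-rotated solution of the kind summarized in Tables 2 and 3 can be reproduced with scikit-learn. The data below are synthetic stand-ins for the 13 measurement items (the real measurements are not included here), so only the shapes and the workflow mirror the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for the 13 observed body measurements: two latent
# factors (a "height" factor and a "circumference" factor) plus noise.
n_samples, n_items = 106, 13
latent = rng.normal(size=(n_samples, 2))
loadings = rng.uniform(0.5, 1.0, size=(2, n_items))
loadings[0, 7:] = 0.05   # items 0-6 load mainly on factor 1 (heights)
loadings[1, :7] = 0.05   # items 7-12 load mainly on factor 2 (girths)
X = latent @ loadings + 0.1 * rng.normal(size=(n_samples, n_items))

# Varimax-rotated two-factor solution, analogous to Table 2.
fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(StandardScaler().fit_transform(X))
rotated_components = fa.components_  # (2, 13) rotated loading matrix
```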
To avoid large prediction errors caused by large differences in the scales of the input data, both easy- and difficult-to-measure key human body parameters were normalized. To mitigate interference resulting from varying body shapes, the dataset was divided into distinct training and testing sets, with divisions based on variables such as stature and waist girth. The training set underwent k-fold cross-validation, and the results were evaluated using the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²): RMSE = sqrt((1/n) Σᵢ (yᵢ − ŷᵢ)²), MAE = (1/n) Σᵢ |yᵢ − ŷᵢ|, and R² = 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − ȳ)², where yᵢ and ŷᵢ represent the observed and predicted values, respectively, and ȳ denotes the average value over the collection of n samples.
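The normalization and the three evaluation metrics follow their standard definitions and can be sketched directly; min-max scaling is assumed for the normalization step.

```python
import numpy as np

def min_max_normalize(x):
    """Scale a feature vector into [0, 1] using its min/max bounds."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def rmse(y, y_hat):
    """Root mean square error between observed and predicted values."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mae(y, y_hat):
    """Mean absolute error between observed and predicted values."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat)))

def r2(y, y_hat):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```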

Benchmark Functions and Parameter Settings for PSO Variants
To evaluate the optimization performance of the proposed MMPSO, the classical CEC 2005 test suite [44] was selected as the experimental benchmark. The suite consists of four function types: unimodal functions (UN, F01-F05), multimodal functions (MN, F06-F12, of which F10 and F11 are rotated), expanded functions (EF, F13 and F14), and hybrid composition functions (CF, F15-F25). The test suite can comprehensively and objectively reflect the optimization performance of an algorithm. For a more comprehensive evaluation and analysis, we compared the MMPSO algorithm with seven well-known PSO variants: HIDMSPSO, HCLPSO, HCLDMSPSO, CLPSO, DMS-PSO, EPSO, and FDRPSO. The parameter settings of the PSO variants were established as indicated in Table 4.

Numerical Experimental Results
The performance of the MMPSO and several well-known PSO variants was evaluated using a benchmark test suite. Following the acquisition of the results from 30 independent runs of each algorithm, the evaluation metrics were determined based on the average value (mean) and standard deviation (Std) of each benchmark function. The mean reflects the overall trend of convergence and optimization ability, whereas the Std indicates the stability of the algorithm and its capacity to evade local optima.
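The evaluation protocol (mean and Std over 30 independent runs per benchmark function) can be sketched as follows; `run_optimizer` is a hypothetical stand-in for one run of an algorithm on one benchmark function, not any real optimizer.

```python
import numpy as np

def run_optimizer(seed):
    """Stand-in for one independent optimizer run on a benchmark
    function; returns the best fitness value found (hypothetical)."""
    r = np.random.default_rng(seed)
    return float(np.min(r.normal(loc=1e-3, scale=1e-4, size=100)))

# Mean and Std over 30 independent runs, as used for Table 5.
results = np.array([run_optimizer(seed) for seed in range(30)])
mean, std = results.mean(), results.std(ddof=1)
```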
The problem dimensionality was set to 30, the upper limit on the total number of fitness evaluations was 300,000, the population size was 40, and the sizes of the three heterogeneous subgroups (i.e., teacher, independent learner, and student) were 12, 16, and 12, respectively. On the F04, F06, F08, F10, F11, F14, F17-F20, and F22-F25 benchmark functions, the MMPSO algorithm achieved clearly improved solution accuracy compared with the alternative PSO variants, as demonstrated in Table 5. Figure 3A illustrates how the MMPSO algorithm demonstrated excellent performance in finding the global optimum on unimodal functions. Furthermore, the MMPSO algorithm performed well when solving multimodal and expanded functions, as illustrated in Figure 3B. In terms of algorithm stability, the standard deviations on the F04, F05, F06-F08, F10, F17, F18, and F23-F25 benchmark functions were superior to those of the alternative PSO variants. Based on overall averages, the final ranking of the algorithms is as follows: MMPSO, HCLDMSPSO, EPSO, HCLPSO, HIDMSPSO, DMS-PSO, FDRPSO, and CLPSO. These experimental findings provide evidence that the proposed algorithm exhibits effective global convergence capability, particularly when dealing with multimodal and compound test functions. In summary, the MMPSO algorithm shows a superior optimization effect, and the experimental results clearly validate that the presence of diverse populations aids in the avoidance of local optima (i.e., in locating the global optimum).

Analysis using Friedman Statistical Test
To verify the statistical differences among all the algorithms, a nonparametric Friedman test and a Nemenyi post hoc test were used [50,51]. Table 6 presents the Friedman test results for the eight algorithms at a significance level of α = 0.05. Prior to the analysis, the algorithms were ranked on each benchmark function, with r_n^m denoting the rank of the m-th algorithm on the n-th benchmark function; the average rank of the m-th algorithm is then R_m = (1/N) Σ_{n=1}^{N} r_n^m, where N = 25 is the number of benchmark functions. The results demonstrate that the MMPSO algorithm ranks highest on the benchmark test set and significantly outperforms the other algorithms on unimodal, multimodal, and hybrid composition functions. In this experiment, there were 25 benchmark functions, eight algorithms, and seven degrees of freedom. The chi-square value of the 30-dimensional Friedman test was 47.160, and the p-value was less than 0.05. According to the critical table of the chi-square distribution, the critical value was 14.07; because the actual chi-square value exceeded the critical value, there were significant differences in the overall performance of the algorithms. The Nemenyi rank-sum test was then used for pairwise comparisons among the algorithms. Figure 4 presents the final ranks in 30 dimensions, visually indicating the critical difference (CD) of the Friedman statistical test. If the difference between the mean ranks of two algorithms exceeds the CD, the null hypothesis is rejected at the designated confidence level. The CD is calculated as CD = q_α · sqrt(K(K + 1)/(6N)), where q_α = 3.031. Obvious performance differences were observed among MMPSO, EPSO, HCLPSO, HIDMSPSO, DMSPSO, FDRPSO, and CLPSO, as shown in Figure 4. In our qualitative analysis, there were no significant differences between the proposed algorithm and the HCLDMSPSO algorithm; however, the proposed algorithm achieved a superior overall average rank.
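The Friedman test and the Nemenyi critical difference can be reproduced with SciPy and NumPy. The error matrix below is synthetic (rows stand in for the 25 benchmark functions, columns for the 8 algorithms, with one column made artificially better); only the formulas (average ranks and CD = q_α · sqrt(K(K+1)/(6N)) with q_α = 3.031) follow the text.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(3)

# Rows: 25 benchmark functions; columns: 8 algorithms. Hypothetical
# error values (lower is better); algorithm 0 is shifted to be better.
errors = rng.uniform(0.0, 1.0, size=(25, 8))
errors[:, 0] -= 0.5

# Friedman test over the 8 paired samples (one column per algorithm).
stat, p_value = friedmanchisquare(*[errors[:, m] for m in range(8)])

# Average ranks R_m over the N functions (rank 1 = best per row).
ranks = np.argsort(np.argsort(errors, axis=1), axis=1) + 1
avg_ranks = ranks.mean(axis=0)

# Nemenyi critical difference for pairwise comparisons.
K, N, q_alpha = 8, 25, 3.031
cd = q_alpha * np.sqrt(K * (K + 1) / (6.0 * N))
```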

Parameters Selection of SVR
In the context of SVR, the estimation and generalization performances are impacted by the hyperparameter pair (C, g), where C denotes the penalty factor and g the RBF kernel parameter. The higher the trade-off factor C, the more prone the model becomes to overfitting. The larger g is, the fewer the support vectors, and vice versa; the number of support vectors significantly influences the speed of the training and prediction processes. The fitness function is an effective measure for optimizing the SVR parameter settings to achieve the best prediction accuracy and generalization ability. First, we employed 10-fold cross-validation to train the model parameters and comprehensively assess the model's performance. Then, the grid search method was used over a successively doubled/halved range to obtain the optimized kernel parameter range for (C, g).
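The coarse grid-search stage over (C, g) with 10-fold cross-validation can be sketched with scikit-learn. The data, the grid bounds, and the power-of-two granularity are illustrative assumptions rather than the study's actual settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Synthetic stand-in for normalized body-dimension data:
# 5 basic inputs -> 1 target dimension.
X = rng.uniform(0.0, 1.0, size=(106, 5))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=106)

# Coarse power-of-two grid over (C, g); each refinement step would
# double/halve the bounds around the best cell, as described above.
param_grid = {"C": 2.0 ** np.arange(-2, 6), "gamma": 2.0 ** np.arange(-5, 1)}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=10,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
best_C, best_g = search.best_params_["C"], search.best_params_["gamma"]
```

The best cell `(best_C, best_g)` then defines the narrowed range handed to the MMPSO stage.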
Finally, the MMPSO algorithm was used to optimize (C, g). The actual measured values were obtained for each body dimension, and the residual sequence was constructed based on the KRR with the RBF kernel, given the input information. To evaluate the effectiveness of the MMPSO method in tuning the hyperparameters of the SVR model, we set the maximum number of MMPSO-SVR iterations to 1000, the population size to 50, and the number of dimensions to two, and we defined initial ranges for the variables C and g.

Model Prediction Performance and Garment Pattern Making
Figure 6A,B depict the results of our qualitative analysis verifying the effectiveness of our method, demonstrating the efficacy of GRFN training on the prediction of crotch height and knee circumference. After correcting the residual errors in the training data, the best pairs (C_best, g_best) obtained by the SVR hyperparameter optimization method were (12.1940, 0.3538) and (12.5767, 0.1852), respectively. Notably, the GRFN model achieved a good fit between the measured and estimated values on both the training and testing datasets. Table 7 displays the errors between the estimated and ground-truth values. Among these results, the maximum error in the testing set was 0.3720 cm, and the average error was 0.1182 cm, in line with the GB/T 23698-2009 standard [52]. Compared with the results of Liu et al. [6], whose model yielded a mean square error (MSE) and standard error (SE) of 2.06 ± 0.2, and the PSO-LSSVM model of Li et al. [14], which could only predict sleeve sizes with MSE and SE measures of 1.057 ± 0.06, our proposed hybrid model predicts eight difficult-to-measure lower body sizes with a total MSE and SE of 0.0054 ± 0.07, as shown in Table 7. Moreover, compared with the research of Wang et al. [2], which utilized a generalized RBF-NN regression model, our model yielded clearly improved average results.
As shown in Figure 8A, a customer can input a few basic body parameters, and the GRFN accurately predicts the relevant dimensions for pattern making. We also provide an interactive automatic pattern generation prototype, built using Microsoft Visual Studio 2013 and the Qt 5 integrated C++ development environment; Figure 8B depicts a screenshot of this system. During the pattern making process, a point-numbering method [53] is used to draw the structural lines of the clothing patterns. The patterns for three-quarter-length, seven-tenths-length, and long pants are shown in Figure 8C. Utilizing the parameter settings of the CLO Standalone virtual design software, a customer can effectively identify the precise measurement landmarks required by the Chinese National Standards GB 3975-1983 [54] and GB 16160-2017 [55] to construct an accurate human body model. Figure 8D illustrates the effect of this 3D simulation using an avatar.

Ablation Experiments
To validate the accuracy and reliability of our proposed model, we conducted an ablation study using two simplified models: the KRR regression version and the single direct MMPSO-SVR version. As shown in Figure 9A, the single direct MMPSO-SVR model achieved R² values above 0.9 for the KH, AC, and AH predictions. However, it struggled with CW and TCL, which are particularly challenging to measure accurately, resulting in low R² values. In contrast, the regression values of our hybrid model were above 0.999 for all prediction items, indicating that our method predicts the body sizes relevant to garment pattern making more accurately and efficiently, effectively avoiding the shortcomings of single body dimension prediction methods.

Discussion and Conclusions
With the rapid development of artificial intelligence technology, efficiently estimating the human body dimensions required for clothing patterns using ANN technology, rather than anthropometric measurement methods, has emerged as a new trend. The KRR regression model does not require feature selection and exhibits strong robustness to outliers, making it particularly suitable for handling datasets with noise or outliers. However, due to the ambiguity in the variable relationships among body dimensions, the KRR-based model may not fully capture their nonlinear features. Therefore, by using the improved SVR model to compensate for the nonlinear features in the body dimension data, we have developed a nonlinear hybrid prediction approach with a high tolerance for input errors. This study employed an innovative data-driven GRFN hybrid model, based on small-batch samples, for highly accurate prediction of human body dimensions. We use the KRR regression model to fit and construct the predictive residual sequence. The MMPSO-SVR model is used to handle nonlinear relationships and noisy data and to obtain more accurate predictive residual correction values. We then combine the estimated values from the KRR regression and the predictive residual correction values to obtain high-precision estimates of the human body size parameters. The proposed teaching-and-learning co-optimization MMPSO algorithm enhances global search capabilities and population diversity [56], effectively preventing the algorithm from falling into local optima and thus leading to optimal SVR hyperparameters. The experimental results validate the suitability of the proposed hybrid residual correction model: compared with the performance of the single direct MMPSO-SVR model, the GRFN hybrid model exhibits remarkable improvements in RMSE of between 91.73% and 97.12%. Compared with the prediction accuracy and reliability of other SOTA ANNs, our model achieves an MSE and SE of 0.0054 ± 0.07, and its absolute error fluctuation range is smaller than those of the other models. Hence, this remarkable advancement provides the fashion industry with intelligent garment design tools that effectively achieve the accuracy and efficiency required for mass customization.
To further improve prediction accuracy and ease of implementation, we plan to overcome data limitations and extend the anthropometric dataset to cover full-body information, especially anthropometric data for atypical body shapes. In addition to the sample data capacity, we will also consider the impact of gender, age, ethnicity, geographical region, and body mass index on the data sources. We also plan to improve the diversity of individual particles in the MMPSO algorithm by leveraging the topological structure and communication methods of the particles to enhance global search stability and convergence.
SVR addresses regression problems. Given a provided training set of input-output pairs, the SVR builds an optimal separating hyperplane in a high-dimensional (sometimes infinite-dimensional) feature space, maximizing the margin to the nearest training data points. The symbol φ(x_i) represents the feature-space eigenvector of sample x_i.

Figure 1 .
Figure 1. Architecture of the GRFN model for human body dimension prediction. (In the 2D pattern-making subfigure, the blue curve represents the back pattern piece of the pants, the red curve represents the front pattern piece of the pants, and the black curve indicates an auxiliary line. Additionally, the blue dots denote auxiliary frames.)

x_i′ represents the normalized data, bounded by the location boundaries x_max and x_min; that is, x_i′ = (x_i − x_min)/(x_max − x_min).

3.2.4. Step 4: MMPSO-SVR Residual Correction
The population position and velocity are initialized, and the population is divided into three subgroups, with each particle representing a feasible hyperparameter pair (C, g). In the search zone, the fitness function of each particle is computed to ascertain its individual extreme value, pb, and the group extreme value, gb_pb. The MMPSO co-optimization algorithm uses teaching and learning to find gb_pb and locate the global optimum. The termination condition is checked iteratively, and 10-fold cross-validation is utilized to determine the optimal combination of SVR kernel function parameters, which serves as the input. The variable e_i represents the supervisory information used for training, from which we obtain the residual prediction value, ê_i.
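The two-stage residual-correction mechanism of Step 4 can be sketched as follows. This is a simplified stand-in: scikit-learn's `KernelRidge` plays the role of the KRR stage, a fixed (C, g) pair replaces the MMPSO search, and the data are synthetic; only the structure (KRR fit, residual sequence e_i, SVR-predicted correction ê_i, and the summed final estimate) follows the text.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in: 5 basic inputs -> 1 hard-to-measure dimension.
X = rng.uniform(0.0, 1.0, size=(106, 5))
y = 40.0 + 25.0 * X[:, 0] + 5.0 * np.sin(6.0 * X[:, 1]) \
    + 0.2 * rng.normal(size=106)
X_train, y_train, X_test, y_test = X[:80], y[:80], X[80:], y[80:]

# Stage 1: preliminary KRR fit; its training residuals form the
# residual sequence e_i used as supervisory information.
krr = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0).fit(X_train, y_train)
residuals = y_train - krr.predict(X_train)

# Stage 2: SVR learns to predict the residual correction ê_i.
# (C, g) would come from the MMPSO search; fixed values are used here.
svr = SVR(kernel="rbf", C=12.0, gamma=0.35).fit(X_train, residuals)

# Final GRFN-style estimate: KRR prediction plus predicted residual.
y_hat = krr.predict(X_test) + svr.predict(X_test)
```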
are the individual historical optima of the randomly selected particles; k_1 and k_2 are coefficients; x_i denotes the spatial position of the i-th individual within the teacher subgroup, whereas y_j denotes the spatial position of the j-th individual within the student subgroup; and the function dis(x_i, y_j) represents the Euclidean distance between particles i and j.

3.3.4. Refined Search Strategy for the Global Optimal Solution
The teacher, independent learner, and student subpopulations exchange information based on the global optimal solution, gb_pb, to improve performance and convergence. When the number of stagnant updates for gb_pb surpasses the predefined threshold, m = 5, the refined search is triggered.

3.3.5. The Full Process of the MMPSO
Based on the improvements discussed in Sections 3.3.1-3.3.4, the full MMPSO process is described in Algorithm 1 (the full MMPSO process). Input: swarm size N; inertial weights w_1, w_2, and w_c; swarm ratio r; and the acceleration factors.

4.1.1. Description of Experimental Data
Following Ref. [6], Liu et al. utilized the Vitus Smart 3D body scanning device to create a dataset comprising measurements from 106 never-pregnant female undergraduate students, aged 20 to 25 years, from the northeastern region of China. The participants' heights ranged from 151.5 to 173.2 cm, and their weights ranged from 40 to 71 kg, based on the intermediate sizes provided by the Chinese National Standard GB/T 1335.2-2008 [42], Standard Sizing Systems for Garments-Women.

Table 4 .
Parameter settings for the PSO variants.

Figure 3 .
Figure 3. The average ranking of PSO variants: (A) average ranking on unimodal functions of the PSO variants; (B) average ranking on multimodal and expanded functions of the PSO variants.

K = 8 represents the number of algorithms, and the critical difference value is CD = 1.863687.
To analyze the influence of different training proportions, we performed another comparative test taking the crotch height and knee circumference parameters as examples, as depicted in Figure 5A,B. When the size of the training sample exceeded 40, the RMSE and MAE values of the GRFN hybrid model were stable, demonstrating that it had satisfactory generalizability and performed better than SOTA models on small-batch sample prediction tasks.

Figure 5 .
Figure 5. Comparison of MAE and RMSE with different numbers of input training samples: (A) crotch height; (B) knee circumference.

Compared with the generalized RBF-NN regression model of Wang et al. [2], which yielded an average R² value of 0.971 and an average RMSE of 5.823 mm, ours achieved significantly improved results of 0.9997 and 0.6142 mm. The prediction effects from the incremental increase in the number of components/parts are displayed in Figure 7A-D. In summary, the hybrid model clearly improved prediction accuracy and generalizability. As inputs, a subject's stature, waist circumference, hip circumference, waist height, and hip height were 161.7, 68.9, 93.7, 100.3, and 83.5 cm, respectively, and the ground-truth outputs were 69.55, 43.7, 53.85, 77.4, 79.2, 37.4, 20.4, and 96.6 cm for crotch height, knee height, thigh circumference, total crotch length, abdomen circumference, knee circumference, crotch width, and abdomen height, respectively. The outputs predicted by the GRFN hybrid model were 69.559, 43.718, 53.804, 77.501, 79.101, 37.357, 20.397, and 96.310 cm. These predictions are exceptionally accurate and can be used to design patterns for women's sports trousers and other lower-body garments.

Figure 9A-H present the performance results on the few-shot human body dimension dataset. The comparative analysis assessed the predictive accuracy of the GRFN model against the KRR regression and single direct MMPSO-SVR models. Compared with the RMSE values of the single direct MMPSO-SVR model, those of the combined hybrid model improved by up to 97.12% for crotch width and by up to 91.73% for abdomen circumference. The RMSE and MAE values indicate that the hybrid model demonstrated superior predictive performance compared with every other model. The KRR regression model performed poorly in predicting values across all measurement items, whereas the single unmodified MMPSO-SVR achieved R² values above 0.9 for only some items.
The time complexity is analyzed according to Algorithm 1. In MMPSO, we assume that the parameters are initialized, where the input swarm size is denoted as N, the spatial dimension as D, and the fixed number of iterations as T. The time complexity of the basic PSO is O(T · N · D).

Table 1 .
Descriptive statistics of the dimension measurements; the abbreviations include knee circumference (KC) and crotch width (CW). The sum mean (SM) is the average of the upper quartile, the lower quartile, and the overall mean. The standard deviation (SD) and the coefficient of variation (CV) are also listed.

Table 3 .
Sum of squares of rotated loadings.

Table 6 .
Mean values of the benchmarks under the Friedman test (30-dim).

Table 7 .
Error between calculated and measured values. Compared with other single-optimization SVR hyperparameter methods, our method demonstrates superior efficacy by evading local optima and attaining the global optimum.

Table 8 .
Comparison of the estimation results of single-optimization SVR models (RMSE). The symbol ↓ indicates that a smaller value is better, and the symbol ↑ indicates that a larger value is better. Values in bold font indicate the optimal results.