Article

A Robust Hybrid Metaheuristic Framework for Training Support Vector Machines

IABL, FSTT, Abdelmalek Essaadi University, Tetouan 93000, Morocco
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Algorithms 2026, 19(1), 70; https://doi.org/10.3390/a19010070
Submission received: 13 December 2025 / Revised: 4 January 2026 / Accepted: 8 January 2026 / Published: 13 January 2026

Abstract

Support Vector Machines (SVMs) are widely used in critical decision-making applications, such as precision agriculture, due to their strong theoretical foundations and their ability to construct an optimal separating hyperplane in high-dimensional spaces. However, the effectiveness of SVMs is highly dependent on the efficiency of the optimization algorithm used to solve their underlying dual problem, which is often complex and constrained. Classical solvers, such as Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD), present inherent limitations: SMO ensures numerical stability but lacks scalability and is sensitive to heuristics, while SGD scales well but suffers from unstable convergence and limited suitability for nonlinear kernels. To address these challenges, this study proposes a novel hybrid optimization framework based on Open Competency Optimization and Particle Swarm Optimization (OCO-PSO) to enhance the training of SVMs. The proposed approach combines the global exploration capability of PSO with the adaptive competency-based learning mechanism of OCO, enabling efficient exploration of the solution space, avoidance of local minima, and strict enforcement of dual constraints on the Lagrange multipliers. Across multiple datasets spanning medical (diabetes), agricultural yield, signal processing (sonar and ionosphere), and imbalanced synthetic data, the proposed OCO-PSO-SVM consistently outperforms classical SVM solvers (SMO and SGD) as well as widely used classifiers, including decision trees and random forests, in terms of accuracy, macro-F1-score, Matthews correlation coefficient (MCC), and ROC-AUC. On the Ionosphere dataset, OCO-PSO achieves an accuracy of 95.71%, an F1-score of 0.954, and an MCC of 0.908, matching the accuracy of random forest while offering superior interpretability through its kernel-based structure.
In addition, the proposed method yields a sparser model with only 66 support vectors compared to 71 for standard SVC (a reduction of approximately 7%), while strictly satisfying the dual constraints with a near-zero violation of 1.3 × 10⁻³. Notably, the optimal hyperparameters identified by OCO-PSO (C = 2, γ ≈ 0.062) differ substantially from those obtained via Bayesian optimization for SVC (C = 10, γ ≈ 0.012), indicating that the proposed approach explores alternative yet equally effective regions of the hypothesis space. The statistical significance and robustness of these improvements are confirmed through extensive validation using 1000 bootstrap replications, paired Student's t-tests, Wilcoxon signed-rank tests, and Holm–Bonferroni correction. These results demonstrate that the proposed metaheuristic hybrid optimization framework constitutes a reliable, interpretable, and scalable alternative for training SVMs in complex and high-dimensional classification tasks.

1. Introduction

Agriculture plays a vital role in sustainable human livelihoods, rural development, and global food security, particularly in regions exposed to increasing climatic variability and constrained natural resources. According to the Food and Agriculture Organization of the United Nations (FAO), global food demand is expected to increase by nearly 60% by 2050 to meet the needs of an estimated population of 9.3 billion [1]. In this context, improving the reliability, accuracy, and scalability of predictive models has become a critical requirement for decision-making in precision agriculture. Recent advances in machine learning (ML) and deep learning (DL) have significantly transformed agricultural analytics by enabling data-driven modeling of complex, nonlinear interactions between climate conditions, soil characteristics, and crop physiology [2,3,4]. Consequently, there is a growing demand for robust classification models capable of handling heterogeneous, high-dimensional, and noisy data while maintaining interpretability and computational efficiency.

1.1. Motivation and Incitement

Modern agricultural systems increasingly rely on precise and automated decision-support tools for tasks such as crop yield prediction, disease detection, water stress monitoring, and species identification [5,6]. Traditional yield forecasting approaches, which are largely based on expert knowledge and limited environmental indicators, fail to capture the complex and nonlinear dependencies inherent in agricultural ecosystems [4]. While deep learning models offer powerful representational capabilities, their deployment is often constrained by data availability, computational cost, and limited interpretability, particularly in resource-constrained agricultural settings. These limitations motivate the continued investigation of robust, theoretically grounded alternatives such as Support Vector Machines (SVMs).

1.2. The Relevant Literature

Among supervised learning methods, SVMs and random forests (RFs) remain widely adopted due to their strong generalization ability and robustness to overfitting, whereas deep learning architectures such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks have demonstrated superior performance in modeling spatial and temporal agricultural data [2,3]. Shawon et al. [4] reported that linear regression, RF, and gradient boosting trees are among the most frequently used techniques for crop yield forecasting. In contrast, Meghraoui et al. [2] highlighted the effectiveness of CNNs and LSTMs in exploiting spatiotemporal information.
SVMs are particularly attractive due to their solid theoretical foundations and effectiveness in high-dimensional and nonlinear classification tasks [7]. However, their performance is strongly influenced by the optimization strategy used to solve the underlying convex problem [8]. Classical solvers such as Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD) [9,10] suffer from scalability and stability issues when applied to complex, noisy, or large-scale datasets [11]. To address these challenges, metaheuristic-based approaches have been investigated. Early work by Paquet and Engelbrecht [12] demonstrated the feasibility of Particle Swarm Optimization (PSO) for SVM training, while Dias and Rocha Neto [13] showed that simulated annealing in the dual space can produce near-optimal solutions with limited Karush–Kuhn–Tucker (KKT) violations. Nevertheless, most existing studies focus primarily on hyperparameter optimization rather than on directly addressing the constrained dual SVM problem [14].
Particle Swarm Optimization (PSO) has been widely investigated as an efficient metaheuristic for solving complex and nonconvex optimization problems. Since its original formulation, numerous PSO variants have been proposed to enhance convergence speed, maintain population diversity, and avoid premature stagnation. Among these developments, hybrid and multi-swarm PSO algorithms have attracted particular attention. For example, Hybrid Multi-Swarm PSO (HMSPSO) coordinates multiple interacting swarms to improve global exploration and robustness in complex search spaces [15]. Other approaches incorporate evolutionary operators inspired by genetic algorithms, such as crossover and mutation, to enhance diversity and balance exploration and exploitation. A comprehensive review by Engelbrecht [16] highlights the effectiveness of PSO variants with crossover mechanisms and analyzes their empirical convergence behavior. Related hybrid frameworks, including RCGA-PSO, CBHPSO, PSO with mutation, and cooperative swarm strategies, further demonstrate the flexibility of PSO in addressing constrained and high-dimensional optimization problems.
Beyond PSO-based methods, other evolutionary algorithms have been successfully combined with Support Vector Machines (SVMs) to improve classification performance through enhanced parameter optimization. Genetic Algorithms (GA) have been extensively used to optimize SVM hyperparameters and feature subsets due to their strong global search capability and robustness to local optima. Similarly, Differential Evolution (DE) has been applied to SVM optimization, showing competitive performance thanks to its simple structure, fast convergence, and effectiveness in continuous parameter spaces. These evolutionary approaches highlight the importance of advanced optimization strategies in improving SVM training, particularly when solving the dual optimization problem under constraints.
Motivated by these advances, the present work proposes a novel hybrid framework that combines Particle Swarm Optimization with Open Competency Optimization (OCO). Unlike existing hybrid PSO approaches, the proposed OCO–PSO framework introduces a competency-driven learning mechanism that adaptively guides the swarm while explicitly enforcing the dual constraints of the SVM formulation. This design aims to enhance convergence stability, avoid local minima, and improve solution quality, thereby positioning the proposed method within the broader family of evolutionary and swarm-based optimization techniques for SVM training.

1.3. Major Research Gaps

Despite these advances, several important limitations remain unresolved. First, existing SVM solvers struggle to jointly ensure scalability, numerical stability, and strict satisfaction of dual constraints. Second, metaheuristic approaches are often employed as black-box optimizers, without explicitly exploiting the structure of the SVM dual formulation. Third, limited attention has been devoted to achieving sparse and well-distributed dual solutions that improve interpretability and robustness, particularly for imbalanced and heterogeneous agricultural datasets.

1.4. Contributions and Organization of the Paper

In this work, we propose the OCO-PSO hybrid optimization framework to enhance SVM training. Compared to standard SVM solvers such as SMO, SGD, decision trees, or random forests, OCO-PSO offers several advantages: it efficiently explores the solution space, avoids local minima, and strictly enforces dual constraints on Lagrange multipliers. These features result in improved predictive performance, sparser models, and better interpretability. The main limitation is the higher computational cost during training due to constrained global optimization, although prediction remains highly efficient. By explicitly addressing these strengths and trade-offs, our approach provides a robust alternative for classification tasks even with small- to medium-sized datasets.
To address the aforementioned challenges, this paper proposes a novel hybrid optimization framework, termed OCO-PSO, which combines Particle Swarm Optimization (PSO) with Open Competency Optimization (OCO) for solving the dual SVM problem. In this framework, OCO is reinterpreted as a diversification operator that introduces controlled perturbations to the Lagrange multiplier vector, thereby enhancing exploration while preserving feasibility. The proposed method enforces the equality constraint $\sum_i \alpha_i y_i = 0$ at each iteration and corrects numerical drift through orthogonal projection, resulting in reduced KKT violations. In addition, the SVM hyperparameters (C and γ) are jointly optimized using Bayesian optimization.
Extensive experiments conducted on five heterogeneous datasets from medical, signal processing, agricultural, and imbalanced synthetic domains demonstrate that OCO-PSO consistently outperforms classical SVM solvers and widely used classifiers in terms of accuracy, robustness, and model parsimony. In particular, for crop yield prediction, the proposed approach achieves an accuracy of 89.17% and an MCC of 0.786, while producing significantly sparser models and maintaining near-zero constraint violations. These improvements are statistically validated, confirming the reliability and practical relevance of the proposed framework.
The remainder of this paper is organized as follows. Section 2 reviews related work on SVM optimization. Section 3 presents the proposed OCO-PSO algorithm. Section 4 and Section 5 describe the experimental setup and report the results. Section 6 discusses the findings, and Section 7 concludes the paper and outlines future research directions.

2. Support Vector Machines

Before presenting the formulation of Support Vector Machines (SVMs), we first provide an overview of the proposed methodological framework. The objective of this study is to enhance SVM learning by addressing the limitations of the classical optimization solvers commonly used to solve their dual formulation. The choice of SVM as the cornerstone of our framework is motivated by several fundamental principles:
  • SVMs are grounded in solid mathematical foundations derived from statistical learning theory. Their ability to determine the optimal separating hyperplane by maximizing the margin between classes constitutes an elegant approach that promotes strong generalization capabilities, even when training data are limited.
  • Their robustness to overfitting is particularly noteworthy, especially in high-dimensional spaces where the number of features may exceed the number of samples. Moreover, SVMs can efficiently handle nonlinear problems through the use of kernel functions, which implicitly project data into higher-dimensional spaces without incurring expensive explicit computations. This property is especially valuable for processing heterogeneous data typically encountered in agricultural applications.
  • In agricultural contexts, historical yield data are often limited in size but rich in explanatory variables. SVMs maintain strong predictive performance even with small training samples, in contrast to methods that require large amounts of data, such as deep neural networks.
  • Agricultural data are frequently affected by noise (e.g., sensor errors or sporadic extreme weather conditions). The SVM regularization parameter C provides a principled mechanism for controlling sensitivity to outliers, which is essential for producing reliable predictions.
However, the effectiveness of SVMs strongly depends on the stability and constraint-handling capability of the underlying optimization process. In this context, we propose a hybrid optimization framework based on Open Competency Optimization (OCO) and Particle Swarm Optimization (PSO), designed to directly solve the SVM dual problem while strictly enforcing both box and equality constraints on the Lagrange multipliers. The global exploration capability of PSO is combined with the adaptive learning mechanisms of OCO to enhance convergence stability, mitigate premature convergence to local minima, and efficiently explore the hypothesis space. Within this framework, the SVM serves as the central decision model, whose performance is enhanced through the proposed OCO-PSO optimization strategy, rather than acting as a standalone generic classifier.
Support Vector Machines (SVMs) are a robust family of established supervised learning algorithms, predominantly employed for classification and regression. Introduced by Vapnik and Cortes [17], the main objective of an SVM is to find the optimal separation hyperplane that maximizes the margin between the different data classes. The underlying theory of SVMs naturally leads to a quadratic optimization problem, which can be formulated in two equivalent ways: the primal (direct) form and the dual form obtained by Lagrangian duality.

2.1. Primal Formulation of SVM

Consider a labeled training dataset $\{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$. The primal formulation aims to determine the parameters $(\mathbf{w}, b)$ of a decision boundary $\mathbf{w} \cdot \mathbf{x} + b = 0$. The objective is to maximize the margin while accounting for classification errors using slack variables. This objective is formalized as the following soft-margin constrained optimization problem:

$$\min_{\mathbf{w}, b, \boldsymbol{\xi}} \; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i \;\; \forall i, \qquad \xi_i \ge 0 \;\; \forall i,$$
where:
  • $\|\mathbf{w}\|^2$: The squared norm of the hyperplane's normal vector. Since the margin width equals $2/\|\mathbf{w}\|$, minimizing this term maximizes the margin.
  • ξ i : Slack variables added to allow misclassification of difficult or noisy data points (soft-margin approach).
  • C > 0 : The regularization parameter that balances the trade-off between margin maximization and misclassification penalties.
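To make the roles of these quantities concrete, the following sketch evaluates the soft-margin primal objective on a toy dataset. This is a minimal NumPy illustration; the function name and the data are our own, not taken from the paper.

```python
import numpy as np

def primal_objective(w, b, X, y, C):
    """Soft-margin primal objective: 0.5*||w||^2 + C * sum of slacks.

    The optimal slack xi_i = max(0, 1 - y_i*(w.x_i + b)) is the hinge
    loss, i.e. the smallest xi_i satisfying the margin constraint.
    """
    margins = y * (X @ w + b)
    slacks = np.maximum(0.0, 1.0 - margins)   # enforces xi_i >= 0
    return 0.5 * np.dot(w, w) + C * slacks.sum()

# Toy check: two well-separated points incur zero slack,
# so only the regularization term 0.5*||w||^2 remains.
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 0.0])
obj = primal_objective(w, b=0.0, X=X, y=y, C=1.0)   # -> 0.5
```

Increasing C makes slack (misclassification) more expensive relative to the margin term, which is exactly the trade-off described above.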

2.2. Dual Formulation and Kernel Trick

To avoid the computational complexities of the primal problem, particularly in high-dimensional spaces, it is generally transformed using Lagrange multipliers $\alpha_i \ge 0$. Assuming strong duality and satisfaction of the Karush–Kuhn–Tucker (KKT) conditions, this leads to the dual formulation:
$$\max_{\boldsymbol{\alpha}} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j) \quad \text{subject to} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C \;\; \forall i.$$
This dual formulation offers two major advantages:
  • Parsimony and Computational Efficiency: Only data points for which $\alpha_i > 0$ (termed support vectors) contribute to defining the decision boundary. This property naturally induces model sparsity and considerably reduces dependence on the full training set.
  • Flexibility via the Kernel Trick: The implicit linear dot product $\mathbf{x}_i \cdot \mathbf{x}_j$ can be replaced by a kernel function $K(\mathbf{x}_i, \mathbf{x}_j)$, such as the radial basis function (RBF) or polynomial kernels. This kernel trick allows SVMs to efficiently learn complex and nonlinear decision boundaries by implicitly projecting the data into a high-dimensional feature space.
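As a brief illustration of the dual objective and the kernel trick, the sketch below builds an RBF Gram matrix and evaluates the dual objective for a given multiplier vector. Function names and the toy values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))   # clamp tiny negatives

def dual_objective(alpha, y, K):
    """Dual SVM objective: sum(alpha) - 0.5 * alpha^T (yy^T * K) alpha."""
    Q = (y[:, None] * y[None, :]) * K
    return alpha.sum() - 0.5 * alpha @ Q @ alpha

# Toy usage: two points of opposite class, equal multipliers
# (which already satisfies the equality constraint alpha.y = 0).
X = np.array([[0.0], [1.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])
val = dual_objective(alpha, y, rbf_kernel(X, gamma=1.0))
```

Only the Gram matrix is needed; the mapped feature vectors never appear explicitly, which is the computational point of the kernel trick.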

2.3. Critical Review of Dominant Training Methods for SVMs

Although SMO and SGD represent the dominant paradigms for SVM training, neither method provides a fully satisfactory solution when confronted with large-scale, nonlinear, or heterogeneous datasets. Their respective strengths are offset by fundamental limitations that restrict their applicability in complex real-world scenarios.

2.3.1. Sequential Minimal Optimization (SMO)

SMO, first detailed in the work of Platt [9], greatly improved SVM training by solving the dual optimization problem through a series of analytically solvable sub-problems involving only two Lagrange multipliers. This fundamental development eliminated the need for large, external quadratic programming (QP) solvers and underpins LIBSVM-type implementations.
The efficiency of SMO relies heavily on heuristics for selecting the multiplier pair α i and α j . This choice is mainly directed by two criteria:
  • Violation of KKT Conditions: Multipliers associated with examples that are misclassified or are too close to the decision boundary should be given the highest priority as these are the instances which probably require margin refinement the most.
  • Maximizing the Step Size: The second multiplier is chosen to maximize progress toward the optimum; in practice, this is often achieved by selecting, for the second multiplier, an instance of the class opposite to that of the first ($y_i \ne y_j$).
While SMO guarantees convergence and numerical stability, its dependence on heuristics makes it sensitive to dataset characteristics.
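The KKT-violation criterion described above can be expressed as a simple predicate. The sketch below is an illustrative approximation of that check, not Platt's exact working-set heuristic; here `errors[i]` is assumed to hold the current prediction error f(x_i) − y_i.

```python
import numpy as np

def kkt_violation(alpha, y, errors, C, tol=1e-3):
    """Flag multipliers that violate the KKT conditions (illustrative).

    With r_i = (f(x_i) - y_i) * y_i:
      - r_i < -tol while alpha_i < C: the point is inside/beyond the
        margin but its multiplier could still grow;
      - r_i >  tol while alpha_i > 0: the point is safely classified
        but still carries a positive multiplier.
    Such points are the priority candidates for an SMO-style update.
    """
    r = errors * y
    return ((r < -tol) & (alpha < C)) | ((r > tol) & (alpha > 0))

# Usage: both points below violate KKT and would be selected first.
viol = kkt_violation(np.array([0.0, 1.0]), np.array([1.0, 1.0]),
                     np.array([-0.5, 0.5]), C=1.0)
```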

2.3.2. Stochastic Gradient Descent (SGD)

In contrast to the dual SMO approaches and decomposition methods, SGD directly addresses the primal formulation of the SVM optimization problem (Equation (1)). It minimizes the objective function through iterative updates based on randomly sampled instances (or mini-batches) from the dataset.
The Pegasos (Primal Estimated sub-GrAdient SOLver) algorithm [18] has proven particularly effective for large-scale SVM training because its computational complexity depends primarily on the data dimension D rather than the dataset size N. The algorithm updates the weight vector w iteratively:
$$\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t \nabla J(\mathbf{w}_t; \mathbf{x}_i, y_i),$$
where η t is a learning rate determined by a predefined schedule. SGD is highly effective for large-scale linear classification but faces significant challenges when applied to nonlinear kernels, as the weight vector w cannot be explicitly represented in infinite-dimensional feature spaces.
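A minimal sketch of one Pegasos-style update, assuming the standard step size η_t = 1/(λt) and the hinge-loss subgradient; the function and parameter names are ours, not the paper's.

```python
import numpy as np

def pegasos_step(w, x_i, y_i, lam, t):
    """One Pegasos update on a sampled example (illustrative sketch).

    Uses eta_t = 1/(lam*t). The L2 regularizer always shrinks w;
    the hinge term contributes only when the margin is violated.
    """
    eta = 1.0 / (lam * t)
    if y_i * np.dot(w, x_i) < 1.0:            # margin violated
        return (1.0 - eta * lam) * w + eta * y_i * x_i
    return (1.0 - eta * lam) * w              # margin satisfied

# Usage: starting from w = 0, the first update pulls w toward y_i * x_i.
w1 = pegasos_step(np.zeros(2), np.array([1.0, 0.0]), 1.0, lam=1.0, t=1)
```

Note that the update only ever touches `w` in the input dimension, which is why the cost scales with D rather than N, and also why this scheme does not transfer directly to implicit (kernelized) feature spaces.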

2.3.3. Analysis and Critical Discussion

While Support Vector Machines are theoretically well-founded and widely used, their practical performance is strongly influenced by the efficiency and robustness of the underlying optimization algorithms. Among existing approaches, Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD) remain the most commonly adopted paradigms. However, a closer examination reveals that neither method provides a fully satisfactory solution when confronted with large-scale, nonlinear, or heterogeneous datasets. Their respective strengths are offset by inherent limitations that restrict their applicability in complex real-world scenarios:
  • Sequential Minimal Optimization (SMO) significantly improved SVM training by decomposing the dual optimization problem into a sequence of analytically solvable sub-problems involving only two Lagrange multipliers. This strategy eliminates the need for large-scale quadratic programming solvers and has enabled efficient implementations, such as LIBSVM.
    Despite its numerical stability and guaranteed convergence, SMO suffers from several well-documented limitations. First, its scalability is severely constrained for large datasets, as the number of iterations can grow super-linearly with the training size. Second, the convergence speed is highly sensitive to the working set selection heuristics, which are problem-dependent and may lead to suboptimal convergence behavior [19]. Third, SMO performance degrades significantly in the presence of dense kernel matrices or low-sparsity solutions, where a large number of support vectors must be maintained. As a result, while SMO is effective for small- to medium-sized problems, its efficiency and robustness diminish in high-dimensional or large-scale settings.
  • Stochastic Gradient Descent (SGD) operates directly on the primal SVM formulation (Equation (1)) by performing iterative updates based on randomly sampled data points or mini-batches. Algorithms such as Pegasos have demonstrated excellent scalability, as their computational complexity depends primarily on the data dimensionality rather than the dataset size.
    However, SGD exhibits critical limitations in the context of kernel-based SVMs. Most notably, it suffers from the so-called “curse of kernelization” [20]: when nonlinear kernels are employed, all support vectors must be updated at each iteration, effectively nullifying the computational advantages of stochastic optimization. Furthermore, SGD is prone to oscillatory convergence behavior near the optimum due to the inherent variance in stochastic gradients [21], requiring careful tuning of learning rate schedules and stopping criteria. These issues limit the reliability and robustness of SGD for nonlinear, high-precision SVM training.

2.3.4. Critical Analysis and Research Gap

The above analysis highlights a fundamental trade-off in existing SVM training methods between numerical stability, scalability, and flexibility. While SMO prioritizes stability and exact constraint satisfaction, it struggles with scalability and heuristic sensitivity. Conversely, SGD offers scalability but sacrifices robustness and kernel compatibility.
More importantly, most existing approaches focus primarily on convergence speed, often overlooking other critical aspects of model quality such as sparsity, constraint satisfaction, and interpretability—properties that are particularly important in regulated domains such as agriculture and healthcare. Although metaheuristic optimization methods have been widely explored for hyperparameter tuning and feature selection, their use as direct solvers of the dual SVM optimization problem remains largely underexplored. This is mainly due to the difficulty of enforcing strict equality constraints, such as $\sum_{i=1}^{n} \alpha_i y_i = 0$, within population-based optimization frameworks.
This gap motivates the exploration of specialized hybrid metaheuristic solvers capable of navigating the high-dimensional space of Lagrange multipliers while explicitly enforcing SVM dual constraints. By integrating global search capabilities with adaptive learning mechanisms, such approaches have the potential to overcome the limitations of classical solvers and provide more robust, interpretable, and flexible SVM training strategies [22].

3. Proposed Hybrid Approach: The OCO-PSO Model

Optimization of the Lagrange multipliers (α_i) in the context of SVMs is a constrained quadratic programming problem, usually solved by sequential methods such as SMO. We propose OCO-PSO, a novel hybrid metaheuristic that synergistically integrates the global search capabilities of Particle Swarm Optimization with the diversity management mechanisms of Open Competency Optimization [23] and with paired analytical update strategies that preserve constraint feasibility.
The OCO-PSO framework incorporates three fundamental mechanisms to address the dual optimization structure while maintaining KKT compliance:
1.
Constraint-Preserving Paired Updates: The particles are updated by a paired mechanism operating on instances of opposite classes. For each selected pair $(j_1, j_2)$ with $y_{j_1} \ne y_{j_2}$, one component follows standard PSO dynamics within analytically derived feasible limits, while its counterpart is adjusted to maintain the equality constraint $\sum_{i=1}^{n} \alpha_i y_i = 0$.
2.
Analytical Boundary Computation: The regions eligible for matched updates are determined analytically from the geometry of the constraints, ensuring the feasibility of the trajectory without penalty-based mechanisms.
3.
Adaptive Diversity Control via OCO: The periodic application of OCO operators [23] provides competition-based mutation and crossover strategies that maintain population diversity and prevent premature convergence in high-dimensional dual spaces, while constraint management is handled independently through the paired update mechanism.

3.1. Formulation of the Optimization Problem

The OCO-PSO model minimizes the negative of the SVM dual objective defined in Equation (3), which leads to the following minimization problem:
$$\min_{\boldsymbol{\alpha} \in [0, C]^n} \mathcal{L}(\boldsymbol{\alpha}) = -\sum_{i=1}^{n} \alpha_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j),$$
Compliance with the box and equality constraints (Equations (2) and (3)) is not enforced through penalty terms in the objective function but through mechanisms integrated into the iterative process:
  • Paired Update of Opposite-Class Instances $(j_1, j_2)$: Iterations proceed by selecting pairs of instances with $y_{j_1} \ne y_{j_2}$.
  • Adaptive determination of the boundaries $[L, H]$ during paired updates $(j_1, j_2)$, based on the constraint geometry.
  • Orthogonal projection onto the hyperplane $\sum_i \alpha_i y_i = 0$, guaranteeing the equality constraint.
  • Clipping to the interval $[0, C]$ at each particle update.

3.2. Algorithmic Framework

The OCO-PSO scheme orchestrates three complementary search mechanisms (Algorithm 1): global exploration driven by PSO with constraint-preserving updates, diversity management based on OCO, and local exploitation of matched coordinates.
Algorithm 1 OCO-PSO Framework
Require: Training set {(x_i, y_i)}_{i=1}^{n}, regularization parameter C, kernel K
Ensure: Optimal multipliers α*
  1: Initialize swarm: α_i ∼ U(0, C)^n, v_i ← 0, pbest_i ← α_i, gbest ← arg min_i L(α_i)
  2: for t = 1 to T_max do
  3:       Update inertia: w ← w_max − (w_max − w_min) · t / T_max
  4:       for each particle i do
  5:             PairedCoordinateExploitation(α_i, v_i) ▹ Algorithm 2 (constraint handling)
  6:             α_i ← clip(α_i, 0, C)
  7:             if L(α_i) < L(pbest_i) then
  8:                   pbest_i ← α_i
  9:             end if
10:             if L(α_i) < L(gbest) then
11:                   gbest ← α_i
12:             end if
13:       end for
14:       if t mod Δ_OCO = 0 then
15:             for each particle i do
16:                   ApplyOCODiversification(α_i) ▹ Algorithm 3 (diversity management)
17:             end for
18:             Replace worst particle with gbest (elitism)
19:       end if
20: end for
21: return gbest
The algorithmic workflow clearly separates concerns (Figure 1): constraint management is handled by the paired-coordinate mechanism (lines 5–6), while population diversity is maintained through periodic OCO interventions (lines 14–19). This modular design allows independent adjustment of the exploration–exploitation balance and diversity preservation.
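The overall loop of Algorithm 1 can be sketched in Python as follows. This is an illustrative skeleton under our own parameter defaults; `paired_step` and `oco_diversify` are placeholder callables standing in for Algorithms 2 and 3.

```python
import numpy as np

def oco_pso(L, n, C, T_max=100, n_particles=20, w_max=0.9, w_min=0.4,
            delta_oco=10, paired_step=None, oco_diversify=None, rng=None):
    """Skeleton of the OCO-PSO main loop (illustrative sketch).

    L: dual objective to minimize; n: number of Lagrange multipliers.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    A = rng.uniform(0.0, C, size=(n_particles, n))   # swarm of alpha vectors
    V = np.zeros_like(A)
    pbest = A.copy()
    gbest = min(A, key=L).copy()
    for t in range(1, T_max + 1):
        w = w_max - (w_max - w_min) * t / T_max      # linearly decaying inertia
        for i in range(n_particles):
            if paired_step is not None:              # Algorithm 2 placeholder
                paired_step(A[i], V[i], w, pbest[i], gbest)
            A[i] = np.clip(A[i], 0.0, C)             # box constraint
            if L(A[i]) < L(pbest[i]):
                pbest[i] = A[i].copy()
            if L(A[i]) < L(gbest):
                gbest = A[i].copy()
        if t % delta_oco == 0:
            if oco_diversify is not None:            # Algorithm 3 placeholder
                for i in range(n_particles):
                    oco_diversify(A[i])
            worst = max(range(n_particles), key=lambda i: L(A[i]))
            A[worst] = gbest.copy()                  # elitism
    return gbest
```

The returned `gbest` plays the role of α* in the pseudocode; plugging in concrete implementations of the two placeholders recovers the full framework.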

3.3. PSO Dynamics with Paired-Coordinate Constraint Management

Each particle represents a complete Lagrange multiplier vector α in the dual SVM formulation. The velocity v i is updated using the standard PSO formula [24]
$$\mathbf{v}_i(t+1) = w(t)\,\mathbf{v}_i(t) + c_1 r_1 \big(\mathbf{pbest}_i - \boldsymbol{\alpha}_i(t)\big) + c_2 r_2 \big(\mathbf{gbest} - \boldsymbol{\alpha}_i(t)\big),$$
where $w(t)$ denotes the time-varying inertia, $c_1, c_2$ are acceleration coefficients, and $r_1, r_2 \sim U(0, 1)^n$ provide stochastic perturbations.
To intensify exploitation while respecting the dual constraints, we integrate a paired-coordinate update mechanism (Algorithm 2) that operates on the constraint manifold. At each micro-iteration, two indices $j_1, j_2$ corresponding to opposite-class instances are randomly selected. The feasible bounds for $\alpha_{j_2}$ are derived from the constraint geometry. Denoting the linear constraint $\alpha_{j_1} y_{j_1} + \alpha_{j_2} y_{j_2} = \gamma$ (where $\gamma$ remains constant during the paired update), the feasible interval $[L, H]$ is determined as
$$\begin{cases} L = \max(0, \alpha_{j_2} - \alpha_{j_1}), & H = \min(C, C + \alpha_{j_2} - \alpha_{j_1}), & \text{if } y_{j_1} \ne y_{j_2}, \\ L = \max(0, \alpha_{j_1} + \alpha_{j_2} - C), & H = \min(C, \alpha_{j_1} + \alpha_{j_2}), & \text{if } y_{j_1} = y_{j_2}. \end{cases}$$
These bounds ensure that any update to $\alpha_{j_2}$ within $[L, H]$ admits a corresponding feasible value for $\alpha_{j_1}$ that satisfies both the box constraints and the linear dependency. After applying the PSO velocity update to $\alpha_{j_2}$ and clipping to $[L, H]$, we compute $\alpha_{j_1}$ analytically to preserve equality constraint satisfaction:
$$\alpha_{j_1}^{\text{new}} = \alpha_{j_1} + y_{j_1} y_{j_2} \left( \alpha_{j_2} - \alpha_{j_2}^{\text{new}} \right).$$
This analytical adjustment ensures exact constraint satisfaction throughout the search trajectory, effectively decomposing the n-dimensional constrained problem into a sequence of two-dimensional sub-problems on the constraint manifold.
Algorithm 2 Paired-Coordinate Local Exploitation
Require: α, v, y, K, C
  1: for k = 1 to N_local do
  2:       Sample j_1, j_2 ∼ Uniform({1, …, n}), j_1 ≠ j_2, y_{j_1} ≠ y_{j_2}
  3:       Compute feasible bounds [L, H] via Equation (6)
  4:       v_{j_2} ← w · v_{j_2} + c_1 r_1 (pbest_{j_2} − α_{j_2}) + c_2 r_2 (gbest_{j_2} − α_{j_2})
  5:       α_{j_2}^{new} ← clip(α_{j_2} + v_{j_2}, L, H)
  6:       α_{j_1}^{new} ← α_{j_1} + y_{j_1} · y_{j_2} · (α_{j_2} − α_{j_2}^{new})          ▹ Constraint preservation
  7: end for
  8: return α
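A minimal Python rendering of Algorithm 2 for the opposite-class case of Equation (6) might look as follows. This is an interpretation with assumed parameter defaults, not the authors' implementation; same-class pairs are simply skipped here.

```python
import numpy as np

def paired_update(alpha, v, y, C, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                  n_local=5, rng=None):
    """Paired-coordinate exploitation sketch: a PSO step on alpha[j2]
    inside analytically derived bounds, with alpha[j1] compensated so
    that sum(alpha * y) is preserved exactly."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(alpha)
    for _ in range(n_local):
        j1, j2 = rng.choice(n, size=2, replace=False)
        if y[j1] == y[j2]:
            continue                          # opposite-class pairs only
        # Feasible interval for alpha[j2] (Equation (6), y_{j1} != y_{j2}).
        lo = max(0.0, alpha[j2] - alpha[j1])
        hi = min(C, C + alpha[j2] - alpha[j1])
        r1, r2 = rng.random(), rng.random()
        v[j2] = (w * v[j2] + c1 * r1 * (pbest[j2] - alpha[j2])
                 + c2 * r2 * (gbest[j2] - alpha[j2]))
        new_j2 = np.clip(alpha[j2] + v[j2], lo, hi)
        # Equation (7): analytical compensation on the partner index.
        alpha[j1] += y[j1] * y[j2] * (alpha[j2] - new_j2)
        alpha[j2] = new_j2
    return alpha, v

# Usage: the equality constraint value sum(alpha * y) is invariant.
alpha0 = np.array([0.3, 0.6, 0.2, 0.8])
yv = np.array([1.0, -1.0, 1.0, -1.0])
s0 = float(np.dot(alpha0, yv))
a1, _ = paired_update(alpha0.copy(), np.zeros(4), yv, C=1.0,
                      pbest=alpha0.copy(), gbest=alpha0.copy())
```

Because Δα_{j1} = −y_{j1}y_{j2}·Δα_{j2}, each micro-iteration changes ∑_i α_i y_i by exactly zero, which is the constraint-preservation property claimed for Algorithm 2.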

3.4. Open Competency Optimization (OCO)

The OCO strategy, triggered periodically, improves swarm diversity and facilitates convergence toward valid support vector configurations through three complementary learning mechanisms (Algorithm 3). All operations manipulate pairs $(i, j)$ of Lagrange multipliers with opposing labels ($y_i \ne y_j$) to preserve the equality constraint $\sum_{i=1}^{n} \alpha_i y_i = 0$.
Algorithm 3 OCO Learning Strategy
Require: Particle α , velocity v , swarm A , global best α gbest , pair set P , parameter C
Ensure:  α new , v new
  1: Select learning mechanism based on particle fitness rank and preset probabilities
  2: if Self-Learning selected then
  3:       if random() < 0.5 then
  4:             SelectiveActivation ( α )                   ▹ Section 3.4.1
  5:       else
  6:             EnergyRedistribution ( α )                   ▹ Section 3.4.1
  7:       end if
  8: else if Neighbor-Learning selected then
  9:       Sample random peer α₂ ∈ A \ {α}
10:       PairwiseBlendingCrossover ( α , α 2 )                 ▹ Section 3.4.2
11: else if Leadership-Learning selected then
12:       Compute distance D ← ‖α − α_gbest‖₂
13:       AdaptiveLeaderCrossover ( α , α gbest , D )              ▹ Section 3.4.3
14: end if
15: ProjectAndClip ( α )                        ▹Section 3.4.4

3.4.1. Self-Learning (Mutation)

This intelligent mutation diversifies particle exploration through two complementary procedures applied with probability P self :
  • Selective Activation identifies inactive pairs ( i , j ) , where α_i, α_j < ε_low, and randomly activates two of them. The reactivated multipliers are set to a uniform random value u drawn from a small fraction of the constant C (e.g., u ∼ U(0.01C, 0.1C)), and independent Gaussian noise N(0, σ_v²) is added to v_i and v_j.
  • Energy Redistribution transfers activity conservatively from the richest active pair (i_S, j_S) to a random inactive pair (i_T, j_T) using
$$
\Delta\alpha = \min\left(0.2\,\alpha_{i_S},\ 0.2\,\alpha_{j_S}\right).
$$
    Velocities are adjusted accordingly to maintain dynamic balance.
A cooldown mechanism prevents the immediate reactivation of recently mutated pairs.
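The energy-redistribution step can be sketched as follows; the function name and pair bookkeeping are illustrative, and the velocity adjustment and cooldown are omitted for brevity. Because each pair couples one multiplier of each label, shifting equal mass within pairs leaves Σᵢ αᵢ yᵢ unchanged.

```python
import numpy as np

def energy_redistribution(alpha, pairs, eps_low=1e-6, rng=None):
    """Move 20% of the richest active pair's mass to a random inactive pair.
    Each pair couples opposite labels, so sum_i alpha_i * y_i is preserved."""
    rng = rng or np.random.default_rng(0)
    active = [p for p in pairs if alpha[p[0]] + alpha[p[1]] >= eps_low]
    inactive = [p for p in pairs if alpha[p[0]] + alpha[p[1]] < eps_low]
    if not active or not inactive:
        return alpha                              # nothing to redistribute
    iS, jS = max(active, key=lambda p: alpha[p[0]] + alpha[p[1]])
    iT, jT = inactive[rng.integers(len(inactive))]
    d = min(0.2 * alpha[iS], 0.2 * alpha[jS])     # transferable amount Δα
    alpha[iS] -= d; alpha[jS] -= d                # drain the source pair
    alpha[iT] += d; alpha[jT] += d                # activate the target pair
    return alpha
```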

3.4.2. Neighbor Learning (Peer Crossover)

Particle α₁ exchanges information with a randomly selected peer α₂ through pairwise blending. With probability P_crossover = 0.7, each pair ( i , j ) is blended with a 50% chance:
$$
\alpha_{\text{child},i} = w\,\alpha_{2,i} + (1-w)\,\alpha_{1,i}, \qquad w \sim \mathcal{U}(0.3,\ 0.7).
$$
At the same time, the corresponding velocities v are updated to preserve dynamic inertia: the source components are slowed down (−δ), while the target components receive an equivalent boost (+δ).

3.4.3. Leadership Interaction (Leader Crossover)

Particles are guided toward the global best α_gbest with adaptive blending. The blend factor depends on the distance D = ‖α − α_gbest‖:
$$
\omega \sim
\begin{cases}
\mathcal{U}(0.6,\ 0.9), & \text{if } D > \tau_D \quad (\text{strong attraction},\ \tau_D = 0.1),\\[2pt]
\mathcal{U}(0.2,\ 0.4), & \text{otherwise} \quad (\text{refinement}).
\end{cases}
$$
With probability P_crossover = 0.8, the particle is blended toward the leader: each pair ( i , j ) ∈ P with opposing labels is blended with probability 0.9:
$$
\alpha_{\text{child},i} = \omega\,\alpha_i^{\text{gbest}} + (1-\omega)\,\alpha_i, \qquad
\alpha_{\text{child},j} = \omega\,\alpha_j^{\text{gbest}} + (1-\omega)\,\alpha_j.
$$
The remaining 10 % of pairs retain their original values to preserve diversity.
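The distance-adaptive leader crossover can be sketched as follows; the function name and seeds are illustrative, and the constraint projection of Section 3.4.4 is assumed to follow.

```python
import numpy as np

def leader_crossover(alpha, gbest, pairs, tau_d=0.1, rng=None):
    """Distance-adaptive blending toward the global best (sketch)."""
    rng = rng or np.random.default_rng(7)
    D = np.linalg.norm(alpha - gbest)
    lo, hi = (0.6, 0.9) if D > tau_d else (0.2, 0.4)  # strong pull vs refinement
    child = alpha.copy()
    for i, j in pairs:
        if rng.random() < 0.9:           # 90% of pairs move toward the leader
            w = rng.uniform(lo, hi)
            child[i] = w * gbest[i] + (1 - w) * alpha[i]
            child[j] = w * gbest[j] + (1 - w) * alpha[j]
    return child
```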

3.4.4. Constraint Enforcement

After applying one of these mechanisms, the new position is immediately subjected to orthogonal projection and clipping onto [0, C] to respect the constraints of the dual problem. A cooldown system prevents the immediate reactivation of pairs that did not improve the objective function after mutation.
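The paper does not spell out how ProjectAndClip is realized; one simple way, sketched below under that assumption, is to alternate orthogonal projection onto the equality hyperplane with clipping onto the box (a POCS-style scheme, which converges for this pair of convex sets).

```python
import numpy as np

def project_and_clip(alpha, y, C, n_iter=100, tol=1e-8):
    """Alternating projections onto {a : sum(a*y) = 0} and the box [0, C]^n."""
    a = np.asarray(alpha, dtype=float).copy()
    for _ in range(n_iter):
        a -= (a @ y) / (y @ y) * y       # orthogonal projection onto hyperplane
        a = np.clip(a, 0.0, C)           # projection onto the box constraints
        if abs(a @ y) < tol:             # equality constraint restored
            break
    return a
```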

4. Experimental Studies

In this section, we evaluate the performance of the OCO-PSO algorithm as a dual SVM solver and classifier. We compare it with two popular SVM training approaches, SVC and SGDClassifier, to assess optimization efficiency, and with two non-SVM methods, decision tree and random forest, to position it against wider machine learning approaches. The experimental design has four key purposes.
  • Predictive Performance: accuracy, F1-score, ROC-AUC, and Matthews correlation coefficient (MCC) are computed across all benchmark datasets.
  • Verification of Optimality: the KKT conditions are checked to verify the optimality of the returned solution.
  • Model Sparsity: the number of support vectors is compared with that of SVC to assess sparseness and generalization.
  • Reproducibility: transparency is guaranteed through controlled random seeds, standardized dataset partitioning, and identical parameter initialization.

4.1. Dataset Composition

The empirical study was performed on five datasets of different structural complexity to investigate the robustness of OCO-PSO under varying dimensionality, sample size, class imbalance, and decision-boundary complexity. The main properties per dataset are listed in Table 1.

4.1.1. Diabetes Dataset

The diabetes dataset [25] from the UCI machine learning repository predicts the occurrence of diabetes in Pima Indian women based on eight clinical characteristics: pregnancies, blood glucose, blood pressure, skinfold thickness, insulin level, BMI, diabetes pedigree function, and age. After stratification, the training set contains 614 instances (400 negative, 214 positive; imbalance ratio 1.87:1) and the test set has 154 instances. With 35.1% minority class representation and moderate dimensionality, this dataset provides a balanced testbed for evaluating nonlinear feature interactions.

4.1.2. Synthetic Agricultural Yield Dataset

The synthetic agricultural yield dataset [26] (Kaggle) simulates agricultural conditions with 599 instances and 6 features: soil quality, sunny days, rainfall, seed variety, fertilizer amount, and irrigation schedule. Originally designed for regression, the continuous yield target is converted into binary classes using the mean threshold Ȳ: Class 1 (High Yield) for Yield ≥ Ȳ and Class 0 (Low Yield) otherwise.

4.1.3. Sonar Dataset

The sonar dataset [27] (UCI) distinguishes sonar returns from metal cylinders versus rocks using 60 spectral features from frequency band energy measurements (208 samples total: 111 metal, 97 rock). With a high feature-to-sample ratio ( p / n ≈ 0.29 ), this dataset tests OCO-PSO’s generalization capacity under high-dimensional, low-sample conditions with complex spectral interactions.

4.1.4. Ionosphere Dataset

The ionosphere dataset [28] (UCI) is based on radar signals that detect ionospheric anomalies; it contains 351 samples with 34 continuous features from pulse sequences. Binary classification separates the “good” echoes (structured ionospheric layer) from the “bad” echoes (unstructured/noisy signals). Its moderate dimensionality relative to sample size, its correlated features, and the absence of missing values make it well suited for assessing the robustness of the OCO-PSO algorithm under nonlinear separability.

4.1.5. Imbalanced Dataset

The imbalanced_data dataset is synthetic, generated with scikit-learn’s make_classification [29], with 500 samples and 2 informative features. It presents a significant class imbalance: 90% in Class 0 and only 10% in Class 1. The low dimensionality makes it possible to visualize decision boundaries and expose margin behavior even when the imbalance is extreme.
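A dataset of this shape can be generated as follows; the exact weights, class separation, and seed used in the paper are not reported, so these values are assumptions.

```python
from collections import Counter
from sklearn.datasets import make_classification

# Synthetic 90/10 binary problem with 2 informative features
# (weights, class_sep and random_state are assumed, not the paper's).
X, y = make_classification(
    n_samples=500, n_features=2, n_informative=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.9, 0.1], flip_y=0.01,
    class_sep=1.0, random_state=42,
)
counts = Counter(y)   # roughly 90% class 0 vs 10% class 1
```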

4.2. Preprocessing Pipeline and Reproducibility

All datasets were processed through a standardized preprocessing pipeline to avoid data leakage and ensure reproducibility, guided by established machine learning practice: stratification before any transformation, strict separation of the fitting and transformation phases, and exhaustive logging (Table 2).
1.
Stratified Train/Test Split: Each dataset was divided into training (80%) and test (20%) sets using scikit-learn’s train_test_split with stratification to preserve the distribution of classes. A fixed random seed (random_state = 42) guarantees deterministic results. All subsequent preprocessing steps are fitted exclusively on the training data and then applied to the test data to prevent information leakage.
2.
Missing Value Imputation: A tiered strategy based on missing data percentage:
  • Features with >50% missing values: removed;
  • Features with 15–50% missing: KNN imputation (numeric) or mode (categorical);
  • Features with <15% missing: median (numeric) or mode (categorical).
All imputers are fitted on training data only.
3.
Categorical Encoding: Low-cardinality features (≤10 unique values) are one-hot encoded with consistent column alignment. High-cardinality variables use label encoding fitted on the training data, with unseen test categories mapped to the most frequent training label.
4.
Class Imbalance Handling: When the proportion of the minority class falls below 20%, resampling techniques (random undersampling, oversampling, or SMOTE) are applied only to the training data, thus preserving the original test distribution.
5.
Feature Standardization: Numeric features are standardized using StandardScaler to achieve zero mean and unit variance, improving numerical stability.
6.
Multicollinearity Reduction: Feature pairs with correlation | r | > 0.9 are identified in training data, and one feature from each pair is removed from both training and test sets.
7.
Reproducibility: The pipeline is implemented as a modular class (ML Preprocessing Pipeline) with serialization of all tuned transformers, processed datasets, and comprehensive metadata, allowing for exact reproduction in different environments and iterations.
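The leakage-safe core of steps 1, 2, and 5 can be sketched as follows on synthetic data; every transformer is fitted on the training fold only. The real pipeline additionally handles encoding, resampling, and correlation filtering.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data with <15% missing values injected.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan

# Step 1: stratified split with a fixed seed.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Steps 2 and 5: median imputation (<15% missing) then standardization.
prep = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])
X_tr_p = prep.fit_transform(X_tr)   # fit on train only
X_te_p = prep.transform(X_te)       # apply, never refit, on test
```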
Table 2. Final characteristics of the datasets after preprocessing. The reported values correspond to the final structure and transformations applied to each dataset. The row Minority (%) refers to the proportion of the underrepresented class after preprocessing.

| Property | Diabetes | Ionosphere | Sonar | Imbalance_Data | Agric_Yield_Data |
|---|---|---|---|---|---|
| Final Dim. (Train/Test) | 614 × 8 / 154 × 8 | 280 × 34 / 71 × 34 | 166 × 55 / 42 × 55 | 718 × 2 / 100 × 2 | 479 × 6 / 120 × 6 |
| Minority (%) | 34.8 | 36.1 | 46.4 | 50.0 | 47.2 |
| Standardization | StandardScaler | StandardScaler | StandardScaler | StandardScaler | StandardScaler |
| Balancing Method | None | None | None | SMOTE | None |
| Features Removed | 0 | 5 | 5 | 0 | 0 |
| Type | Real | Real | Real | Synthetic | Real |

4.3. Model Configurations and Evaluation Protocol

4.3.1. OCO-PSO Model

The proposed model combines PSO optimization with OCO refinement to train an SVM classifier. Hyperparameters were tuned using Bayesian optimization (BayesSearchCV) with 10-fold cross-validation over the following search spaces:
  • Regularization: C ∈ [1, 10];
  • RBF kernel: γ ∈ [10⁻³, 1] (log-uniform);
  • Swarm size: n_particles ∈ [10, 150];
  • Iterations: max_iter ∈ [50, 150].
KKT condition compliance was verified post-training by checking |Σᵢ αᵢ yᵢ| ≈ 0.

4.3.2. Baseline Models

All baselines were optimized using Bayesian search with 10-fold cross-validation, accuracy scoring, and parallel execution (n_jobs = −1).
  • SVC: RBF kernel with C ∈ [1, 10], γ ∈ [10⁻³, 1] (log-uniform).
  • SGDClassifier (max_iter = 1000):
    - α ∈ [10⁻⁶, 10⁻¹] (log-uniform);
    - η₀ ∈ [10⁻³, 1] (log-uniform);
    - learning_rate ∈ {constant, optimal, invscaling, adaptive};
    - loss ∈ {hinge, log_loss, modified_huber}.
  • Decision Tree:
    - max_depth ∈ [3, 20], min_samples_split ∈ [2, 20], min_samples_leaf ∈ [1, 10];
    - criterion ∈ {gini, log_loss};
    - max_features ∈ {sqrt, log2, None}.
  • Random Forest:
    - n_estimators ∈ [50, 200], max_depth ∈ [3, 20];
    - min_samples_split ∈ [2, 10], min_samples_leaf ∈ [1, 5];
    - max_features ∈ {sqrt, log2}.
All searches used n iter = 40 (configurable) and random _ state = 42 .
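The tuning setup can be sketched as follows. The paper uses skopt's BayesSearchCV; as an assumption for a dependency-free sketch, RandomizedSearchCV stands in here over the same kind of log-uniform search space, on a small synthetic problem.

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Search space mirroring the SVC ranges above (C in [1, 10], gamma in [1e-3, 1]).
space = {"C": loguniform(1, 10), "gamma": loguniform(1e-3, 1)}
X, y = make_classification(n_samples=120, n_features=8, random_state=42)

# 10-fold CV and n_iter=40 in the paper; reduced here so the sketch runs quickly.
search = RandomizedSearchCV(SVC(kernel="rbf"), space, n_iter=8, cv=3,
                            scoring="accuracy", random_state=42)
search.fit(X, y)
```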

4.3.3. Evaluation Metrics

The performance of the models was evaluated on held-out test sets using
  • Performance metrics: accuracy, precision, recall, F1-score (macro/weighted), MCC, and ROC-AUC;
  • Model attributes: number of support vectors (for the SVM-based models) and the KKT violation |Σᵢ αᵢ yᵢ|;
  • Computational efficiency: training time and inference speed.

4.3.4. Reproducibility

All experiments guarantee total reproducibility through
  • Fixed random seeds ( random _ state = 42 ) across all components;
  • Deterministic preprocessing pipeline with stratified splitting;
  • Automated logging of metrics, optimal hyperparameters, and runtime;
  • Independent execution per dataset with isolated training/evaluation.

5. Results

We evaluate OCO-PSO against four baseline methods—SVC, SGDClassifier, decision tree, and random forest—on datasets from different domains: diabetes (medical data), sonar and ionosphere (signal processing), agricultural yield, and imbalanced synthetic data (imbalanced_data). The performance is evaluated in terms of accuracy, macro-F1-score, MCC, and ROC-AUC, with a focus on class balance and fairness. Computational efficiency (training and inference time), model sparsity, and the stability of the optimization solution (violation of the KKT condition) are also analyzed.

5.1. Performance Metrics

The classification performance is evaluated using the following metrics:
  • Accuracy (%): the proportion of correctly classified samples over the total number of samples:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100,$$
    where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
  • Precision: the fraction of correctly predicted positive samples among all predicted positives:
$$\text{Precision} = \frac{TP}{TP + FP}.$$
  • Recall (Sensitivity): the fraction of correctly predicted positive samples among all actual positives:
$$\text{Recall} = \frac{TP}{TP + FN}.$$
  • F1-Macro: the unweighted mean of F1-scores computed for each class independently:
$$\text{F1-Macro} = \frac{1}{C} \sum_{i=1}^{C} \frac{2 \cdot \text{Precision}_i \cdot \text{Recall}_i}{\text{Precision}_i + \text{Recall}_i},$$
    where C is the number of classes.
  • Matthews Correlation Coefficient (MCC): a balanced measure of classification quality that accounts for all four confusion matrix categories:
$$\text{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}.$$
  • ROC-AUC (Receiver Operating Characteristic–Area Under Curve): measures the classifier’s ability to discriminate between positive and negative classes:
$$\text{ROC-AUC} = \int_{0}^{1} TPR(FPR)\,dFPR,$$
    where TPR is the true positive rate and FPR is the false positive rate.
  • Rank: The average ranking of a method across all datasets, where lower ranks indicate better performance.
  • Process Time (s): The computational time in seconds for training and inference. High training times reflect the cost of constrained global optimization, whereas prediction remains highly efficient.
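These metrics are all available in scikit-learn; the toy example below illustrates how they are computed (the prediction vectors are made up for illustration).

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, roc_auc_score)

# Toy predictions: 3 TP, 3 TN, 1 FP, 1 FN.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.9, 0.8, 0.4, 0.3, 0.7, 0.6])

acc = accuracy_score(y_true, y_pred)             # (TP+TN)/N = 6/8
f1m = f1_score(y_true, y_pred, average="macro")  # unweighted per-class mean
mcc = matthews_corrcoef(y_true, y_pred)          # uses all four confusion cells
auc = roc_auc_score(y_true, scores)              # needs scores, not hard labels
```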

5.2. Ionosphere Dataset

On the ionosphere dataset, OCO-PSO achieves an accuracy of 95.71%, an F1-score of 0.954, and an MCC of 0.908, matching random forest in accuracy while offering superior interpretability through its kernel-based structure. The model is trained on 66 support vectors compared to 71 for SVC (a reduction of 7%), and the constraint violation is nearly zero (1.3 × 10⁻³). The best hyperparameter values (C = 2, γ ≈ 0.062) differ significantly from the Bayesian-optimized ones of SVC (C = 10, γ ≈ 0.012), indicating that OCO-PSO visits different yet equally effective regions of the hypothesis space.
Training takes 3.7 min (142 particles, 90 iterations), while inference requires just 6.4 ms. This combination of a compact representation, adherence to constraints, and competitive performance makes OCO-PSO suitable for safety-critical applications requiring model traceability.

5.3. Hyperparameter Settings and Sensitivity

The hyperparameters of the PSO and OCO mechanisms were set to ensure a controlled trade-off between exploration, exploitation, and computational cost. Swarm parameters, including the number of particles and iterations, were optimized simultaneously with the SVM hyperparameters C and γ using Bayesian optimization, validated via 10-fold cross-validation to maximize model accuracy. Particle velocities are updated according to an inertia coefficient w decreasing linearly from 0.95 to 0.3, transitioning from exploration to exploitation. Cognitive and social acceleration coefficients are both set to c₁ = c₂ = 2.0, while velocity changes are capped at V_max = 0.25 × C to ensure numerical stability. Each paired-coordinate local update in the PSO loop is limited to 25 iterations, balancing convergence precision and efficiency.
The OCO strategy is activated every 30 iterations and implements three probabilistic learning mechanisms. Self-learning is guided by a rank-dependent mutation probability P_self ∈ [0.3, 0.9], complemented by a 20-iteration cooldown period and Gaussian velocity noise (σ = 0.01) to prevent stagnation. Peer learning relies on a crossover probability P_peer = 0.7 and adaptive mixing w ∈ [0.3, 0.7], while leadership interaction uses P_leader = 0.8 with distance-dependent blending (w ∈ [0.2, 0.4] for close particles, w ∈ [0.6, 0.9] for distant ones) and a threshold τ_D = 0.1 to toggle between attraction to the global best and local refinement. Soft elitism rescues the worst particle, and all updates respect the SVM dual equality constraints with a KKT tolerance of ϵ = 10⁻⁶ and a stagnation-based stopping criterion after 15 iterations without improvement.
Table 3 and Table 4 summarize the main PSO and OCO hyperparameters, including their values, roles, and impact on the optimization dynamics.

5.4. Diabetes Dataset

The OCO-PSO model reaches an accuracy of 73.28% and an F1-macro of 0.720, surpassing SVC (F1 = 0.658), random forest (F1 = 0.652), and decision tree (F1 = 0.635) (Table 5). Although the SGDClassifier achieves a slightly higher accuracy (74.14%) (Table 6), OCO-PSO demonstrates superior class balance with an MCC of 0.454 versus 0.404 for SGD—an essential property for medical applications.
The model exhibits strong parsimony with only 107 support vectors compared to 260 for SVC (a 59% reduction) and a near-zero constraint violation (2.5 × 10⁻⁴). Optimized hyperparameters (C = 6, γ = 3.28 × 10⁻³) reflect a fine-scale RBF kernel adapted to the local structure of the clinical data.
Training requires 24 min (122 particles, 110 iterations) and inference takes 65 ms (Table 7). The ROC-AUC of 0.816 remains competitive with SGD’s 0.833, demonstrating efficient capture of nonlinear boundaries without compromising discriminative power.
Table 5. Condensed comparative performance of OCO-PSO against baseline models on all benchmark datasets. Best results per dataset are highlighted in bold.

| Dataset | Model | Accuracy | F1-Macro | MCC | Rank |
|---|---|---|---|---|---|
| Imbalance_data | OCO-PSO | 0.980 | 0.939 | 0.885 | 1 |
| | SVC | 0.930 | 0.850 | 0.737 | 2 |
| | SGDClassifier | 0.900 | 0.804 | 0.667 | 4 |
| | DecisionTree | 0.960 | 0.889 | 0.778 | 3 |
| | RandomForest | 0.960 | 0.889 | 0.778 | 3 |
| Sonar | OCO-PSO | 0.857 | 0.857 | 0.718 | 2 |
| | SVC | 0.833 | 0.831 | 0.671 | 3 |
| | SGDClassifier | 0.738 | 0.725 | 0.496 | 4 |
| | DecisionTree | 0.690 | 0.686 | 0.379 | 5 |
| | RandomForest | 0.857 | 0.852 | 0.742 | 1 |
| Ionosphere | OCO-PSO | 0.957 | 0.954 | 0.908 | 1 |
| | SVC | 0.943 | 0.938 | 0.876 | 2 |
| | SGDClassifier | 0.886 | 0.876 | 0.751 | 4 |
| | DecisionTree | 0.886 | 0.870 | 0.748 | 5 |
| | RandomForest | 0.957 | 0.954 | 0.908 | 1 |
| Diabetes | OCO-PSO | 0.733 | 0.720 | 0.454 | 1 |
| | SVC | 0.707 | 0.658 | 0.323 | 4 |
| | SGDClassifier | 0.741 | 0.698 | 0.404 | 2 |
| | DecisionTree | 0.681 | 0.635 | 0.274 | 5 |
| | RandomForest | 0.707 | 0.652 | 0.317 | 3 |
| agric_yield_data | OCO-PSO | 0.892 | 0.890 | 0.786 | 1 |
| | SVC | 0.875 | 0.874 | 0.750 | 2 |
| | SGDClassifier | 0.875 | 0.874 | 0.749 | 3 |
| | DecisionTree | 0.800 | 0.799 | 0.598 | 5 |
| | RandomForest | 0.850 | 0.847 | 0.702 | 4 |
Table 6. Summary of best performance achieved across all benchmark datasets. Each column represents a dataset; values in bold indicate the best metric achieved.

| Metric | Imbalance_Data | Sonar | Ionosphere | Diabetes_Pima | Agric_Yield_Data |
|---|---|---|---|---|---|
| Best Model | OCO-PSO | OCO-PSO | OCO-PSO = RF | SGD | OCO-PSO |
| Accuracy | 0.980 | 0.857 | 0.957 | 0.792 | 0.892 |
| F1-Macro | 0.939 | 0.857 | 0.954 | 0.774 | 0.890 |
| MCC | 0.885 | RF: 0.742 > OCO: 0.718 | 0.908 | 0.548 | 0.786 |
Note: OCO-PSO ranks #1 on 3/5 datasets and #2 on 2/5 datasets.
Table 7. Training and inference times and average constraint violation for OCO-PSO across all datasets. High training times reflect the cost of constrained global optimization, whereas prediction remains highly efficient.

| Dataset | Train Time (s) | Pred. Time (s) | Constraint Violation |
|---|---|---|---|
| Imbalance_data | 168.67 | 0.0022 | 0.00275 |
| Sonar | 160.68 | 0.0000 | 0.09461 |
| Ionosphere | 225.5 | 0.0064 | 0.00133 |
| diabetes_pima | 1443.2 | 0.0040 | 0.00067 |
| agric_yield_data | 932.33 | 0.0000 | 0.01043 |

5.5. Sonar Dataset

OCO-PSO yields a high accuracy of 85.71% and an F1-score of 0.857 (Table 5), slightly outperforming random forest (F1 = 0.852) and significantly outperforming SVC (F1 = 0.831), SGD (F1 = 0.725), and the decision tree (F1 = 0.686) (Table 6). The MCC of 0.718 indicates well-balanced classification, slightly lower than random forest’s (0.742), while retaining kernel interpretability through its explicit support vectors.
The parsimony of the model results in fewer support vectors (84 compared to 132 for SVC, a 36% reduction) despite the large number of features. Training requires 2.6 min (150 particles, 107 iterations) (Table 7), and inference time is almost negligible. The constraint violation of 0.095 is higher than for the previous datasets, suggesting a more challenging compromise between global convergence and strict constraint satisfaction in high-dimensional spaces.
The best hyperparameters (C = 10, γ ≈ 0.028) express strict regularization and adapt the kernel scale to the local distance structure. OCO-PSO shows that combining OCO with global optimization can lead to efficient and parsimonious solutions, even in high dimensions, achieving performance comparable to ensemble methods while preserving an explicit support vector representation for enhanced interpretability.

5.6. Agricultural Yield Dataset

OCO-PSO achieved the best test accuracy (89.17%) with an F1-macro score of 0.890 (Table 5), outperforming all reference methods: SVC (0.874), SGD (0.874), random forest (0.847), and decision tree (0.799) (Table 6). The balanced precision (0.899) and recall (0.887) demonstrate robust performance despite a moderate class imbalance (226:253 training examples).
The MCC of 0.786 significantly surpasses that of SVC (0.750), SGD (0.749), and random forest (0.702), which is crucial in the agricultural context. The model thus offers better calibration between classes than these methods, an essential condition for decision support in agriculture. With only 93 support vectors compared to 124 for SVC (25% fewer), it is easier to interpret: the reduced number of vectors allows experts to study the crop, weather, and soil profiles associated with them.
The constraint violation remains small (0.0104), comparable to the diabetes result and lower than sonar’s, indicating the robust correctness of the scheme. Inference is instantaneous, and training takes 15.5 min (119 particles, 117 iterations) (Table 7). The best parameters (C = 8, γ ≈ 0.0021) indicate strong regularization with a fine-scale RBF kernel adapted to modeling the local nonlinear yield threshold. Notably, the optimal SVC γ is 24 times larger (0.0497), evidence that OCO-PSO explores different solution regions with a preference for a smoother decision boundary, which generalizes better on this agricultural dataset.
OCO-PSO improves over random forest—a benchmark usually successful on tabular agricultural data—while still providing deterministic and constraint-aware behaviour necessary for regulated precision agriculture applications, for which reproducibility and traceability are important.

5.7. Imbalanced Dataset

OCO-PSO attains 98% accuracy, an F1-macro of 0.939, and an MCC of 0.885 (Table 5), exceeding all baseline methods: SVC (F1 = 0.850, MCC = 0.737), SGD (F1 = 0.804, MCC = 0.667), decision tree (F1 = 0.889, MCC = 0.778), and random forest (F1 = 0.889, MCC = 0.778) (Table 6). The method preserves high precision (0.989) and recall (0.90) on the minority class, effectively reducing false positives. The model is very sparse, using only 70 support vectors (versus 239 for SVC) to represent the decision function. The low constraint violation (2.75 × 10⁻³) indicates stable, convergent optimization.
With a configuration of 71 particles and 60 iterations, OCO-PSO training converges in just 2.8 min (Table 7). This computational efficiency not only reduces training time but also appears to promote better generalization and reliability. The optimal hyperparameters (C = 3, γ ≈ 0.57) balance kernel locality (controlled by γ) and model complexity (determined by C), which assists the model’s generalization capability.
The model achieves a strong ROC-AUC of 0.962. While this is slightly lower than the random forest’s 0.991, the random forest’s concurrently lower MCC (0.778 versus 0.885) suggests over-optimistic performance (overfitting) on the test set. In contrast, the OCO-PSO-optimized model shows improved performance across all other evaluated metrics, with particular strengths in error calibration and model sparsity, which are critical advantages for imbalanced classification tasks.
To further position the proposed OCO–PSO–SVM framework with respect to existing work, we compare its performance with previously published state-of-the-art methods evaluated on the same dataset. In particular, two recent studies reported classification accuracies of 0.87 and 0.74 , respectively [30,31]. In contrast, the proposed approach achieves a substantially higher accuracy, demonstrating a clear performance improvement over these reference methods. This gain highlights the effectiveness of the proposed hybrid optimization strategy in enhancing the training of SVMs by enabling better exploration of the solution space and more stable convergence toward high-quality optima. Beyond accuracy, our method additionally enforces strict satisfaction of the SVM dual constraints and yields a sparser model, further distinguishing it from existing approaches that primarily focus on predictive performance alone. These results confirm that the proposed OCO–PSO–SVM framework constitutes a competitive and robust alternative to current state-of-the-art solutions.

5.8. Comparison with PSO

Table 8 reports a direct comparison between the proposed OCO–PSO framework and a standard PSO-based SVM solver under identical experimental settings. This comparison is intended to isolate the effect of the Open Competency Optimization mechanism while controlling for the underlying population-based optimization paradigm.
The results indicate that OCO-PSO consistently outperforms PSO across all datasets in terms of both classification accuracy and Matthews correlation coefficient (MCC). On the agricultural yield dataset, OCO-PSO improves accuracy from 0.83 to 0.89, while the MCC increases substantially from 0.67 to 0.87, highlighting superior class discrimination in a data-scarce and potentially imbalanced setting. Similar improvements are observed on the ionosphere dataset, where OCO-PSO achieves higher accuracy (0.95 vs. 0.91) and MCC (0.91 vs. 0.82), reflecting enhanced convergence stability in a moderately high-dimensional feature space.
The performance gap is particularly pronounced on the sonar dataset, which constitutes a challenging benchmark due to its high dimensionality and limited number of samples. In this case, OCO–PSO yields an improvement of approximately 9 percentage points in accuracy and nearly 19 points in MCC relative to PSO, underscoring the effectiveness of the proposed constraint-preserving paired-coordinate updates and diversity-aware learning mechanisms. Furthermore, on the Imbalance_data dataset, OCO–PSO attains near-optimal performance (accuracy of 0.98 and MCC of 0.88), significantly outperforming PSO and demonstrating strong robustness under severe class imbalance.
Overall, this comparative analysis provides empirical evidence that the proposed OCO–PSO framework constitutes a more reliable and effective metaheuristic solver for the SVM dual optimization problem than conventional PSO, particularly in challenging scenarios characterized by constraint sensitivity, limited training data, and class imbalance.

5.9. Statistical Analysis

This section presents a statistical analysis of the proposed OCO-PSO model against SVC, SGD, decision tree, and random forest. The analysis follows best practices for statistical validation in machine learning, integrating three complementary components: nonparametric bootstrap distributions, paired Student’s t-tests with Holm–Bonferroni correction, and Wilcoxon signed-rank tests.

5.9.1. Methodological Framework

Bootstrap Distribution Analysis ( n = 1000 )

A nonparametric bootstrap with 1000 resamples was applied to estimate the empirical distributions of accuracy, F1-score, MCC, and ROC-AUC. These provide robust estimates of central tendency, variance, and model stability under sampling uncertainty. The resulting box plots (Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6) convey the spread and quartile distribution of each metric and emphasize the stability of OCO-PSO across the different datasets.
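The bootstrap procedure can be sketched as follows; the function name and toy prediction vectors are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def bootstrap_metric(y_true, y_pred, metric, n_boot=1000, seed=42):
    """Empirical distribution of a held-out metric under resampling
    with replacement (nonparametric bootstrap)."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    out = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # one bootstrap resample of the test set
        out[b] = metric(y_true[idx], y_pred[idx])
    return out

y_true = np.array([0, 1, 1, 0, 1, 1, 0, 1] * 10)
y_pred = np.array([0, 1, 0, 0, 1, 1, 0, 1] * 10)   # point accuracy 0.875
dist = bootstrap_metric(y_true, y_pred, accuracy_score)
ci = np.percentile(dist, [2.5, 97.5])    # 95% percentile interval
```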

Significance Tests and Correction for Multiple Comparisons

To analyze the performance differences between OCO-PSO and the reference models, we performed
  • paired t-tests of mean performance differences;
  • Wilcoxon signed-rank tests as a robust, distribution-free analogue.
Corrections for multiple comparisons used the Holm–Bonferroni procedure over 16 total comparisons (4 metrics × 4 models), with significance level α = 0.05. Holm-adjusted p-values (p_Holm) are reported together with the unadjusted ones in Table 9, Table 10, Table 11, Table 12 and Table 13.

Measuring the Size of the Effect

Effect sizes of observed differences are expressed as Cohen’s d for paired samples, interpreted as |d| < 0.2 (negligible), 0.2–0.5 (small), 0.5–0.8 (medium), and ≥0.8 (large). Each performance table (Table 9, Table 10, Table 11, Table 12 and Table 13) also reports the effect sizes of OCO-PSO relative to its competitors.
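The three components of this protocol can be sketched as follows (the function names are ours; the hypothetical `a`/`b` vectors stand for bootstrap metric values of two models on the same resamples).

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

def paired_comparison(a, b):
    """Paired t-test, Wilcoxon signed-rank test, and Cohen's d for paired samples."""
    d = a - b
    t_p = ttest_rel(a, b).pvalue
    w_p = wilcoxon(a, b).pvalue
    cohen_d = d.mean() / d.std(ddof=1)   # paired-samples effect size
    return t_p, w_p, cohen_d

def holm_correction(pvals):
    """Holm-Bonferroni step-down adjustment of m raw p-values."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    adj = np.empty(m)
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])  # enforce monotonicity
        adj[i] = min(1.0, running)
    return adj
```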

5.9.2. Analysis Results

Dataset 1: Diabetes

Table 9 presents the significant differences between OCO-PSO and the reference models. OCO-PSO is competitive but performs worse than SGD in precision and ROC-AUC (d = −0.57 and −1.03; p_Holm < 0.001) and worse than SVC in ROC-AUC (d = −0.96). It clearly outperforms the decision tree in ROC-AUC (Δ = +0.0752, d = +1.94), F1-macro, and MCC. The bootstrap distributions (Figure 2) show that, despite this mixed performance, OCO-PSO remains a stable algorithm on this dataset.
Table 9. Significant differences in performance metrics for various models relative to the baseline OCO-PSO (dataset: diabetes).

| Metric | Model | Δ mean | Cohen’s d | p-Value | p_Holm | Sig. |
|---|---|---|---|---|---|---|
| Precision | SGD | −0.0188 | −0.57 | <0.001 | <0.001 | Yes |
| F1-Macro | Decision Tree | +0.0346 | +0.70 | <0.001 | <0.001 | Yes |
| MCC | Decision Tree | +0.0693 | +0.70 | <0.001 | <0.001 | Yes |
| ROC-AUC | SGD | −0.0167 | −1.03 | <0.001 | <0.001 | Yes |
| ROC-AUC | SVC | −0.0184 | −0.96 | <0.001 | <0.001 | Yes |
| ROC-AUC | Decision Tree | +0.0752 | +1.94 | <0.001 | <0.001 | Yes |

Note: Δ mean represents the difference in the mean metric value (OCO-PSO − model). All significant results pass the Holm correction procedure.
Figure 2. Bootstrap distributions of metrics ( n = 1000 )—diabetes dataset.

Dataset 2: Imbalanced Data

As presented in Table 10, the tree-based models (decision tree and random forest) are substantially better than OCO-PSO on all metrics (|d| ≈ 1.36–1.38), demonstrating the benefit of ensembles under class imbalance. OCO-PSO nevertheless attains a higher ROC-AUC than the decision tree ( Δ = + 0.0549 ; d = + 1.58 ), indicating better probability calibration. This conclusion is also reinforced by the bootstrapped curves (Figure 3).
Table 10. Significant differences in performance metrics for various models relative to the baseline OCO-PSO (dataset 2: imbalanced data).
Metric | Model | Δ mean | Cohen’s d | p-Value | p Holm | Sig.
Precision | Decision Tree | −0.0194 | −1.38 | <0.001 | <0.001 | Yes
Precision | Random Forest | −0.0194 | −1.38 | <0.001 | <0.001 | Yes
F1-Macro | Decision Tree | −0.0608 | −1.36 | <0.001 | <0.001 | Yes
F1-Macro | Random Forest | −0.0608 | −1.36 | <0.001 | <0.001 | Yes
MCC | Decision Tree | −0.1204 | −1.37 | <0.001 | <0.001 | Yes
MCC | Random Forest | −0.1204 | −1.37 | <0.001 | <0.001 | Yes
ROC-AUC | Decision Tree | +0.0549 | +1.58 | <0.001 | <0.001 | Yes
Note: Δ mean is the difference in the mean metric value (model − OCO-PSO). All significant results pass the Holm correction procedure.
Figure 3. Bootstrap distributions of metrics ( n = 1000 )—imbalanced data.

Dataset 3: Ionosphere

Table 11 shows that OCO-PSO achieves exceptional performance, significantly outperforming the gradient-based and decision tree models with very large effect sizes (d = 2.59–3.19). The gains over SGD are among the largest in the entire study (e.g., |Δ| = 0.3113 and |d| = 3.17 for MCC). The bootstrap distributions (Figure 4) confirm that OCO-PSO offers both high accuracy and minimal variance, demonstrating excellent generalization in high-dimensional nonlinear settings.
Figure 4. Bootstrap distributions of metrics ( n = 1000 )—ionosphere dataset.
Table 11. Significant differences in performance metrics for various models relative to the baseline OCO-PSO (dataset 3: ionosphere).
Metric | Model | Δ mean | Cohen’s d | p-Value | p Holm | Sig.
Precision | SGD | −0.1424 | −3.14 | <0.001 | <0.001 | Yes
Precision | Decision Tree | −0.0858 | −2.63 | <0.001 | <0.001 | Yes
F1-Macro | SGD | −0.1576 | −3.19 | <0.001 | <0.001 | Yes
F1-Macro | Decision Tree | −0.1004 | −2.59 | <0.001 | <0.001 | Yes
MCC | SGD | −0.3113 | −3.17 | <0.001 | <0.001 | Yes
MCC | Decision Tree | −0.1910 | −2.65 | <0.001 | <0.001 | Yes
ROC-AUC | SGD | −0.0648 | −1.26 | <0.001 | <0.001 | Yes
ROC-AUC | Decision Tree | −0.1038 | −2.17 | <0.001 | <0.001 | Yes
ROC-AUC | SVC | +0.0178 | +0.95 | <0.001 | <0.001 | Yes
Note: Δ mean is the difference in the mean metric value (model − OCO-PSO). All significant results pass the Holm correction procedure.

Dataset 4: Sonar

As shown in Table 12, OCO-PSO significantly outperforms SGD and decision trees on all metrics (d up to 2.59). It also surpasses random forest in terms of area under the ROC curve ( Δ = + 0.0758 ; d = + 1.57 ). Bootstrap plots (Figure 5) reveal that OCO-PSO combines strong predictive power with high stability, comparable to ensemble methods, while being conceptually simpler.
Figure 5. Bootstrap distributions of metrics ( n = 1000 )—sonar dataset.
Table 12. Significant differences in performance metrics for various models relative to the baseline OCO-PSO (dataset 4: sonar).
Metric | Model | Δ mean | Cohen’s d | p-Value | p Holm | Sig.
Precision | SGD | −0.0953 | −1.44 | <0.001 | <0.001 | Yes
Precision | Decision Tree | −0.1671 | −1.86 | <0.001 | <0.001 | Yes
F1-Macro | SGD | −0.1058 | −1.57 | <0.001 | <0.001 | Yes
F1-Macro | Decision Tree | −0.1740 | −1.91 | <0.001 | <0.001 | Yes
MCC | SGD | −0.1759 | −1.39 | <0.001 | <0.001 | Yes
MCC | Decision Tree | −0.3365 | −1.86 | <0.001 | <0.001 | Yes
ROC-AUC | SGD | −0.1474 | −2.59 | <0.001 | <0.001 | Yes
ROC-AUC | Decision Tree | −0.0856 | −1.19 | <0.001 | <0.001 | Yes
ROC-AUC | SVC | +0.0444 | +1.25 | <0.001 | <0.001 | Yes
ROC-AUC | Random Forest | +0.0758 | +1.57 | <0.001 | <0.001 | Yes
Note: Δ mean is the difference in the mean metric value (model − OCO-PSO). All significant results pass the Holm correction procedure.

Dataset 5: Agricultural Yield Prediction

Table 13 identifies OCO-PSO as the most effective method on this dataset. In particular, the exceptional improvement in ROC-AUC over the decision tree ( Δ = + 0.1421 ; d = + 4.65 ) is noteworthy, representing the largest difference observed in the entire study. Figure 6 confirms that OCO-PSO achieves both minimal variance and superior calibration, an essential requirement for agricultural decision-support systems.
Table 13. Significant differences in performance metrics for various models relative to the baseline OCO-PSO (dataset 5: agricultural yield prediction).
Metric | Model | Δ mean | Cohen’s d | p-Value | p Holm | Sig.
Precision | SGD | 0.0087 | 0.61 | <0.001 | <0.001 | Yes
Precision | Decision Tree | 0.0835 | 2.57 | <0.001 | <0.001 | Yes
Precision | Random Forest | 0.0343 | 1.20 | <0.001 | <0.001 | Yes
F1-Macro | SGD | 0.0088 | 0.62 | <0.001 | <0.001 | Yes
F1-Macro | Decision Tree | 0.0840 | 2.58 | <0.001 | <0.001 | Yes
F1-Macro | Random Forest | 0.0364 | 1.22 | <0.001 | <0.001 | Yes
MCC | SGD | 0.0169 | 0.60 | <0.001 | <0.001 | Yes
MCC | Decision Tree | 0.1675 | 2.57 | <0.001 | <0.001 | Yes
MCC | Random Forest | 0.0654 | 1.28 | <0.001 | <0.001 | Yes
ROC-AUC | SGD | +0.0137 | +1.51 | <0.001 | <0.001 | Yes
ROC-AUC | Decision Tree | 0.1421 | 4.65 | <0.001 | <0.001 | Yes
ROC-AUC | Random Forest | 0.0110 | 0.72 | <0.001 | <0.001 | Yes
Figure 6. Bootstrap distributions of metrics ( n = 1000 )—agricultural yield dataset.

6. Discussion of Results

Across five distinct datasets (medical, agricultural, signal processing, and imbalanced synthetic data), OCO-PSO achieved overall superior performance in discrimination, fairness, parsimony, and constraint stability. Its performance is validated by rigorous statistical testing based on 1000 bootstrap replications, paired Student’s t-tests, Wilcoxon signed-rank tests, and a Holm–Bonferroni correction, ensuring both the statistical and practical relevance of the observed improvements.

6.1. Discriminatory Performance and Fairness

OCO-PSO strikes a favorable balance between overall accuracy and class-level fairness (F1-macro, MCC) across diverse benchmark datasets. On Imbalanced_data (extreme 9:1 class imbalance), OCO-PSO achieves a high MCC ( 0.885 ) and minority recall ( 0.90 ), though tree-based ensemble methods deliver substantially better overall performance (|d| ≈ 1.36–1.38), confirming the advantage of ensembles under extreme class imbalance. OCO-PSO nonetheless shows better ROC-AUC calibration than individual decision trees ( Δ = + 0.0549 ; d = + 1.58 ).
OCO-PSO performs especially well on balanced to moderately imbalanced datasets. On agricultural yield (1.12:1), it attains the best accuracy ( 89.17 % ) and the highest MCC ( 0.786 ), with an exceptional ROC-AUC improvement over the decision tree ( Δ = + 0.1421 , d = + 4.65 ). On ionosphere (1.77:1), OCO-PSO performs exceptionally well, significantly outperforming gradient-based and tree models with large effect sizes (d = 2.59–3.19). The bootstrap distributions (Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6) demonstrate lower variance and superior calibration. Across the benchmark suite, OCO-PSO achieves optimal or near-optimal performance in around 60 % of comparisons, with all statistically significant differences remaining robust after Holm–Bonferroni correction.

6.2. Parsimony and Interpretability of the Model

The OCO-PSO algorithm builds sparser models than the Bayesian-optimized SVC, reducing the number of support vectors by 7 % to 71 % depending on the dataset ( 39 % on average). This enhances interpretability and eases analysis for domain specialists.
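The sparsity of a kernel machine can be quantified by counting the nonzero Lagrange multipliers, since each one corresponds to a retained support vector. A minimal sketch (the function names and toy α vectors are illustrative, not the paper's code):

```python
import numpy as np

def count_support_vectors(alpha, tol=1e-8):
    """Number of training points with a nonzero dual coefficient (support vectors)."""
    return int(np.sum(np.abs(alpha) > tol))

def sparsity_reduction(alpha_ref, alpha_new, tol=1e-8):
    """Relative reduction in support vectors of `alpha_new` versus `alpha_ref`."""
    n_ref = count_support_vectors(alpha_ref, tol)
    n_new = count_support_vectors(alpha_new, tol)
    return (n_ref - n_new) / n_ref

# toy dual solutions: a denser reference SVC vs. a sparser alternative
alpha_svc = np.array([0.4, 0.0, 1.2, 0.7, 0.0, 0.9, 0.3, 1.0])   # 6 support vectors
alpha_oco = np.array([0.0, 0.0, 1.5, 0.0, 0.0, 1.1, 0.0, 1.3])   # 3 support vectors
print(sparsity_reduction(alpha_svc, alpha_oco))  # → 0.5
```

Applied to the fitted dual vectors of two solvers, this ratio yields reduction figures of the kind reported above (7% to 71%).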

6.3. Respect for Constraints and the Stability of the Optimization

Constraint violations remain small (≲ 10 2 ) on four of the five datasets, with particularly low violations (< 10 3 ) on the simpler problems. The higher violation rate on sonar (≈0.095) reflects the difficulty of maintaining dual-space feasibility in high-dimensional settings.
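Such a violation measure can be computed directly from the dual variables, which must satisfy the box constraints 0 ≤ α_i ≤ C and the equality constraint Σ α_i y_i = 0. The sketch below is one plausible aggregation (sum of box residuals plus the equality residual); the exact metric used in the paper may differ:

```python
import numpy as np

def dual_violation(alpha, y, C):
    """Aggregate violation of the SVM dual constraints:
    box constraints 0 <= alpha_i <= C and equality sum_i alpha_i * y_i = 0."""
    alpha = np.asarray(alpha, dtype=float)
    y = np.asarray(y, dtype=float)
    box = np.maximum(0.0, -alpha) + np.maximum(0.0, alpha - C)  # per-coordinate box violation
    eq = abs(float(alpha @ y))                                   # equality-constraint residual
    return eq + box.sum()

# toy dual point: alpha @ y = 0.5 - 0.5 + 1.0 - 1.0 = 0 and all alpha_i in [0, C]
alpha = np.array([0.5, 0.5, 1.0, 1.0])
y = np.array([+1, -1, +1, -1])
print(dual_violation(alpha, y, C=1.0))  # feasible point → 0.0
```

A pairwise update that moves α_i and α_j in opposite directions (scaled by the labels) keeps the equality residual at zero by construction, which is the motivation for the paired-coordinate exploitation step in the proposed framework.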

6.4. Computational Cost and Practical Considerations

Training times ranging from 2.0 to 24.0 min are a consequence of the PSO-based hyperparameter search, but the resulting models exhibit strong generalization stability, calibrated error distributions, and compactness. Inference time is low (0 to 6.4 ms), so the trained models can be used on-the-fly.

6.5. Limitations and Comparative Context

The OCO-PSO algorithm does not consistently offer better raw accuracy. For example, on diabetes, SGD achieves higher accuracy ( 74.14 % vs. 73.28 % ), but OCO-PSO exhibits a higher MCC ( 0.454 vs. 0.404 ), indicating better class-level calibration. This distinction underscores the importance of fairness-oriented metrics when the operational context involves asymmetric error costs.
In summary, the combined analysis (bootstrapping, paired significance tests, effect size estimation, and correction for multiple tests) demonstrates that the OCO-PSO algorithm delivers substantial gains in fairness, parsimony, calibration, and constraint satisfaction. It is well suited to applications that demand interpretability alongside reproducibility, such as medical diagnostics, precision agriculture, and safety-critical systems, where fair decision-making is crucial.

7. Conclusions

We proposed a hybrid supervised learning method, OCO-PSO, combining Open Competency Optimization with particle swarm dynamics for the dual formulation of SVMs. We conducted experiments on five benchmark datasets covering medical diagnosis (diabetes), agricultural yield prediction (crop yield), signal processing (sonar and ionosphere), and extremely imbalanced classification (Imbalanced_data).
The approach exhibits benefits in terms of model sparsity, class-level calibration, and constraint satisfaction. The reduction of 7 % to 71 % in support vectors relative to the standard SVC enhances interpretability. On three of five datasets (ionosphere, sonar, agricultural yield), it reaches or surpasses the best F1-macro and MCC performances (Table 14), confirming convergence of the dual-space optimization with constraint violations below 1 % . Performance gains are statistically significant on several datasets (p < 0.01) and remain stable after robust multiple-testing correction.
Bootstrap analysis shows that OCO-PSO is characterized by low variance and stable ROC-AUC calibration, in contrast to simpler gradient-based techniques. While ensemble methods (random forest) demonstrate advantages in the extremely imbalanced scenario (9:1 ratio), OCO-PSO performs significantly better for balanced to moderately imbalanced problems while preserving the interpretability of the kernel structure. The higher training cost ( 2.6 min to 24 min, depending on the dataset) is warranted when improved decision quality, interpretability, reproducibility, and fairness are paramount, factors that are indispensable in regulated applications such as medical diagnostics, precision agriculture, or safety-critical systems.

7.1. Future Directions

A natural extension is to reformulate OCO-PSO as a multiobjective optimization (MOO) framework. The current approach addresses margin maximization, constraint satisfaction, parsimony, and robustness through a single unified fitness function; a MOO formulation would explicitly expose these trade-offs, producing a Pareto front of jointly optimal classifiers. This would allow practitioners to select solutions adapted to specific operational constraints (for example, a maximum support vector count for embedded deployment, or a minimum MCC for imbalanced data). Additional research directions include (1) adaptive constraint management for improved stability in high-dimensional spaces (addressing the high KKT violation on sonar), (2) warm-start strategies leveraging standard SVM solutions to reduce training time, and (3) theoretical analysis of convergence guarantees under OCO’s constraint-preserving updates.
In future work, we aim to extend the proposed OCO-PSO framework to multiclass Support Vector Machines. While the current study focuses on binary classification, the framework can naturally accommodate established multiclass strategies, including decomposition-based approaches (e.g., One vs. One, One vs. All) and native multiclass formulations (e.g., Crammer–Singer and DAGSVM).
Integrating these strategies will broaden the applicability of OCO-PSO to a wider range of prediction tasks, enhancing its scalability and predictive performance for complex multiclass problems in domains such as agricultural yield forecasting.
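As an illustration of the decomposition strategies mentioned above, scikit-learn's multiclass wrappers can turn any binary classifier into a multiclass one. In this sketch a plain RBF SVC stands in for an OCO-PSO-trained binary SVM, and the Iris data is used purely for demonstration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# stand-in for the OCO-PSO-trained binary SVM
base = SVC(kernel="rbf", C=1.0, gamma="scale")

ovo = OneVsOneClassifier(base).fit(X_tr, y_tr)   # k(k-1)/2 binary problems
ova = OneVsRestClassifier(base).fit(X_tr, y_tr)  # k binary problems

print(len(ovo.estimators_), len(ova.estimators_))  # 3 classes → 3 OvO and 3 OvA models
print(round(ovo.score(X_te, y_te), 2), round(ova.score(X_te, y_te), 2))
```

Since both wrappers only require a `fit`/`predict` binary estimator, plugging in an OCO-PSO-based binary SVM would, in principle, require no change to this decomposition logic.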

7.2. Considerations for Crop Yield Prediction

In practical agricultural applications, the choice of multiclass strategy depends on the number of defined yield categories and the underlying data distribution. One vs. One generally offers strong performance when classes are relatively balanced, whereas One vs. All is more effective with a large number of classes or severe class imbalance, conditions that frequently arise in agricultural yield prediction. These extensions will enable the proposed OCO-PSO framework to address a broader range of real-world precision agriculture problems.
In summary, OCO-PSO represents a principled alternative for scenarios where reliability, interpretability, and strict constraint compliance outweigh training speed, an increasingly relevant trade-off in responsible AI deployment.
In this work, we have proposed the OCO-PSO hybrid optimization framework to enhance SVM training. Compared to standard SVM solvers such as SMO and SGD, and to widely used classifiers such as decision trees and random forests, OCO-PSO offers several advantages: it efficiently explores the solution space, avoids local minima, and strictly enforces the dual constraints on the Lagrange multipliers. These features yield improved predictive performance, sparser models, and better interpretability. The main limitation is the higher computational cost of training due to constrained global optimization, although prediction remains highly efficient. By combining global search with adaptive learning and strict constraint enforcement, the approach provides a robust alternative for classification tasks even on small- to medium-sized datasets. These findings highlight the potential of hybrid metaheuristic optimization for improving the performance of SVMs across diverse classification problems.

Author Contributions

Conceptualization, K.J. and K.N.; methodology, K.J. and K.N.; software, K.N.; validation, K.J., K.N., and S.R.; formal analysis, K.N.; investigation, K.N.; resources, K.J., K.N., and S.R.; writing—original draft preparation, K.J. and K.N.; writing—review and editing, K.J. and K.N.; supervision, K.J.; project administration, K.J.; funding acquisition, K.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

https://www.kaggle.com/search?q=crop-+yield-prediction-dataset (accessed on 1 January 2026) and https://archive.ics.uci.edu/ (accessed on 1 January 2020).

Conflicts of Interest

The authors K. Jebari, K. Nejjar, and S. Rekiek declare no conflicts of interest.

Figure 1. Overall workflow of the proposed OCO-PSO algorithm. The framework integrates PSO-based global exploration, paired-coordinate local exploitation for exact constraint handling, and periodic OCO-based diversification to preserve swarm diversity.
Table 1. Dataset characteristics summary.
Dataset | Domain | Samples × Feat. | Class Distribution (neg:pos) | Type
Diabetes | Medical | 768 × 8 | 500:268 (65.0%:35.0%) | Real
Agricultural Yield | Agriculture | 599 × 6 | 317:282 (52.9%:47.1%) | Synthetic
Ionosphere | Radar/Remote Sensing | 351 × 34 | 225:126 (64.1%:35.9%) | Real
Sonar | Signal Processing | 208 × 60 | 89:77 (53.6%:46.4%) | Real
Imbalance_data | Synthetic Data | 500 × 2 | 450:50 (90.0%:10.0%) | Synthetic
Table 3. Summary of PSO algorithm parameters.
Category | Parameter | Description/Value
Swarm Settings | n_particles, max_iter | Optimized via Bayesian search together with C and γ
 | Kernel | RBF (Radial Basis Function)
Inertia Dynamics | w_start | 0.95 (maximum initial exploration)
 | w_end | 0.3 (precise final exploitation)
Acceleration Factors | Cognitive (c1) | 2.0 (attraction to local best p_best)
 | Social (c2) | 2.0 (attraction to global best g_best)
Movement Constraints | Max velocity (v_max) | 0.25 × C (kinetic dynamics regulation)
Local Optimization | max_pso_local_iter | 25 (pairwise update iterations in local loop)
 | tol_kkt | 0.001 (KKT conditions tolerance)
Convergence Criteria | ε_fitness | 1e−6 (objective function precision threshold)
 | Convergence_iter | 15 (stagnation limit before termination)
Table 4. Summary of OCO parameters, activated every 30 iterations.
Strategy (Function) | Hyperparameter | Symbol/Value | Role and Impact
S1: Self-Learning (apply_self_learning) | Mutation probability | [0.3, 0.9] | Rank-based adjustment (higher for low-performing particles).
 | Activation interval | [0.01C, 0.1C] | Intensity of pairwise dimension reactivation.
 | Transfer rate | 20% (Δα) | Proportion of energy shifted between pairs.
 | Cooldown period | 20 iterations | Suspension of dimensions failing to improve.
 | Velocity noise | σ = 0.01 | Gaussian noise to maintain kinetic dynamics.
S2: Peer Learning (peer_crossover) | Crossover probability | P_peer = 0.7 | Frequency of information exchange between neighbors.
 | Mixing factor | w ∈ [0.3, 0.7] | Balance of interpolation between two particles.
 | Source selection | 50% chance | Probability for each pair to inherit from a neighbor.
S3: Leadership Interaction (leadership_crossover) | Direction probability | P_leader = 0.8 | Frequency of attraction toward the global best.
 | Distance threshold | τ_D = 0.1 | Toggle between global attraction and local refinement.
 | Mixing (distant) | w ∈ [0.6, 0.9] | Acceleration toward the leader if the particle is far.
 | Mixing (close) | w ∈ [0.2, 0.4] | Fine exploitation if the particle is nearby.
 | Pair ratio | 90% | Proportion of dimensions oriented toward the leader.
Global Control | OCO frequency | 30 iterations | Time interval between two learning phases.
 | Soft elitism | w ∈ [0.6, 0.9] | Mixing intensity used to rescue the worst particle.
 | Convergence threshold | ε = 1e−6 | Minimum fitness precision to validate progress.
 | Stagnation stop | 15 iterations | Cycle limit without improvement before termination.
Table 8. Performance comparison between PSO and OCO-PSO across different datasets.
Dataset | Algorithm | Accuracy | MCC
Agricultural Yield | PSO | 0.83 | 0.67
 | OCO-PSO | 0.89 | 0.87
Ionosphere | PSO | 0.91 | 0.82
 | OCO-PSO | 0.95 | 0.91
Sonar | PSO | 0.76 | 0.52
 | OCO-PSO | 0.85 | 0.71
Imbalance_data | PSO | 0.74 | 0.69
 | OCO-PSO | 0.98 | 0.88
Table 14. Extended results of OCO-PSO and baseline models across all datasets.
Metric/Model | Imbalance_Data | Sonar | Agric_Yield_Data | Diabetes | Ionosphere
OCO-PSO
Accuracy | 0.980 | 0.857 | 0.892 | 0.733 | 0.957
F1-Macro | 0.939 | 0.857 | 0.890 | 0.720 | 0.954
Precision | 0.989 | 0.859 | 0.899 | 0.717 | 0.950
Recall | 0.900 | 0.859 | 0.887 | 0.737 | 0.958
MCC | 0.885 | 0.718 | 0.786 | 0.454 | 0.908
Train (s) | 169 | 1609 | 321 | 443 | 225
Pred (s) | 0.0022 | 0.0000 | 0.0000 | 0.0060 | 0.0013
Viol. | 0.00275 | 0.09461 | 0.01043 | 0.00025 | 0.0064
C | 3.00 | 10.00 | 8.00 | 6.00 | 2.00
γ | 0.570 | 0.028 | 0.0021 | 0.0033 | 0.062
SVC
Accuracy | 0.930 | 0.833 | 0.875 | 0.707 | 0.943
F1-Macro | 0.850 | 0.831 | 0.874 | 0.658 | 0.938
Precision | 0.794 | 0.841 | 0.785 | 0.672 | 0.938
Recall | 0.961 | 0.830 | 0.872 | 0.652 | 0.938
MCC | 0.737 | 0.671 | 0.750 | 0.323 | 0.876
Train (s) | 6.07 | 37.13 | 36.64 | 9.09 | 22.29
Pred (s) | 0.000 | 0.006 | 0.000 | 0.009 | 0.000
Viol. |
C | 5.22 | 10.00 | 3.33 | 1.26 | 10.00
γ | 0.080 | 0.027 | 0.050 | 0.027 | 0.012
SGDClassifier
Accuracy | 0.900 | 0.738 | 0.875 | 0.741 | 0.886
F1-Macro | 0.804 | 0.725 | 0.874 | 0.698 | 0.876
Precision | 0.750 | 0.768 | 0.876 | 0.715 | 0.876
Recall | 0.944 | 0.730 | 0.873 | 0.690 | 0.876
MCC | 0.667 | 0.496 | 0.749 | 0.404 | 0.751
Train (s) | 0.265 | 58.51 | 48.23 | 2.01 | 35.30
Pred (s) | 0.000 | 0.003 | 0.000 | 0.000 | 0.000
Viol. |
C | 0.00
γ | 0.00
Decision Tree
Accuracy | 0.960 | 0.690 | 0.800 | 0.681 | 0.886
F1-Macro | 0.889 | 0.686 | 0.799 | 0.635 | 0.870
Precision | 0.889 | 0.693 | 0.799 | 0.642 | 0.891
Recall | 0.889 | 0.686 | 0.799 | 0.632 | 0.858
MCC | 0.778 | 0.379 | 0.598 | 0.274 | 0.748
Train (s) | 0.447 | 39.44 | 48.85 | 2.53 | 35.64
Pred (s) | 0.010 | 0.004 | 0.000 | 0.009 | 0.000
Viol. |
C | 0.00
γ | 0.00
Random Forest
Accuracy | 0.960 | 0.857 | 0.850 | 0.707 | 0.957
F1-Macro | 0.889 | 0.852 | 0.847 | 0.652 | 0.954
Precision | 0.889 | 0.893 | 0.858 | 0.672 | 0.950
Recall | 0.889 | 0.850 | 0.845 | 0.646 | 0.958
MCC | 0.778 | 0.742 | 0.702 | 0.317 | 0.908
Train (s) | 2.135 | 60.78 | 53.64 | 6.01 | 40.80
Pred (s) | 0.016 | 0.020 | 0.000 | 0.024 | 0.016
Viol. |
C | 0.00
γ | 0.00

Share and Cite

MDPI and ACS Style

Nejjar, K.; Jebari, K.; Rekiek, S. A Robust Hybrid Metaheuristic Framework for Training Support Vector Machines. Algorithms 2026, 19, 70. https://doi.org/10.3390/a19010070
