Support Vector Machines (SVMs) are widely used in critical decision-making applications, such as precision agriculture, owing to their strong theoretical foundations and their ability to construct an optimal separating hyperplane in high-dimensional spaces. However, the effectiveness of SVMs depends heavily on the efficiency of the optimization algorithm used to solve their underlying dual problem, which is constrained and often complex. Classical solvers present inherent limitations: Sequential Minimal Optimization (SMO) ensures numerical stability but lacks scalability and is sensitive to heuristic choices, while Stochastic Gradient Descent (SGD) scales well but suffers from unstable convergence and limited suitability for nonlinear kernels. To address these challenges, this study proposes a novel hybrid optimization framework based on Open Competency Optimization and Particle Swarm Optimization (OCO-PSO) to enhance the training of SVMs. The proposed approach combines the global exploration capability of PSO with the adaptive, competency-based learning mechanism of OCO, enabling efficient exploration of the solution space, avoidance of local minima, and strict enforcement of the dual constraints on the Lagrange multipliers. Across multiple datasets spanning medical (diabetes), agricultural yield, signal processing (sonar and ionosphere), and imbalanced synthetic data, the proposed OCO-PSO-SVM consistently outperforms classical SVM solvers (SMO and SGD) as well as widely used classifiers, including decision trees and random forests, in terms of accuracy, macro-F1-score, Matthews correlation coefficient (MCC), and ROC-AUC. On the Ionosphere dataset, OCO-PSO matches the accuracy of random forest while offering superior interpretability through its kernel-based structure.
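For reference, the constrained dual problem mentioned above takes the standard soft-margin form (generic notation, not necessarily the exact parameterization used in the article):

\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{subject to} \quad 0 \le \alpha_i \le C, \;\; i = 1, \dots, n, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0,

where the \alpha_i are the Lagrange multipliers, C the regularization parameter, and K the kernel function; the box and equality constraints above are precisely the dual constraints that the hybrid optimizer must keep satisfied.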
In addition, the proposed method yields a sparser model, with only 66 support vectors compared to 71 for the standard SVC (a reduction of approximately 7%), while strictly satisfying the dual constraints with near-zero violation.
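To illustrate how a swarm-based optimizer can keep candidate solutions feasible and how such a violation figure can be measured, the sketch below uses a simplified alternating-projection repair step (assumed function names and repair scheme; the article's exact mechanism may differ): each candidate multiplier vector is clipped to the box [0, C] and then shifted to cancel the residual of the equality constraint, whose absolute value is the reported violation.

import numpy as np

def project_onto_dual_feasible_set(alpha, y, C, n_iters=50):
    """Repair a candidate alpha so that 0 <= alpha_i <= C and sum_i alpha_i * y_i is near zero.
    Simple alternating projections; illustrative only. Assumes labels y_i in {-1, +1}."""
    alpha = np.clip(alpha, 0.0, C)
    for _ in range(n_iters):
        residual = float(alpha @ y)            # equality-constraint violation, sum_i alpha_i * y_i
        alpha = alpha - residual * y / y.size  # project onto the hyperplane {alpha : alpha . y = 0}
        alpha = np.clip(alpha, 0.0, C)         # re-enter the box [0, C]^n
        if abs(float(alpha @ y)) < 1e-12:
            break
    return alpha

def dual_objective(alpha, y, K):
    """Soft-margin SVM dual objective W(alpha), used as the fitness each particle maximizes."""
    Q = (y[:, None] * y[None, :]) * K          # Q_ij = y_i * y_j * K(x_i, x_j)
    return alpha.sum() - 0.5 * alpha @ Q @ alpha

In a PSO-style loop, each particle's position would be repaired this way after every velocity update, before its fitness is evaluated.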
Notably, the optimal hyperparameters identified by OCO-PSO differ substantially from those obtained via Bayesian optimization for SVC, indicating that the proposed approach explores alternative yet equally effective regions of the hypothesis space. The statistical significance and robustness of these improvements are confirmed through extensive validation using 1000 bootstrap replications, paired Student's
t-tests, Wilcoxon signed-rank tests, and Holm–Bonferroni correction. These results demonstrate that the proposed metaheuristic hybrid optimization framework constitutes a reliable, interpretable, and scalable alternative for training SVMs in complex and high-dimensional classification tasks.
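To make the validation protocol concrete, the following sketch shows how 1000 bootstrap replications, the two paired tests, and a Holm-Bonferroni correction can be combined with NumPy and SciPy; the per-fold scores are placeholder values, not results from the article.

import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(0)

def bootstrap_mean_diff(scores_a, scores_b, n_boot=1000):
    """Percentile bootstrap 95% CI for the mean per-fold score difference (a - b)."""
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    boots = [rng.choice(diff, size=diff.size, replace=True).mean() for _ in range(n_boot)]
    return np.percentile(boots, [2.5, 97.5])

def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm procedure: returns a rejection decision for each hypothesis."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    reject = np.zeros(p.size, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (p.size - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values are retained
    return reject

# Placeholder per-fold accuracies for the proposed model and a baseline solver.
ocopso_scores = np.array([0.94, 0.92, 0.95, 0.93, 0.96])
smo_scores    = np.array([0.90, 0.885, 0.93, 0.905, 0.91])

ci = bootstrap_mean_diff(ocopso_scores, smo_scores, n_boot=1000)
p_t = ttest_rel(ocopso_scores, smo_scores).pvalue
p_w = wilcoxon(ocopso_scores, smo_scores).pvalue
print("95% bootstrap CI for the accuracy difference:", ci)
print("Holm-corrected rejections (t-test, Wilcoxon):", holm_bonferroni([p_t, p_w]))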