Search Results (281)

Search Parameters:
Keywords = iterative method with memory

27 pages, 1249 KB  
Article
Autoregressive and Residual Index Convolution Model for Point Cloud Geometry Compression
by Gerald Baulig and Jiun-In Guo
Sensors 2026, 26(4), 1287; https://doi.org/10.3390/s26041287 - 16 Feb 2026
Viewed by 143
Abstract
This study introduces a hybrid point cloud compression method that transfers from octree nodes to voxel occupancy estimation to find its lower-bound bitrate by using a Binary Arithmetic Range Coder. In previous work, we demonstrated that our entropy compression model based on index convolution achieves promising performance while maintaining low complexity. However, our previous model lacks an autoregressive approach, which is apparently indispensable to compete with the current state of the art in compression performance. Therefore, we adapt an autoregressive grouping method that iteratively populates, explores, and estimates the occupancy of 1-bit voxel candidates in a more discrete fashion. Furthermore, we refactored our backbone architecture by adding a distiller layer on each convolution, forcing every hidden feature to contribute to the final output. Our proposed model extracts local features using lightweight 1D convolution applied in varied ordering and analyzes causal relationships by optimizing the cross-entropy. This approach efficiently replaces the voxel convolution techniques and attention models used in previous works, providing significant improvements in both time and memory consumption. The effectiveness of our model is demonstrated on three datasets, where it outperforms recent deep learning-based compression models in this field. Full article
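The entropy model described above bounds the achievable bitrate by the cross-entropy between the predicted occupancy probabilities and the true 1-bit occupancies, a bound that a binary arithmetic range coder can approach. A minimal sketch of that bound (function and argument names are illustrative, not the paper's code):

```python
import numpy as np

def bitrate_lower_bound(occupancy, probs, eps=1e-12):
    """Cross-entropy lower bound, in bits, that a binary arithmetic
    range coder can approach when coding 1-bit voxel occupancies with
    the model's predicted probabilities."""
    occ = np.asarray(occupancy, dtype=float)
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)
    bits = -(occ * np.log2(p) + (1.0 - occ) * np.log2(1.0 - p))
    return float(bits.sum())

# An uninformative model (p = 0.5 everywhere) needs 1 bit per voxel;
# sharper correct predictions push the bound toward zero.
uniform = bitrate_lower_bound([1, 0, 1, 1], [0.5, 0.5, 0.5, 0.5])   # 4.0
```

A better occupancy predictor lowers this sum directly, which is why the paper optimizes the cross-entropy of its model.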
30 pages, 5422 KB  
Article
Iterative Learning Bipartite Consensus Control for Fractional-Order Switched Nonlinear Heterogeneous MASs with Cooperative and Antagonistic Interactions
by Song Yang and Siyuan Chen
Fractal Fract. 2026, 10(2), 98; https://doi.org/10.3390/fractalfract10020098 - 2 Feb 2026
Viewed by 337
Abstract
The coordination of switched fractional-order nonlinear heterogeneous multi-agent systems (FONHMASs) with cooperative and antagonistic interactions presents significant challenges due to the complex coupling of switched fractional-order dynamics. Crucially, existing control methods typically rely on integer-order assumptions and precise system modeling, which are inadequate for capturing the inherent non-local memory behaviors of fractional dynamics. Furthermore, they generally assume fixed agent dynamics, and cannot be applied to switched FONHMASs where the continuity of agents’ dynamics is violated at switching instants. Considering the constraints of precise modeling difficulties and limited task time for switched FONHMASs in practice, a distributed Dα-type iterative learning control (ILC) protocol is proposed to achieve bipartite consensus in the presence of cooperative and antagonistic interactions. Moreover, without relying on repetitive initial conditions, convergence of the bipartite consensus error over iterations is established using the proposed initial state learning mechanism and the Dα-type ILC protocol. Additionally, in consideration of external disturbances, the robustness of the iterative bipartite consensus controller for the switched FONHMASs is analyzed. Simulation results confirm that the switched FONHMASs achieve the convergence and robustness of the bipartite consensus errors along the iteration direction. In addition, the proposed Dα-type ILC protocol achieves a maximum root-mean-square-error (MRMSE) of 0.0168 in the time domain, significantly outperforming the integer-order ILC (MRMSE = 0.3601) and fractional-order PID control (MRMSE = 0.7550), confirming its superiority. Full article
(This article belongs to the Special Issue Fractional Dynamics and Control in Multi-Agent Systems and Networks)
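For readers unfamiliar with Dα-type ILC, the update law has the generic form below, where u_k is the control input at iteration k, e_k is the bipartite consensus error, Γ is a learning-gain matrix, and D^α is a fractional derivative of order α; this is illustrative notation, not necessarily the paper's exact protocol.

```latex
u_{k+1}(t) = u_k(t) + \Gamma \, D^{\alpha} e_k(t)
```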

22 pages, 5284 KB  
Article
An Accelerated Steffensen Iteration via Interpolation-Based Memory and Optimal Convergence
by Shuai Wang, Chenshuo Lu, Zhanmeng Yang and Tao Liu
Mathematics 2026, 14(3), 498; https://doi.org/10.3390/math14030498 - 30 Jan 2026
Viewed by 137
Abstract
We develop a novel Steffensen-type iterative solver to solve nonlinear scalar equations without requiring derivatives. A two-parameter one-step scheme without memory is first introduced and analyzed. Its optimal quadratic convergence is then established. To enhance the convergence rate without additional functional evaluations, we extend the scheme by incorporating memory through adaptively updated accelerator parameters. These parameters are approximated by Newton interpolation polynomials constructed from previously computed values, yielding a derivative-free method with R-rate of convergence of approximately 3.56155. A dynamical system analysis based on attraction basins demonstrates enlarged convergence regions compared to Steffensen-type methods without memory. Numerical experiments further confirm the accuracy of the proposed scheme for solving nonlinear equations. Full article
(This article belongs to the Special Issue Computational Methods in Analysis and Applications, 3rd Edition)
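The core idea, a derivative-free Steffensen step whose accelerator parameter is re-estimated from previously computed values, can be sketched as follows. This toy version updates the parameter with a secant slope rather than the paper's Newton interpolation polynomials, so it illustrates the with-memory mechanism, not the reported 3.56 convergence order:

```python
def steffensen_memory(f, x0, beta0=-0.1, tol=1e-12, max_iter=50):
    """Derivative-free Steffensen-type iteration with memory: the
    accelerator beta is re-estimated each step from already-computed
    values (here via a secant slope; the paper uses Newton
    interpolation polynomials of previous iterates)."""
    x, fx, beta = x0, f(x0), beta0
    for _ in range(max_iter):
        w = x + beta * fx                    # auxiliary point, no derivative needed
        denom = f(w) - fx
        if denom == 0:
            break
        x_new = x - beta * fx * fx / denom   # Steffensen-type step
        f_new = f(x_new)
        if f_new != fx:
            beta = -(x_new - x) / (f_new - fx)   # memory: beta ~ -1/f'(root)
        x, fx = x_new, f_new
        if abs(fx) < tol:
            break
    return x

# Root of t^3 - 2t - 5 near 2 (the classic Wallis example).
root = steffensen_memory(lambda t: t**3 - 2*t - 5, 2.0)
```

Because the auxiliary point replaces the derivative evaluation, each step costs two function evaluations, and the accelerator update is free: it reuses values the iteration has already computed.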

21 pages, 6750 KB  
Article
Machine Learning-Based Energy Consumption and Carbon Footprint Forecasting in Urban Rail Transit Systems
by Sertaç Savaş and Kamber Külahcı
Appl. Sci. 2026, 16(3), 1369; https://doi.org/10.3390/app16031369 - 29 Jan 2026
Viewed by 205
Abstract
In the fight against global climate change, the transportation sector is of critical importance because it is one of the major causes of total greenhouse gas emissions worldwide. Although urban rail transit systems offer a lower carbon footprint compared to road transportation, accurately forecasting the energy consumption of these systems is vital for sustainable urban planning, energy supply management, and the development of carbon balancing strategies. In this study, forecasting models are designed using five different machine learning (ML) algorithms, and their performances in predicting the energy consumption and carbon footprint of urban rail transit systems are comprehensively compared. For five distribution-center substations, 10 years of monthly energy consumption data and the total carbon footprint data of these substations are used. Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Nonlinear Autoregressive Neural Network (NAR-NN) models are developed to forecast these data. Model hyperparameters are optimized using a 20-iteration Random Search algorithm, and the stochastic models are run 10 times with the optimized parameters. Results reveal that the SVR model consistently exhibits the highest forecasting performance across all datasets. For carbon footprint forecasting, the SVR model yields the best results, with an R2 of 0.942 and a MAPE of 3.51%. The ensemble method XGBoost also demonstrates the second-best performance (R2=0.648). Accordingly, while deterministic traditional ML models exhibit superior performance, the neural network-based stochastic models, such as LSTM, ANFIS, and NAR-NN, show insufficient generalization capability under limited data conditions. 
These findings indicate that, in small- and medium-scale time-series forecasting problems, traditional machine learning methods are more effective than neural network-based methods that require large datasets. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
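The R2 and MAPE figures used to rank the forecasting models above are standard metrics and easy to reproduce; a small numpy sketch (array-like inputs assumed, nonzero targets for MAPE):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2 (1 is a perfect fit)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```

A predictor no better than the mean of the data scores R2 = 0, which is why the LSTM, ANFIS, and NAR-NN results in the abstract read as weak generalization under limited data.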

23 pages, 3992 KB  
Article
A Sparse Aperture ISAR Imaging Based on a Single-Layer Network Framework
by Haoxuan Song, Xin Zhang, Taonan Wu, Jialiang Xu, Yong Wang and Hongzhi Li
Remote Sens. 2026, 18(2), 335; https://doi.org/10.3390/rs18020335 - 19 Jan 2026
Viewed by 209
Abstract
Under sparse aperture (SA) conditions, inverse synthetic aperture radar (ISAR) imaging becomes a severely ill-posed inverse problem due to undersampled and noisy measurements, leading to pronounced degradation in azimuth resolution and image quality. Although deep learning approaches have demonstrated promising performance for SA-ISAR imaging, their practical deployment is often hindered by black-box behavior, fixed network depth, high computational cost, and limited robustness under extreme operating conditions. To address these challenges, this paper proposes an ADMM Denoising Deep Equilibrium Framework (ADnDEQ) for SA-ISAR imaging. The proposed method reformulates an ADMM-based unfolding process as an implicit deep equilibrium (DEQ) model, where ADMM provides an interpretable optimization structure and a lightweight DnCNN is embedded as a learned proximal operator to enhance robustness against noise and sparse sampling. By representing the reconstruction process as the equilibrium solution of a single-layer network with shared parameters, ADnDEQ decouples forward and backward propagation, achieves constant memory complexity, and enables flexible control of inference iterations. Experimental results demonstrate that the proposed ADnDEQ framework achieves superior reconstruction quality and robustness compared with conventional layer-stacked networks, particularly under low sampling ratios and low-SNR conditions, while maintaining significantly reduced computational cost. Full article
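The constant-memory property comes from the deep-equilibrium formulation: instead of stacking unfolded layers, one shared layer is iterated to its fixed point z* = f(z*, x). A minimal forward-pass sketch with a toy contractive layer (the actual model wraps an ADMM step with a DnCNN proximal operator):

```python
import numpy as np

def deq_forward(f, x, z0, tol=1e-10, max_iter=500):
    """Deep-equilibrium forward pass: iterate one shared layer
    z <- f(z, x) until it reaches its fixed point z*.  No per-layer
    activations are stored, so memory is constant in the iteration
    count, and the number of iterations is a free inference-time knob."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive "layer": f(z, x) = 0.5*z + x has fixed point z* = 2x.
x = np.array([1.0, -2.0])
z_star = deq_forward(lambda z, x: 0.5 * z + x, x, np.zeros(2))
```

Decoupling the forward solve from backpropagation (gradients are obtained implicitly at the equilibrium) is what lets the framework trade reconstruction quality against inference iterations without retraining.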

23 pages, 2112 KB  
Article
An Adaptive Compression Method for Lightweight AI Models of Edge Nodes in Customized Production
by Chun Jiang, Mingxin Hou and Hongxuan Wang
Sensors 2026, 26(2), 383; https://doi.org/10.3390/s26020383 - 7 Jan 2026
Viewed by 619
Abstract
In customized production environments featuring multi-task parallelism, the efficient adaptability of edge intelligent models is essential for ensuring the stable operation of production lines. However, rapidly generating deployable lightweight models under conditions of frequent task changes and constrained hardware resources remains a major challenge for current edge intelligence applications. This paper proposes an adaptive lightweight artificial intelligence (AI) model compression method for edge nodes in customized production lines to overcome the limited transferability and insufficient flexibility of traditional static compression approaches. First, a task requirement analysis model is constructed based on accuracy, latency, and power-consumption demands associated with different production tasks. Then, the hardware information of edge nodes is structurally characterized. Subsequently, a compression-strategy candidate pool is established, and an adaptive decision engine integrating ensemble reinforcement learning (RL) and Bayesian optimization (BO) is introduced. Finally, through an iterative optimization mechanism, compression ratios are dynamically adjusted using real-time feedback of inference latency, memory usage, and recognition accuracy, thereby continuously enhancing model performance in edge environments. Experimental results demonstrate that, in typical object-recognition tasks, the lightweight models generated by the proposed method significantly improve inference efficiency while maintaining high accuracy, outperforming conventional fixed compression strategies and validating the effectiveness of the proposed approach in adaptive capability and edge-deployment performance. Full article
(This article belongs to the Special Issue Artificial Intelligence and Edge Computing in IoT-Based Applications)

42 pages, 1313 KB  
Article
Adaptive Parallel Methods for Polynomial Equations with Unknown Multiplicity
by Mudassir Shams and Bruno Carpentieri
Algorithms 2026, 19(1), 21; https://doi.org/10.3390/a19010021 - 24 Dec 2025
Viewed by 335
Abstract
New two-step simultaneous iterative techniques are proposed for solving polynomial equations with multiple roots of unknown multiplicity. The developed schemes achieve a local convergence order of ten and address key limitations of existing solvers, namely their dependence on prior multiplicity information and their reduced efficiency when dealing with clustered or repeated roots. Root multiplicities are adaptively estimated within the iterative process, avoiding additional function evaluations beyond those required for parallel updates. The robustness and stability of the proposed methods are assessed using both random and distant initial guesses and validated on benchmark polynomials as well as nonlinear models from biomedical engineering. The numerical results show notable improvements in residual error, iteration count, CPU time, memory usage, and overall convergence rate compared with established classical techniques. These findings demonstrate that the proposed schemes provide reliable, high-order, and computationally efficient tools for solving challenging nonlinear problems in science and engineering. Full article
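As a simplified illustration of estimating unknown multiplicity inside the iteration (not the authors' tenth-order simultaneous scheme), a Newton variant can infer the multiplicity m from the ratio of successive step lengths, then take modified steps that restore fast convergence:

```python
def newton_adaptive_multiplicity(f, df, x0, iters=30):
    """Newton's method with on-the-fly multiplicity estimation: for a
    root of multiplicity m, plain Newton steps shrink by (m - 1)/m, so
    m ~ 1/(1 - step ratio); the modified step x <- x - m*f/f' then
    converges quadratically again."""
    x, m, prev_step = x0, 1.0, None
    for _ in range(iters):
        d = df(x)
        if d == 0:
            break
        step = f(x) / d
        if prev_step and prev_step != step:
            ratio = step / prev_step
            if 0 < ratio < 1:
                m = max(1, round(1.0 / (1.0 - ratio)))  # multiplicity estimate
        prev_step = step
        x -= m * step
        if abs(f(x)) < 1e-14:
            break
    return x, m

# (t - 1)^3 has a triple root: the scheme recovers m = 3 quickly,
# where plain Newton would only converge linearly.
root, mult = newton_adaptive_multiplicity(lambda t: (t - 1)**3,
                                          lambda t: 3 * (t - 1)**2, 2.0)
```

Like the paper's schemes, the estimate reuses quantities the iteration already computes, so no extra function evaluations are needed.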

45 pages, 10838 KB  
Article
Making Creative Thinking Visible: Learner and Teacher Experiences of Boundary Objects as Epistemic Tools in Adolescent Classrooms
by Shafina Vohra and Peter Childs
Educ. Sci. 2026, 16(1), 13; https://doi.org/10.3390/educsci16010013 - 22 Dec 2025
Cited by 1 | Viewed by 713
Abstract
Creative thinking has become more important in education globally due to industry demand and a fast-paced world. In this study, boundary objects, which can be tangible or digital artefacts, are investigated to understand their role in facilitating creative thinking across five subject areas for teenagers aged 13–18 and their teachers, in their natural learning environment. A multiple case study method is used to investigate learners’ and their teachers’ experience in using boundary objects, to enable communication and understanding between individuals or groups in learning. Participants from an inner London secondary school comprised case groups: 8 Teachers and 16 Learners (8 from the lower school, aged 13–15 years, and 8 from the upper school, aged 16–18 years). Participants were invited through email and a short presentation. Consented participants were organised into male and female across teachers and students and were approached in lessons where boundary objects were being used. Data were collected through interviews and photos of tool use and analysed through Reflexive Thematic Analysis. The resulting five teacher and student themes showed that boundary objects were perceived to facilitate creative thinking across all case groups within the studied context, with important insights such as iterative design, which develops real-world skills; metacognition, which is critical in learning and enables students to actively question their own thinking; memory, which enables students to remember what they learned and how; and individual liberty, suggesting that learning need not be linear nor prescribed but that there must be freedom to learn in ways that are enjoyable and challenging too, amongst others.
This study’s interpretive results indicate that when participants experience the use of boundary objects in a natural classroom or learning setting, the learning process is perceived to bring benefits that allow the process of creative thinking to occur. Full article

17 pages, 3772 KB  
Article
Research on Time-Domain Fatigue Analysis Method for Automotive Components Considering Performance Degradation
by Junru He, Chun Zhang and Ruoqing Wan
Appl. Sci. 2026, 16(1), 40; https://doi.org/10.3390/app16010040 - 19 Dec 2025
Viewed by 318
Abstract
Automotive components’ exposure to prolonged random loading not only accumulates fatigue damage but also causes material stiffness degradation. The degradation of material mechanical properties leads to stress redistribution within the structure, which in turn affects the structural fatigue life. Conventional frequency-domain fatigue life analysis methods often fail to take into account performance degradation, whereas time-domain approaches are constrained by computational inefficiency in dynamic response calculations. To address this, a time-domain fatigue life analysis method is proposed, integrating Long Short-Term Memory (LSTM) networks with performance degradation modeling. First, short-term dynamic response data of engineering structures that contain stiffness degradation parameters are utilized to establish a training set, and an LSTM surrogate model is trained to rapidly predict stress responses under time-varying degrees of structural performance degradation. Second, the time-varying dynamic responses obtained from the LSTM surrogate model are combined with the principles of fatigue damage accumulation and Miner’s criterion to quantify the stiffness degradation effects. A computational framework is developed for fatigue life prediction through iterative alternation between dynamic response calculations and fatigue damage assessments. Case studies on notched plates demonstrate that the LSTM surrogate model approach ensures accuracy while reducing structural fatigue life analysis time by more than three orders of magnitude compared to the finite element method (FEM). Under the application of 20,000 s random road loads, the damage value of the reinforced plate obtained by the surrogate model method that takes into account performance degradation is lower by 10–25% compared to that calculated by the frequency-domain or time-domain methods that neglect degradation. Full article

25 pages, 3592 KB  
Article
Finite Element Computations on Mobile Devices: Optimization and Numerical Efficiency
by Maya Saade, Rafic Younes and Pascal Lafon
Algorithms 2025, 18(12), 782; https://doi.org/10.3390/a18120782 - 11 Dec 2025
Viewed by 497
Abstract
Smartphones have become increasingly powerful and widespread, enabling complex numerical computations that were once limited to desktop systems. However, implementing high-precision Finite Element Analysis (FEA) on mobile devices remains challenging due to constraints in memory, processing speed, and energy efficiency. This paper presents an optimized algorithmic framework for performing FEA on mobile platforms, focusing on the adaptation of meshing and iterative solver strategies to resource-limited environments. Several iterative solvers for large sparse linear systems are compared, and predefined refined meshing techniques are implemented to balance computational cost and accuracy. A two-dimensional bridge model is used to validate the proposed methods and demonstrate their numerical stability and computational efficiency on smartphones. The results confirm the feasibility of executing reliable FEA directly on mobile hardware, highlighting the potential of portable, low-cost devices as platforms for computational mechanics and algorithmic simulation in engineering and education. Full article
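A representative iterative solver for the large sparse symmetric positive-definite systems arising in FEA is the conjugate gradient method, attractive on memory-limited mobile hardware because it needs only matrix-vector products and a handful of vectors. A dense-matrix numpy sketch (a real mobile implementation would use a sparse storage format):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for symmetric positive-definite A: each
    iteration costs one matrix-vector product, and only a few vectors
    are kept in memory, regardless of how many iterations are run."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # conjugate next direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations for an n-by-n system; in practice one stops at a residual tolerance, which is the cost/accuracy knob the paper tunes for mobile hardware.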

20 pages, 2676 KB  
Article
Memory-Efficient Iterative Signal Detection for 6G Massive MIMO via Hybrid Quasi-Newton and Deep Q-Networks
by Adeb Salh, Mohammed A. Alhartomi, Ghasan Ali Hussain, Fares S. Almehmadi, Saeed Alzahrani, Ruwaybih Alsulami and Abdulrahman Amer
Electronics 2025, 14(24), 4832; https://doi.org/10.3390/electronics14244832 - 8 Dec 2025
Viewed by 418
Abstract
The advent of Sixth Generation (6G) wireless communication systems demands unprecedented data rates, ultra-low latency, and massive connectivity to support emerging applications such as extended reality, digital twins, and ubiquitous intelligent services. These stringent requirements call for the use of massive Multiple-Input Multiple-Output (m-MIMO) systems with hundreds or even thousands of antennas, which introduce substantial challenges for signal detection algorithms. Conventional linear detectors, especially the linear Minimum Mean Square Error (MMSE) detectors, face prohibitive computational complexity due to high-dimensional matrix inversions, and their performance remains inherently restricted by the limitations of linear processing. To address these limitations, this research proposes an Iterative Signal Detection (ISD) algorithm that combines a Deep Q-Network (DQN) with Quasi-Newton methods. The method incorporates Broyden-Net, a Quasi-Newton scheme that trains faster and with less memory than comparable models on spatially correlated channels, together with the DQN to improve m-MIMO detection. The proposed techniques support the computational efficiency required by realistic 6G systems and outperform linear detectors. The simulation findings show that the DQN-enhanced Quasi-Newton algorithm outperforms traditional algorithms, since it combines reward design, limited-memory updates, and adaptive interference mitigation to shorten convergence time by 60% and improve robustness to correlated fading. Full article
(This article belongs to the Special Issue Advances in MIMO Communication)
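The Quasi-Newton component can be illustrated with the classical Broyden update, which replaces repeated Jacobian evaluations (and the matrix inversions that burden MMSE detection) with cheap rank-one corrections. This sketch solves a toy nonlinear system and is not the paper's Broyden-Net:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Broyden's 'good' method for F(x) = 0: start from a one-off
    finite-difference Jacobian, then apply rank-one updates instead of
    recomputing derivatives at every step."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    Fx = F(x)
    h = 1e-6                          # one-time forward-difference Jacobian
    B = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        B[:, j] = (F(xp) - Fx) / h
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -Fx)
        x_new = x + dx
        F_new = F(x_new)
        B += np.outer(F_new - Fx - B @ dx, dx) / (dx @ dx)  # rank-one update
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Toy system: x^2 + y^2 = 1 and x = y  ->  (1/sqrt(2), 1/sqrt(2)).
sol = broyden(lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]]),
              [1.0, 0.5])
```

The rank-one update is what gives Quasi-Newton detectors their low per-iteration cost; limited-memory variants go further and never store B explicitly.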

20 pages, 1325 KB  
Article
AI-Driven Threat Hunting in Enterprise Networks Using Hybrid CNN-LSTM Models for Anomaly Detection
by Mark Kamande, Kwame Assa-Agyei, Frederick Edem Junior Broni, Tawfik Al-Hadhrami and Ibrahim Aqeel
AI 2025, 6(12), 306; https://doi.org/10.3390/ai6120306 - 26 Nov 2025
Viewed by 1391
Abstract
Objectives: This study aims to present an AI-driven threat-hunting framework that automates both hypothesis generation and hypothesis validation through a hybrid deep learning model that combines Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) networks. The objective is to operationalize proactive threat hunting by embedding anomaly detection within a structured workflow, improving detection performance, reducing analyst workload, and strengthening overall security posture. Methods: The framework begins with automated hypothesis generation, in which the model analyzes network flows, telemetry data, and logs sourced from IoT/IIoT devices, Windows/Linux systems, and interconnected environments represented in the TON_IoT dataset. Deviations from baseline behavior are detected as potential threat indicators, and hypotheses are prioritized according to anomaly confidence scores derived from output probabilities. Validation is conducted through iterative classification, where CNN-extracted spatial features and LSTM-captured temporal features are jointly used to confirm or refute hypotheses, minimizing manual data pivoting and contextual enrichment. Principal Component Analysis (PCA) and Recursive Feature Elimination with Random Forest (RFE-RF) are employed to extract and rank features based on predictive importance. Results: The hybrid model, trained on the TON_IoT dataset, achieved strong performance metrics: 99.60% accuracy, 99.71% precision, 99.32% recall, an AUC of 99%, and a 99.58% F1-score. These results outperform baseline models such as Random Forest and Autoencoder. By integrating spatial and temporal feature extraction, the model effectively identifies anomalies with minimal false positives and false negatives, while the automation of the hypothesis lifecycle significantly reduces analyst workload. Conclusions: Automating threat-hunting processes through hybrid deep learning shifts organizations from reactive to proactive defense. 
The proposed framework improves threat visibility, accelerates response times, and enhances overall security posture. The findings offer valuable insights for researchers, practitioners, and policymakers seeking to advance AI adoption in threat intelligence and enterprise security. Full article

37 pages, 1518 KB  
Article
Efficient Hybrid ANN-Accelerated Two-Stage Implicit Schemes for Fractional Differential Equations
by Mudassir Shams and Bruno Carpentieri
Mathematics 2025, 13(23), 3774; https://doi.org/10.3390/math13233774 - 24 Nov 2025
Viewed by 459
Abstract
This paper introduces a hybrid two-stage implicit scheme for efficiently solving fractional differential equations, with particular emphasis on fractional initial value problems formulated using the Caputo derivative. Classical numerical approaches to fractional differential equations often encounter challenges related to stability, convergence rate, and memory efficiency. To overcome these limitations, we propose a new discretization framework that directly embeds nonlinear source terms into the time-stepping process, thereby enhancing both stability and accuracy. Nonlinear systems are efficiently solved using a parallel iterative algorithm with adaptive convergence control, yielding up to 35–50% faster convergence compared with conventional solvers. A rigorous theoretical analysis establishes the scheme’s convergence, stability, and consistency, extending earlier proofs to a broader class of fractional systems. Extensive numerical experiments on benchmark fractional problems confirm that the hybrid approach achieves markedly lower local and global errors, broader stability regions, and substantial reductions in computational time and memory usage compared with existing implicit methods. The results demonstrate that the proposed framework offers a robust, accurate, and scalable solution for nonlinear fractional simulations. Full article

19 pages, 10374 KB  
Article
Entropy-Guided Search Space Optimization for Efficient Neural Network Pruning
by Yicheng Qiu, Li Niu, Feng Sha, Zhaokun Cheng and Keiji Yanai
Algorithms 2025, 18(12), 736; https://doi.org/10.3390/a18120736 - 24 Nov 2025
Cited by 1 | Viewed by 531
Abstract
Neural network pruning is essential for deploying deep learning models on resource-constrained devices by reducing computational and memory demands. In this paper, we propose a novel pruning framework, Entropy-Guided Search Space Optimization for Efficient Neural Network Pruning, which uses information entropy to assess the importance of convolutional layers. Specifically, we calculate the layer-wise entropy of pretrained weights, apply outlier detection to remove extreme values, and normalize the entropy values. These normalized values guide the selection of retention ratios, ensuring that layers with higher entropy retain more filters. By refining the subnetwork search space, our approach enhances the efficiency of the search process and improves overall subnetwork performance. The refined search space targets more promising regions, reducing computational overhead and resulting in higher-quality pruned networks. Through iterative optimization, the optimal subnetwork is identified and fine-tuned to produce the final pruned model. Experimental results on benchmark datasets show that our method significantly outperforms existing pruning methods, achieving substantial improvements in both accuracy and computational efficiency. This entropy-guided pruning strategy provides a robust and effective solution for neural network compression, suitable for a wide range of deep learning applications. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
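The entropy-to-retention mapping can be sketched as follows; the bin count, ratio bounds, and min-max normalization are illustrative choices, and the paper's outlier-detection step is omitted:

```python
import numpy as np

def entropy_retention_ratios(layer_weights, num_bins=64, r_min=0.3, r_max=0.9):
    """Map each layer's weight entropy to a filter-retention ratio:
    higher-entropy layers keep more filters.  Entropy comes from a
    histogram of the pretrained weights; min-max normalized values are
    scaled into [r_min, r_max]."""
    entropies = []
    for w in layer_weights:
        hist, _ = np.histogram(np.ravel(w), bins=num_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log2(p)))
    e = np.asarray(entropies)
    if e.max() == e.min():
        norm = np.full_like(e, 0.5)      # all layers equally informative
    else:
        norm = (e - e.min()) / (e.max() - e.min())
    return r_min + norm * (r_max - r_min)

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 3, 3, 3)),        # spread-out weights: high entropy
          np.full((128, 64, 3, 3), 0.01)]        # constant weights: zero entropy
ratios = entropy_retention_ratios(layers)        # -> [0.9, 0.3]
```

Constraining each layer's search range this way shrinks the subnetwork search space before any candidate evaluation happens, which is where the reported efficiency gain comes from.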

22 pages, 2736 KB  
Article
Radar Foot Gesture Recognition with Hybrid Pruned Lightweight Deep Models
by Eungang Son, Seungeon Song, Bong-Seok Kim, Sangdong Kim and Jonghun Lee
Signals 2025, 6(4), 66; https://doi.org/10.3390/signals6040066 - 13 Nov 2025
Viewed by 749
Abstract
Foot gesture recognition using a continuous-wave (CW) radar requires implementation on edge hardware with strict latency and memory budgets. Existing structured and unstructured pruning pipelines rely on iterative training–pruning–retraining cycles, increasing search costs and making them significantly time-consuming. We propose a NAS-guided bisection hybrid pruning framework for foot gesture recognition from CW radar, which employs a weight-sharing supernet encompassing both block and channel options. The method consists of three major steps. In the bisection-guided NAS structured pruning stage, the algorithm identifies the minimum number of retained blocks—or equivalently, the maximum achievable sparsity—that satisfies the target accuracy under specified FLOPs and latency constraints. Next, during the hybrid compression phase, global L1 percentile-based unstructured pruning and channel repacking are applied to further reduce memory usage. Finally, in the low-cost decision protocol stage, each pruning decision is evaluated using short fine-tuning (1–3 epochs) and partial validation (10–30% of the dataset) to avoid repeated full retraining. We further provide a unified theory for hybrid pruning—formulating a resource-aware objective, a logit-perturbation invariance bound for unstructured pruning/INT8/repacking, a Hoeffding-based bisection decision margin, and a compression (code-length) generalization bound—explaining when the compressed models match baseline accuracy while meeting edge budgets. Radar return signals are processed with a short-time Fourier transform (STFT) to generate unique time–frequency spectrograms for each gesture (kick, swing, slide, tap). The proposed pruning method achieves 20–57% reductions in floating-point operations (FLOPs) and approximately 86% reductions in parameters, while preserving equivalent recognition accuracy.
Experimental results demonstrate that the pruned model maintains high gesture recognition performance with substantially lower computational cost, making it suitable for real-time deployment on edge devices. Full article
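The bisection-guided search for the minimum number of retained blocks can be sketched as follows, assuming accuracy is non-decreasing in the block count; `evaluate` stands in for the paper's short fine-tuning plus partial validation:

```python
def min_blocks_for_accuracy(evaluate, lo, hi, target):
    """Bisection over the retained-block count: assuming accuracy is
    non-decreasing in the number of blocks, find the smallest count
    whose estimated accuracy meets the target, using only O(log n)
    evaluations instead of a full sweep."""
    if evaluate(hi) < target:
        return None                  # even the unpruned model misses the target
    while lo < hi:
        mid = (lo + hi) // 2
        if evaluate(mid) >= target:
            hi = mid                 # feasible: try fewer blocks
        else:
            lo = mid + 1             # infeasible: need more blocks
    return lo

# Toy accuracy curve in percentage points: each block adds 2 points to 70.
best = min_blocks_for_accuracy(lambda n: 70 + 2 * n, 1, 12, 80)  # -> 5
```

Because each probe is a cheap short-fine-tune estimate rather than full retraining, the log-scale number of probes is what makes the search practical.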
