Search Results (6,660)

Search Parameters:
Keywords = deep learning structures

18 pages, 5511 KB  
Article
Exploring the Application of Large Language Models (LLMs) in Data Structure Instruction: An Empirical Analysis of Student Learning Outcomes in Computer Science
by Hongzhi Li, Lijun Xiao, Kezhong Lu, Dun Li, Zheqing Zhang and Qishou Xia
Information 2026, 17(4), 353; https://doi.org/10.3390/info17040353 - 8 Apr 2026
Abstract
Recent advancements in Large Language Models (LLMs), including ChatGPT, DeepSeek, and Claude, have facilitated their growing integration into computer science education, including data structure courses. Despite their widespread adoption, the association between sustained and informal LLM usage and students’ learning outcomes remains insufficiently understood. This study seeks to address this gap by empirically examining the association between LLM usage and undergraduate performance in data structure education. We conduct a twelve-week empirical study involving fifty-four undergraduate students, in which LLMs were made freely accessible but neither explicitly encouraged nor discouraged during coursework and assignments. Students’ LLM usage patterns are analyzed in relation to their academic performance across different task types. Findings reveal a significant negative association between extensive reliance on LLMs for cognitively demanding tasks and overall learning outcomes. Additionally, an inverse associative trend is observed between the frequency of LLM usage across some learning activities and academic performance. In contrast, the use of LLMs for supplementary purposes, including conceptual clarification and theoretical understanding, exhibits a notably positive association with final performance. These findings suggest a task-dependent associative relationship between LLM usage and learning outcomes: LLM usage for conceptual learning shows a positive association with the mastery of relevant knowledge when used as a supplementary learning tool, while excessive LLM usage shows a negative association with the development of fundamental analytical and problem-solving skills. This study highlights the importance of carefully integrating LLMs into data structure education to support learning while preserving students’ independent cognitive engagement. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)

26 pages, 7110 KB  
Article
Research on an Automatic Detection Method for Response Keypoints of Three-Dimensional Targets in Directional Borehole Radar Profiles
by Xiaosong Tang, Maoxuan Xu, Feng Yang, Jialin Liu, Suping Peng and Xu Qiao
Remote Sens. 2026, 18(7), 1102; https://doi.org/10.3390/rs18071102 - 7 Apr 2026
Abstract
During the interpretation of Borehole Radar (BHR) B-scan profiles, the accurate determination of the azimuth of geological targets in three-dimensional space is a critical issue for achieving precise anomaly localization and spatial structure inversion. However, existing directional BHR anomaly localization methods exhibit limited intelligence, insufficient adaptability to multi-site data, and weak generalization capability, rendering them inadequate for engineering applications under complex geological conditions. To address these challenges, a robust deep learning model, termed BSS-Pose-BHR, is developed based on YOLOv11n-pose for keypoint detection in directional BHR profiles. The model incorporates three key optimizations: Bi-Level Routing Attention (BRA) replaces Multi-Head Self-Attention (MHSA) in the backbone to improve computational efficiency; Conv_SAMWS enhances keypoint-related feature weighting in the backbone and neck; and Spatial and Channel Reconstruction Convolution (SCConv) is integrated into the detection head to reduce redundancy and strengthen local feature extraction, thereby improving suitability for keypoint detection tasks. In addition, a three-dimensional electromagnetic model of limestone containing a certain density of clay particles is established to construct a simulation dataset. On the simulated test set, compared with current mainstream deep learning approaches and conventional directional borehole radar anomaly localization algorithms, BSS-Pose-BHR achieves superior performance, with an mAP50(B) of 0.9686, an mAP50–95(B) of 0.7712, an mAP50(P) of 0.9951, and an mAP50–95(P) of 0.9952. Ablation experiments demonstrate that each proposed module contributes significantly to performance improvement. Compared with the baseline, BSS-Pose-BHR improves mAP50(B) by 5.39% and mAP50(P) by 0.86%, while increasing model weight by only 1.05 MB, thereby achieving a reasonable trade-off between detection accuracy and complexity. Furthermore, indoor physical model experiments validate the effectiveness of the method on measured data. Robustness experiments under different Peak Signal-to-Noise Ratio (PSNR) conditions and varying missing-trace rates indicate that BSS-Pose-BHR maintains high detection accuracy under moderate noise and data loss, demonstrating strong engineering applicability and practical value. Full article

24 pages, 3754 KB  
Article
A Deep Learning-Based Method for Stress Measurement Using Longitudinal Critically Refracted Waves
by Yong Gan, Jingkun Ma, Binpeng Zhang, Yang Zheng, Xuedong Wang, Yuhong Zhu, Yibo Wang and Dachun Ji
Sensors 2026, 26(7), 2283; https://doi.org/10.3390/s26072283 - 7 Apr 2026
Abstract
Accurate stress measurement is essential for evaluating structural integrity and plays a pivotal role in health monitoring and service-life prediction of steel infrastructures. This study proposes a deep learning approach for stress prediction based on longitudinal critically refracted (LCR) ultrasonic waves. The model integrates gated recurrent units (GRU), attention mechanisms, and one-dimensional convolutional neural networks (1D-CNN), enabling direct stress prediction from raw ultrasonic signals without the need for manual feature extraction or explicit physical modeling. To validate the approach, LCR signals were acquired using a custom-built piezoelectric ultrasonic system from 20# steel specimens subjected to uniaxial stresses ranging from 0 to 200 MPa. A dataset comprising 4200 samples was augmented to enhance training efficiency. The proposed model achieved a mean absolute error of 1.94 MPa. Generalization tests demonstrated high accuracy across diverse stress levels, with average errors below 3 MPa, highlighting the model’s robustness. This research presents an accurate, intelligent, and calibration-free ultrasonic method for stress evaluation, providing practical support for assessing stress in steel structures under actual operating conditions. Full article
(This article belongs to the Section Intelligent Sensors)
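The abstract above names the model's building blocks (1D-CNN, GRU, attention) but not how they are arranged; the PyTorch sketch below shows one plausible wiring for regressing a scalar stress value from a raw LCR waveform. The layer sizes, the additive attention pooling, and the 2048-sample signal length are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LCRStressRegressor(nn.Module):
    """Hypothetical 1D-CNN -> GRU -> attention-pooling -> stress (MPa) regressor."""
    def __init__(self, hidden=64):
        super().__init__()
        # 1D convolutions extract local waveform features and downsample the signal.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # GRU models temporal dependencies along the downsampled time axis.
        self.gru = nn.GRU(input_size=64, hidden_size=hidden, batch_first=True)
        # Additive attention pools the GRU outputs into a single vector.
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)  # scalar stress prediction

    def forward(self, x):                 # x: (batch, signal_length)
        x = self.cnn(x.unsqueeze(1))      # (batch, 64, T')
        x = x.transpose(1, 2)             # (batch, T', 64)
        h, _ = self.gru(x)                # (batch, T', hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        pooled = (w * h).sum(dim=1)       # (batch, hidden)
        return self.head(pooled).squeeze(-1)    # (batch,)

# Example: a batch of 8 raw LCR signals, 2048 samples each (length is assumed).
model = LCRStressRegressor()
pred_mpa = model(torch.randn(8, 2048))
```
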
19 pages, 4124 KB  
Article
Prediction of Maximum Usable Frequency Based on a New Hybrid Deep Learning Model
by Yuyang Li, Zhigang Zhang and Jian Shen
Electronics 2026, 15(7), 1539; https://doi.org/10.3390/electronics15071539 - 7 Apr 2026
Abstract
The reliability of high-frequency (HF) frequency selection technology relies on the prediction accuracy of the Maximum Usable Frequency of the ionospheric F2 layer (MUF-F2). To improve its short-term prediction performance, a novel hybrid deep learning prediction model is proposed, which achieves accurate modeling of the complex spatiotemporal variation patterns of MUF-F2 by integrating a feature enhancement mechanism, a dual-branch feature extraction structure, and a bidirectional temporal dependency capture network. The hybrid prediction model integrates the Channel Attention mechanism (CA), Dual-Branch Convolutional Neural Network (DCNN), and Bidirectional Long Short-Term Memory network (BiLSTM). The model is trained and validated using MUF-F2 data from 5 communication links over China during geomagnetically quiet periods and 4 during geomagnetic storm periods, with the difference in the number of links attributed to experimental constraints and the disruptive effects of geomagnetic storms. Its performance is evaluated via multiple metrics, and a comparative analysis is conducted with commonly used prediction models such as the Long Short-Term Memory (LSTM) network. Experimental results show that during geomagnetically quiet periods, the proposed model achieves lower prediction errors (Root Mean Square Error (RMSE) < 1.1 MHz, Mean Absolute Percentage Error (MAPE) < 3.8%) and a higher goodness of fit (coefficient of determination (R2) > 0.94), with the average error reduction across all links ranging from 6.2% to 46.9% compared with the baseline model. Under geomagnetic storm disturbance conditions, the model still maintains robust prediction performance, with R2 > 0.89 for all communication links, as well as RMSE < 0.6 MHz, Mean Absolute Error (MAE) < 0.4 MHz, and MAPE < 3.3%. The study demonstrates that the proposed CA-DCNN-BiLSTM model exhibits excellent prediction accuracy and anti-interference capability under different geomagnetic activity conditions, which can effectively improve the short-term prediction accuracy of MUF-F2 and provide more reliable technical support for HF communication frequency decision-making. Full article
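As a rough illustration of the named components, the following PyTorch sketch combines a channel-attention gate, a dual-branch CNN with two kernel sizes, and a BiLSTM into a one-step MUF-F2 predictor. The input features, window length, and layer widths are assumptions; the paper's actual architecture details are not given in the abstract.

```python
import torch
import torch.nn as nn

class CADCNNBiLSTM(nn.Module):
    """Hypothetical CA -> dual-branch CNN -> BiLSTM arrangement for MUF-F2 forecasting."""
    def __init__(self, in_feats=4, hidden=64):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gating over input features.
        self.ca = nn.Sequential(nn.Linear(in_feats, in_feats), nn.Sigmoid())
        # Dual-branch CNN: two kernel sizes capture short- and longer-range patterns.
        self.branch_a = nn.Conv1d(in_feats, 32, kernel_size=3, padding=1)
        self.branch_b = nn.Conv1d(in_feats, 32, kernel_size=7, padding=3)
        # BiLSTM captures bidirectional temporal dependencies.
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # next-step MUF-F2 (MHz)

    def forward(self, x):                      # x: (batch, seq_len, in_feats)
        gate = self.ca(x.mean(dim=1))          # (batch, in_feats) channel weights
        x = x * gate.unsqueeze(1)
        x = x.transpose(1, 2)                  # (batch, in_feats, seq_len)
        feats = torch.relu(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))
        h, _ = self.bilstm(feats.transpose(1, 2))
        return self.head(h[:, -1, :]).squeeze(-1)

# Example: 24-step history of 4 assumed input features (e.g., past MUF, hour, solar index).
model = CADCNNBiLSTM()
next_muf = model(torch.randn(16, 24, 4))
```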

20 pages, 1234 KB  
Article
Lightweight Real-Time Navigation for Autonomous Driving Using TinyML and Few-Shot Learning
by Wajahat Ali, Arshad Iqbal, Abdul Wadood, Herie Park and Byung O Kang
Sensors 2026, 26(7), 2271; https://doi.org/10.3390/s26072271 - 7 Apr 2026
Abstract
Autonomous vehicle navigation requires low-latency and energy-efficient machine learning models capable of operating in dynamic and resource-constrained environments. Conventional deep learning approaches are often unsuitable for real-time deployment on embedded edge devices due to their high computational and memory demands. In this work, we propose a unified TinyML-optimized navigation framework that integrates a lightweight convolutional feature extractor (MobileNetV2) with a metric-based few-shot learning classifier to enable rapid adaptation to unseen driving scenarios with minimal data. The proposed framework jointly combines feature extraction, few-shot generalization, and edge-aware optimization into a single end-to-end pipeline designed specifically for real-time autonomous decision-making. Furthermore, post-training quantization and structured pruning are employed to significantly reduce the memory footprint and inference latency while preserving the classification performance. Experimental results demonstrate that the proposed model achieved a 93.4% accuracy on previously unseen road conditions, with an average inference latency of 68 ms and a memory usage of 18 MB, outperforming traditional CNN and LSTM models in efficiency while maintaining a competitive predictive performance. These results highlight the effectiveness of the proposed approach in enabling scalable, real-time navigation on low-power edge devices. Full article
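One way to realise the described pipeline is a prototypical-network-style classifier on top of frozen MobileNetV2 features, as sketched below; the class count, image size, global average pooling, and the use of random (rather than pretrained) backbone weights are assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F
from torchvision.models import mobilenet_v2

# MobileNetV2 backbone as the lightweight feature extractor (1280-d features).
# weights=None gives random weights here; in practice pretrained weights would be loaded.
backbone = mobilenet_v2(weights=None).features.eval()

def embed(images):                        # images: (N, 3, H, W)
    with torch.no_grad():
        f = backbone(images)              # (N, 1280, h, w)
        return f.mean(dim=(2, 3))         # global average pooling -> (N, 1280)

def few_shot_classify(support_x, support_y, query_x, n_classes):
    """Prototypical-network-style classification: nearest class prototype in feature space."""
    s, q = embed(support_x), embed(query_x)
    protos = torch.stack([s[support_y == c].mean(0) for c in range(n_classes)])
    logits = -torch.cdist(q, protos)      # negative Euclidean distance as score
    return logits.argmax(dim=1)

# Example: 3 driving-scenario classes, 5 support images each, 4 query frames (shapes assumed).
support_x = torch.randn(15, 3, 96, 96)
support_y = torch.arange(3).repeat_interleave(5)
pred = few_shot_classify(support_x, support_y, torch.randn(4, 3, 96, 96), n_classes=3)
```

For edge deployment, post-training quantization and structured pruning of the backbone would then shrink memory and latency; the specific quantization scheme and pruning ratios are not stated in the abstract.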

12 pages, 6028 KB  
Article
A Universal Deep Learning Model for Predicting Detection Performance and Single-Event Effects of SPAD Devices
by Yilei Chen, Jin Huang, Yuxiang Zeng, Yi Jiang, Shulong Wang, Shupeng Chen and Hongxia Liu
Micromachines 2026, 17(4), 452; https://doi.org/10.3390/mi17040452 - 7 Apr 2026
Abstract
Single-event effects (SEEs) present a significant challenge to the radiation reliability of integrated circuits. Conventional SEE analysis methods for single-photon avalanche diode (SPAD) devices primarily rely on Sentaurus Technology Computer-Aided Design (TCAD) numerical simulation, which is computationally intensive and time-consuming. In this study, we propose a generalized deep learning (DL) model, using a silicon-based SPAD device with a double-junction double-buried-layer (DJDB) structure fabricated in 180 nm CMOS process as the research subject. By incorporating key parameters that influence SEEs as model inputs, the proposed approach enables rapid prediction of critical parameter metrics, including transient current peaks and dark count rates. Experimental results show that the DL model achieves a prediction accuracy of 97.32% for transient current peaks and 99.87% for dark count rates, demonstrating extremely high prediction precision. To further validate the generalization capability of the proposed network, the model is applied to predict the detection performance of the DJDB-SPAD device. The prediction accuracies for four key performance parameters all exceed 97.5%, further confirming the accuracy and robustness of the developed model. Meanwhile, compared with the conventional Sentaurus TCAD simulation method, the proposed method achieves a 336-fold improvement in computational efficiency. Overall, this method realizes the dual advantages of high precision and high efficiency, which provides an efficient and accurate technical solution for the rapid characteristic analysis and reliability evaluation of SPAD devices under single-event effects (SEEs). Full article
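The abstract describes the model only as a DL predictor mapping SEE-relevant parameters to the transient current peak and dark count rate; a minimal surrogate of that kind could look like the sketch below, where the four input parameters and the MLP layout are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal MLP surrogate: SEE-relevant device/strike parameters -> predicted
# [transient current peak, dark count rate]. Inputs (LET, strike position,
# bias voltage, temperature) and layer sizes are illustrative, not the paper's.
surrogate = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

params = torch.tensor([[10.0, 0.5, 24.0, 300.0]])   # e.g., [LET, position, V_bias, T]
peak_current, dcr = surrogate(params)[0]
```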

38 pages, 1937 KB  
Review
Cavitation Monitoring in Rotating Hydraulic Machines Using Machine Learning—A Review
by Elisa Sanchez and Axel Busboom
Appl. Sci. 2026, 16(7), 3566; https://doi.org/10.3390/app16073566 - 6 Apr 2026
Viewed by 50
Abstract
Cavitation in rotating hydraulic machinery—such as industrial pumps and hydropower turbines—can cause blade and casing erosion, excessive vibration, noise and efficiency loss, posing significant operational and economic risks across industrial sectors. Reliable and scalable monitoring strategies are therefore essential, particularly under variable operating conditions in real-world environments. Recent advances in machine learning (ML) and deep learning (DL) have enabled data-driven approaches for cavitation detection based on operational sensor signals, yet a structured synthesis of these developments is lacking. This scoping review systematically analyzes measurement-based ML and DL approaches for cavitation monitoring, with the aim of identifying key trends, challenges and future research directions. Following PRISMA-ScR and JBI guidelines, 52 peer-reviewed studies published between 1996 and 2025 were evaluated, covering laboratory and field investigations across pumps and turbines and a wide range of model architectures. The analysis reveals that most studies are laboratory-based (∼80%), focus on pumps (∼70%) and rely on single-machine datasets (>80%), limiting generalization across machines and operating conditions. Classical ML approaches remain relevant due to interpretability and robustness with limited data, while DL enables end-to-end learning from raw or time–frequency transformed signals, frequently achieving diagnostic accuracy above 95%. Hybrid frameworks combining DL-based feature extraction with classical classifiers are increasingly adopted. Key limitations across the literature include domain shifts between laboratory and field data, scarce or inconsistent labeling and a predominant focus on categorical cavitation severity levels. Full article
(This article belongs to the Special Issue New Trends in Sustainable Energy Technology)

28 pages, 4886 KB  
Article
Equivariant Transition Matrices for Explainable Deep Learning: A Lie Group Linearization Approach
by Pavlo Radiuk, Oleksander Barmak, Leonid Bedratyuk and Iurii Krak
Mach. Learn. Knowl. Extr. 2026, 8(4), 92; https://doi.org/10.3390/make8040092 - 6 Apr 2026
Viewed by 64
Abstract
Deep learning systems deployed in regulated settings require explanations that are accurate and stable under nuisance transformations, yet classical post hoc transition matrices rely on fidelity-only fitting that fails to guarantee consistent explanations under spatial rotations or other group actions. In this work, we propose Equivariant Transition Matrices, a post hoc approach that augments transition matrices with Lie-group-aware structural constraints to bridge this research gap. Our method estimates infinitesimal generators in the formal and mental feature spaces, enforces an approximate intertwining relation at the Lie algebra level, and solves the resulting convex Least-Squares problem via singular value decomposition for small networks or implicit operators for large systems. We introduce diagnostics for symmetry validation and an unsupervised strategy for regularization weight selection. On a controlled synthetic benchmark, our approach reduces the symmetry defect from 13,100 to 0.0425 while increasing the mean squared error marginally from 0.00367 to 0.00524. On the MNIST dataset, the symmetry defect decreases by 72.6 percent (141.19 to 38.65) with changes in structural similarity and peak signal-to-noise ratio below 0.03 percent and 0.06 percent, respectively. These results demonstrate that explanation-level equivariance can be reliably imposed post-training, providing geometrically consistent interpretations for fixed deep models. Full article
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
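A plausible reading of the described fit, fidelity to the mental-space targets plus a Lie-algebra-level intertwining penalty solved as a convex least-squares problem, is sketched below in NumPy using the standard vectorization/Kronecker identity; the exact objective, weighting, and solver used in the paper may differ.

```python
import numpy as np

def fit_equivariant_transition(X, Y, A, B, lam=1.0):
    """
    Fit T minimizing ||Y - T @ X||_F^2 + lam * ||B @ T - T @ A||_F^2.
    X: (d_formal, n) formal-space activations, Y: (d_mental, n) mental-space targets,
    A: (d_formal, d_formal), B: (d_mental, d_mental) estimated infinitesimal generators.
    Vectorizing T stacks both terms into one least-squares problem:
    vec(T @ X) = (X.T kron I_m) vec(T), vec(B@T - T@A) = (I_f kron B - A.T kron I_m) vec(T).
    """
    d_f, d_m = X.shape[0], Y.shape[0]
    I_f, I_m = np.eye(d_f), np.eye(d_m)
    M_fid = np.kron(X.T, I_m)
    M_sym = np.kron(I_f, B) - np.kron(A.T, I_m)
    M = np.vstack([M_fid, np.sqrt(lam) * M_sym])
    rhs = np.concatenate([Y.flatten(order="F"), np.zeros(d_m * d_f)])
    t, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return t.reshape(d_m, d_f, order="F")   # T such that Y ≈ T @ X

# Example with small random data (dimensions are illustrative).
X, Y = np.random.randn(5, 100), np.random.randn(3, 100)
A, B = np.random.randn(5, 5), np.random.randn(3, 3)
T = fit_equivariant_transition(X, Y, A, B, lam=10.0)
```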

17 pages, 12185 KB  
Article
Adjustable Complexity Transformer Architecture for Image Denoising
by Jan-Ray Liao, Wen Lin and Li-Wen Chang
Signals 2026, 7(2), 33; https://doi.org/10.3390/signals7020033 - 6 Apr 2026
Viewed by 115
Abstract
In recent years, image denoising has seen a shift from traditional non-local self-similarity methods like BM3D to deep-learning based approaches that use learnable convolutions and attention mechanisms. While pixel-level attention is effective at capturing long-range relationships similar to non-local self-similarity based methods, it incurs extremely high computational costs that scale quadratically with image resolution. As an alternative, channel-wise attention is resolution-independent and computationally efficient but may miss crucial spatial details. In this paper, an adjustable attention mechanism is introduced that bridges the gap between pixel and channel attentions. In the proposed model, average pooling and variable-size convolutions are added before attention calculation to adjust spatial resolution and, thus, allow dynamical adjustment of computational complexity. This adjustable attention is applied in a transformer-based U-Net architecture and achieves performance comparable to state-of-the-art methods in both real and Gaussian blind denoising tasks. To be more concrete, the proposed method achieves a Peak Signal-to-Noise Ratio of 39.65 dB and a Structural Similarity Index Measure of 0.913 on the Smartphone Image Denoising Dataset. Therefore, the proposed method demonstrates a balance between efficiency and denoising quality. Full article
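The adjustable mechanism described above (average pooling and convolution before attention, trading spatial resolution for cost) could be sketched as follows in PyTorch; the pooling factor, kernel size, residual connection, and bilinear upsampling are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjustableAttention(nn.Module):
    """Hypothetical sketch: pool the feature map by a factor r before self-attention,
    so the token count (and the quadratic attention cost) shrinks by r**2."""
    def __init__(self, channels, heads=4, r=4):
        super().__init__()
        self.r = r
        self.pool = nn.AvgPool2d(kernel_size=r, stride=r)                    # average pooling
        self.proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # pre-attention conv
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                          # x: (B, C, H, W)
        B, C, H, W = x.shape
        z = self.proj(self.pool(x))                # (B, C, H//r, W//r)
        tokens = z.flatten(2).transpose(1, 2)      # (B, (H//r)*(W//r), C)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.transpose(1, 2).reshape(B, C, H // self.r, W // self.r)
        out = F.interpolate(out, size=(H, W), mode="bilinear", align_corners=False)
        return x + out                             # residual connection

# r = 1 approaches pixel-level attention; a large r approaches a global, channel-like summary.
y = AdjustableAttention(channels=32, r=4)(torch.randn(1, 32, 64, 64))
```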

22 pages, 812 KB  
Review
AI-Driven BCR Modeling for Precision Immunology
by Tao Liu, Xusheng Zhao and Fan Yang
Int. J. Mol. Sci. 2026, 27(7), 3296; https://doi.org/10.3390/ijms27073296 - 5 Apr 2026
Viewed by 302
Abstract
The B cell receptor (BCR) repertoire captures an individual’s immunological history and antigen-driven evolution within a vast, high-dimensional sequence space. Although bulk and single-cell adaptive immune receptor repertoire sequencing (AIRR-seq) now enables deep profiling of BCR diversity, interpreting these datasets remains challenging due to strong inter-individual heterogeneity, nonlinear sequence–structure–function relationships, dynamic clonal evolution, and the rarity of functionally relevant clones. Artificial intelligence (AI) provides a conceptual and computational framework for addressing these challenges. Here, we summarize how advanced deep learning architectures, including antibody-specific language models, graph neural networks (GNNs), and generative frameworks, uncover clonal topology, structural features, and antigen-binding semantics. We further highlight applications in cancer, infectious disease, and autoimmunity. Finally, we propose a closed-loop framework that integrates multimodal datasets, interpretable AI, and iterative experimental validation to advance predictive immunology and accelerate therapeutic antibody discovery. Full article
(This article belongs to the Special Issue Molecular Mechanism of Immune Response)

24 pages, 4411 KB  
Article
GT-TD3: A Kinematics-Aware Graph-Transformer Framework for Stable Trajectory Tracking of High-Degree-of-Freedom (DOF) Manipulators
by Hanwen Miao, Haoran Hou, Zhaopeng Zhu, Zheng Chao and Rui Zhang
Machines 2026, 14(4), 397; https://doi.org/10.3390/machines14040397 - 5 Apr 2026
Viewed by 164
Abstract
Accurate trajectory tracking of redundant manipulators is difficult because the controller must simultaneously model local couplings between adjacent joints and global dependencies across the whole kinematic chain. Existing reinforcement learning methods typically employ multilayer perceptrons, which do not explicitly exploit manipulator structure and therefore show limited stability and representation ability in high-dimensional continuous control tasks. This paper proposes GT-TD3, a Graph Transformer-enhanced-Twin Delayed Deep Deterministic Policy Gradient framework, for redundant manipulator trajectory tracking. The proposed actor first converts the raw system state into joint-level node features and uses a graph neural network to extract local kinematic coupling information. A Transformer is then employed to capture long-range dependencies among joints. To strengthen the use of structural priors, topology- and distance-related bias terms are incorporated into the attention mechanism, enabling the network to encode manipulator structure during global feature learning. Experiments on a 7-DoF KUKA iiwa manipulator in PyBullet demonstrate that GT-TD3 outperforms MLP, pure GNN, and pure Transformer baselines in tracking performance. The proposed method achieves more stable training, faster convergence, and smoother and more accurate end-effector motion. The results show that the integration of local graph modeling and structure-aware global attention provides an effective solution for high-precision trajectory tracking of redundant manipulators. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
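The structural priors mentioned above can be illustrated with a single attention head whose scores receive an additive bias looked up from the hop distance between joints along the kinematic chain; the bias parameterisation and dimensions below are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class StructureBiasedAttention(nn.Module):
    """Hypothetical single-head attention over joint tokens with an additive bias
    indexed by the hop distance between joints along the kinematic chain."""
    def __init__(self, dim, max_hops=8):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.hop_bias = nn.Embedding(max_hops, 1)   # learnable bias per hop distance

    def forward(self, x, hop_dist):                 # x: (B, J, dim); hop_dist: (J, J) ints
        scores = self.q(x) @ self.k(x).transpose(1, 2) / x.shape[-1] ** 0.5
        scores = scores + self.hop_bias(hop_dist).squeeze(-1)   # inject structural prior
        return torch.softmax(scores, dim=-1) @ self.v(x)

# Example: 7 joints (e.g., a KUKA iiwa); hop distance |i - j| along a serial chain.
J, dim = 7, 32
idx = torch.arange(J)
hops = (idx[None, :] - idx[:, None]).abs()
out = StructureBiasedAttention(dim)(torch.randn(2, J, dim), hops)
```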

20 pages, 4228 KB  
Article
Design and Application of an Automated Microinjection System Combining Deep Learning Vision Positioning and Neural Network Sliding Mode Motion Control
by Zhihao Deng, Yifan Xu and Shengzheng Kang
Actuators 2026, 15(4), 208; https://doi.org/10.3390/act15040208 - 5 Apr 2026
Viewed by 111
Abstract
Microinjection is one of the most established and effective techniques for introducing foreign substances into cells. However, issues such as cumbersome procedures, low success rates, and poor repeatability in manual cell microinjection have seriously restricted its practical applications in biomedical research and engineering. To address these problems, this paper designs an automated microinjection system that combines deep learning visual positioning with adaptive neural network sliding-mode motion control. The system uses a machine vision solution based on the YOLOv8 deep learning object detection algorithm to provide the positional information required for automated microinjection. Stable and fast puncture is then achieved by controlling the end effector, which consists of a piezoelectric actuator and a displacement amplification mechanism. Because the piezoelectric actuator exhibits strong nonlinearity, the motion control of the end effector adopts a control strategy that combines sliding-mode variable-structure control with adaptive neural networks to meet the precise displacement requirements of microinjection. In addition, a host computer control system is developed to integrate the hardware, visual positioning algorithms, and motion control algorithms for the automated microinjection tasks. Finally, the effectiveness of the designed automated microinjection system is successfully verified on zebrafish embryos. Full article
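As a very rough sketch of the control strategy named above, the snippet below implements one step of an adaptive RBF-network sliding-mode controller: an RBF term compensates the unmodelled actuator nonlinearity, a smoothed switching term drives the sliding surface toward zero, and the RBF weights are updated online. All gains, the RBF layout, and the sign conventions are illustrative assumptions rather than the paper's control law.

```python
import numpy as np

centers = np.linspace(-1.0, 1.0, 9)        # RBF centers over the normalized error state
W = np.zeros(9)                            # adaptive RBF weights (updated online)
lam, k_s, gamma, dt = 20.0, 0.5, 5.0, 1e-4 # surface slope, switching gain, adaptation rate, step

def rbf(x, width=0.3):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def control_step(e, e_dot):
    """One control update from the displacement tracking error e and its derivative e_dot."""
    global W
    s = e_dot + lam * e                    # sliding surface
    phi = rbf(e)
    u_nn = W @ phi                         # RBF estimate of the unmodelled nonlinearity
    u = -u_nn - k_s * np.tanh(s / 0.01)    # smoothed switching term reduces chattering
    W += gamma * s * phi * dt              # adaptive weight update law
    return u
```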

21 pages, 2647 KB  
Article
Fine-Tuned Nonlinear Autoregressive Recurrent Neural Network Model for Dam Displacement Time Series Prediction
by Vukašin Ćirović, Vesna Ranković, Nikola Milivojević, Vladimir Milivojević and Brankica Majkić-Dursun
Mach. Learn. Knowl. Extr. 2026, 8(4), 90; https://doi.org/10.3390/make8040090 - 5 Apr 2026
Viewed by 99
Abstract
Dam monitoring data are nonlinear and nonstationary time series. Most existing data-driven dam displacement models are developed independently for each measuring point, disregarding the fact that a dam is a complex structure composed of various interconnected elements that form a unified whole. Regardless of the dam type, all points on the dam are exposed to the same external environmental influences. To account for the correlation between displacement time series at different points, this paper proposes a novel fine-tuned deep-learning nonlinear autoregressive (NAR) model based on a Long Short-Term Memory (LSTM) network for predicting dam tangential displacement, and a new method for generating source data to train the base model. The models for three measuring points were developed and tested on experimental data collected over a period of slightly more than twelve years. Compared with the model without fine-tuning, the proposed approach achieves an average mean square error (MSE) reduction of 80.68% on the training set and 65.79% on the test set, as well as an average mean absolute error (MAE) reduction of 51.05% and 52.62%, respectively. Furthermore, the proposed model outperforms Random Forest (RF), Support Vector Regression (SVR), and Multi-Layer Perceptron (MLP) models for dam displacement prediction. Full article
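The two-stage workflow implied by the abstract, a base NAR-LSTM trained on source data generated across measuring points and then fine-tuned per point, could be organised as in the sketch below; the input features, lag length, layer sizes, and learning rate are assumptions.

```python
import torch
import torch.nn as nn

class NARLSTM(nn.Module):
    """Nonlinear autoregressive model: lagged displacements (plus assumed exogenous inputs
    such as reservoir level and temperature) -> next tangential displacement."""
    def __init__(self, n_inputs=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, lag_steps, n_inputs)
        h, _ = self.lstm(x)
        return self.head(h[:, -1, :])

# Stage 1 (assumed workflow): train a base model on source data pooled across points.
base = NARLSTM()
# Stage 2: copy the base weights and fine-tune on a single measuring point's series,
# typically with a smaller learning rate so the shared dynamics are preserved.
point_model = NARLSTM()
point_model.load_state_dict(base.state_dict())
optimizer = torch.optim.Adam(point_model.parameters(), lr=1e-4)
```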

32 pages, 43664 KB  
Article
MVFF: Multi-View Feature Fusion Network for Small UAV Detection
by Kunlin Zou, Haitao Zhao, Xingwei Yan, Wei Wang, Yan Zhang and Yaxiu Zhang
Drones 2026, 10(4), 264; https://doi.org/10.3390/drones10040264 - 4 Apr 2026
Viewed by 287
Abstract
With the widespread adoption of various types of Unmanned Aerial Vehicles (UAVs), their non-compliant operations pose a severe challenge to public safety, necessitating the urgent identification and detection of UAV targets. However, in complex backgrounds, UAV targets exhibit small-scale dimensions and low contrast, coupled with extremely low signal-to-noise ratios. This forces conventional target detection methods to confront issues such as feature convergence, missed detections, and false alarms. To address these challenges, we propose a Multi-View Feature Fusion Network (MVFF) that achieves precise identification of small, low-contrast UAV targets by leveraging complementary multi-view information. First, we design a collaborative view alignment fusion module. This module employs a cross-map feature fusion attention mechanism to establish pixel-level mapping relationships and perform deep fusion, effectively resolving geometric distortion and semantic overlap caused by imaging angle differences. Furthermore, we introduce a view feature smoothing module that employs displacement operators to construct a lightweight long-range modeling mechanism. This overcomes the limitations of traditional convolutional local receptive fields, effectively eliminating ghosting artifacts and response discontinuities arising from multi-view fusion. Additionally, we developed a small object binary cross-entropy loss function. By incorporating scale-adaptive gain factors and confidence-aware weights, this function enhances the learning capability of edge features in small objects, significantly reducing prediction uncertainty caused by background noise. Comparative experiments conducted on a multi-perspective UAV dataset demonstrate that our approach consistently outperforms existing state-of-the-art methods across multiple performance metrics. Specifically, it achieves a Structure-measure of 91.50% and an F-measure of 85.14%, validating the effectiveness and superiority of the proposed method. Full article
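The small-object binary cross-entropy described above is not specified in detail in the abstract; the sketch below shows one generic form of the idea, with a scale-adaptive gain that grows as the target area shrinks and a focal-style confidence-aware weight. Both terms are assumptions about the general shape of such a loss, not the MVFF formulation.

```python
import torch
import torch.nn.functional as F

def small_object_bce(pred_logits, target_mask, alpha=2.0):
    """Scale- and confidence-weighted BCE for small-target segmentation/detection maps.
    pred_logits, target_mask: (B, 1, H, W); target_mask is a float {0,1} map."""
    bce = F.binary_cross_entropy_with_logits(pred_logits, target_mask, reduction="none")
    # Scale-adaptive gain: the smaller the positive area, the larger the per-image weight.
    area_frac = target_mask.flatten(1).mean(dim=1).clamp(min=1e-6)
    scale_gain = (1.0 / area_frac) ** 0.5
    # Confidence-aware weight: focal-style factor emphasising uncertain predictions.
    p = torch.sigmoid(pred_logits)
    conf_w = (target_mask * (1 - p) + (1 - target_mask) * p) ** alpha
    return (scale_gain.view(-1, 1, 1, 1) * conf_w * bce).mean()

loss = small_object_bce(torch.randn(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.98).float())
```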

31 pages, 2118 KB  
Review
Artificial Intelligence Enabling Intelligent Solar Energy Systems: Integration and Emerging Directions
by Rogelio Ochoa-Barragán, Luis David Saavedra-Sánchez, Fabricio Nápoles-Rivera, César Ramírez-Márquez, Luis Fernando Lira-Barragán and José María Ponce-Ortega
Processes 2026, 14(7), 1167; https://doi.org/10.3390/pr14071167 - 4 Apr 2026
Viewed by 191
Abstract
The integration of artificial intelligence (AI) into solar energy systems has emerged as a transformative pathway to enhance efficiency, reliability, and sustainability in renewable energy. This review examines recent advances in AI-driven optimization and integration strategies across photovoltaic and solar thermal technologies with elements of bibliometric analysis to identify trends, methodologies, and research directions. A particular emphasis is placed on machine learning and deep learning techniques applied to solar irradiance forecasting, maximum power point tracking, fault detection, energy management, and predictive maintenance. Unlike earlier reviews that focused on isolated applications, this work highlights the systemic role of AI in enabling smart grids, hybrid systems, and large-scale energy storage integration. The novelty of this contribution lies in mapping the evolution from traditional control methods to intelligent, self-adaptive frameworks that couple physical modeling with data-driven approaches, offering a structured roadmap for future developments. Furthermore, the review identifies challenges such as data scarcity, computational demand, and interpretability of AI models, while outlining opportunities for process intensification, resilience, and techno-economic optimization. By bridging technical progress with implementation prospects, this article provides an updated reference for researchers, policymakers, and industry stakeholders seeking to accelerate the deployment of AI-enhanced solar energy solutions. Full article
(This article belongs to the Special Issue Modeling, Simulation and Control in Energy Systems—2nd Edition)
