Search Results (1,804)

Search Parameters:
Keywords = design-based inference

31 pages, 2326 KB  
Article
A Logic-Guided and Explainable Approach to LLM-Based Unit Test Generation
by Cong Zeng, Meng Li, Fei Liu, Xiaohua Yang, Jie Liu and Shiyu Yan
Appl. Sci. 2026, 16(5), 2542; https://doi.org/10.3390/app16052542 - 6 Mar 2026
Abstract
Large Language Models (LLMs) have demonstrated considerable potential in automated unit test generation; however, most existing approaches rely on a black-box paradigm that directly maps code under test to test code, often resulting in low compilation success rates, limited branch coverage, high assertion failure rates, and poor interpretability. Inspired by the human process of developing test cases, this paper proposes Logic-CoT, a white-box generation paradigm that follows a code under test–logical reasoning–test code workflow. The proposed approach consists of three stages: in the logical inference stage, logical node state vectors and execution paths are constructed from the control flow graph of the code under test, and input values and oracles satisfying state constraints are derived; in the test case construction stage, a template-based method is used to initialize test code conforming to the Arrange–Act–Assert pattern, with test intentions explicitly documented as comments; in the repair stage, syntactic errors and assertion failures are handled in a layered manner, where the former are corrected without altering test logic and the latter trigger logic reflection based on discrepancies between expected and actual outcomes, leading to state updates and test case reconstruction. This design forms a closed-loop process of reasoning, generation, and repair. Experiments on the QuixBugs, Apache Commons, HumanEval, and SV-COMP benchmarks show that Logic-CoT consistently outperforms state-of-the-art approaches such as ChatUniTest in terms of compilation success rate, runtime pass rate, assertion pass rate, branch coverage, average repair iterations for faulty code, and interpretability. Ablation studies further demonstrate that each component of Logic-CoT contributes effectively to improving the overall quality and effectiveness of generated test cases. 
These results indicate that Logic-CoT improves the reliability and interpretability of LLM-generated unit tests in practical software testing scenarios. Full article
(This article belongs to the Special Issue Advancements in Computer Systems and Software Testing)

16 pages, 4216 KB  
Article
Neural Network Approach for Wideband RCS Computation with Wide Incident Angles via Method of Moments
by Woongi Bin, Sanghyuk An and Wonzoo Chung
Appl. Sci. 2026, 16(5), 2518; https://doi.org/10.3390/app16052518 - 5 Mar 2026
Abstract
In this paper, we present a deep neural network–based approach for computing radar cross section (RCS) over a wide frequency band and a broad range of incident angles. The proposed network, termed WBRCS-Net, is designed to converge to the solution of the method of moments (MoM) formulation by minimizing a mean-squared residual loss without explicitly solving the MoM linear system, thereby avoiding the numerical instabilities commonly encountered in conventional iterative solvers. Moreover, by using only the frequency and incident angle as inputs, WBRCS-Net enables wideband RCS prediction over a broad range of incident angles while substantially simplifying the network architecture. The performance of WBRCS-Net is evaluated on perfectly electrically conducting (PEC) spheres and cubes and is compared with the Maehly approximation based on Chebyshev polynomials, using monostatic RCS over a frequency range of 2–12 GHz and an incident-angle range of 0°–90°. Experimental results demonstrate that, once trained, WBRCS-Net enables stable wideband RCS computation over a wide range of incident angles with instantaneous inference speed, achieving a minimum mean-squared error (MSE) on the order of 10⁻¹⁴ relative to reference MoM solutions. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

19 pages, 15575 KB  
Article
Adaptive Tuning Framework for MOSFET Gate Drive Parameters Based on PPO
by Yuhang Wang, Zhongbo Zhu, Qidong Bao, Xiangyu Meng and Xinglin Sun
Electronics 2026, 15(5), 1089; https://doi.org/10.3390/electronics15051089 - 5 Mar 2026
Abstract
The optimization of the MOSFET gate drive parameters is crucial for the trade-off between switching loss and electromagnetic interference (EMI). However, the nonlinear coupling among gate drive parameters, board-level parasitics, and switching performance limits the effectiveness of traditional MOSFET drive design methods. This paper proposes an adaptive tuning framework based on the proximal policy optimization (PPO) algorithm. An analytical switching model incorporating board-level parasitics is first derived to analyze the coupling between drive parameters and switching performance. The optimization problem is then formulated as a Markov decision process (MDP). Within this framework, domain randomization is applied during training. This enables the agent to learn a generalizable optimization strategy that remains robust across the varying parasitic inductances encountered in different PCB layouts. Compared to the traditional Non-dominated Sorting Genetic Algorithm II (NSGA-II), the proposed method uses the trained policy for direct inference. This reduces computation time by 98.7% while maintaining a multi-objective performance difference within 10.06%. In addition, hardware verification shows a 10.7% average deviation between the measured and simulated results. These results demonstrate that the proposed method provides an efficient and scalable solution for MOSFET gate drive optimization. Full article
(This article belongs to the Special Issue AI-Driven Innovations in Power Electronics Research and Development)

26 pages, 4251 KB  
Article
Reliability-Aware Robust Optimization for Multi-Type Sensor Placement Under Sensor Failures
by Shenghuan Zeng, Ding Luo, Pujingru Yan, Naiwei Lu, Ke Huang and Lei Wang
Buildings 2026, 16(5), 1024; https://doi.org/10.3390/buildings16051024 - 5 Mar 2026
Abstract
In the field of structural health monitoring systems, sensors serve as the fundamental components for assessing infrastructure integrity. The rationality of their spatial configuration significantly influences the precision of structural performance assessment, the efficacy of damage detection algorithms, and the operational reliability of the system throughout its designated lifecycle. A robust optimization methodology for the placement of multi-type sensors is proposed in this study, explicitly formulated to mitigate the negative impact of sensor malfunctions during long-term operation. First, a rigorous evaluation framework for sensor placement schemes is established based on Bayesian inference and the minimization of information entropy, thereby quantifying the uncertainty inherent in parameter identification. Then, a probabilistic model of sensor failure is developed utilizing the Weibull distribution to capture time-dependent reliability characteristics, combined with a modified information entropy calculation method that mathematically assimilates these failure probabilities into the optimization objective. Finally, a heuristic search strategy is employed to achieve the robust optimal placement of multi-type sensors, efficiently navigating the complex combinatorial search space. In contrast to deterministic information entropy (DIE) methodologies, which assume ideal sensor functionality, the robust information entropy (RIE) approach comprehensively accounts for the stochastic nature of sensor failures and their impact on the information content of the monitoring network, thereby significantly augmenting the robustness and redundancy of the sensor configuration. 
Validations utilizing a numerical frame structure and a finite element bridge model demonstrate that the RIE method effectively integrates the sensor failure probability model to yield robust optimal placement schemes, minimizing the risk of information loss and ensuring reliable structural health monitoring throughout the engineering lifecycle. Full article

22 pages, 5554 KB  
Article
Image Inpainting-Based Point Cloud Restoration for Enhancing Tactical Classification of Unmanned Surface Vehicles
by Hyunjun Jeon, Eon-ho Lee, Jane Shin and Sejin Lee
Sensors 2026, 26(5), 1637; https://doi.org/10.3390/s26051637 - 5 Mar 2026
Abstract
The operational effectiveness of Unmanned Surface Vehicles (USVs) in modern naval scenarios depends on robust situational awareness. While LiDAR sensors are integral to 3D perception, their performance is frequently affected by incomplete data resulting from long-range sparsity and target occlusion. This study investigates a framework to restore incomplete point clouds to support improved surface vessel classification. The framework first estimates the target’s heading angle using a 2D area projection technique, combined with a descriptor to address orientation ambiguity. Subsequently, the 3D point cloud is converted into a 2D multi-channel image representation to leverage a deep learning-based image inpainting algorithm for data restoration. Finally, a high-density keypoint extraction method is applied to the completed point cloud to generate features for classification. This image-based approach is designed to prioritize computational efficiency and inference speed, facilitating deployment on resource-constrained maritime platforms. Experiments conducted on a simulator dataset reveal that the classification of restored point clouds yields higher accuracy compared to using the original, incomplete LiDAR data, particularly at extended distances (>70 m) and challenging aspect angles (0° and 180°). The results suggest the framework’s potential to address perception failures in sparse data scenarios, thereby supporting the operational envelope of USVs in contested environments. Full article
(This article belongs to the Section Sensors and Robotics)

21 pages, 5786 KB  
Article
Uncertainty3D: A Lightweight Tri-Dimensional Uncertainty Framework for CNN-Based Active Learning in Object Detection
by Qing Li, Chunhe Xia, Zhipeng Zhang and Wenting Ma
Appl. Sci. 2026, 16(5), 2503; https://doi.org/10.3390/app16052503 - 5 Mar 2026
Abstract
In object detection, annotation cost and computational efficiency are important factors in iterative model improvement under standard benchmark settings. Active learning (AL) addresses this challenge by selecting informative samples for labeling; however, many detection-oriented AL methods incur substantial overhead due to repeated inference (e.g., augmentation-based consistency). This paper introduces Uncertainty3D, a lightweight uncertainty proxy designed for standard CNN-based object detectors. It leverages native pre-NMS predictions to estimate sample informativeness using a single forward pass. We propose a tri-dimensional formulation that captures inconsistencies in position, scale, and category across proposal-consistent predictions. Experiments on PASCAL VOC and MS COCO using representative CNN-based detectors (Faster R-CNN and RetinaNet) show competitive mAP versus representative baselines and about 3–4× faster uncertainty estimation than augmentation-based baselines. Full article

32 pages, 5003 KB  
Article
A Novel Hybrid IK Architecture for Robotic Arms: Iterative Refinement of Soft-Computing Approximations with Validation on ABB IRB-1200 Robotic Arm
by Meenalochani Jayabalan, Karunamoorthy Loganathan and Palanikumar Kayaroganam
Machines 2026, 14(3), 292; https://doi.org/10.3390/machines14030292 - 4 Mar 2026
Abstract
Adaptive Neuro-Fuzzy Inference System (ANFIS)-based inverse kinematics (IK) is highly accurate for trained poses but often yields approximations for unseen inputs due to non-standardized training data. This research addresses these limitations through two novel contributions designed for any generic Degrees of Freedom (DoF) serial revolute robotic arm. First, a structured training methodology is introduced using workspace decomposition and cubic path planning. Instead of random sampling, the workspace is partitioned into cubic regions where 28 unique trajectories (12 edges, 12 face diagonals, 4 space diagonals) connect the eight vertices using cubic polynomial interpolation. This ensures physically consistent data mirroring real-world point-to-point (PTP) movements. Even though validated on an ABB IRB-1200 robotic arm, this modular design is inherently scalable, allowing the local cubic expertise to be extended to cover the entire reachable workspace. Second, a two-stage hybrid IK framework is proposed, where an initial ANFIS approximation is refined via Jacobian-based iterative methods. Three hybrid frameworks were evaluated: Framework-1 (ANFIS + Jacobian Gradient), Framework-2 (ANFIS + Jacobian Pseudoinverse/Newton–Raphson), and Framework-3 (ANFIS + Damped Least Squares). The results show that all three hybrid IK frameworks achieve reliable convergence, while the DLS-based hybrid provides the best trade-off between accuracy, convergence speed, and numerical stability. This generic, analytical-free architecture provides a computationally efficient solution even in a hybrid scenario, bridging the gap between offline structured training and online, real-time refinement for digital twin synchronization and industrial automation. Full article

20 pages, 3216 KB  
Article
AMFA-DeepLab: An Improved Lightweight DeepLabV3+ Adaptive Multi-Statistic Fusion Attention Network for Sea Ice Segmentation in GaoFen-1 Images
by Zengzhou Hao, Xin Li, Qiankun Zhu, Yunzhou Li, Zhihua Mao, Jianyu Chen and Delu Pan
Remote Sens. 2026, 18(5), 783; https://doi.org/10.3390/rs18050783 - 4 Mar 2026
Abstract
To address difficult detail extraction and low operating efficiency in monitoring sea ice over large areas with wide-field-of-view images from the Chinese Gaofen-1 satellite, a lightweight, high-precision sea ice segmentation network with an adaptive multistatistic fusion attention (AMFA) module, using DeepLabV3+ as the base architecture (AMFA-DeepLab), is proposed. First, the backbone network is replaced with a lightweight MobileNetV2, which preserves feature extraction capability while greatly reducing model computational complexity through inverted residuals and depthwise separable convolution. Second, to solve the problems of fragmented ice texture blurring and speckle noise interference in optical images, an AMFA module is designed and introduced on the decoder side. This module innovatively integrates a global median pooling branch and adaptively recalibrates feature weights through a dynamic channel mixing mechanism, effectively enhancing the model’s capability to capture fine sea ice edge features and its antinoise robustness in complex backgrounds. Experimental results based on a dataset from Liaodong Bay in the Bohai Sea of China show that the intersection over union of AMFA-DeepLab reaches 92.15% and the F1-score reaches 95.91%, increases of 3.06% and 1.68%, respectively, compared with those of the baseline model. In addition, only 5.85 million model parameters are needed, the training time is shortened to 4.42 h, and the inference speed is 281.76 frames per second. Visual analysis and generalization tests further demonstrate that this model can accurately eliminate clutter interference from coastal land and seawater and extract the fine filamentous structure of drift ice in complex melting-ice scenes. This research overcomes the precision bottleneck while achieving an extremely lightweight model, providing efficient technical support for operational dynamic monitoring of sea ice disasters based on Chinese GaoFen-1 satellites. Full article

15 pages, 2505 KB  
Article
Performance Validation of ORTHOSEG, a Novel Artificial Intelligence Tool for the Segmentation of Orthopantomographs and Intra-Oral X-Rays
by Giuseppe Cota, Gaetano Scaramozzino, Marco Chiesa, Lelio Gennaro, Maurizio Pascadopoli, Andrea Scribante and Marco Colombo
Clin. Pract. 2026, 16(3), 54; https://doi.org/10.3390/clinpract16030054 - 4 Mar 2026
Abstract
Background: Dental radiographs are essential for diagnosis and treatment planning in modern dentistry. However, their manual interpretation is time-consuming and subject to variability, highlighting the need for automated tools to improve efficiency and consistency. This study aims to validate ORTHOSEG, a deep learning-based system designed to automate the segmentation of anatomical, pathological, and non-pathological elements in radiographs, including orthopantomograms, bitewings, and periapical images. Methods: ORTHOSEG’s performance was evaluated using a rigorously curated dataset of 150 dental radiographs, including 50 orthopantomograms, 50 bitewings, and 50 periapical images, with manual annotations by expert clinicians serving as the ground truth. The system’s segmentation performance was assessed using standard evaluation metrics, including mean Dice Similarity Coefficient (mDSC) and mean Intersection over Union (mIoU), and inference time was also recorded. Results: The system achieved overall mDSC and mIoU values of 0.635 ± 0.233 and 0.576 ± 0.214, respectively. In particular, for orthopantomograms, it achieved an mDSC of 0.756 ± 0.174 and an mIoU of 0.684 ± 0.172, surpassing existing benchmarks. Its segmentation capabilities extend to approximately 70 distinct elements, underscoring its comprehensive utility. The system demonstrated efficient computational performance, with processing times of 19.745 ± 3.625 s for orthopantomograms, 8.467 ± 0.903 s for bitewings, and 5.653 ± 0.897 s for periapical radiographs on standard clinical hardware. Conclusions: ORTHOSEG demonstrates efficiency suitable for integration into routine workflows. This study confirms ORTHOSEG’s reliability and potential to improve diagnostic workflows, offering clinicians a valuable tool for faster and more detailed radiograph analysis. Future research will focus on extending validation across diverse clinical scenarios to ensure broader applicability. 
However, this study has limitations, including the use of a dataset derived from a European population and the absence of usability and clinical workflow evaluation, which should be addressed in future studies. Full article
(This article belongs to the Special Issue Clinical Outcome Research in the Head and Neck: 2nd Edition)

32 pages, 4390 KB  
Article
Predicting the Remaining Useful Life of Ship Shafting Using Bayesian Networks with Asymmetric Probability Distributions
by Peng Dong, Ge Han and Luwen Yuan
Symmetry 2026, 18(3), 443; https://doi.org/10.3390/sym18030443 - 4 Mar 2026
Abstract
Accurately predicting the remaining useful life (RUL) of ship shafting is crucial for ensuring navigation safety and optimizing operation and maintenance. Traditional Bayesian Network (BN) methods are usually based on the assumption of symmetric distributions. They struggle to effectively characterize common statistical properties [...] Read more.
Accurately predicting the remaining useful life (RUL) of ship shafting is crucial for ensuring navigation safety and optimizing operation and maintenance. Traditional Bayesian Network (BN) methods are usually based on the assumption of symmetric distributions. They struggle to effectively characterize common statistical properties such as asymmetry and heavy tails during the shafting degradation process, leading to biases in prediction results. To address this issue, this study proposes an Asymmetric Distribution Bayesian Network (ADBN) method. The method consists of three key components. Firstly, each node selects the optimal asymmetric distribution form based on the Bayesian Information Criterion (BIC) to better fit data characteristics. Secondly, a Generalized Linear Model (GLM) is used to associate distribution parameters (e.g., location, scale, shape) with parent node states, enabling the conditional distribution to adaptively evolve with the system degradation process. Finally, to tackle the complex inference problem under asymmetric distributions, an approximate algorithm based on stochastic gradient variational inference is designed to ensure prediction timeliness. Experimental results show that the ADBN method outperforms traditional Gaussian networks in terms of Mean Absolute Error in the early, middle, and late stages of RUL prediction, and can provide more accurate prediction intervals. This research offers a probabilistic approach that better aligns with actual statistical properties for modeling ship shafting degradation. Full article
(This article belongs to the Special Issue Symmetry in Fault Detection, Diagnosis, and Prognostics)

30 pages, 3169 KB  
Article
Mineralogical Effects on Cement-Stabilized Rammed Earth Strength: A Multivariate and Non-Parametric Analysis
by Piotr Narloch, Łukasz Rosicki, Hubert Anysz and Ireneusz Gawriuczenkow
Sustainability 2026, 18(5), 2491; https://doi.org/10.3390/su18052491 - 4 Mar 2026
Abstract
This study demonstrates that compressive strength in cement-stabilized rammed earth is governed by conditional, threshold-controlled interactions rather than by intrinsic mineralogical effects. A B + K (beidellite + kaolinite) content exceeding 15% defines a low-strength regime (median ≈ 44.6 kN), whereas B + K ≤ 5% allows medians above 90 kN under 7% forming moisture. Quartz-rich fractions show a global correlation of r = 0.71. The Kruskal–Wallis test confirms strong clay grouping influence (H = 72.78, p < 0.001). Analysis of the experimental dataset shows that most strength distributions deviate from normality, invalidating pooled parametric inference and justifying the use of distribution-free methods. At the global level, bulk density and quartz-rich fractions are the dominant positive contributors to strength. Meanwhile, forming moisture and high combined beidellite–kaolinite content (>15%) exert a negative influence under elevated forming moisture (8%), whereas the effect of 1:1 and 2:1 clay minerals differs depending on their hydro-affinity and moisture regime. However, subgroup analyses reveal frequent reversals in both magnitude and sign of correlations, proving that mineral effects depend critically on cement dosage and moisture regime, revealing discrete strength regimes defined by hierarchical interactions between moisture, cement content, and mineralogical thresholds. The combined beidellite–kaolinite content was classified into ≤5%, 5–15%, and >15% groups. Specimens with B + K > 15% consistently formed a low-strength regime, with a median destructive load of approximately 44.6 kN (≈1.1–1.3 MPa depending on cross-sectional area). In contrast, mixtures with B + K ≤ 5% achieved median loads above 90 kN (≈2.5–3.0 MPa). Quartz-rich fractions showed a strong global positive correlation with strength (r = 0.71), while the grouped clay fraction exhibited a highly significant effect (Kruskal–Wallis H = 72.78, p < 0.001). 
A regime shift was observed between 7% and 8% forming moisture, where quartz correlation changed from strongly positive (r ≈ 0.70) to negative (r ≈ −0.69). Increasing cement content from 6% to 9% significantly improved strength (H = 12.30, p = 0.0005), although this effect diminished when B + K exceeded 15% or forming moisture reached 8%. Association rules further confirm that high or low strength emerges only from specific multivariate combinations. The results show that mineralogy influences CSRE strength primarily through interaction with technological parameters, providing a robust basis for regime-based interpretation and rational mixture design. Full article

17 pages, 533 KB  
Systematic Review
Immersive Virtual Reality in Addictive Disorders: A Systematic Review of Neuroimaging Evidence
by Francesco Monaco, Ernesta Panarello, Annarita Vignapiano, Stefania Landi, Rossella Mucciolo, Raffaele Malvone, Ilaria Pullano, Alessandra Marenna, Anna Maria Iazzolino, Giulio Corrivetti and Luca Steardo
Neuroimaging 2026, 1(1), 5; https://doi.org/10.3390/neuroimaging1010005 - 4 Mar 2026
Abstract
Background: Addictive disorders are characterized by the dysregulation of neural circuits involved in reward processing, salience attribution, emotional regulation, and cognitive control. Traditional neuroimaging paradigms based on static or two-dimensional stimuli show limited ecological validity and may fail to capture the contextual complexity of real-world addictive triggers. Immersive virtual reality (VR) offers a novel approach to simulate realistic, multisensory environments capable of eliciting craving and emotional responses. Although several reviews have examined VR in addictive disorders, most combined immersive and non-immersive tools and did not restrict inclusion to studies with brain-based outcomes. Methods: This systematic review with narrative synthesis was conducted in PubMed/MEDLINE and APA PsycINFO for studies published up to 30 December 2025. This systematic review followed PRISMA 2020 and was prospectively registered in PROSPERO; due to heterogeneity, findings were synthesized narratively. Eligible studies included human participants with substance-related or behavioral addictions and employed immersive VR paradigms (e.g., head-mounted display–based environments) combined with neuroimaging or neurophysiological measures (EEG, fMRI, fNIRS, PET, or DTI). Risk of bias was assessed using ROB-2 or ROBINS-I, and overall certainty of evidence was evaluated with the GRADE framework. Results: Ten studies met the inclusion criteria, encompassing over 1450 participants with alcohol, nicotine, methamphetamine, opioid use disorders, and internet gaming disorder. Immersive VR was associated with craving-related neural responses across modalities, involving prefrontal, insular, limbic, and striatal networks. EEG studies reported spectral power changes associated with craving and attentional salience, while fMRI, fNIRS, and PET studies demonstrated activation and modulation of executive control and reward-related circuits. 
Preliminary longitudinal and interventional studies indicate that repeated VR exposure may induce neurobiological changes consistent with therapeutic modulation. Conclusions: Combining immersive VR with neuroimaging supports its use as an ecologically grounded framework to probe addiction-related brain circuits; however, larger trials and standardized reporting are needed to strengthen clinical translation. Future studies should prioritize adequately powered randomized designs, harmonized VR cue-reactivity paradigms, and transparent neuroimaging reporting to enable reproducibility and cumulative inference. Full article

20 pages, 1821 KB  
Article
Research on AI-Assisted Fire Risk Target Detection for Special Operating Conditions in Under-Construction Nuclear Power Plants
by Zhendong Li, Guangwei Liu, Kai Yu and Shijie Du
Fire 2026, 9(3), 115; https://doi.org/10.3390/fire9030115 - 3 Mar 2026
Abstract
In night-time construction scenarios of under-construction nuclear power plants, some yellow lights and open flames exhibit highly similar visual characteristics, resulting in frequent false fire-source alarms. Such false alarms tend to drown out real fire alarm signals, which not only severely disrupts construction operations but also endangers fire safety. To address this problem, this paper proposes an intelligent fire risk identification method based on an enhanced YOLOv8n (named YOLO-Fire). Specifically, shallow convolutional layers embedded with a coordinate attention mechanism are integrated into the Backbone of YOLOv8n; the Neck is optimized to improve the efficiency of multi-scale feature fusion; and the Head is enhanced to strengthen the localization and classification branches. Additionally, a composite loss function combining classification loss, regression loss, and similarity loss is designed, coupled with night-scene-specific data augmentation techniques and a two-stage progressive training strategy. Experimental results show that YOLO-Fire reduces the false alarm rate by 14.3%, increases the mean average precision (mAP@0.5) for open flames by 11.3% to 75.2%, and maintains an inference speed of over 85 frames per second (FPS). This study achieves a strong balance between false alarm control, small-object detection accuracy, and real-time processing efficiency, effectively resolving the misclassification of open flames and lights in night-time construction scenarios and providing precise, efficient intelligent support for fire risk prevention and control during the construction phase of nuclear power plants.
(This article belongs to the Special Issue Fire Risk Management and Emergency Prevention)
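The abstract names a composite loss combining classification, regression, and similarity terms but does not publish its formulation. As a minimal sketch, assuming a simple weighted sum (the weights `w_cls`, `w_reg`, and `w_sim` are hypothetical placeholders, not values from the paper):

```python
def composite_loss(cls_loss: float, reg_loss: float, sim_loss: float,
                   w_cls: float = 1.0, w_reg: float = 1.0,
                   w_sim: float = 0.5) -> float:
    """Hypothetical weighted sum of the three loss terms named in the abstract.

    cls_loss: classification loss; reg_loss: box-regression loss;
    sim_loss: similarity term penalizing flame/light confusion.
    Weights are illustrative only; the paper does not report them.
    """
    return w_cls * cls_loss + w_reg * reg_loss + w_sim * sim_loss
```

In an actual training loop the three terms would come from the detector's head outputs, and tuning the similarity weight against the other two is what would trade false-alarm suppression against detection recall.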

12 pages, 485 KB  
Article
Association Between Metabolic Syndrome and Psychiatric Morbidity in a Nationwide Taiwanese Population Study
by Jia-In Lee, Yin-Yin Hsu, Jiun-Hung Geng, Yi-Ching Lo, Szu-Chia Chen and Cheng-Sheng Chen
Nutrients 2026, 18(5), 819; https://doi.org/10.3390/nu18050819 - 3 Mar 2026
Abstract
Background/Objectives: The relationship between metabolic syndrome (MetS) and mental health disorders has gained increasing attention, yet evidence from large population-based studies remains limited. This study aimed to examine the association between MetS and psychiatric morbidity in a nationwide Taiwanese adult cohort using a cross-sectional design. Methods: Between 2008 and 2019, a total of 121,575 adults aged 30–70 years were recruited from 29 community health screening stations across Taiwan. Demographic characteristics, lifestyle factors, medical history, and physical measurements were collected. Participants were classified as having MetS or not according to standard criteria. Psychiatric morbidity was defined as depressive and/or anxiety burden identified by validated screening instruments (Patient Health Questionnaire-2 score ≥3 or Generalized Anxiety Disorder-2 score ≥3) or self-reported physician-diagnosed depression. Multivariable logistic regression analyses were performed to evaluate the association between MetS and psychiatric morbidity after adjustment for potential confounders. Results: Psychiatric morbidity was identified in 1366 of 27,349 participants with MetS (5.0%) and in 4047 of 94,226 participants without MetS (4.3%). After multivariable adjustment, MetS was significantly associated with increased odds of psychiatric morbidity (adjusted odds ratio [aOR] 1.235; 95% confidence interval [CI] 1.152–1.325). Among individual MetS components, hypertension, increased waist circumference, and hypertriglyceridemia were independently associated with higher odds of psychiatric morbidity. Conclusions: MetS was associated with a modest increase in psychiatric morbidity in this large Taiwanese community cohort. Because of the cross-sectional design, causal inference is limited. 
Future longitudinal studies are needed to clarify the direction of the association and the underlying mechanisms linking metabolic and mental health conditions.
(This article belongs to the Section Nutrition and Metabolism)
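The adjusted odds ratio above comes from multivariable logistic regression, which cannot be reproduced without individual-level data. As a sketch, the crude (unadjusted) odds ratio implied by the counts reported in the abstract can be computed directly from the 2×2 table:

```python
def crude_odds_ratio(cases_exposed: int, n_exposed: int,
                     cases_unexposed: int, n_unexposed: int) -> float:
    """Unadjusted odds ratio from a 2x2 table
    (exposure = MetS, outcome = psychiatric morbidity)."""
    odds_exposed = cases_exposed / (n_exposed - cases_exposed)
    odds_unexposed = cases_unexposed / (n_unexposed - cases_unexposed)
    return odds_exposed / odds_unexposed

# Counts reported in the abstract: 1366/27,349 with MetS vs. 4047/94,226 without.
crude_or = crude_odds_ratio(1366, 27349, 4047, 94226)  # ~1.17
```

The crude estimate (~1.17) differs from the reported aOR of 1.235, which is expected: the multivariable model adjusts for confounders such as age, lifestyle, and medical history, shifting the estimate.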

29 pages, 2389 KB  
Article
From Concept to Practice: Evidence and Lessons from Sponge City Implementation in Shenzhen, China
by Hugo Pinto, Jennifer Elston, Ojo Segun Sunday and Carla Nogueira
Urban Sci. 2026, 10(3), 135; https://doi.org/10.3390/urbansci10030135 - 3 Mar 2026
Abstract
Urban flooding represents an increasingly critical challenge in rapidly urbanizing cities, where high-density development and climate variability intensify hydrological vulnerability. This article presents an analytically focused case study of Shenzhen, a national Sponge City pilot, to examine not only whether nature-based interventions are associated with flood-resilience gains but also under what spatial, institutional, and governance conditions such gains emerge. The study adopts a qualitative mixed-methods case-study design based on secondary sources, integrating observed flood-event records, reported hydrological and water-quality indicators, model-based projections, and systematic policy analysis. Drawing on data from 2006–2020, the analysis explicitly distinguishes observed outcomes, reported performance indicators, and inferred effects, addressing a key methodological limitation in existing Sponge City assessments. Results indicate that, within designated pilot zones, Sponge City interventions are associated with reduced surface runoff, attenuated peak flows, and reported improvements in pollutant filtration, particularly where green infrastructure density and monitoring capacity are high. However, these performance patterns are spatially uneven and mediated by governance constraints, including institutional fragmentation and maintenance capacity. The principal contribution of the study lies in identifying governance–infrastructure mechanisms that condition Sponge City performance and scalability. By treating Shenzhen as a critical rather than representative case, the article offers analytically transferable insights into the effectiveness, durability, and limits of nature-based flood-management strategies in high-capacity urban contexts.
(This article belongs to the Special Issue Urban Resilience to Climate Change Through Nature-Based Solutions)
