Search Results (744)

Search Parameters:
Keywords = network slicing

16 pages, 1435 KB  
Article
Development of High-Internal-Phase Pickering Emulsions Stabilized by Soy Protein Isolate and Sodium Alginate as Innovative Fat Replacers for Emulsified Sausages
by Zhi Wang, Xuefei Wang, Xin Li, Chao Zhang, Fangda Sun, Qian Chen, Qian Liu, Baohua Kong and Haotian Liu
Foods 2026, 15(8), 1294; https://doi.org/10.3390/foods15081294 - 9 Apr 2026
Abstract
In this study, vegetable oil-based high-internal-phase Pickering emulsions (HIPPEs) were formulated from soy protein isolate and sodium alginate, and the effects of different replacement ratios (20–100%) of pork back fat on the quality of emulsified sausages were investigated. With the increase in the fat replacement ratio, cooking loss, released fat, and lipid oxidation significantly decreased (p < 0.05). Similarly, as the replacement ratio rose, L*-values, pH and springiness increased, while a*-values, hardness, cohesiveness, and chewiness showed a significant decrease. The reformulated sausages exhibited superior slice compactness, a macroscopic trait corroborated by the dense network structure observed via microstructural analysis. Electronic nose and electronic tongue measurements indicated that the inclusion of HIPPEs modulated both the aroma profiles and taste attributes of the emulsified sausages. Moreover, although differences were observed in some sensory attributes and flavor characteristics, all formulations with HIPPEs remained within an acceptable sensory range. Full article
(This article belongs to the Section Meat)

21 pages, 11316 KB  
Article
Multimodal Fusion Prediction of Radiation Pneumonitis via Key Pre-Radiotherapy Imaging Feature Selection Based on Dual-Layer Attention Multiple-Instance Learning
by Hao Wang, Dinghui Wu, Shuguang Han, Jingli Tang and Wenlong Zhang
J. Imaging 2026, 12(4), 158; https://doi.org/10.3390/jimaging12040158 - 8 Apr 2026
Viewed by 148
Abstract
Radiation pneumonitis (RP), one of the most common and severe complications in locally advanced non-small cell lung cancer (LA-NSCLC) patients following thoracic radiotherapy, presents significant challenges in prediction due to the complexity of clinical risk factors, incomplete multimodal data, and unavailable slice-level annotations in pre-radiotherapy CT images. To address these challenges, we propose a multimodal fusion framework based on Dual-Layer Attention-Based Adaptive Bag Embedding Multiple-Instance Learning (DAAE-MIL) for accurate RP prediction. This study retrospectively collected data from 995 LA-NSCLC patients who received thoracic radiotherapy between November 2018 and April 2025. After screening, the subject dataset (n = 670) was split: 535 samples were allocated for training, and the remaining samples (n = 135) were reserved for an independent test set. The proposed framework first extracts pre-radiotherapy CT image features using a fine-tuned C3D network, followed by the DAAE-MIL module to screen critical instances and generate bag-level representations, thereby enhancing the accuracy of deep feature extraction. Subsequently, clinical data, radiomics features, and CT-derived deep features are integrated to construct a multimodal prediction model. The proposed model demonstrates promising RP prediction performance across multiple evaluation metrics, outperforming both state-of-the-art and unimodal RP prediction approaches. On the test set, it achieves an accuracy (ACC) of 0.93 and an area under the curve (AUC) of 0.97. This study validates that the proposed method effectively addresses the limitations of single-modal prediction and the unknown key features in pre-radiotherapy CT images while providing significant clinical value for RP risk assessment. Full article
(This article belongs to the Section Medical Imaging)
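
The bag-level pooling named in the abstract above (attention-based multiple-instance learning) reduces to scoring each instance, softmaxing the scores, and taking a weighted sum. The toy version below uses a linear instance scorer as an illustration only; it is not the DAAE-MIL implementation:

```python
import math

def attention_mil_pool(instances, w):
    """Toy attention-based MIL pooling: score each instance feature
    vector with a linear scorer w, softmax the scores, and return the
    attention-weighted bag embedding plus the attention weights."""
    scores = [sum(wi * xi for wi, xi in zip(w, inst)) for inst in instances]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    attn = [e / total for e in exps]
    dim = len(instances[0])
    bag = [sum(a * inst[d] for a, inst in zip(attn, instances))
           for d in range(dim)]
    return bag, attn
```

Instances with higher scores dominate the bag embedding, which is how such a module "screens critical instances" without slice-level labels.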

16 pages, 4263 KB  
Article
Application of Near-Infrared Spectroscopy in Moisture Detection of Carrot Slices During Freeze-Drying
by Pengtao Wang, Meng Sun, Hongwen Xu, Moran Zhang, Rong Liu, Yunfei Xie and Jun Cheng
Foods 2026, 15(7), 1256; https://doi.org/10.3390/foods15071256 - 7 Apr 2026
Viewed by 155
Abstract
This study explored the feasibility of near-infrared (NIR) spectroscopy for detecting total water, free water and bound water in carrot slices during freeze-drying, with low-field nuclear magnetic resonance (LF-NMR) characterizing water state distribution and oven-drying determining moisture content (MC). NIR spectra (10,000–4000 cm⁻¹) were processed via optimized sample partitioning, preprocessing and feature extraction; partial least squares regression (PLSR), support vector regression (SVR), back-propagation artificial neural network (BPANN), extreme gradient boosting (XGBoost) and particle swarm optimization–random forest (PSO-RF) models were established and evaluated. Results showed that SVR and BPANN performed robustly, with CARS being the optimal feature extraction method. The full-moisture system achieved high total/free water prediction accuracy (Rp² = 0.9902/0.9740), while the low-moisture system improved bound water prediction (Rp² = 0.9709). The established NIR models exhibited excellent fitting and generalization ability, enabling rapid and non-destructive quantitative prediction of moisture content during carrot freeze-drying. Full article
(This article belongs to the Section Food Analytical Methods)
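
The prediction-set metric reported above (Rp²) is the ordinary coefficient of determination computed on held-out samples; a minimal reference implementation:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot,
    i.e. the Rp^2 metric when computed on a prediction set."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Perfect predictions give 1.0; always predicting the mean gives 0.0, so values near 0.97–0.99 as reported indicate a very tight fit.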

18 pages, 25595 KB  
Article
Intelligent Recognition and Trajectory Planning for Welds Grinding Based on 3D Visual Guidance
by Pengrui Zhong, Long Xue, Jiqiang Huang, Yong Zou and Feng Han
Machines 2026, 14(4), 393; https://doi.org/10.3390/machines14040393 - 3 Apr 2026
Viewed by 223
Abstract
In the fabrication process of pipelines for petrochemical and other industries, weld reinforcement is often excessive and adversely affects subsequent processes such as anticorrosion treatment and surface coating. Weld reinforcement must be removed through a grinding process. Welding deformation and fit-up errors often lead to highly irregular weld geometries, which makes robotic grinding difficult and causes the task to still heavily rely on manual operation. To address this issue, this study proposes an automatic weld recognition and grinding trajectory planning method based on 3D visualization and deep learning. A weld recognition network, termed WSR-Net, has been developed based on an improved PointNet++ architecture with a cross-attention mechanism, achieving a segmentation accuracy of 98.87% and a mean intersection over union of 90.71% on the test set. An intrinsic shape signature (ISS) key point selection algorithm with orthogonal slicing-based pruning optimization is developed to robustly extract key weld ridge points that characterize the weld trend on rugged weld surfaces. According to the height differences between the weld and the adjacent base metal surfaces, the grinding reference surface is fitted using the weld contour through the moving least-squares method. The ridge line points are projected onto the grinding reference surface along the local normal to generate the expected grinding trajectory points. The grinding trajectory that meets the process constraints is generated through reverse layer slicing. Grinding experiments demonstrate that the proposed WSR-Net achieves robust segmentation performance for both planar and curved surface welds. With the reverse layered trajectory planning method, the proposed method enables high-precision automatic grinding of complex spatially curved surface welds. The results show that the final grinding mean error is 0.316 mm, which satisfies the preprocessing requirements for subsequent processes. The proposed method provides a feasible technical approach for the intelligent grinding of spatially curved surface welds. Full article
(This article belongs to the Section Advanced Manufacturing)

20 pages, 5234 KB  
Article
Performance of Neural Networks in Automated Detection of Wood Features in CT Images
by Tomáš Gergeľ, Ondrej Vacek, Miloš Gejdoš, Diana Zraková, Peter Balogh and Emil Ješko
Forests 2026, 17(4), 425; https://doi.org/10.3390/f17040425 - 27 Mar 2026
Viewed by 315
Abstract
Computed tomography (CT) enables non-destructive insight into internal log structure, yet fully automated interpretation of CT images remains limited by inconsistent annotations, boundary ambiguity, and insufficient spatial context in 2D slice-based analysis. These challenges restrict the industrial deployment of deep learning for wood quality assessment. This study applies artificial intelligence (AI) and deep learning to the automated analysis of computed tomography (CT) scans of wood logs for detecting internal qualitative features and segmenting bark. Using convolutional neural networks (CNNs), trained models accurately distinguish healthy and damaged regions and segment bark, including discontinuous parts. We introduce a novel pseudo-spatial representation by merging consecutive slices into red–green–blue (RGB) format, which improves prediction accuracy and model robustness across logs. To enhance interpretability, Gradient-weighted Class Activation Mapping (Grad-CAM) highlights regions contributing most to defect detection, particularly knots. Comprehensive evaluation using Sørensen–Dice similarity coefficients and confusion matrices confirms the effectiveness of the proposed approach under industrial conditions. These findings demonstrate that AI-driven CT image analysis can address key limitations of current log-grading workflows and enable more reliable, objective, and scalable quality assessment for timber-dependent economies. Full article
(This article belongs to the Special Issue Wood Quality, Smart Timber Harvesting, and Forestry Machinery)
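
The pseudo-spatial representation described above (merging consecutive grayscale CT slices into RGB format) amounts to stacking slices i-1, i, i+1 as the three color channels of frame i. A toy sketch over nested-list images; the helper name is hypothetical, not the authors' code:

```python
def slices_to_pseudo_rgb(slices):
    """Merge consecutive grayscale slices into 3-channel frames:
    slice i-1, i, i+1 become the R, G, B channels of frame i, giving
    each 2D frame a hint of through-log spatial context."""
    rgb_stack = []
    for i in range(1, len(slices) - 1):
        r, g, b = slices[i - 1], slices[i], slices[i + 1]
        frame = [[(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
                 for y in range(len(r))]
        rgb_stack.append(frame)
    return rgb_stack
```

A stack of N slices yields N-2 pseudo-RGB frames, each usable by an off-the-shelf 2D CNN expecting 3-channel input.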

17 pages, 4972 KB  
Article
Seismic Attribute Fusion and Reservoir Prediction Using Multiscale Convolutional Neural Networks and Self-Attention: A Case Study of the B Gas Field, South Sumatra Basin
by Ziyun Cheng, Wensong Huang, Xiaoling Zhang, Zhanxiang Lei, Guoliang Hong, Wenwen Wang, Mengyang Zhang, Linze Li and Jian Li
Processes 2026, 14(6), 981; https://doi.org/10.3390/pr14060981 - 19 Mar 2026
Viewed by 337
Abstract
Strong heterogeneity and ambiguous seismic responses hinder reliable sandstone thickness prediction when using a single seismic attribute in the lower sandstone interval of the Talang Akar Formation (hereafter abbreviated as the LTAF interval) in the B gas field, South Sumatra Basin. To address this challenge, we propose a seismic attribute fusion and reservoir sweet-spot prediction framework based on a multiscale convolutional neural network (CNN) integrated with a self-attention module. Multiple seismic attribute volumes are organized as multi-channel 2D attribute slices, and parallel convolutions with kernel sizes of 3 × 3, 5 × 5, and 7 × 7 are employed to capture spatial features ranging from thin-bed boundaries and channel morphology to sand-body assemblage distribution. The self-attention module explicitly models inter-attribute dependencies and performs adaptive weighted fusion to suppress noise and emphasize informative attributes. The network adopts a dual-output design, producing (i) a sandstone thickness prediction map at the same spatial resolution as the input and (ii) attribute importance scores for quantitative attribute selection and geological interpretation. Using 3D seismic data and well-constrained thickness labels, the proposed model achieves an R² of 0.8954, outperforming linear regression (R² = 0.8281) and random forest regression (R² ≈ 0.8453). The learned importance scores indicate that amplitude-related attributes (e.g., RMS amplitude and maximum amplitude) contribute most to thickness prediction, whereas frequency- and energy-related attributes show relatively lower contributions, which is consistent with bandwidth-limited resolution effects. Overall, the proposed framework unifies attribute fusion, thickness prediction, and interpretability within a single model, providing practical support for fine reservoir characterization and development optimization in heterogeneous sandstone reservoirs. Full article
(This article belongs to the Special Issue Applications of Intelligent Models in the Petroleum Industry)
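
The adaptive weighted fusion step described above can be caricatured in a few lines: softmax a vector of per-attribute importance logits, then blend the co-registered attribute maps with the resulting weights. This is a stripped-down stand-in for the self-attention module, not the paper's network:

```python
import math

def fuse_attributes(attribute_maps, importance_logits):
    """Softmax-weighted fusion of co-registered 2D seismic attribute
    maps (nested lists). Returns the fused map and the learned-style
    per-attribute weights, mimicking adaptive attribute selection."""
    m = max(importance_logits)
    exps = [math.exp(s - m) for s in importance_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    rows, cols = len(attribute_maps[0]), len(attribute_maps[0][0])
    fused = [[sum(w * amap[y][x] for w, amap in zip(weights, attribute_maps))
              for x in range(cols)] for y in range(rows)]
    return fused, weights
```

Attributes given large logits dominate the fused map while noisy attributes are suppressed, which is the intuition behind reading the weights as importance scores.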

24 pages, 1451 KB  
Review
AI-Driven Network Optimization for the 5G-to-6G Transition: A Taxonomy-Based Survey and Reference Framework
by Rexhep Mustafovski, Galia Marinova, Besnik Qehaja, Edmond Hajrizi, Shejnaze Gagica and Vassil Guliashki
Future Internet 2026, 18(3), 155; https://doi.org/10.3390/fi18030155 - 17 Mar 2026
Viewed by 667
Abstract
This paper presents a taxonomy-based survey of AI-driven network optimization mechanisms relevant to the transition from fifth generation (5G) to sixth generation (6G) mobile communication systems. In contrast to earlier generational shifts that are often described as technology replacement cycles, the 5G-to-6G evolution is increasingly characterized in the literature as a prolonged period of coexistence, hybrid operation, and progressive integration of new capabilities across radio, edge, core, and service layers. To structure this transition, the paper organizes prior work into a transition-oriented taxonomy covering migration strategies, AI-enabled closed-loop control, RAN disaggregation and edge intelligence, core virtualization and slice orchestration, spectrum-aware coexistence, service-driven requirements, and security-aware governance. Rather than introducing a new optimization algorithm or an experimentally validated architecture, the contribution of this survey is analytical and integrative. Specifically, it consolidates fragmented research directions into a reference view of how AI-driven control mechanisms are distributed across spectrum, RAN, edge, and core domains during hybrid 5G–6G operation. In addition, the paper includes a structured evidence synthesis of performance trends, deployment maturity signals, and recurring methodological limitations reported across the literature. The review indicates that meeting anticipated 6G objectives, including ultra-low latency, high reliability, scalability, and improved energy efficiency, depends less on isolated enhancements at individual protocol layers and more on coordinated cross-layer optimization supported by AI-native control loops. At the same time, the surveyed literature reveals persistent gaps in service-to-control mapping, security-aware orchestration, interoperability across heterogeneous domains, and reproducible evaluation methodologies for hybrid 5G–6G environments. The survey is intended to provide researchers, network operators, and standardization stakeholders with a structured analytical basis for assessing how AI-driven optimization can support the staged evolution from 5G systems toward 6G-ready infrastructures. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

33 pages, 3876 KB  
Article
Predictive Network Slicing Resource Orchestration: A VNF Approach
by Andrés Cárdenas, Luis Sigcha and Mohammadreza Mosahebfard
Future Internet 2026, 18(3), 149; https://doi.org/10.3390/fi18030149 - 16 Mar 2026
Viewed by 383
Abstract
As network slicing gains traction in cloud computing environments, efficient management and orchestration systems are required to realize the benefits of this technology. These systems must enable dynamic provisioning and resource optimization of virtualized services spanning multiple network slices. Nevertheless, the common resource overprovisioning practice implemented by service providers leads to the inefficient use of resources, limiting the ability of Mobile Network Operators (MNOs) to rent new network slices to more vertical customers. Hence, efficient resource allocation mechanisms are essential to achieve optimal network performance and cost-effectiveness. This paper proposes a predictive model for network slice resource optimization based on resource sharing between Virtualized Network Functions (VNFs). The model employs deep learning models based on Long Short-Term Memory (LSTM) and Transformers for CPU resource usage prediction and a reactive algorithm for resource sharing between VNFs. The model is powered by a telemetry system proposed as an extension of the 3GPP network slice management architectural framework. The extended architectural framework enhances the automation and optimization of the network slice lifecycle management. The model is validated through a practical use case, demonstrating the effectiveness of the resource sharing algorithm in preventing VNF overload and predicting resource usage accurately. The findings demonstrate that the sharing mechanism enhances resource optimization and ensures compliance with service level agreements, mitigating service degradation. This work contributes to the efficient management and utilization of network resources in 5G networks and provides a basis for further research in network slice resource optimization. Full article
(This article belongs to the Special Issue Software-Defined Networking and Network Function Virtualization)
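
The reactive sharing idea in the abstract above, where idle CPU from underloaded VNFs covers the deficit of overloaded ones, can be sketched as a single redistribution step. This is a hypothetical simplification for illustration; the paper's exact algorithm is not reproduced here:

```python
def share_cpu(demands, quotas):
    """One reactive sharing step between VNFs: pool the spare CPU of
    underloaded VNFs (quota > predicted demand) and grant it to
    overloaded VNFs pro rata to their deficit, capped by the pool.
    Returns the adjusted per-VNF CPU allocations."""
    spare = sum(max(0.0, q - d) for d, q in zip(demands, quotas))
    deficits = [max(0.0, d - q) for d, q in zip(demands, quotas)]
    need = sum(deficits)
    scale = min(1.0, spare / need) if need > 0 else 0.0
    return [q + df * scale for q, df in zip(quotas, deficits)]
```

Paired with an LSTM or Transformer forecast of `demands`, such a step can act before overload occurs, which is the overprovisioning-avoidance argument made in the abstract.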

24 pages, 6557 KB  
Article
Ka-Band 16-Channel T/R Module Based on MMIC with Low Cost and High Integration
by Mengyun He, Qinghua Zeng, Xuesong Zhao, Song Wang, Yan Zhao, Pengfei Zhang, Gaoang Li and Xiao Liu
Electronics 2026, 15(6), 1185; https://doi.org/10.3390/electronics15061185 - 12 Mar 2026
Viewed by 415
Abstract
Based on monolithic microwave integrated circuit (MMIC) technology, this paper presents the design and implementation of a low-cost, highly integrated Ka-band sixteen-channel transmit/receive (T/R) module, specifically tailored to meet the application requirements of phased array antennas in airborne and spaceborne radar systems, satellite communications, and 5G/6G millimeter-wave networks. The proposed module employs an MMIC-based single-channel dual-chip discrete architecture, optimally integrating amplitude-phase multifunction chips and transmit-receive multifunction chips in terms of both fabrication process and performance characteristics, achieving a favorable balance between high performance and high-integration density. Using low-cost, low-temperature co-fired ceramic (LTCC) substrates, full-silver conductive paste, and a nickel–palladium–gold plating process, a novel “back-to-back” thin-slice packaging technique is presented to improve integration, lower manufacturing costs, and boost long-term reliability. Furthermore, the design incorporates glass insulators and a direct array interconnection scheme, which significantly minimizes transmission losses and reduces interface dimensions. The final module measures 70.3 mm × 26.2 mm × 10.9 mm and weighs only 34 g. Experimental results demonstrate a transmit output power of at least 23 dBm, a receive gain exceeding 26 dB, and a noise figure below 3.5 dB, achieving a 22.5–58% reduction in volume per channel while maintaining competitive RF performance. To improve testing effectiveness and guarantee data consistency, an automated radio frequency (RF) test system based on Python 3.11.5 was also developed. This work provides a practical technical approach for the engineering realization of Ka-band phased array systems. Full article

20 pages, 3279 KB  
Article
Pore Structure Characteristics of Vegetated Concrete and Their Influence on Physical Properties
by Fazhi Huo, Xinjun Yan, Jiaqi Liu and Peiyuan Zhuang
Materials 2026, 19(5), 1042; https://doi.org/10.3390/ma19051042 - 9 Mar 2026
Viewed by 294
Abstract
In this study, CT scanning technology was combined with ImageJ 1.54r and Avizo 3D 2022 professional image analysis software to quantify porosity. The aim was to reveal the intrinsic correlation between the pore structure characteristics and the macroscopic properties of vegetated concrete. A combination of 3D reconstruction, fractal analysis and multi-parameter regression modelling techniques was utilised to quantify the association between pore parameters and material properties. The mechanistic role of pore structure in regulating the strength–permeability trade-off relationship was elucidated. The results show that: (1) aggregate particle size and porosity are significantly negatively correlated with the compressive strength of vegetated concrete and strongly positively correlated with the water permeability coefficient, while the effects of both of them on the pH value of the material are negligible; (2) the porosity obtained by the image analysis method meets the design requirements of the target porosity, and the deviation between the computed 3D porosity from CT scanning and the 2D sliced porosity is less than 1%. The image analysis porosity is slightly lower than the measured value, a deviation within a reasonable range. (3) There is a robust positive correlation between the fractal dimension of the vegetated concrete structural surface and porosity. With increasing aggregate size, porosity gradually increases, pore network connectivity is significantly enhanced, and the fractal dimension increases correspondingly. (4) Function fitting analysis confirms that the correlation between the connected porosity and the compressive strength and permeability coefficient is more significant than that of the cross-sectional porosity. Specifically, compressive strength is significantly negatively correlated with equivalent pore size and fractal dimension, and the water permeability coefficient is strongly positively correlated with these two parameters. This study can provide important theoretical support and engineering reference for the optimization of the mix proportion and performance control of vegetated concrete. Full article
(This article belongs to the Section Construction and Building Materials)
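
The 2D-slice versus 3D porosity comparison above boils down to counting pore voxels in segmented images; a minimal sketch over binary nested-list images (illustrative only, not the ImageJ/Avizo pipeline):

```python
def slice_porosity(binary_slice):
    """2D porosity of one segmented CT slice: pore pixels (value 1)
    divided by total pixels."""
    pore = sum(sum(row) for row in binary_slice)
    total = sum(len(row) for row in binary_slice)
    return pore / total

def volume_porosity(binary_slices):
    """3D porosity of a reconstructed stack: pore voxels divided by
    total voxels across all slices."""
    pore = sum(sum(sum(row) for row in sl) for sl in binary_slices)
    total = sum(sum(len(row) for row in sl) for sl in binary_slices)
    return pore / total
```

The reported sub-1% deviation between 3D and 2D sliced porosity corresponds to these two ratios agreeing closely across the specimen.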

25 pages, 2809 KB  
Article
Multi-Architecture Deep Learning for Early Alzheimer’s Detection in MRI: Slice- and Scan-Level Analysis
by Isabelle Bricaud and Giovanni Luca Masala
Int. J. Environ. Res. Public Health 2026, 23(3), 322; https://doi.org/10.3390/ijerph23030322 - 5 Mar 2026
Viewed by 694
Abstract
Alzheimer’s disease (AD), the most common form of dementia, is a progressive and irreversible neurodegenerative disorder. Structural MRI is widely used for diagnosis, revealing brain changes associated with AD. However, these alterations are often subtle and difficult to detect manually, particularly at early stages. Early intervention during prodromal stages, such as mild cognitive impairment (MCI), can help slow disease progression, highlighting the need for reliable automated methods. In this work, we introduce a dual-level evaluation framework comparing fifteen deep learning architectures, including convolutional neural networks (CNNs), Transformers, and hybrid models, for classifying AD, MCI, and cognitively normal (CN) subjects using the ADNI dataset. A central focus of our work is the impact of robust and standardized preprocessing pipelines, which we identified as a critical yet underexplored factor influencing model reliability. By evaluating performance at both slice-level and scan-level, we reveal that multi-slice aggregation affects architectures asymmetrically. By systematically optimizing preprocessing steps to reduce data variability and enhance feature consistency, we established preprocessing quality as an essential determinant of deep learning performance in neuroimaging. Experimental results show that CNNs and hybrid pre-trained models outperform Transformer-based models in both slice-level and scan-level classification. ConvNeXtV2-L achieved the best scan-level performance (91.07%), EfficientNetV2-L the highest slice-level accuracy (86.84%), and VGG19 balanced results (86.07%/88.52%). ConvNeXtV2-L and SwinV1-L exhibited scan-level improvements of 7.60% and 9.04% respectively, while EfficientNetV2-L experienced degradation of 2.66%, demonstrating that architectural selection and aggregation strategy are interdependent factors. These findings suggest that carefully designed preprocessing not only improves classification accuracy but may also serve as a foundation for more reproducible and interpretable Alzheimer’s disease detection pipelines. Full article
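
One common way to go from slice-level outputs to a scan-level decision, as contrasted in the abstract above, is to average per-slice class probabilities before taking the argmax. Mean aggregation is shown here as an assumption; the paper evaluates aggregation strategies without this sketch being its implementation:

```python
def scan_level_prediction(slice_probs):
    """Aggregate per-slice class probability vectors into one
    scan-level decision: average over slices, then return the index
    of the most probable class."""
    n_slices = len(slice_probs)
    n_classes = len(slice_probs[0])
    mean = [sum(p[c] for p in slice_probs) / n_slices
            for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: mean[c])
```

Because averaging smooths out slices that are individually ambiguous, scan-level accuracy can exceed slice-level accuracy for some architectures and fall below it for others, which matches the asymmetry the authors report.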

30 pages, 1397 KB  
Article
GAN-Based Cross-Modality Brain MRI Synthesis: Paired Versus Unpaired Training and Comparison with Diffusion and Transformer Models
by Behnam Kiani Kalejahi, Sebelan Danishvar and Mohammad Javad Rajabi
Biomimetics 2026, 11(3), 175; https://doi.org/10.3390/biomimetics11030175 - 2 Mar 2026
Viewed by 668
Abstract
Incomplete or faulty MRI sequences are common in clinical practice and can impair AI-based analyses that rely on complete multi-contrast data. The relative effectiveness of classical generative adversarial networks (GANs) versus modern diffusion and transformer-based models for clinically usable MRI synthesis remains unclear. This study evaluates cross-modality MRI synthesis using the BraTS 2019 brain tumour dataset, focusing on T1-to-T2 translation. We assess paired and unpaired CycleGAN models and compare them with two stronger but computationally intensive baselines, a conditional denoising diffusion probabilistic model (DDPM) and a transformer-enhanced GAN, using identical data splits and preprocessing pipelines. Inter-modality correlation was evaluated to estimate the achievable similarity between modalities. Conceptually, modality synthesis may be viewed as a representation-learning approach that compensates for missing imaging information by reconstructing clinically relevant features from available contrasts. Paired CycleGAN achieved correlations of r ≈ 0.92–0.93 and SSIM ≈ 0.90–0.92, approaching the natural T1–T2 correlation (r ≈ 0.95) while maintaining very fast inference (<50 ms/slice). Unpaired CycleGAN achieved r ≈ 0.74–0.78 and SSIM ≈ 0.82–0.85, producing clinically interpretable reconstructions without voxel-level supervision. DDPM achieved the highest fidelity (SSIM ≈ 0.93–0.95, r ≈ 0.94) but required substantially greater computational resources, while transformer-enhanced GAN performance was intermediate. Qualitative analysis showed that CycleGAN and DDPM best preserved tumour and tissue boundaries, whereas unpaired CycleGAN occasionally over-smoothed subtle lesions. These findings highlight the trade-off between fidelity and efficiency in cross-modality MRI synthesis, suggesting paired CycleGAN for time-sensitive clinical workflows and diffusion models as a computationally expensive accuracy upper bound. Full article
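
The correlation metric r quoted throughout the abstract above is the Pearson coefficient computed over image intensities; a minimal implementation over flattened images:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    intensity sequences (e.g. flattened synthesized vs. real MRI
    contrasts): covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near the natural inter-modality ceiling (r ≈ 0.95 for T1–T2 here) indicate a synthesis that is about as similar to the target as the modalities allow.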

22 pages, 10242 KB  
Article
Cross-Modality Whole-Heart MRI Reconstruction with Deep Motion Correction and Super-Resolution
by Jinwei Dong, Wenhao Ke, Wangbin Ding, Liqin Huang and Mingjing Yang
Sensors 2026, 26(5), 1565; https://doi.org/10.3390/s26051565 - 2 Mar 2026
Viewed by 385
Abstract
Magnetic resonance imaging (MRI) inherently suffers from motion artifacts and inter-slice misalignment, primarily due to sequential slice acquisition and the prolonged scanning time required for dynamic cardiac motion. These acquisition-induced inconsistencies often lead to anatomically implausible representations of cardiac structures, impairing subsequent clinical analyses such as 3D reconstruction and regional functional assessment. At the same time, acquiring high-resolution MRI demands extended scan durations that increase patient burden and potential health risks. To address these challenges, we propose a deep motion correction and super-resolution whole-heart reconstruction (DeepWHR) framework. It learns cardiac structure priors from computed tomography (CT) data and transfers them to reconstruct cardiac structures from conventional misaligned, thick-slice MRI images. Specifically, DeepWHR uses CT anatomy data to train a deep motion correction model that enables the network to capture structurally coherent and anatomically consistent representations, while fine-tuning on MRI preserves modality-specific spatial characteristics, ensuring that the reconstructed results retain the intrinsic MRI data distribution. Furthermore, DeepWHR introduces an implicit neural representation module that models continuous spatial fields, enabling multi-scale super-resolution structure reconstruction. Experiments on the CARE2024 WHS dataset validate that our method not only restores the spatial coherence of MRI-derived anatomical structures but also generates high-fidelity label representations suitable for downstream cardiac applications. This study demonstrates that DeepWHR transforms sparse, misaligned 2D label stacks into anatomically coherent, high-resolution 3D models, enhancing their reliability for clinical applications. Full article
(This article belongs to the Special Issue Emerging MRI Techniques for Enhanced Disease Diagnosis and Monitoring)
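The implicit neural representation mentioned in this abstract models the volume as a continuous function of coordinates, which is what allows querying at arbitrary resolution. The toy sketch below (not the DeepWHR architecture; the MLP is untrained with random placeholder weights) shows the key mechanism: the same learned field can be sampled on a coarse, thick-slice grid or a denser super-resolved grid:

```python
import numpy as np

# Toy implicit field: a 2-layer MLP mapping normalised (x, y, z) coordinates
# to a scalar intensity. A real model would be fitted to the motion-corrected
# label stack; here the weights are random placeholders.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 64)) * 0.5, np.zeros(64)
W2, b2 = rng.standard_normal((64, 1)) * 0.5, np.zeros(1)

def field(coords):
    """Evaluate the implicit field at an (N, 3) array of coordinates."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

def sample_volume(shape):
    """Query the continuous field on a regular grid of any resolution."""
    axes = [np.linspace(-1, 1, n) for n in shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    return field(grid).reshape(shape)

low = sample_volume((8, 8, 8))      # coarse, thick-slice resolution
high = sample_volume((32, 32, 32))  # 4x denser query of the same field
```

Because both grids sample one continuous function, values at shared coordinates (e.g. the volume corners) agree exactly, which is the property that makes multi-scale super-resolution consistent.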
15 pages, 1404 KB  
Article
A Deep Learning-Based Decision Support System for Cholelithiasis in MRI Data
by Ebru Hasbay, Caglar Cengizler, Mahmut Ucar, Nagihan Durgun, Hayriye Ulkucan Disli and Deniz Bolat
J. Clin. Med. 2026, 15(5), 1891; https://doi.org/10.3390/jcm15051891 - 2 Mar 2026
Viewed by 332
Abstract
Background: Cholelithiasis can lead to significant complications if not diagnosed and treated promptly. Recent advances in deep learning and the improved ability of computer systems to detect clinically significant textural and morphological patterns in magnetic resonance imaging (MRI) can help reduce the time and resources required for the radiological evaluation of the gallbladder and cholelithiasis. Objective: To detect cholelithiasis, a support system with a graphical user interface for magnetic resonance (MR) images of the gallbladder was implemented to reduce the manual effort and time required to identify gallstones. Method: A commonly used deep learning model for pixel-level mask generation and instance segmentation, Mask Region-Based Convolutional Neural Network (Mask R-CNN), was modified, trained, and evaluated to provide a robust pipeline for automated analysis. The primary aim was to automatically locate and label the gallbladder in T2-weighted axial MR images to detect gallstones and highlight the visual characteristics of the target region, thereby supporting radiologists. All automation was designed to operate on a single optimal slice instead of the entire volume. While this approach limits generalisability, it offers a practical starting point for method development. This setup reflects a feasibility-oriented design rather than a comprehensive diagnostic capability. The dataset included 788 axial MR images from different patients. Each image was labeled and segmented by an experienced radiologist to train and test the models at the image level. Results: The proposed model with a squeeze-and-excitation (SE) modification improved classification accuracy, and at the image level, stone detection improved in terms of accuracy, precision, and specificity, although recall and F1 scores slightly decreased.
Conclusions: The results show that the modified Mask R-CNN model can detect gallstones with up to 0.89 accuracy, supporting the clinical applicability of the proposed method. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
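The squeeze-and-excitation (SE) modification named in this abstract reweights feature channels: global average pooling "squeezes" each channel to a scalar, a small bottleneck of fully connected layers produces per-channel gates, and the gates rescale the feature map. A minimal numpy sketch of that mechanism (weights and the reduction ratio r are placeholder assumptions, not the paper's parameters):

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """SE channel reweighting.
    feat: (C, H, W) feature map; w1: (C, C//r); w2: (C//r, C)."""
    s = feat.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    e = np.maximum(s @ w1, 0.0)               # excitation: FC + ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(e @ w2)))    # FC + sigmoid -> per-channel gate in (0, 1)
    return feat * gate[:, None, None]         # rescale each channel

rng = np.random.default_rng(0)
C, r = 16, 4  # assumed channel count and reduction ratio
feat = rng.standard_normal((C, 12, 12))
out = squeeze_excite(feat,
                     rng.standard_normal((C, C // r)) * 0.1,
                     rng.standard_normal((C // r, C)) * 0.1)
```

Since each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasise informative ones relative to the rest, which is the effect credited with the accuracy gain.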
35 pages, 633 KB  
Article
Bi-Objective Optimization for Scalable Resource Scheduling in Dense IoT Deployments via 5G Network Slicing Using NSGA-II
by Francesco Nucci and Gabriele Papadia
Telecom 2026, 7(2), 24; https://doi.org/10.3390/telecom7020024 - 2 Mar 2026
Viewed by 426
Abstract
The proliferation of Internet of Things (IoT) devices demands efficient resource management in fifth-generation (5G) networks, particularly through network slicing mechanisms supporting massive machine-type communications (mMTCs). This paper addresses IoT connectivity in 5G network slicing by formulating a bi-objective optimization problem that balances operational costs with quality-of-service (QoS) requirements across heterogeneous 5G network slices. The proposed approach employs a tailored Non-dominated Sorting Genetic Algorithm II (NSGA-II) incorporating domain-specific constraints, including device priorities, slice isolation requirements, radio resource limitations, and battery capacity. Through extensive simulations on scenarios with up to 5000 devices, our method generates diverse Pareto-optimal solutions, achieving hypervolume improvements of 8–13% over multi-objective DRL, 15–28% over single-objective DRL baselines, and 22–41% over heuristic approaches while maintaining computational scalability suitable for real-time network management (sub-2 min execution). Validation with real-world traffic traces from operational deployments confirms algorithm robustness under realistic burstiness and temporal patterns, with only 7% performance degradation versus synthetic traffic, within expected simulation–reality gaps. This work provides a practical framework for IoT resource scheduling in current 5G and future Beyond-5G (B5G) telecommunications infrastructures. Full article
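Two ingredients of the NSGA-II evaluation described here are non-dominated sorting (keeping only solutions not beaten on both objectives) and the hypervolume indicator used for the 8–41% comparisons. A minimal sketch for the bi-objective minimisation case (cost, QoS penalty); the example points and reference point are made up for illustration, not taken from the paper:

```python
import numpy as np

def pareto_front(points):
    """Non-dominated set for bi-objective minimisation."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in both objectives
        # and strictly < in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D front up to a reference point (minimisation)."""
    f = front[np.argsort(front[:, 0])]  # sort by first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in f:
        hv += (ref[0] - x) * (prev_y - y)  # add each horizontal slab
        prev_y = y
    return hv

sols = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(sols)              # (3.0, 4.0) is dominated by (2.0, 3.0)
hv = hypervolume_2d(front, ref=(5.0, 6.0))
```

A larger hypervolume means the front is closer to the ideal point and better spread, which is why it is a natural scalar for comparing NSGA-II against DRL and heuristic baselines.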