Search Results (2,417)

Search Parameters:
Keywords = weight transfer

15 pages, 2487 KiB  
Article
Feasibility of Sodium and Amide Proton Transfer-Weighted Magnetic Resonance Imaging Methods in Mild Steatotic Liver Disease
by Diana M. Lindquist, Mary Kate Manhard, Joel Levoy and Jonathan R. Dillman
Tomography 2025, 11(8), 89; https://doi.org/10.3390/tomography11080089 - 6 Aug 2025
Abstract
Background/Objectives: Fat and inflammation confound current magnetic resonance imaging (MRI) methods for assessing fibrosis in liver disease. Sodium or amide proton transfer-weighted MRI methods may be more specific for assessing liver fibrosis. The purpose of this study was to determine the feasibility of sodium and amide proton transfer-weighted MRI in individuals with liver disease and to determine if either method correlated with clinical markers of fibrosis. Methods: T1 and T2 relaxation maps, proton density fat fraction maps, liver shear stiffness maps, amide proton transfer-weighted (APTw) images, and sodium images were acquired at 3T. Image data were extracted from regions of interest placed in the liver. ANOVA tests were run with disease status, age, and body mass index as independent factors; significance was set to p < 0.05. Post-hoc t-tests were run when the ANOVA showed significance. Results: A total of 36 participants were enrolled, 34 of whom were included in the final APTw analysis and 24 in the sodium analysis. Estimated liver tissue sodium concentration differentiated participants with liver disease from those without, whereas amide proton transfer-weighted MRI did not. Estimated liver tissue sodium concentration negatively correlated with the Fibrosis-4 score, but amide proton transfer-weighted MRI did not correlate with any clinical marker of disease. Conclusions: Amide proton-weighted imaging was not different between groups. Estimated liver tissue sodium concentrations did differ between groups but did not provide additional information over conventional methods. Full article
(This article belongs to the Section Abdominal Imaging)
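The group comparison described in this abstract rests on a one-way ANOVA followed by post-hoc t-tests. As a minimal illustration of the F-statistic being computed, here is a hand-rolled pure-Python sketch; the group values are invented for demonstration and are not data from the study.

```python
# Illustrative one-way ANOVA F-statistic, computed by hand.
# The group values below are made up; they are NOT data from the study.

def one_way_anova_f(groups):
    """Return the F-statistic for a one-way ANOVA across `groups`."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (group means vs. grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (observations vs. their group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

healthy = [38.0, 40.1, 39.5, 41.2]   # hypothetical measurements
disease = [33.2, 31.8, 34.0, 32.5]
f_stat = one_way_anova_f([healthy, disease])
```

A large F relative to the F-distribution's critical value at p < 0.05 would trigger the post-hoc t-tests mentioned in the abstract.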

17 pages, 4105 KiB  
Article
Evaluation of the Effect of X-Ray Therapy on Glioma Rat Model Using Chemical Exchange Saturation Transfer and Diffusion-Weighted Imaging
by Kazuki Onishi, Koji Itagaki, Sachie Kusaka, Tensei Nakano, Junpei Ueda and Shigeyoshi Saito
Cancers 2025, 17(15), 2578; https://doi.org/10.3390/cancers17152578 - 5 Aug 2025
Abstract
Background/Objectives: This study aimed to examine the changes in brain metabolites and water molecule diffusion using chemical exchange saturation transfer (CEST) imaging and diffusion-weighted imaging (DWI) after 15 Gy of X-ray irradiation in a rat model of glioma. Methods: The glioma-derived cell line C6 was implanted into the striatum of the right brain of 7-week-old male Wistar rats. CEST imaging and DWI were performed on days 8, 10, and 17 after implantation using a 7T magnetic resonance imaging scanner. X-ray irradiation (15 Gy) was performed on day 9. Magnetization transfer ratio (MTR) and apparent diffusion coefficient (ADC) values were calculated from the CEST images and DWI, respectively. Results: On day 17, the MTR values at 1.2 ppm, 1.5 ppm, 1.8 ppm, 2.1 ppm, and 2.4 ppm in the irradiated group decreased significantly compared with those of the control group. The pixel-wise standard deviation of the ADC values increased from day 8 to day 17 (0.6 ± 0.06 → 0.8 ± 0.17 (×10⁻³ mm²/s)) in the control group, and changed less (0.6 ± 0.06 → 0.8 ± 0.11 (×10⁻³ mm²/s)) in the irradiated group. Conclusions: This study revealed the effects of 15 Gy X-ray irradiation in a rat model of glioma using CEST imaging and DWI. Full article
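The two quantitative measures named in this abstract reduce to simple per-voxel formulas. A minimal sketch, with illustrative signal values rather than the study's data:

```python
import math

def mtr(s0, s_sat):
    """Magnetization transfer ratio: fractional signal drop under saturation."""
    return (s0 - s_sat) / s0

def adc(s0, sb, b):
    """Apparent diffusion coefficient from a two-point DWI fit: S_b = S_0 * exp(-b * ADC)."""
    return -math.log(sb / s0) / b

# A voxel whose signal drops from 100 to 80 under saturation has MTR = 0.2;
# a mono-exponential decay with ADC = 0.8e-3 mm^2/s is recovered exactly.
m = mtr(100.0, 80.0)
d = adc(1.0, math.exp(-1000 * 0.0008), 1000)
```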

24 pages, 30837 KiB  
Article
A Transfer Learning Approach for Diverse Motion Augmentation Under Data Scarcity
by Junwon Yoon, Jeon-Seong Kang, Ha-Yoon Song, Beom-Joon Park, Kwang-Woo Jeon, Hyun-Joon Chung and Jang-Sik Park
Mathematics 2025, 13(15), 2506; https://doi.org/10.3390/math13152506 - 4 Aug 2025
Abstract
Motion-capture data provide high accuracy but are difficult to obtain, necessitating dataset augmentation. To our knowledge, no prior study has investigated few-shot generative models for motion-capture data that address both quality and diversity. We tackle the diversity loss that arises with extremely small datasets (n ≤ 10) by applying transfer learning and continual learning to retain the rich variability of a larger pretraining corpus. To assess quality, we introduce MFMMD (Motion Feature-Based Maximum Mean Discrepancy), a metric well-suited to small samples, and evaluate diversity with the multimodality metric. Our method embeds an Elastic Weight Consolidation (EWC)-based regularization term in the generator's loss and then fine-tunes on the limited motion-capture set. We analyze how the strength of this term influences diversity and uncover motion-specific characteristics, revealing behavior that differs from that observed in image-generation tasks. The experiments indicate that the transfer learning pipeline improves generative performance in low-data scenarios, and that increasing the weight of the regularization term yields markedly higher diversity in the synthesized motions. These findings suggest that the proposed approach can effectively augment small motion-capture datasets with greater variety, a capability expected to benefit applications that rely on diverse human-motion data across modern robotics, animation, and virtual reality. Full article
(This article belongs to the Special Issue Deep Neural Networks: Theory, Algorithms and Applications)
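The EWC-based regularization term mentioned above penalizes drift of the fine-tuned parameters away from the pretrained ones, in proportion to their (diagonal) Fisher-information importance. A hedged pure-Python sketch of the penalty, with made-up parameter values:

```python
def ewc_penalty(theta, theta_star, fisher, lam):
    """Elastic Weight Consolidation penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta       -- current parameters during fine-tuning
    theta_star  -- parameters after pretraining (anchor point)
    fisher      -- diagonal Fisher importance per parameter
    lam         -- regularization strength (the "weight" the abstract varies)
    """
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for f, t, ts in zip(fisher, theta, theta_star)
    )

# The generator's total loss would be: task_loss + ewc_penalty(...)
p = ewc_penalty(theta=[2.0, 0.5], theta_star=[1.0, 0.5], fisher=[1.0, 4.0], lam=2.0)
```

Raising `lam` keeps the generator closer to the pretraining optimum, which is the mechanism the abstract credits for preserving diversity.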

20 pages, 3586 KiB  
Article
Enhanced NiFe2O4 Catalyst Performance and Stability in Anion Exchange Membrane Water Electrolysis: Influence of Iron Content and Membrane Selection
by Khaja Wahab Ahmed, Aidan Dobson, Saeed Habibpour and Michael Fowler
Molecules 2025, 30(15), 3228; https://doi.org/10.3390/molecules30153228 - 1 Aug 2025
Abstract
Anion exchange membrane (AEM) water electrolysis is a potentially inexpensive and efficient route to hydrogen production, as it uses effective low-cost catalysts. The catalytic activity and performance of nickel iron oxide (NiFeOx) catalysts for hydrogen production in AEM water electrolyzers were investigated. The NiFeOx catalysts were synthesized with various iron contents by weight, including the stoichiometric ratio for nickel ferrite (NiFe2O4). The catalytic activity of the NiFeOx catalysts was evaluated by linear sweep voltammetry (LSV) and chronoamperometry for the oxygen evolution reaction (OER). NiFe2O4 showed the highest OER activity in a three-electrode system, with 320 mA cm⁻² at 2 V in 1 M KOH solution, and displayed strong stability over a 600 h period at 50 mA cm⁻², with a degradation rate of 15 μV/h. In single-cell electrolysis using an X-37 T membrane at 2.2 V in 1 M KOH, the NiFe2O4 catalyst had the highest activity of 1100 mA cm⁻² at 45 °C, which increased with temperature to 1503 mA cm⁻² at 55 °C. The performance of various membranes was examined; the Fumatech FAA-3-50 and FAS-50 membranes performed best, implying that membrane performance is strongly correlated with membrane conductivity. Nyquist plots and equivalent circuit analysis were used to determine cell resistances. Ohmic resistance was found to decrease as temperature increased from 45 °C to 55 °C, confirming the positive effect of temperature on AEM electrolysis. The FAA-3-50 and FAS-50 membranes had lower activation and ohmic resistances, indicative of higher conductivity and faster membrane charge transfer. NiFe2O4 in an AEM water electrolyzer displayed strong stability, with a voltage degradation rate of 0.833 mV/h over the 12 h durability test. Full article
(This article belongs to the Special Issue Water Electrolysis)

21 pages, 4147 KiB  
Article
OLTEM: Lumped Thermal and Deep Neural Model for PMSM Temperature
by Yuzhong Sheng, Xin Liu, Qi Chen, Zhenghao Zhu, Chuangxin Huang and Qiuliang Wang
AI 2025, 6(8), 173; https://doi.org/10.3390/ai6080173 - 31 Jul 2025
Abstract
Background and Objective: Temperature management is key for reliable operation of permanent magnet synchronous motors (PMSMs). The lumped-parameter thermal network (LPTN) is fast and interpretable but struggles with nonlinear behavior under high power density. We propose OLTEM, a physics-informed deep model that combines LPTN with a thermal neural network (TNN) to improve prediction accuracy while keeping physical meaning. Methods: OLTEM embeds LPTN into a recurrent state-space formulation and learns three parameter sets: thermal conductance, inverse thermal capacitance, and power loss. Two additions are introduced: (i) a state-conditioned squeeze-and-excitation (SC-SE) attention that adapts feature weights using the current temperature state, and (ii) an enhanced power-loss sub-network that uses a deep MLP with SC-SE and non-negativity constraints. The model is trained and evaluated on the public Electric Motor Temperature dataset (Paderborn University/Kaggle). Performance is measured by mean squared error (MSE) and maximum absolute error across permanent-magnet, stator-yoke, stator-tooth, and stator-winding temperatures. Results: OLTEM tracks fast thermal transients and yields lower MSE than both the baseline TNN and a CNN–RNN model for all four components. On a held-out generalization set, MSE remains below 4.0 °C² and the maximum absolute error is about 4.3–8.2 °C. Ablation shows that removing either SC-SE or the enhanced power-loss module degrades accuracy, confirming their complementary roles. Conclusions: By combining physics with learned attention and loss modeling, OLTEM improves PMSM temperature prediction while preserving interpretability. This approach can support motor thermal design and control; future work will study transfer to other machines and further reduce short-term errors during abrupt operating changes. Full article
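The LPTN that OLTEM embeds is, at its simplest, a first-order thermal ODE advanced in time: C dT/dt = G (T_amb − T) + P. A one-node forward-Euler sketch with invented parameter values (the real model learns conductance, inverse capacitance, and losses per node):

```python
# Minimal one-node lumped-parameter thermal network (LPTN) step.
# All parameter values are illustrative, not from the paper.

def lptn_step(T, T_amb, G, C_inv, P, dt):
    """Forward-Euler update of C dT/dt = G*(T_amb - T) + P, with C_inv = 1/C."""
    return T + dt * C_inv * (G * (T_amb - T) + P)

# Starting at ambient, the node heats toward the steady state
# T_amb + P/G = 25 + 10/0.5 = 45 degC.
T = 25.0
for _ in range(1000):
    T = lptn_step(T, 25.0, G=0.5, C_inv=0.01, P=10.0, dt=1.0)
```

OLTEM replaces the fixed `G`, `C_inv`, and `P` with learned, state-dependent sub-networks while keeping this recurrent physical structure.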

26 pages, 4572 KiB  
Article
Transfer Learning-Based Ensemble of CNNs and Vision Transformers for Accurate Melanoma Diagnosis and Image Retrieval
by Murat Sarıateş and Erdal Özbay
Diagnostics 2025, 15(15), 1928; https://doi.org/10.3390/diagnostics15151928 - 31 Jul 2025
Abstract
Background/Objectives: Melanoma is an aggressive type of skin cancer that poses serious health risks if not detected in its early stages. Although early diagnosis enables effective treatment, delays can result in life-threatening consequences. Traditional diagnostic processes predominantly rely on the subjective expertise of dermatologists, which can lead to variability and time inefficiencies. Consequently, there is an increasing demand for automated systems that can accurately classify melanoma lesions and retrieve visually similar cases to support clinical decision-making. Methods: This study proposes a transfer learning (TL)-based deep learning (DL) framework for the classification of melanoma images and the enhancement of content-based image retrieval (CBIR) systems. Pre-trained models including DenseNet121, InceptionV3, Vision Transformer (ViT), and Xception were employed to extract deep feature representations. These features were integrated using a weighted fusion strategy and classified through an Ensemble learning approach designed to capitalize on the complementary strengths of the individual models. The performance of the proposed system was evaluated using classification accuracy and mean Average Precision (mAP) metrics. Results: Experimental evaluations demonstrated that the proposed Ensemble model significantly outperformed each standalone model in both classification and retrieval tasks. The Ensemble approach achieved a classification accuracy of 95.25%. In the CBIR task, the system attained a mean Average Precision (mAP) score of 0.9538, indicating high retrieval effectiveness. The performance gains were attributed to the synergistic integration of features from diverse model architectures through the ensemble and fusion strategies. Conclusions: The findings underscore the effectiveness of TL-based DL models in automating melanoma image classification and enhancing CBIR systems. 
The integration of deep features from multiple pre-trained models using an Ensemble approach not only improved accuracy but also demonstrated robustness in feature generalization. This approach holds promise for integration into clinical workflows, offering improved diagnostic accuracy and efficiency in the early detection of melanoma. Full article
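The weighted fusion step described above can be pictured as a convex combination of per-model class scores. A minimal sketch; the model names mirror those in the abstract, but the weights and scores are placeholders, not the paper's tuned values:

```python
# Hedged sketch of weighted score fusion across an ensemble.
# Weights and per-model scores below are invented for illustration.

def weighted_fusion(scores, weights):
    """Combine per-model class-probability lists into one fused distribution."""
    total = sum(weights.values())
    n_classes = len(next(iter(scores.values())))
    fused = [0.0] * n_classes
    for name, probs in scores.items():
        w = weights[name] / total          # normalize weights to sum to 1
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

scores = {"densenet121": [0.7, 0.3], "vit": [0.6, 0.4], "xception": [0.8, 0.2]}
weights = {"densenet121": 1.0, "vit": 2.0, "xception": 1.0}
fused = weighted_fusion(scores, weights)   # [P(melanoma), P(benign)], hypothetical
```

The same fused feature/score vectors can also serve as the CBIR index keys, which is how classification and retrieval share one representation.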

33 pages, 14330 KiB  
Article
Noisy Ultrasound Kidney Image Classifications Using Deep Learning Ensembles and Grad-CAM Analysis
by Walid Obaid, Abir Hussain, Tamer Rabie and Wathiq Mansoor
AI 2025, 6(8), 172; https://doi.org/10.3390/ai6080172 - 31 Jul 2025
Abstract
Objectives: This study introduces an automated classification system for noisy kidney ultrasound images using an ensemble of deep neural networks (DNNs) with transfer learning. Methods: The method was tested on a dataset with two categories: 1821 normal kidney images and 2592 kidney images with stones. The noisy images contain several noise types, including salt-and-pepper, speckle, Poisson, and Gaussian noise. The ensemble-based method is benchmarked against state-of-the-art techniques and evaluated on ultrasound images with varying quality and noise levels. Results: The proposed method demonstrated a maximum classification accuracy of 99.43% on high-quality images (the original dataset images) and 99.21% on the dataset images with added noise. Conclusions: The experimental results confirm that the ensemble of DNNs accurately classifies most images, achieving high classification performance compared with conventional and individual DNN-based methods, and outperforming the best competing method by more than 1% in accuracy. Furthermore, our analysis using Gradient-weighted Class Activation Mapping (Grad-CAM) indicated that the proposed deep learning model makes predictions using clinically relevant features. Full article
(This article belongs to the Section Medical & Healthcare AI)
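The robustness test described above amounts to corrupting clean pixels before classification. A seeded sketch of two of the four noise types on a flattened grayscale image; the noise probabilities and sigma are assumptions, not the paper's settings:

```python
import random

# Illustrative noise injection for a flattened grayscale image in [0, 1].
# Probability and sigma values are invented for demonstration.

def add_salt_and_pepper(pixels, prob, rng):
    """Flip each pixel to 0 (pepper) or 1 (salt) with total probability `prob`."""
    out = []
    for p in pixels:
        r = rng.random()
        if r < prob / 2:
            out.append(0.0)        # pepper
        elif r < prob:
            out.append(1.0)        # salt
        else:
            out.append(p)
    return out

def add_gaussian(pixels, sigma, rng):
    """Add zero-mean Gaussian noise, clamped back into [0, 1]."""
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

rng = random.Random(0)             # fixed seed for reproducibility
img = [0.5] * 1000                 # toy uniform-gray image
noisy_sp = add_salt_and_pepper(img, 0.1, rng)
noisy_g = add_gaussian(img, 0.1, rng)
```

Speckle (multiplicative) and Poisson noise follow the same pattern with a different per-pixel corruption rule.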

23 pages, 698 KiB  
Article
Modelling the Bioaccumulation of Ciguatoxins in Parrotfish on the Great Barrier Reef Reveals Why Biomagnification Is Not a Property of Ciguatoxin Food Chains
by Michael J. Holmes and Richard J. Lewis
Toxins 2025, 17(8), 380; https://doi.org/10.3390/toxins17080380 - 30 Jul 2025
Abstract
We adapt previously developed conceptual and numerical models of ciguateric food chains on the Great Barrier Reef, Australia, to model the bioaccumulation of ciguatoxins (CTXs) in parrotfish, the simplest food chain with only two trophic levels. Our model indicates that relatively low (1 cell/cm²) densities of Gambierdiscus/Fukuyoa species (hereafter collectively referred to as Gambierdiscus) producing known concentrations of CTX are unlikely to be a risk of producing ciguateric fishes on the Great Barrier Reef unless CTX can accumulate and be retained in parrotfish over many months. Cell densities on turf algae equivalent to 10 Gambierdiscus/cm² producing known maximum concentrations of Pacific-CTX-4 (0.6 pg P-CTX-4/cell) are more difficult to assess but could be a risk. This cell density may be a higher risk for parrotfish than we previously suggested for production of ciguateric groupers (third-trophic-level predators) since second-trophic-level fishes can accumulate CTX loads without the subsequent losses that occur between trophic levels. Our analysis suggests that the ratios of parrotfish length-to-area grazed and weight-to-area grazed scale differently (allometrically), where the area grazed is a proxy for the number of Gambierdiscus consumed and hence proportional to toxin accumulation. Such scaling can help explain fish size–toxicity relationships within and between trophic levels for ciguateric fishes. Our modelling reveals that CTX bioaccumulates but does not necessarily biomagnify in food chains, with the relative enrichment and depletion rates of CTX varying with fish size and/or trophic level through an interplay of local and regional food chain influences.
Our numerical model for the bioaccumulation and transfer of CTX across food chains helps conceptualize the development of ciguateric fishes by comparing scenarios that reveal limiting steps in producing ciguateric fish and focuses attention on the relative contributions from each part of the food chain rather than only on single components, such as CTX production. Full article
(This article belongs to the Collection Ciguatoxin)
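The second-trophic-level accumulation argument above can be made concrete with a toy discrete-time balance: daily CTX intake proportional to area grazed and cell density, minus first-order loss. All rates and the grazed area below are illustrative assumptions, not the paper's calibrated parameters:

```python
# Toy two-trophic-level CTX bioaccumulation model (illustrative only).
# Daily intake = area grazed * cell density * toxin per cell; first-order loss.

def ctx_load(days, area_grazed_cm2, cells_per_cm2, pg_ctx_per_cell, loss_rate):
    """Accumulated CTX body load (pg) after `days` of grazing."""
    load = 0.0
    daily_intake = area_grazed_cm2 * cells_per_cm2 * pg_ctx_per_cell
    for _ in range(days):
        load = load * (1.0 - loss_rate) + daily_intake
    return load

# Hypothetical parrotfish grazing 50 cm^2/day for six months, at the
# low (1 cell/cm^2) and higher (10 cells/cm^2) densities discussed above.
low = ctx_load(180, 50.0, 1.0, 0.6, 0.02)
high = ctx_load(180, 50.0, 10.0, 0.6, 0.02)
```

Because the model is linear in cell density, the load at 10 cells/cm² is exactly tenfold the 1 cell/cm² load, while the loss term caps both well below the raw ingested total, which is the bioaccumulation-without-biomagnification behavior the abstract describes.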

26 pages, 3356 KiB  
Article
Integrating Urban Factors as Predictors of Last-Mile Demand Patterns: A Spatial Analysis in Thessaloniki
by Dimos Touloumidis, Michael Madas, Panagiotis Kanellopoulos and Georgia Ayfantopoulou
Urban Sci. 2025, 9(8), 293; https://doi.org/10.3390/urbansci9080293 - 29 Jul 2025
Abstract
While the explosive growth of e-commerce stresses urban logistics systems, city planners lack the fine-grained data needed to anticipate and manage the resulting freight flows. Using a three-stage analytical approach that combines descriptive zonal statistics, hotspot analysis, and regression modeling ranging from univariate to geographically weighted regression, this study integrates one year of parcel deliveries from a leading courier with open spatial layers of land-use zoning, census population, mobile-signal activity, and household income to model last-mile demand across different land-use types. A baseline linear regression shows that residential population alone accounts for roughly 30% of the variance in annual parcel volumes (2.5–3.0 deliveries per resident), while adding daytime workforce and income increases the prediction accuracy to 39%. Allowing coefficients to vary geographically with Geographically Weighted Regression captures local heterogeneity and raises the overall R² to 0.54, surpassing 0.70 in residential and institutional districts. Hotspot analysis reveals a highly fragmented pattern in which fewer than 5% of blocks generate more than 8.5% of all deliveries, with no apparent correlation to the broader land-use classes. Commercial and administrative areas exhibit the greatest intensity (1149 deliveries per ha) yet remain the hardest to explain (global R² = 0.21), underscoring the importance of additional variables such as retail mix, street-network design, and tourism flows. The calibrated models can predict city-wide last-mile demand using only public inputs, offering a transferable, privacy-preserving template for evidence-based freight planning. 
By pinpointing the locations and land uses where demand concentrates, the approach supports targeted interventions such as micro-depots, locker allocation, and dynamic curb-space management towards more sustainable and resilient urban-logistics networks. Full article
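Geographically Weighted Regression, which drives the R² gains reported above, fits a separate distance-weighted least-squares regression at each query location. A minimal numpy sketch on synthetic data (not the Thessaloniki dataset):

```python
import numpy as np

# Minimal GWR fit at a single query point: ordinary least squares with
# Gaussian distance-decay weights. Data below are synthetic and illustrative.

def gwr_coefficients(X, y, coords, point, bandwidth):
    """Return [intercept, slope] of a weighted least-squares fit at `point`."""
    d = np.linalg.norm(coords - point, axis=1)       # distance to each observation
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(X)), X])       # prepend intercept column
    WX = Xd * w[:, None]                             # row-weighted design matrix
    return np.linalg.solve(Xd.T @ WX, Xd.T @ (w * y))

# Synthetic city blocks: parcel volume = 2 + 3 * population, same law
# everywhere, so the local fit recovers the global coefficients.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * X
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
beta = gwr_coefficients(X, y, coords, np.array([0.5, 0.5]), bandwidth=1.0)
```

With spatially varying data, the recovered coefficients differ by query point, which is exactly the local heterogeneity the abstract exploits.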

22 pages, 6452 KiB  
Article
A Blockchain and IoT-Enabled Framework for Ethical and Secure Coffee Supply Chains
by John Byrd, Kritagya Upadhyay, Samir Poudel, Himanshu Sharma and Yi Gu
Future Internet 2025, 17(8), 334; https://doi.org/10.3390/fi17080334 - 27 Jul 2025
Abstract
The global coffee supply chain is a complex multi-stakeholder ecosystem plagued by fragmented records, unverifiable origin claims, and limited real-time visibility. These limitations pose risks to ethical sourcing, product quality, and consumer trust. To address these issues, this paper proposes a blockchain and IoT-enabled framework for secure and transparent coffee supply chain management. The system integrates simulated IoT sensor data, such as Radio-Frequency Identification (RFID) identity tags, Global Positioning System (GPS) logs, weight measurements, environmental readings, and mobile validations, with Ethereum smart contracts to establish traceability and automate supply chain logic. A Solidity-based Ethereum smart contract is developed and deployed on the Sepolia testnet to register users, log batches, and handle ownership transfers. The Internet of Things (IoT) data stream is simulated using structured datasets to mimic real-world device behavior, ensuring that the system is tested under realistic conditions. Our performance evaluation on 1000 transactions shows that the model incurs low transaction costs and that the smart contract behaves with predictable efficiency under decentralized conditions; over 95% of the 1000 simulated transactions incurred a gas fee of less than ETH 0.001. The proposed architecture is also scalable and modular, providing a foundation for future deployment with live IoT integrations and off-chain data storage. Overall, the results highlight the system's ability to improve transparency and auditability, automate enforcement, and enhance consumer confidence in the origin and handling of coffee products. Full article

22 pages, 1359 KiB  
Article
Fall Detection Using Federated Lightweight CNN Models: A Comparison of Decentralized vs. Centralized Learning
by Qasim Mahdi Haref, Jun Long and Zhan Yang
Appl. Sci. 2025, 15(15), 8315; https://doi.org/10.3390/app15158315 - 25 Jul 2025
Abstract
Fall detection is a critical task in healthcare monitoring systems, especially for elderly populations, for whom timely intervention can significantly reduce morbidity and mortality. This study proposes a privacy-preserving and scalable fall-detection framework that integrates federated learning (FL) with transfer learning (TL) to train deep learning models across decentralized data sources without compromising user privacy. The pipeline begins with data acquisition, in which annotated video-based fall-detection datasets formatted in YOLO are used to extract image crops of human subjects. These images are then preprocessed, resized, normalized, and relabeled into binary classes (fall vs. non-fall). A stratified 80/10/10 split ensures balanced training, validation, and testing. To simulate real-world federated environments, the training data is partitioned across multiple clients, each performing local training using pretrained CNN models including MobileNetV2, VGG16, EfficientNetB0, and ResNet50. Two FL topologies are implemented: a centralized server-coordinated scheme and a ring-based decentralized topology. During each round, only model weights are shared, and federated averaging (FedAvg) is applied for global aggregation. The models were trained using three random seeds to ensure result robustness and stability across varying data partitions. Among all configurations, decentralized MobileNetV2 achieved the best results, with a mean test accuracy of 0.9927, F1-score of 0.9917, and average training time of 111.17 s per round. These findings highlight the model’s strong generalization, low computational burden, and suitability for edge deployment. Future work will extend evaluation to external datasets and address issues such as client drift and adversarial robustness in federated environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
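Federated averaging (FedAvg), the aggregation rule used above, is just a sample-size-weighted mean of the clients' parameter vectors, recomputed each round. A pure-Python sketch with toy weights (the real models share full CNN weight tensors, not two scalars):

```python
# FedAvg aggregation: global weights are the sample-size-weighted mean of
# client weights. Client values below are invented for illustration.

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter vectors into one global vector."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]   # weight by local dataset size
    return global_w

clients = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]   # hypothetical local weights
sizes = [100, 100, 200]                          # local sample counts
gw = fedavg(clients, sizes)
```

In the centralized topology a server runs this step; in the ring topology each client applies the same averaging to the weights it receives from its neighbors.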

26 pages, 3625 KiB  
Article
Deep-CNN-Based Layout-to-SEM Image Reconstruction with Conformal Uncertainty Calibration for Nanoimprint Lithography in Semiconductor Manufacturing
by Jean Chien and Eric Lee
Electronics 2025, 14(15), 2973; https://doi.org/10.3390/electronics14152973 - 25 Jul 2025
Abstract
Nanoimprint lithography (NIL) has emerged as a promising route to low-cost sub-10 nm patterning, yet robust process control remains difficult because physics-based simulators are time-consuming and labeled SEM data are scarce. We propose a data-efficient, two-stage deep-learning framework that directly reconstructs post-imprint SEM images from binary design layouts and simultaneously delivers calibrated pixel-by-pixel uncertainty. First, a shallow U-Net is trained with conformalized quantile regression (CQR) to output 90% prediction intervals with statistically guaranteed coverage. Per-level errors on a small calibration dataset then drive an outlier-weighted, encoder-frozen transfer fine-tuning phase that refines only the decoder, focusing its capacity explicitly on regions of spatial uncertainty. On independent test layouts, the fine-tuned model significantly reduces the mean absolute error (MAE) from 0.0365 to 0.0255 and raises coverage from 0.904 to 0.926, while cutting labeled data and GPU time by 80% and 72%, respectively. The resulting uncertainty maps highlight spatial regions associated with error hotspots and support defect-aware optical proximity correction (OPC) with fewer guard-band iterations. Beyond OPC, the model-agnostic and modular design of the pipeline allows flexible integration into other critical stages of the semiconductor manufacturing workflow, such as imprinting, etching, and inspection, where such predictions are critical for achieving higher precision, efficiency, and overall process robustness, which is the ultimate motivation of this study. Full article
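The CQR calibration behind the guaranteed 90% coverage works by widening the raw quantile predictions with a conformity-score quantile computed on held-out calibration data. A pure-Python sketch with invented numbers:

```python
import math

# Conformalized quantile regression (CQR) calibration step, illustrative only.
# lo_preds/hi_preds are the model's raw lower/upper quantile predictions.

def cqr_margin(lo_preds, hi_preds, y_true, alpha):
    """Conformity scores max(lo - y, y - hi); return their (1 - alpha) conformal quantile."""
    scores = sorted(max(lo - y, y - hi) for lo, hi, y in zip(lo_preds, hi_preds, y_true))
    n = len(scores)
    k = math.ceil((n + 1) * (1.0 - alpha)) - 1   # finite-sample-corrected index
    return scores[min(k, n - 1)]

# Ten calibration pixels with raw intervals [0, 1]; one true value falls
# 0.2 above its interval, so the calibrated margin is 0.2.
lo = [0.0] * 10
hi = [1.0] * 10
y = [0.5] * 9 + [1.2]
q = cqr_margin(lo, hi, y, alpha=0.1)
# Calibrated per-pixel interval: [lo - q, hi + q]
```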

18 pages, 2885 KiB  
Article
Research on Microseismic Magnitude Prediction Method Based on Improved Residual Network and Transfer Learning
by Huaixiu Wang and Haomiao Wang
Appl. Sci. 2025, 15(15), 8246; https://doi.org/10.3390/app15158246 - 24 Jul 2025
Abstract
To achieve more precise and effective microseismic magnitude estimation, a classification model based on transfer learning with an improved deep residual network is proposed for predicting microseismic magnitudes. Initially, microseismic waveform images are preprocessed through cropping and blurring before being used as model inputs. The microseismic waveform image dataset is then divided into training, testing, and validation sets. Leveraging ResNet18 weights pretrained on ImageNet, a transfer learning strategy is implemented in which all layers are retrained. The Convolutional Block Attention Module (CBAM) is then introduced for model optimization, resulting in a new network model. Finally, this model is applied to seismic magnitude classification to enable microseismic magnitude prediction, and it is validated and compared with other commonly used neural network models. The experiments use microseismic waveform data and images of magnitudes 0–3 from the Stanford Earthquake Dataset (STEAD) as training samples. The results indicate that the model achieves an accuracy of 87% within an error range of ±0.2 and 94.7% within an error range of ±0.3. The model demonstrates enhanced stability and reliability and effectively addresses the issue of missing data labels. These results confirm that ResNet transfer learning combined with an attention mechanism yields higher accuracy in microseismic magnitude prediction, and they validate the effectiveness of the CBAM. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
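The reported accuracies (87% within ±0.2, 94.7% within ±0.3) correspond to a tolerance-based metric: the fraction of predicted magnitudes falling within a fixed band around the true magnitude. A minimal sketch of such a metric (the function name and sample values are illustrative, not from the paper):

```python
def accuracy_within(preds, targets, tol):
    """Fraction of predicted magnitudes within +/- tol of the true magnitude."""
    assert preds and len(preds) == len(targets)
    hits = sum(1 for p, t in zip(preds, targets) if abs(p - t) <= tol)
    return hits / len(preds)

# Five hypothetical predictions against true magnitudes.
preds = [0.1, 1.2, 2.05, 2.9, 1.5]
targets = [0.0, 1.0, 2.0, 3.0, 1.9]
print(accuracy_within(preds, targets, 0.2))  # 4 of 5 within +/-0.2 -> 0.8
```

Reporting accuracy at two tolerance levels, as the abstract does, separates near-misses from gross errors in a way a single hard-classification accuracy would not.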
16 pages, 8859 KiB  
Article
Effect of Systematic Errors on Building Component Sound Insulation Measurements Using Near-Field Acoustic Holography
by Wei Xiong, Wuying Chen, Zhixin Li, Heyu Zhu and Xueqiang Wang
Buildings 2025, 15(15), 2619; https://doi.org/10.3390/buildings15152619 - 24 Jul 2025
Abstract
Near-field acoustic holography (NAH) provides an effective way to achieve wide-band, high-resolution visualization measurement of the sound insulation performance of building components. However, in the Green's-function-based sound field inversion, the microphone array's inherent amplitude and phase mismatch errors are exponentially amplified, significantly reducing measurement accuracy. To systematically evaluate this problem, this study combines numerical simulation with measurements in a soundproof room complying with the ISO 10140 standard, quantitatively analyzes the influence of array system errors on NAH-reconstructed sound insulation and acoustic images, and proposes an error correction strategy based on channel transfer function normalization. The results show that when the mean array amplitude and phase mismatches are kept within 5% and 5°, respectively, the deviation of the NAH-measured weighted sound insulation can be kept within 1 dB, and the error in the key frequency band for building sound insulation (200 Hz–1.6 kHz) does not exceed 1.5 dB; when the mean mismatches increase to 10% and 10°, the deviation of the weighted sound insulation can reach 2 dB, the error in the high-frequency band (≥1.6 kHz) rises above 2.0 dB, and the acoustic images show noticeable spatial distortion in the bands above 250 Hz. After applying the proposed correction method, the NAH measurements obtained with a domestic microphone array agree closely with the weighted sound insulation measured by the standard method, with differences of less than 1.0 dB in the key frequency band, significantly improving the reliability and applicability of low-cost equipment in engineering applications. In addition, the study reveals the mechanism by which system errors are differentially amplified in the propagating-wave and evanescent-wave channels, and provides quantitative thresholds and operational guidance for instrument selection, array calibration, and error compensation when applying NAH to building sound insulation testing. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
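Channel transfer function normalization, as described in the abstract, divides each channel's spectrum by that channel's measured complex transfer function, removing both amplitude and phase mismatch before the holographic inversion. A minimal single-frequency sketch under that reading (function names and the reference-channel convention are assumptions, not the paper's implementation):

```python
import cmath

def normalize_channels(spectra, transfer_fns, ref=0):
    """Divide each channel spectrum by its complex transfer function,
    rescaled so the reference channel is left unchanged. Complex division
    removes amplitude and phase mismatch in one step."""
    h_ref = transfer_fns[ref]
    return [s * h_ref / h for s, h in zip(spectra, transfer_fns)]

# Two channels measuring the same field: channel 1 has a 10% gain error
# and a 5-degree phase error relative to channel 0.
true_field = [1 + 0j, 1 + 0j]
h = [1 + 0j, 1.10 * cmath.exp(1j * cmath.pi * 5 / 180)]
measured = [f * hi for f, hi in zip(true_field, h)]
corrected = normalize_channels(measured, h)
print(all(abs(c - t) < 1e-12 for c, t in zip(corrected, true_field)))  # True
```

Because NAH amplifies channel errors exponentially through the evanescent-wave components, even the modest 10%/10° mismatches quoted above justify this per-channel calibration step.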
26 pages, 9588 KiB  
Article
Research and Experimental Verification of an Efficient Subframe Lightweighting Method Integrating SIMP Topology and Size Optimization
by Jihui Zhuang and Fan Zeng
Appl. Sci. 2025, 15(15), 8192; https://doi.org/10.3390/app15158192 - 23 Jul 2025
Abstract
Under the dual-carbon policy, reducing energy consumption and emissions in automobiles has garnered significant attention, with automotive lightweighting being particularly important. This paper focuses on the lightweight design of automotive subframes, aiming to minimize weight while meeting performance requirements. Analysis revealed that the original subframe left further room for lightweighting and performance optimization. A topology optimization model was established using the Solid Isotropic Material with Penalization (SIMP) method and solved using the Method of Moving Asymptotes (MMA) algorithm. Integration of the SIMP method was achieved on the Abaqus-Matlab (2022x) platform via Python (3.11.0) and Matlab (R2022a) coding, forming an effective optimization framework. The optimization results provided clear load transfer paths, offering a theoretical basis for geometric model conversion, and the subframe model was subsequently reconstructed in CATIA. Material redundancy identified in the reconstructed model prompted a secondary, multi-objective size optimization in OptiStruct, which reduced the subframe's mass from 33.73 kg to 17.84 kg, a 47.1% weight reduction. Static stiffness and modal analyses performed in HyperMesh confirmed that the results met all relevant standards, and modal testing showed a deviation of only −2.7% from the simulation results, validating the feasibility and reliability of the optimized design. This research demonstrates that combining topology optimization with size optimization can significantly reduce weight and enhance subframe performance, providing valuable support for future automotive component design. Full article
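In the SIMP method named above, intermediate element densities are penalized so the optimizer is driven toward solid/void (0/1) layouts: each element's Young's modulus is interpolated as E(ρ) = E_min + ρ^p (E₀ − E_min), with the penalization exponent typically p ≈ 3. A minimal sketch of that interpolation (the default values are conventional illustrations, not the subframe model's material data):

```python
def simp_modulus(rho, e0=1.0, e_min=1e-9, p=3.0):
    """SIMP interpolation: penalized Young's modulus for an element with
    density rho in [0, 1]. e_min keeps the stiffness matrix non-singular
    for void elements."""
    return e_min + rho**p * (e0 - e_min)

# Intermediate densities contribute far less stiffness per unit mass,
# pushing the optimizer toward solid (rho=1) or void (rho=0) elements.
print(simp_modulus(1.0))  # ~1.0, full stiffness for solid material
print(simp_modulus(0.5))  # ~0.125, heavily penalized "gray" material
```

With p = 3, half-density material retains only one-eighth of the stiffness at full mass-fraction cost, which is what produces the clear load transfer paths the abstract reports.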