Search Results (81)

Search Parameters:
Keywords = source mask optimization

14 pages, 1460 KB  
Article
Supervirtual Seismic Interferometry with Adaptive Weights to Suppress Scattered Wave
by Chunming Wang, Xiaohong Chen, Shanglin Liang, Sian Hou and Jixiang Xu
Appl. Sci. 2026, 16(3), 1188; https://doi.org/10.3390/app16031188 - 23 Jan 2026
Abstract
Land seismic data are frequently contaminated by surface waves, which exhibit strong energy, low velocity, and long durations. Such noise often masks deep effective reflections, seriously reducing the data’s signal-to-noise ratio while limiting the imaging accuracy of complex deep structures and the efficiency of hydrocarbon reservoir identification. To address this critical technical bottleneck, this paper proposes a surface wave joint reconstruction method based on stationary phase analysis that combines cross-correlation and convolutional seismic interferometry to achieve coordinated reconstruction of surface waves in both the shot and receiver domains, while introducing adaptive weight factors to optimize the reconstruction process and reduce interference from erroneous data. As a purely data-driven framework, the method does not rely on subsurface velocity models, achieving efficient noise reduction by adaptively removing the reconstructed surface waves through multi-channel matched filtering. Validation with field seismic data from the piedmont regions of western China demonstrates that the method effectively suppresses high-energy surface waves, restores effective signals, improves the signal-to-noise ratio of the data, and markedly enhances the clarity of coherent events in stacked profiles. This study provides a reliable technical approach for noise reduction in seismic data under complex near-surface conditions, particularly for hydrocarbon exploration in regions with well-developed scattering sources such as the mountainous areas of western China, and holds significant practical value for advancing deep hydrocarbon resource exploration and improving the quality of complex structural imaging. Full article
(This article belongs to the Topic Advanced Technology for Oil and Natural Gas Exploration)
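The cross-correlation operation at the core of seismic interferometry can be sketched in a few lines. The toy example below (plain Python with synthetic spike traces, not the authors' implementation) correlates two receivers' recordings of a common source, so the inter-receiver travel time appears as the lag of the correlation peak:

```python
def cross_correlate(a, b):
    """Full cross-correlation of two equal-length traces; a positive
    peak lag means trace b is a delayed copy of trace a."""
    n = len(a)
    lags = list(range(-(n - 1), n))
    values = []
    for lag in lags:
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        values.append(s)
    return lags, values

# Synthetic spike traces: the same source arrives at receiver A at
# sample 10 and at receiver B at sample 25 (15 samples later).
trace_a = [0.0] * 64
trace_b = [0.0] * 64
trace_a[10] = 1.0
trace_b[25] = 1.0

lags, values = cross_correlate(trace_a, trace_b)
peak_lag = lags[max(range(len(values)), key=values.__getitem__)]
print(peak_lag)  # 15: the inter-receiver delay recovered by correlation
```

The convolutional variant used in the paper differs in sign conventions and stationary-phase behavior, but the same lag-scanning structure applies.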
34 pages, 5134 KB  
Review
Inverse Lithography Technology (ILT) Under Chip Manufacture Context
by Xiaodong Meng, Cai Chen and Jie Ni
Micromachines 2026, 17(1), 117; https://doi.org/10.3390/mi17010117 - 16 Jan 2026
Abstract
As semiconductor process nodes shrink to 3 nm and beyond, traditional optical proximity correction (OPC) and resolution enhancement technologies (RETs) can no longer meet the high patterning precision needs of advanced chip manufacturing due to the sub-wavelength lithography limits. Inverse lithography technology (ILT), a key part of computational lithography, has become a critical solution for these issues. From an EDA industry perspective, this review provides an original and systematic summary of ILT’s development and applications, which helps integrate the scattered research into a clear framework for both academic and industrial use. Compared with traditional OPC, the latest ILT has three main advantages: (1) better patterning accuracy, as a result of the precise optical models that fix complex optical issues (like diffraction and interference) in advanced lithography systems; (2) a wider process window, as it optimizes mask designs by working backwards from the target wafer patterns, making lithography more stable against process changes; and (3) stronger adaptability to new lithography scenarios, such as High-NA EUV and extended DUV nodes. This review first explains ILT’s working principles (the basic concepts, mathematical formulae, and main methods like level-set and pixelated approaches) and its development history, highlighting key events that boosted its progress. It then analyzes ILT’s current application status in the industry (such as hotspot fixing, full-chip trials, and EUV-era use) and its main bottlenecks: a high computational complexity leading to long runtime, difficulties in mask manufacturing, challenges in model calibration, and a conservative market that slows large-scale adoption. Finally, it discusses promising future directions, including hybrid ILT-OPC-SMO strategies, improving model accuracy, AI/ML-driven design, GPU acceleration, multi-beam mask writer improvements, and open-source data to solve data shortage problems. 
By combining the latest research and industry practices, this review fills the gap of comprehensive ILT summaries that cover the principles, progress, applications, and prospects. It helps readers fully understand ILT’s technical landscape and offers practical insights for solving the key challenges, thus promoting ILT’s industrial use in advanced chip manufacturing. Full article
(This article belongs to the Special Issue Recent Advances in Lithography)
24 pages, 8257 KB  
Article
Multi-Satellite Image Matching and Deep Learning Segmentation for Detection of Daytime Sea Fog Using GK2A AMI and GK2B GOCI-II
by Jonggu Kang, Hiroyuki Miyazaki, Seung Hee Kim, Menas Kafatos, Daesun Kim, Jinsoo Kim and Yangwon Lee
Remote Sens. 2026, 18(1), 34; https://doi.org/10.3390/rs18010034 - 23 Dec 2025
Abstract
Traditionally, sea fog detection technologies have relied primarily on in situ observations. However, point-based observations suffer from limitations in extensive monitoring in marine environments due to the scarcity of observation stations and the limited nature of measurement data. Satellites effectively address these issues by covering vast areas and operating across multiple spectral channels, enabling precise detection and monitoring of sea fog. Despite the increasing adoption of deep learning in this field, achieving further improvements in accuracy and reliability necessitates the simultaneous use of multiple satellite datasets rather than relying on a single source. Therefore, this study aims to achieve higher accuracy and reliability in sea fog detection by employing a deep learning-based advanced co-registration technique for multi-satellite image fusion and autotuning-based optimization of State-of-the-Art (SOTA) semantic segmentation models. We utilized data from the Advanced Meteorological Imager (AMI) sensor on the Geostationary Korea Multi-Purpose Satellite 2A (GK2A) and the GOCI-II sensor on the Geostationary Korea Multi-Purpose Satellite 2B (GK2B). Swin Transformer, Mask2Former, and SegNeXt all demonstrated balanced and excellent performance across overall metrics such as IoU and F1-score. Specifically, Swin Transformer achieved an IoU of 77.24 and an F1-score of 87.16. Notably, multi-satellite fusion significantly improved the Recall score compared to the single AMI product, increasing from 88.78 to 92.01, thereby effectively mitigating the omission of disaster information. Ultimately, comparisons with the officially operational GK2A AMI Fog and GK2B GOCI-II Marine Fog (MF) products revealed that our deep learning approach was superior to both existing operational products. Full article
(This article belongs to the Section AI Remote Sensing)
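The IoU, F1, and Recall figures quoted above are simple functions of the pixel-level confusion counts. A minimal sketch for binary sea-fog masks (flattened to 1-D lists for brevity; percentages in the abstract correspond to these ratios scaled by 100):

```python
def confusion(pred, truth):
    """Pixel-level counts for binary masks flattened to 1-D lists."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def iou(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return tp / (tp + fp + fn)

def f1(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
# tp=3, fp=1, fn=1 -> IoU = 3/5 = 0.6; precision = recall = F1 = 0.75
print(iou(pred, truth), f1(pred, truth))
```

The Recall improvement reported for multi-satellite fusion (88.78 to 92.01) corresponds to reducing the `fn` count, i.e. fewer fog pixels missed.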
20 pages, 16950 KB  
Article
Using High-Resolution Satellite Imagery and Deep Learning to Map Artisanal Mining Spatial Extent in the Democratic Republic of the Congo
by Francesco Pasanisi, Robert N. Masolele and Johannes Reiche
Remote Sens. 2025, 17(24), 4057; https://doi.org/10.3390/rs17244057 - 18 Dec 2025
Abstract
Artisanal and Small-scale Mining (ASM) significantly impacts the Democratic Republic of Congo’s (DRC) socio-economic landscape and environmental integrity, yet its dynamic and informal nature makes monitoring challenging. This study addresses this challenge by implementing a novel deep learning approach to map ASM sites across the DRC using satellite imagery. We tackled key obstacles including ground truth data scarcity, insufficient spatial resolution of conventional satellite sensors, and persistent cloud cover in the region. We developed a methodology to generate a pseudo-ground truth dataset by converting point-based ASM locations to segmented areas through a multi-stage process involving clustering, auxiliary dataset masking, and manual refinement. Four model configurations were evaluated: Planet-NICFI standalone, Sentinel-1 standalone, Early Fusion, and Late Fusion approaches. The Late Fusion model, which integrated high-resolution Planet-NICFI optical imagery (4.77 m resolution) with Sentinel-1 SAR data, achieved the highest performance with an average precision of 71%, recall of 75%, and F1-score of 73% for ASM detection. This superior performance demonstrated how SAR data’s textural features complemented optical data’s spectral information, particularly improving discrimination between ASM sites and water bodies—a common source of misclassification in optical-only approaches. We deployed the optimized model to map ASM extent in the Mwenga territory, achieving an overall accuracy of 88.4% when validated against high-resolution reference imagery. Despite these achievements, challenges persist in distinguishing ASM sites from built-up areas, suggesting avenues for future research through multi-class approaches. This study advances the domain of ASM mapping by offering methodologies that enhance remote sensing capabilities in ASM-impacted regions, providing valuable tools for monitoring, regulation, and environmental management. Full article
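Decision-level ("late") fusion can be sketched as a weighted average of the two single-sensor probability maps. Note this is a simplified stand-in: the paper's Late Fusion model fuses learned features from the Planet-NICFI and Sentinel-1 branches, not output probabilities, but the water-pixel correction it enables works the same way:

```python
def late_fusion(prob_optical, prob_sar, w_optical=0.5):
    """Decision-level fusion: weighted average of per-pixel ASM probabilities."""
    return [w_optical * o + (1.0 - w_optical) * s
            for o, s in zip(prob_optical, prob_sar)]

# Three pixels: true ASM, bare soil, and open water. Optical alone
# misreads the water pixel as ASM; the SAR branch disagrees, and the
# fused probability falls back under the 0.5 decision threshold.
optical = [0.9, 0.2, 0.7]
sar     = [0.8, 0.1, 0.1]
fused = late_fusion(optical, sar)
labels = [p >= 0.5 for p in fused]
print(labels)  # [True, False, False]
```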
22 pages, 3829 KB  
Article
Air Pollutant Concentration Prediction Using a Generative Adversarial Network with Multi-Scale Convolutional Long Short-Term Memory and Enhanced U-Net
by Jiankun Zhang, Pei Su, Juexuan Wang and Zhantong Cai
Sustainability 2025, 17(24), 11177; https://doi.org/10.3390/su172411177 - 13 Dec 2025
Abstract
Accurate prediction of air pollutant concentrations, particularly fine particulate matter (PM2.5), is essential for controlling and preventing heavy pollution incidents by providing early warnings of harmful substances in the atmosphere. This study proposes a novel spatiotemporal model for PM2.5 concentration prediction based on a Conditional Wasserstein Generative Adversarial Network with Gradient Penalty (CWGAN-GP). The framework incorporates three key design components: First, the generator employs an Inception-style Convolutional Long Short-Term Memory (ConvLSTM) network, integrating parallel multi-scale convolutions and hierarchical normalization. This design enhances multi-scale spatiotemporal feature extraction while effectively suppressing boundary artifacts via a map-masking layer. Second, the discriminator adopts an architecturally enhanced U-Net, incorporating spectral normalization and shallow instance normalization. Feature-guided masked skip connections are introduced, and the output is designed as a raw score map to mitigate premature saturation during training. Third, a composite loss function is utilized, combining adversarial loss, feature-matching loss, and inter-frame spatiotemporal smoothness. A sliding-window conditioning mechanism is also implemented, leveraging multi-level features from the discriminator for joint spatiotemporal optimization. Experiments conducted on multi-source gridded data from Dongguan demonstrate that the model achieves a 12 h prediction performance with a Root Mean Square Error (RMSE) of 4.61 μg/m3, a Mean Absolute Error (MAE) of 6.42 μg/m3, and a Coefficient of Determination (R2) of 0.80. The model significantly alleviates performance degradation in long-term predictions: when the forecast horizon is extended from 3 to 12 h, the RMSE increases by only 1.84 μg/m3, and regional deviations remain within ±3 μg/m3. 
These results indicate strong capabilities in spatial topology reconstruction and robustness against concentration anomalies, highlighting the model’s potential for hyperlocal air quality early warning. It should be noted that the empirical validation is limited to the specific environmental conditions of Dongguan, and the model’s generalizability to other geographical and climatic settings requires further investigation. Full article
(This article belongs to the Special Issue Atmospheric Pollution and Microenvironmental Air Quality)
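The gradient-penalty term that distinguishes WGAN-GP from a plain Wasserstein GAN penalizes the critic's gradient norm away from 1 at points interpolated between real and generated samples. A 1-D, stdlib-only sketch with a linear toy critic (so the gradient is constant and the penalty is easy to check); a real implementation would use autograd on image tensors:

```python
import random

def critic(x, w=1.7):
    """Toy linear 1-D 'critic'; its gradient is w everywhere."""
    return w * x

def gradient_penalty(real, fake, lam=10.0, eps=1e-5):
    """WGAN-GP term: lam * (|grad critic(x_hat)| - 1)^2 averaged over
    points x_hat interpolated between real and generated samples."""
    total = 0.0
    for r, f in zip(real, fake):
        t = random.random()
        x_hat = t * r + (1.0 - t) * f
        grad = (critic(x_hat + eps) - critic(x_hat - eps)) / (2.0 * eps)
        total += (abs(grad) - 1.0) ** 2
    return lam * total / len(real)

random.seed(0)
gp = gradient_penalty([1.0, 2.0, 3.0], [0.5, 1.5, 2.5])
print(gp)  # ~= 10 * (1.7 - 1)^2 = 4.9
```

A critic with gradient norm 1 everywhere (the 1-Lipschitz ideal) would contribute zero penalty.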
36 pages, 7233 KB  
Article
Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
by K. E. ArunKumar, Matthew E. Wilson, Nathan E. Blake, Tylor J. Yost and Matthew Walker
Sensors 2025, 25(24), 7557; https://doi.org/10.3390/s25247557 - 12 Dec 2025
Abstract
Early detection of breast cancer commonly relies on imaging technologies such as ultrasound, mammography and MRI. Among these, breast ultrasound is widely used by radiologists to identify and assess lesions. In this study, we developed image segmentation techniques and multiclass classification artificial intelligence (AI) tools based on pretrained models to segment lesions and detect breast cancer. The proposed workflow includes both the development of segmentation models and the development of a series of classification models to classify ultrasound images as normal, benign or malignant. The pretrained models were trained and evaluated on the Breast Ultrasound Images (BUSI) dataset, a publicly available collection of grayscale breast ultrasound images with corresponding expert-annotated masks. For segmentation, images and ground-truth masks were used to train pretrained encoder (ResNet18, EfficientNet-B0 and MobileNetV2)–decoder (U-Net, U-Net++ and DeepLabV3) models, including the DeepLabV3 architecture integrated with a Frequency-Domain Feature Enhancement Module (FEM). The proposed FEM improves spatial and spectral feature representations using Discrete Fourier Transform (DFT), GroupNorm, dropout regularization and adaptive fusion. For classification, each image was assigned a label (normal, benign or malignant). Optuna, an open-source software framework, was used for hyperparameter optimization and for the testing of various pretrained models to determine the best encoder–decoder segmentation architecture. Five different pretrained models (ResNet18, DenseNet121, InceptionV3, MobileNetV3 and GoogleNet) were optimized for multiclass classification. DeepLabV3 outperformed other segmentation architectures, with consistent performance across training, validation and test images, with Dice Similarity Coefficient (DSC, a metric describing the overlap between predicted and true lesion regions) values of 0.87, 0.80 and 0.83 on training, validation and test sets, respectively. 
ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training, while ResNet18:U-Net++ achieved the best Dice coefficient (0.83), IoU (0.71), and area under the curve (AUC, 0.91) scores on the test (unseen) dataset when compared to other models. However, the proposed ResNet18:FrequencyAwareDeepLabV3 (FADeepLabV3) achieved a DSC of 0.85 and an IoU of 0.72 on the test dataset, demonstrating improvements over standard DeepLabV3. Notably, the frequency-domain enhancement substantially improved the AUC from 0.90 to 0.98, indicating enhanced prediction confidence and clinical reliability. For classification, ResNet18 produced an F1 score (a measure combining precision and recall) of 0.95 and an accuracy of 0.90 on the training dataset, while InceptionV3 performed best on the test dataset, with an F1 score of 0.75 and accuracy of 0.83. We demonstrate a comprehensive approach to automate the segmentation and multiclass classification of breast cancer ultrasound images into benign, malignant or normal classes using transfer learning models on an imbalanced ultrasound image dataset. Full article
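Optuna-style hyperparameter optimization can be approximated, for illustration, by random search over the same kind of search space. The objective below is a made-up stand-in for a validation score (its optimum at lr=0.01, dropout=0.2 is invented), since actually training the models is out of scope for a sketch; Optuna itself adds smarter samplers and pruning on top of this loop:

```python
import random

def objective(params):
    """Stand-in for a validation score; a real study would train and
    evaluate a network here. Peak at lr=0.01, dropout=0.2 (made up)."""
    return -((params["lr"] - 0.01) ** 2 * 1e4 + (params["dropout"] - 0.2) ** 2)

def random_search(n_trials, seed=0):
    """Minimal random-search loop over an Optuna-like search space."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),   # log-uniform learning rate
            "dropout": rng.uniform(0.0, 0.5),
        }
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(200)
print(best, round(score, 4))
```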
18 pages, 2425 KB  
Article
Impact of Low-Dose CT Radiation on Gene Expression and DNA Integrity
by Nikolai Schmid, Vadim Gorte, Michael Akers, Niklas Verloh, Michael Haimerl, Christian Stroszczynski, Harry Scherthan, Timo Orben, Samantha Stewart, Laura Kubitscheck, Hanns Leonhard Kaatsch, Matthias Port, Michael Abend and Patrick Ostheim
Int. J. Mol. Sci. 2025, 26(24), 11869; https://doi.org/10.3390/ijms262411869 - 9 Dec 2025
Abstract
Computed tomography (CT) is a major source of low-dose ionizing radiation exposure in medical imaging. Risk assessment at this dose level is difficult and relies on the hypothetical linear no-threshold model. To address the response to such low doses in patients undergoing CT scans, we examined radiation-induced alterations at the transcriptomic and DNA damage levels in peripheral blood cells. Peripheral whole blood of 60 patients was collected before and after CT. Post-CT samples were obtained 4–6 h after the scan (n = 28, in vivo incubation) or immediately after the CT scan, followed by ex vivo incubation (n = 32). The gene expression of known radiation-responsive genes (n = 9) was quantified using qRT-PCR. DNA double-strand breaks (DSB) were assessed in 12 patients through microscopic γ-H2AX + 53BP1 DSB focus staining. The mean dose–length product (DLP) across all scans was 561.9 ± 384.6 mGy·cm. Significant differences in the median differential gene expression (DGE) were detected between in vivo and ex vivo incubation conditions, indicating that ex vivo incubation masked the true effect in low-dose settings. The median DGE of in vivo-incubated samples showed a significant upregulation of EDA2R, MIR34AHG, PHLDA3, DDB2, FDXR, and AEN (p ranging from <0.001 to 0.041). In vivo, we observed a linear dose-dependent upregulation for several genes, with an explained variance of 0.66 and 0.56 for AEN and FDXR, respectively. DSB focus analysis revealed a slight, non-significant increase in the average DSB damage post-exposure, at a mean DLP of 321.0 mGy·cm. Our findings demonstrate that transcriptional biomarkers are sensitive indicators of low-dose radiation exposure in medical imaging and could serve as clinically applicable biodosimetry tools. Furthermore, the results underscore the need for dose optimization. Full article
(This article belongs to the Section Molecular Genetics and Genomics)
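The "explained variance" figures for AEN and FDXR are R² values from a linear dose-response fit. A stdlib-only sketch with synthetic dose and expression values (not the study's data) shows how such a value is computed:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

def r_squared(xs, ys):
    """Fraction of variance in ys explained by the linear fit."""
    a, b = linear_fit(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

dlp  = [100, 300, 500, 700, 900]   # dose-length product, mGy*cm (synthetic)
expr = [1.1, 1.9, 2.8, 4.2, 4.6]   # fold change (synthetic)
print(round(r_squared(dlp, expr), 3))
```

An R² of 0.66, as reported for AEN, would mean two-thirds of the expression variance tracks the delivered dose linearly.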
19 pages, 7913 KB  
Article
Integrated Satellite Driven Machine Learning Framework for Precision Irrigation and Sustainable Cotton Production
by Syeda Faiza Nasim and Muhammad Khurram
Algorithms 2025, 18(12), 740; https://doi.org/10.3390/a18120740 - 25 Nov 2025
Abstract
This study develops a satellite-driven, machine-learning prediction framework for optimal irrigation scheduling for cotton cultivation in Rahim Yar Khan, Pakistan. The framework leverages multispectral satellite imagery (Landsat 8 and Sentinel-2), GIS-derived climatic and land surface data, and real-time weather information obtained from a freely accessible weather API, eliminating the need for ground-based IoT sensors. The proposed algorithm integrates FAO-56 evapotranspiration principles and water stress indices to accurately forecast irrigation requirements across the four critical growth stages of cotton. Supervised learning algorithms, including Gradient Boosting, Random Forest, and Logistic Regression, were evaluated, with Random Forest showing the best predictive accuracy, with a coefficient of determination (R2) exceeding 0.92 and a root mean square error (RMSE) of approximately 415 kg/ha, owing to its capacity to handle complex, non-linear relations and feature interactions. The model was trained on data collected during 2023 and 2024, and its predictions for 2025 were validated against observed irrigation requirements. The proposed model enabled an average 12–18% reduction in total water application between 2023 and 2025, optimizing water use without compromising crop yield. By merging satellite imagery, GIS data, and weather API information, this approach provides a cost-effective, scalable solution that enables precise, stage-specific irrigation scheduling. Cloud masking was performed by applying the built-in QA bands with the Fmask algorithm to eliminate cloud and cloud-shadow pixels from the satellite imagery. Time series were generated by compositing monthly median values to ensure consistency across images. 
The novelty of this study lies in its end-to-end integration framework, its application under semi-arid agronomic conditions, and its empirical validation and accuracy assessment of directly associating multi-source data with FAO-guided irrigation scheduling to support sustainable cotton cultivation. The quantification of irrigation capacity (determining how much water to apply) is identified as a focus for future research. Full article
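The FAO-56 single-coefficient method underlying the framework computes crop water demand as ETc = Kc × ET0 and irrigates the shortfall after effective rainfall. A sketch with illustrative cotton Kc values (FAO-56 tabulates roughly 0.35 initial and 1.15-1.20 mid-season; the exact numbers below are assumptions, not the study's calibration):

```python
def crop_et(et0_mm_day, kc):
    """FAO-56 single crop coefficient: ETc = Kc * ET0."""
    return kc * et0_mm_day

def net_irrigation(et0_mm_day, kc, effective_rain_mm_day):
    """Daily net irrigation requirement (never negative)."""
    return max(0.0, crop_et(et0_mm_day, kc) - effective_rain_mm_day)

# Illustrative cotton Kc values per growth stage (assumed for the sketch).
KC = {"initial": 0.35, "development": 0.75, "mid": 1.15, "late": 0.70}

et0 = 6.0    # reference evapotranspiration, mm/day (hot arid day)
rain = 1.0   # effective rainfall, mm/day
need = {stage: net_irrigation(et0, kc, rain) for stage, kc in KC.items()}
print(round(need["mid"], 2))  # 1.15 * 6.0 - 1.0 = 5.9 mm/day
```

Stage-specific scheduling then amounts to applying the stage's Kc to the satellite- and weather-derived ET0 each day.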
30 pages, 83343 KB  
Article
Effects of Streetscapes on Residents’ Sentiments During Heatwaves in Shanghai: Evidence from Multi-Source Data and Interpretable Machine Learning for Urban Sustainability
by Zekun Lu, Yichen Lu, Yaona Chen and Shunhe Chen
Sustainability 2025, 17(22), 10281; https://doi.org/10.3390/su172210281 - 17 Nov 2025
Abstract
Using Shanghai as a case study, this paper develops a multi-source fusion and interpretable machine learning framework. Sentiment indices were extracted from Weibo check-ins with ERNIE 3.0, street-view elements were identified using Mask2Former, and urban indicators like the Normalized Difference Vegetation Index, floor area ratio, and road network density were integrated. The coupling between residents’ sentiments and streetscape features during heatwaves was analyzed with Extreme Gradient Boosting, SHapley Additive exPlanations, and GeoSHAPLEY. Results show that (1) the average sentiment index is 0.583, indicating a generally positive tendency, with sentiments clustered spatially, and negative patches in central areas, while positive sentiments are concentrated in waterfronts and green zones. (2) SHapley Additive exPlanations analysis identifies NDVI (0.024), visual entropy (0.022), FAR (0.021), road network density (0.020), and aquatic rate (0.020) as key factors. Partial dependence results show that NDVI enhances sentiment at low-to-medium ranges but declines at higher levels; aquatic rate improves sentiment at 0.08–0.10; openness above 0.32 improves sentiment; and both visual entropy and color complexity show a U-shaped relationship. (3) GeoSHAPLEY shows pronounced spatial heterogeneity: waterfronts and the southwestern corridor have positive effects from water–green resources; high FAR and paved surfaces in the urban area exert negative influences; and orderly interfaces in the vitality corridor generate positive impacts. Overall, moderate greenery, visible water, openness, medium-density road networks, and orderly visual patterns mitigate negative sentiments during heatwaves, while excessive density and hard surfaces intensify stress. Based on these findings, this study proposes strategies: reducing density and impervious surfaces in the urban area, enhancing greenery and quality in waterfront and peripheral areas, and optimizing urban–rural interfaces. 
These insights support heat-adaptive and sustainable street design and spatial governance. Full article
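SHapley Additive exPlanations attributes a prediction to features via Shapley values. For a toy two-feature "sentiment model" they can be computed exactly by enumerating feature orderings; the greenery/water values and their interaction below are illustrative numbers, not the study's model:

```python
from itertools import permutations

def shapley_values(features, value_fn):
    """Exact Shapley values: average marginal contribution of each
    feature over all orders in which it can join the coalition."""
    contrib = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        coalition = set()
        for f in order:
            before = value_fn(coalition)
            coalition.add(f)
            contrib[f] += value_fn(coalition) - before
    return {f: c / len(orders) for f, c in contrib.items()}

def model(coalition):
    """Toy sentiment score: greenery adds 0.4, water 0.2, +0.1 together."""
    v = 0.0
    if "ndvi" in coalition:
        v += 0.4
    if "water" in coalition:
        v += 0.2
    if {"ndvi", "water"} <= coalition:
        v += 0.1
    return v

phi = shapley_values(["ndvi", "water"], model)
print(phi)  # interaction bonus split evenly: ndvi 0.45, water 0.25
```

Real SHAP libraries approximate this exponential enumeration; GeoSHAPLEY additionally localizes the attributions in space.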
13 pages, 3442 KB  
Article
Patterning Fidelity Enhancement and Aberration Mitigation in EUV Lithography Through Source–Mask Optimization
by Qi Wang, Qiang Wu, Ying Li, Xianhe Liu and Yanli Li
Micromachines 2025, 16(10), 1166; https://doi.org/10.3390/mi16101166 - 14 Oct 2025
Cited by 1
Abstract
Extreme ultraviolet (EUV) lithography faces critical challenges in aberration control and patterning fidelity as technology nodes shrink below 3 nm. This work demonstrates how Source–Mask Optimization (SMO) simultaneously addresses both illumination and mask design to enhance pattern transfer accuracy and mitigate aberrations. Through a comprehensive optimization framework incorporating key process metrics, including critical dimension (CD), exposure latitude (EL), and mask error factor (MEF), we achieve significant improvements in imaging quality and process window for 40 nm minimum pitch patterns, representative of 2 nm node back-end-of-line (BEOL) requirements. Our analysis reveals that intelligent SMO implementation not only enables robust patterning solutions but also compensates for inherent EUV aberrations by balancing source characteristics with mask modifications. On average, our results show a 4.02% reduction in CD uniformity variation, concurrent with a 1.48% improvement in exposure latitude and a 5.45% reduction in MEF. The proposed methodology provides actionable insights for aberration-aware SMO strategies, offering a pathway to maintain lithographic performance as feature sizes continue to scale. These results underscore SMO’s indispensable role in advancing EUV lithography capabilities for next-generation semiconductor manufacturing. Full article
(This article belongs to the Special Issue Recent Advances in Lithography)
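Two of the metrics driving the SMO objective are easy to state concretely: MEF is the sensitivity of wafer CD to mask CD, and exposure latitude is the dose range that keeps CD within tolerance. The sketch below uses a made-up linear printing model (coefficients are arbitrary), not a calibrated EUV simulator:

```python
def printed_cd(mask_cd, dose):
    """Made-up linear printing model: wafer CD vs mask CD and dose."""
    return 0.9 * mask_cd + 12.0 * (dose - 1.0) + 4.0

def mef(mask_cd, dose, delta=0.1):
    """Mask error factor: d(wafer CD) / d(mask CD) at fixed dose."""
    hi = printed_cd(mask_cd + delta, dose)
    lo = printed_cd(mask_cd - delta, dose)
    return (hi - lo) / (2.0 * delta)

def exposure_latitude(mask_cd, target_cd, tol=0.10):
    """Fraction of a +/-10% dose sweep keeping CD within tolerance."""
    doses = [0.90 + 0.005 * i for i in range(41)]   # relative dose 0.90..1.10
    ok = [d for d in doses
          if abs(printed_cd(mask_cd, d) - target_cd) <= tol * target_cd]
    return (max(ok) - min(ok)) / (max(doses) - min(doses)) if ok else 0.0

target = printed_cd(40.0, 1.0)       # nominal CD at nominal dose
print(round(mef(40.0, 1.0), 3))      # 0.9: slope of the linear toy model
print(exposure_latitude(40.0, target))
```

In a real SMO loop, source and mask variables are adjusted jointly so that MEF drops and the passing dose window widens, which is what the reported 5.45% MEF reduction and 1.48% EL improvement quantify.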
20 pages, 5553 KB  
Article
An Improved Instance Segmentation Approach for Solid Waste Retrieval with Precise Edge from UAV Images
by Yaohuan Huang and Zhuo Chen
Remote Sens. 2025, 17(20), 3410; https://doi.org/10.3390/rs17203410 - 11 Oct 2025
Abstract
As a major contributor to environmental pollution in recent years, solid waste has become an increasingly significant concern in the realm of sustainable development. Unmanned Aerial Vehicle (UAV) imagery, known for its high spatial resolution, has become a valuable data source for solid waste detection. However, manually interpreting solid waste in UAV images is inefficient, and object detection methods encounter serious challenges due to the patchy distribution, varied textures and colors, and fragmented edges of solid waste. In this study, we proposed an improved instance segmentation approach called Watershed Mask Network for Solid Waste (WMNet-SW) to accurately retrieve solid waste with precise edges from UAV images. This approach combined the well-established Mask R-CNN segmentation framework with the watershed transform edge detection algorithm. The benchmark Mask R-CNN was improved by optimizing the anchor size and Region of Interest (RoI) and integrating a new mask head of Layer Feature Aggregation (LFA) to initially detect solid waste. Subsequently, edges of the detected solid waste were precisely adjusted by overlaying the segments generated by the watershed transform algorithm. Experimental results show that WMNet-SW significantly enhances the performance of Mask R-CNN in solid waste retrieval, increasing the average precision from 36.91% to 58.10%, F1-score from 0.5 to 0.65, and AP from 63.04% to 64.42%. Furthermore, our method efficiently detects the details of solid waste edges, even overcoming the limitations of training Ground Truth (GT). This study provides a solution for retrieving solid waste with precise edges from UAV images, thereby contributing to the protection of the regional environment and ecosystem health. Full article
(This article belongs to the Section Environmental Remote Sensing)
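The edge-refinement idea (snapping a coarse Mask R-CNN mask to watershed segment boundaries) can be sketched by keeping exactly those watershed segments whose pixels lie mostly inside the coarse mask. This majority-overlap rule is an assumption about the overlay step, not the authors' exact criterion:

```python
from collections import defaultdict

def refine_mask(coarse_mask, segments):
    """Snap a coarse detection mask to watershed boundaries: keep a
    watershed segment iff most of its pixels fall inside the mask."""
    inside, total = defaultdict(int), defaultdict(int)
    for mask_row, seg_row in zip(coarse_mask, segments):
        for m, seg in zip(mask_row, seg_row):
            total[seg] += 1
            inside[seg] += m
    keep = {seg for seg in total if inside[seg] / total[seg] > 0.5}
    return [[1 if seg in keep else 0 for seg in row] for row in segments]

# 4x4 toy tile: watershed found segment 1 (waste pile) and 2 (soil).
segments = [[1, 1, 2, 2]] * 4
# Coarse Mask R-CNN output with a ragged edge leaking into segment 2.
coarse = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
refined = refine_mask(coarse, segments)
print(refined[0])  # [1, 1, 0, 0]: edge snapped to the watershed line
```

Because the refined edge follows the watershed lines computed from image gradients, it can be more faithful to the true pile boundary than the coarse mask, matching the paper's observation that results can exceed the hand-drawn ground truth in edge detail.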
19 pages, 2183 KB  
Article
A Hierarchical RNN-LSTM Model for Multi-Class Outage Prediction and Operational Optimization in Microgrids
by Nouman Liaqat, Muhammad Zubair, Aashir Waleed, Muhammad Irfan Abid and Muhammad Shahid
Electricity 2025, 6(4), 55; https://doi.org/10.3390/electricity6040055 - 1 Oct 2025
Abstract
Microgrids are becoming an integral part of modern energy systems, providing locally sourced, resilient energy and enabling efficient energy use. However, microgrid operations can be greatly affected by sudden environmental changes, deviating demand, and unexpected outages. In particular, extreme climatic events expose the vulnerability of microgrid infrastructure, often leading to an increased risk of system-wide outages. Thus, successful microgrid operation relies on timely and accurate outage predictions. This research proposes a data-driven machine learning framework for the optimized operation of a microgrid and predictive outage detection, using a Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) architecture that captures the data's inherent temporal structure. A time-aware embedding and masking strategy handles categorical and sparse temporal features, while mutual information-based feature selection ensures that only the most relevant and interpretable inputs are retained for prediction. Moreover, the model copes with rapid power fluctuations by learning long-term dependencies within historical and real-time observation streams. Two datasets are utilized: a locally developed real-time dataset collected from the 5 MW microgrid of the Maple Cement Factory in Mianwali and a 15-year national power outage dataset obtained from Kaggle. Both datasets underwent intensive preprocessing, normalization, and tokenization to transform raw readings into machine-readable sequences. The proposed approach attained an accuracy of 86.52% on the real-time dataset and 84.19% on the Kaggle dataset, outperforming conventional models in detecting sequential outage patterns. It also achieved a precision of 86%, a recall of 86.20%, and an F1-score of 86.12%, surpassing models such as CNN, XGBoost, SVM, and various static classifiers.
In contrast to these traditional approaches, the RNN-LSTM’s ability to leverage temporal context makes it a more effective and intelligent choice for real-time outage prediction and microgrid optimization. Full article
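The mutual information-based feature selection mentioned in the abstract can be sketched for discretized features as follows. This is a minimal sketch, assuming discrete inputs; the function names and data layout are illustrative, not the paper's code.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_features(features, target, k):
    """Rank named feature columns by MI with the target and keep the top k,
    so only the most relevant inputs are retained for prediction."""
    ranked = sorted(features,
                    key=lambda f: mutual_information(features[f], target),
                    reverse=True)
    return ranked[:k]
```

A feature that perfectly predicts the target scores the target's full entropy, while an independent feature scores near zero, which is what makes the ranking interpretable.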

21 pages, 3479 KB  
Article
A Comprehensive Methodology for Soft Error Rate (SER) Reduction in Clock Distribution Network
by Jorge Johanny Saenz-Noval, Umberto Gatti and Cristiano Calligaro
Chips 2025, 4(4), 39; https://doi.org/10.3390/chips4040039 - 24 Sep 2025
Cited by 1 | Viewed by 1077
Abstract
Single Event Transients (SETs) in clock-distribution networks are a major source of soft errors in synchronous systems. We present a practical framework that assesses SET risk early in the design cycle, before layout and parasitics, using a Vulnerability Function (VF) derived from Verilog fault injection. This framework guides targeted Engineering Change Orders (ECOs), such as clock-net remapping, re-routing, and the selective insertion of SET filters, within a reproducible open-source flow (Yosys, OpenROAD, OpenSTA). A new analytical Soft Error Rate (SER) model for clock trees is also proposed, which decomposes contributions from the root, intermediate levels, and leaves, and is calibrated by SPICE-measured propagation probabilities, area, and particle flux. When coupled with throughput, this model yields a frequency-aware system-level Bit Error Rate (BERsys). The methodology was validated on a First-In First-Out (FIFO) memory, demonstrating a significant vulnerability reduction of approximately 3.35× in READ mode and 2.67× in WRITE mode. Frequency sweeps show monotonic decreases in both clock-tree vulnerability and BERsys at higher clock frequencies, a trend attributed to temporal masking and throughput effects. Cross-node SPICE characterization between 65 nm and 28 nm reveals a technology-dependent effect: for the same injected charge, the 28 nm process produces a shorter root-level pulse, which lowers the propagation probability relative to 65 nm and shifts the optimal clock-tree partition. These findings underscore the framework’s key innovations: a technology-independent, early-stage VF for ranking critical clock nets; a clock-tree SER model calibrated by measured propagation probabilities; an ECO loop that converts VF insights into concrete hardening actions; and a fully reproducible open-source implementation. 
The paper’s scope is architectural and pre-layout, with extensions to broader circuit classes and a full electrical analysis outlined for future work. Full article
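The level-wise decomposition of the analytical SER model, and the frequency-aware BERsys obtained by coupling it with throughput, can be sketched as below. The names, units, and example values are illustrative assumptions, not the paper's calibrated figures.

```python
def clock_tree_ser(levels, flux):
    """Sum per-level SER contributions (root, intermediate levels, leaves).
    Each level carries a sensitive area (cm^2) and a SPICE-calibrated
    probability that an SET injected there propagates to a sequential
    element; flux is in particles / (cm^2 * s). Returns errors per second."""
    return sum(flux * lv["area"] * lv["p_prop"] for lv in levels)

def system_ber(ser, throughput_bits_per_s):
    """Frequency-aware system-level BER: soft errors per delivered bit.
    A higher clock frequency raises throughput, lowering BERsys for a
    given SER, consistent with the trend reported in the abstract."""
    return ser / throughput_bits_per_s
```

In this form, hardening a level (lowering its `p_prop`, e.g. by inserting an SET filter) reduces its term in the sum, which is what the ECO loop exploits.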

30 pages, 2873 KB  
Article
Quasar—A Process Variability-Aware Radiation Robustness Evaluation Tool
by Bernardo Borges Sandoval, Lucas Yuki Imamura, Ana Flávia D. Reis, Leonardo Heitich Brendler, Rafael B. Schvittz and Cristina Meinhardt
Electronics 2025, 14(15), 3131; https://doi.org/10.3390/electronics14153131 - 6 Aug 2025
Viewed by 859
Abstract
This work presents Quasar, an open-source tool developed to accelerate the characterization of how variability effects impact radiation sensitivity in digital circuits. Quasar receives a SPICE netlist as input and automatically determines robustness metrics, such as the critical Linear Energy Transfer, for every configuration in which a Single Event Transient fault can propagate an error. The tool can handle circuits ranging from small basic cells to medium-sized multi-gate circuits in a few seconds, speeding up the traditional fault-injection mechanism based on a large number of electrical simulations. The tool's workflow exploits logical masking to reduce the design-space exploration, i.e., the necessary number of electrical simulations, and uses regression methods to speed up variability evaluations. Quasar has already shown the potential to provide useful results, and a prototype has been published. This work presents a more polished and complete version of the tool, one that optimizes the search process and allows not only for a fast evaluation of a circuit's radiation robustness, but also for an analysis of how fabrication-process parameters, such as Work Function Fluctuation, impact this robustness. Full article
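The logical-masking pruning that shrinks the simulation count can be illustrated with the classic controlling-value rule. The function names and fault-record layout below are assumptions for illustration, not Quasar's actual interface.

```python
def is_logically_masked(gate, side_inputs):
    """A transient on one input of a gate cannot reach the output when any
    other (side) input holds the gate's controlling value:
    0 for AND/NAND, 1 for OR/NOR."""
    controlling = {"AND": 0, "NAND": 0, "OR": 1, "NOR": 1}[gate]
    return controlling in side_inputs

def prune_fault_sites(fault_sites):
    """Keep only fault sites whose SET can propagate, reducing the number
    of electrical simulations that must be run."""
    return [f for f in fault_sites
            if not is_logically_masked(f["gate"], f["side_inputs"])]
```

Every pruned site is one fewer SPICE run, which is how masking-aware exploration turns an exhaustive fault-injection campaign into a few seconds of work on small circuits.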

25 pages, 27219 KB  
Article
KCUNET: Multi-Focus Image Fusion via the Parallel Integration of KAN and Convolutional Layers
by Jing Fang, Ruxian Wang, Xinglin Ning, Ruiqing Wang, Shuyun Teng, Xuran Liu, Zhipeng Zhang, Wenfeng Lu, Shaohai Hu and Jingjing Wang
Entropy 2025, 27(8), 785; https://doi.org/10.3390/e27080785 - 24 Jul 2025
Cited by 1 | Viewed by 659
Abstract
Multi-focus image fusion (MFIF) is an image-processing technique that aims to generate fully focused images by integrating source images from different focal planes. However, the defocus spread effect (DSE) often leads to blurred or jagged focus/defocus boundaries in fused images, degrading image quality. To address this issue, this paper proposes a novel model that embeds the Kolmogorov–Arnold network in parallel with convolutional layers within the U-Net architecture (KCUNet). The model keeps the spatial dimensions of the feature map constant to preserve high-resolution details while progressively increasing the number of channels to capture multi-level features at the encoding stage. In addition, KCUNet incorporates a content-guided attention mechanism to enhance edge information processing, which is crucial for DSE reduction and edge preservation. The model's performance is optimized through a hybrid loss function that evaluates several aspects, including edge alignment, mask prediction, and image quality. Finally, comparative evaluations against 15 state-of-the-art methods demonstrate KCUNet's superior performance in both qualitative and quantitative analyses. Full article
(This article belongs to the Section Signal and Data Analysis)
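The core decision rule of mask-based MFIF, taking each pixel from whichever source image is in focus according to a predicted focus mask, can be sketched as a pixel-wise blend. This is a generic illustration of the fusion step, not KCUNet's network; the function name is an assumption.

```python
def fuse_with_mask(img_a, img_b, mask):
    """Pixel-wise multi-focus fusion: take img_a where the predicted focus
    mask is 1 and img_b where it is 0. Soft mask values blend linearly,
    which smooths the focus/defocus boundary where the DSE appears."""
    return [[m * a + (1 - m) * b for a, b, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(img_a, img_b, mask)]
```

The quality of the fused boundary therefore depends entirely on the predicted mask, which is why the hybrid loss includes a mask-prediction term alongside edge alignment and image quality.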
