Search Results (52)

Search Parameters:
Keywords = outlier awareness

26 pages, 3625 KiB  
Article
Deep-CNN-Based Layout-to-SEM Image Reconstruction with Conformal Uncertainty Calibration for Nanoimprint Lithography in Semiconductor Manufacturing
by Jean Chien and Eric Lee
Electronics 2025, 14(15), 2973; https://doi.org/10.3390/electronics14152973 - 25 Jul 2025
Viewed by 288
Abstract
Nanoimprint lithography (NIL) has emerged as a promising route to sub-10 nm patterning at low cost; yet robust process control remains difficult because of time-consuming physics-based simulators and the scarcity of labeled SEM data. Here we propose a data-efficient, two-stage deep-learning framework that directly reconstructs post-imprint SEM images from binary design layouts and simultaneously delivers calibrated pixel-by-pixel uncertainty. First, a shallow U-Net is trained with conformalized quantile regression (CQR) to output 90% prediction intervals with statistically guaranteed coverage. Per-level errors on a small calibration dataset then drive an outlier-weighted, encoder-frozen transfer fine-tuning phase that refines only the decoder, focusing its capacity explicitly on regions of spatial uncertainty. On independent test layouts, our fine-tuned model significantly reduces the mean absolute error (MAE) from 0.0365 to 0.0255 and raises coverage from 0.904 to 0.926, while cutting labeled data and GPU time by 80% and 72%, respectively. The resulting uncertainty maps highlight spatial regions associated with error hotspots and support defect-aware optical proximity correction (OPC) with fewer guard-band iterations. Beyond OPC, the model-agnostic, modular design of the pipeline allows flexible integration into other critical stages of the semiconductor manufacturing workflow, such as imprinting, etching, and inspection, where such predictions are critical for achieving higher precision, efficiency, and overall process robustness, which is the ultimate motivation of this study. Full article
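
The abstract above names conformalized quantile regression (CQR) as the calibration step. As a hedged illustration only, and not the authors' pipeline, the sketch below shows the split-conformal adjustment that turns a quantile model's raw 5%/95% predictions into intervals with roughly 90% coverage; the array names, synthetic data, and quantile-model interface are assumptions.

```python
import numpy as np

def cqr_calibrate(y_cal, lo_cal, hi_cal, alpha=0.1):
    """Split-conformal adjustment for quantile-regression intervals (CQR).

    y_cal, lo_cal, hi_cal: calibration targets and the model's predicted lower/upper
    quantiles (e.g., 5% and 95%) on the calibration set. Returns the margin to add to
    both interval ends so adjusted intervals cover ~(1 - alpha) of future points.
    """
    # Conformity score: how far each calibration point falls outside its interval.
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level)

def cqr_predict(lo_test, hi_test, margin):
    """Widen (or shrink) predicted intervals by the calibrated margin."""
    return lo_test - margin, hi_test + margin

# Toy usage with synthetic pixel intensities standing in for SEM gray levels.
rng = np.random.default_rng(0)
y_cal = rng.normal(0.5, 0.1, size=2000)
lo_cal = y_cal - 0.12 + rng.normal(0, 0.02, 2000)
hi_cal = y_cal + 0.12 + rng.normal(0, 0.02, 2000)
margin = cqr_calibrate(y_cal, lo_cal, hi_cal, alpha=0.1)
lo, hi = cqr_predict(np.array([0.4]), np.array([0.6]), margin)
```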

31 pages, 4220 KiB  
Article
A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing
by Fateme Mazloomi, Shahram Shah Heydari and Khalil El-Khatib
Future Internet 2025, 17(7), 315; https://doi.org/10.3390/fi17070315 - 19 Jul 2025
Viewed by 284
Abstract
Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments. Full article
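
The abstract does not spell out the aggregation formulas, so the following is only a rough sketch of the general idea it describes: weighting peer updates by a performance score and down-weighting statistical outliers before averaging at an edge server. The weighting scheme, threshold, and flattened-parameter representation are illustrative assumptions, not MS-FL's actual rules.

```python
import numpy as np

def robust_weighted_aggregate(updates, val_scores, z_thresh=2.5):
    """Aggregate client model updates at one edge server.

    updates:    list of 1-D parameter vectors (flattened model updates).
    val_scores: per-client validation accuracy used as a quality weight.
    z_thresh:   updates whose distance from the median update exceeds z_thresh
                robust standard deviations are down-weighted as outliers.
    """
    U = np.stack(updates)                      # (n_clients, n_params)
    med = np.median(U, axis=0)
    dist = np.linalg.norm(U - med, axis=1)     # distance of each update from the median update
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    z = np.abs(dist - np.median(dist)) / (1.4826 * mad)
    outlier_penalty = np.where(z > z_thresh, 0.1, 1.0)   # soft outlier mitigation
    w = np.asarray(val_scores) * outlier_penalty
    w = w / w.sum()
    return (w[:, None] * U).sum(axis=0)

# Toy usage: three well-behaved clients and one outlying update.
rng = np.random.default_rng(1)
updates = [rng.normal(0, 0.01, 100) for _ in range(3)] + [rng.normal(5, 0.01, 100)]
agg = robust_weighted_aggregate(updates, val_scores=[0.91, 0.88, 0.90, 0.40])
```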

25 pages, 1524 KiB  
Article
Detecting Emerging DGA Malware in Federated Environments via Variational Autoencoder-Based Clustering and Resource-Aware Client Selection
by Ma Viet Duc, Pham Minh Dang, Tran Thu Phuong, Truong Duc Truong, Vu Hai and Nguyen Huu Thanh
Future Internet 2025, 17(7), 299; https://doi.org/10.3390/fi17070299 - 3 Jul 2025
Viewed by 398
Abstract
Domain Generation Algorithms (DGAs) remain a persistent technique used by modern malware to establish stealthy command-and-control (C&C) channels, thereby evading traditional blacklist-based defenses. Detecting such evolving threats is especially challenging in decentralized environments where raw traffic data cannot be aggregated due to privacy or policy constraints. To address this, we present FedSAGE, a security-aware federated intrusion detection framework that combines Variational Autoencoder (VAE)-based latent representation learning with unsupervised clustering and resource-efficient client selection. Each client encodes its local domain traffic into a semantic latent space using a shared, pre-trained VAE trained solely on benign domains. These embeddings are clustered via affinity propagation to group clients with similar data distributions and identify outliers indicative of novel threats without requiring any labeled DGA samples. Within each cluster, FedSAGE selects only the fastest clients for training, balancing computational constraints with threat visibility. Experimental results from the multi-zone DGA dataset show that FedSAGE improves detection accuracy by up to 11.6% and reduces energy consumption by up to 93.8% compared to standard FedAvg under non-IID conditions. Notably, the latent clustering perfectly recovers ground-truth DGA family zones, enabling effective anomaly detection in a fully unsupervised manner while remaining privacy-preserving. These findings demonstrate that FedSAGE is a practical and lightweight approach for decentralized detection of evasive malware, offering a viable solution for secure and adaptive defense in resource-constrained edge environments. Full article
(This article belongs to the Special Issue Security of Computer System and Network)
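
As a rough sketch of the clustering step described above, assuming each client is summarized by the centroid of its VAE latent embeddings and using scikit-learn's affinity propagation, which need not match the authors' implementation or outlier rule:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Assume each client has encoded its local domains into a latent space with the
# shared, pre-trained VAE; we summarize each client by its mean embedding.
rng = np.random.default_rng(2)
benign_clients = rng.normal(0.0, 0.3, size=(8, 16))    # clients seeing benign-like traffic
dga_clients = rng.normal(3.0, 0.3, size=(2, 16))       # clients seeing a novel DGA family
client_centroids = np.vstack([benign_clients, dga_clients])

ap = AffinityPropagation(random_state=0).fit(client_centroids)
labels = ap.labels_

# Small clusters far from the bulk can be flagged as outliers / potential novel threats.
sizes = np.bincount(labels)
for cluster_id, size in enumerate(sizes):
    if size <= 2:
        print(f"cluster {cluster_id} ({size} clients) flagged for inspection")
```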

18 pages, 5564 KiB  
Article
Flood Exposure Patterns Induced by Sea Level Rise in Coastal Urban Areas of Europe and North Africa
by Wiktor Halecki and Dawid Bedla
Water 2025, 17(13), 1889; https://doi.org/10.3390/w17131889 - 25 Jun 2025
Viewed by 517
Abstract
Coastal cities and low-lying areas are increasingly vulnerable, and accurate data is needed to identify where interventions are most required. We compared 53 cities affected by a 1 m increase in land levels and a 2 m rise in sea levels. The geographical scope of this study covered selected coastal cities in Europe and northern Africa. Data were sourced from the European Environment Agency (EEA) in the form of prepared datasets, which were further processed for analysis. Statistical methods were applied to compare the extent of urban flooding under two sea level rise scenarios—1 m and 2 m—by calculating the percentage of affected urban areas. To assess social vulnerability, the analysis included several variables: MAPF65 (Mean Area Potentially Flooded for people aged 65 and older, indicating elderly exposure), Age (the percentage of the population aged 65+ in each city), MAPF (Mean Area Potentially Flooded, representing the average share of urban area at risk of flooding), and Unemployment Ratio (the percentage of unemployed individuals living in the areas potentially affected by sea level rise). We utilized t-tests to analyze the means of two datasets, yielding a mean difference of 2.9536. Both parametric and bootstrap confidence intervals included zero, and the p-values from the t-tests (0.289 and 0.289) indicated no statistically significant difference between the means. The Bayes factor (0.178) provided substantial evidence supporting equal means, while Cohen’s D (0.099) indicated a very small effect size. Ceuta’s flooding value (502.8) was identified as a significant outlier (p < 0.05), indicating high flood risk. A Grubbs’ test confirmed Ceuta as a significant outlier. A Wilcoxon test highlighted significant deviations between the medians, with a p << 0.001, demonstrating systematic discrepancies tied to flood frequency and sea level anomalies. These findings illuminated critical disparities in flooding trends across specific locations, offering essential insights for urban planning and mitigation strategies in cities vulnerable to rising sea levels and extreme weather patterns. Information on coastal flooding provides awareness of how rising sea levels affect at-risk areas. Examining factors such as MAPF and population data enables the detection of the most threatened zones and supports targeted action. These perceptions are essential for strengthening climate resilience, improving emergency planning, and directing resources where they are needed most. Full article
(This article belongs to the Section Oceans and Coastal Zones)
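
The statistics reported above (independent-samples t-test, Wilcoxon test, Grubbs' outlier test) can be reproduced in outline with SciPy; the flooded-area values below are placeholders, and Grubbs' test is written out by hand because SciPy does not provide it directly.

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier; returns (G, critical value)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    G = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return G, G_crit

# Placeholder flooded-area values for the 1 m and 2 m scenarios across 53 cities.
rng = np.random.default_rng(3)
flood_1m = rng.gamma(2.0, 5.0, size=53)
flood_2m = flood_1m + rng.gamma(1.5, 2.0, size=53)

t_stat, p_t = stats.ttest_ind(flood_1m, flood_2m)        # difference in means
w_stat, p_w = stats.wilcoxon(flood_1m, flood_2m)         # paired median shift
G, G_crit = grubbs_test(np.append(flood_2m, 502.8))      # an extreme value such as Ceuta's
print(p_t, p_w, G > G_crit)
```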

21 pages, 4513 KiB  
Article
An Enhanced ZigBee-Based Indoor Localization Method Using Multi-Stage RSSI Filtering and LQI-Aware MLE
by Jianming Li, Shuyan Yu, Zhe Wei and Zhanpeng Zhou
Sensors 2025, 25(9), 2947; https://doi.org/10.3390/s25092947 - 7 May 2025
Cited by 1 | Viewed by 641
Abstract
Accurate indoor localization in wireless sensor networks remains a non-trivial challenge, particularly in complex environments characterized by signal variability and multipath propagation. This study presents a ZigBee-based localization approach that integrates multi-stage preprocessing of received signal strength indicator (RSSI) data with a reliability-aware extension of the maximum likelihood estimation (MLE) algorithm. To improve measurement stability, a hybrid filtering framework combining Kalman filtering, Dixon’s Q test, Gaussian smoothing, and mean averaging is applied to reduce the influence of noise and outliers. Building on the filtered data, the proposed method introduces a noise and link quality indicator (LQI)-based dynamic weighting mechanism that adjusts the contribution of each distance estimate during localization. The approach was evaluated under simulated and semi-physical non-line-of-sight (NLOS) indoor conditions designed to reflect practical deployment scenarios. While based on a limited set of representative test points, the method yielded improved positioning consistency and achieved an average accuracy gain of 11.7% over conventional MLE in the tested environments. These results suggest that the proposed method may offer a feasible solution for resource-constrained localization applications requiring robustness to signal degradation. Full article
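
Two of the preprocessing stages named above, a scalar Kalman filter and Dixon's Q test, can be sketched as follows; the noise constants, window size, and critical value are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=4.0):
    """Scalar Kalman filter for a slowly varying RSSI level.

    q: process-noise variance, r: measurement-noise variance (illustrative values).
    """
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update
        p *= (1 - k)
        out.append(x)
    return np.array(out)

def dixon_q_reject(window, q_crit=0.466):
    """Reject at most one extreme value in a small window via Dixon's Q test.

    q_crit=0.466 corresponds roughly to n=10 at the 95% level; adjust for other n.
    """
    w = np.sort(np.asarray(window, dtype=float))
    span = w[-1] - w[0]
    if span == 0:
        return w
    if (w[1] - w[0]) / span > q_crit:          # low-side outlier
        return w[1:]
    if (w[-1] - w[-2]) / span > q_crit:        # high-side outlier
        return w[:-1]
    return w

# Toy usage: noisy RSSI samples (dBm) containing one spurious reading.
rssi = np.array([-62, -61, -63, -60, -62, -45, -61, -63, -62, -61], dtype=float)
cleaned = dixon_q_reject(rssi)
smoothed = kalman_1d(cleaned)
estimate = smoothed.mean()          # final mean-averaging stage
```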

24 pages, 3848 KiB  
Article
Efficient Deep Learning Model Compression for Sensor-Based Vision Systems via Outlier-Aware Quantization
by Joonhyuk Yoo and Guenwoo Ban
Sensors 2025, 25(9), 2918; https://doi.org/10.3390/s25092918 - 5 May 2025
Viewed by 878
Abstract
With the rapid growth of sensor technology and computer vision, efficient deep learning models are essential for real-time image feature extraction in resource-constrained environments. However, most existing quantized deep neural networks (DNNs) are highly sensitive to outliers, leading to severe performance degradation in low-precision settings. Our study reveals that outliers extending beyond the nominal weight distribution significantly increase the dynamic range, thereby reducing quantization resolution and affecting sensor-based image analysis tasks. To address this, we propose an outlier-aware quantization (OAQ) method that effectively reshapes weight distributions to enhance quantization accuracy. By analyzing previous outlier-handling techniques using structural similarity (SSIM) measurement results, we demonstrated that OAQ significantly reduced the negative impact of outliers while maintaining computational efficiency. Notably, OAQ was orthogonal to existing quantization schemes, making it compatible with various quantization methods without additional computational overhead. Experimental results on multiple CNN architectures and quantization approaches showed that OAQ effectively mitigated quantization errors. In post-training quantization (PTQ), our 4-bit OAQ ResNet20 model achieved improved accuracy compared with full-precision counterparts, while in quantization-aware training (QAT), OAQ enhanced 2-bit quantization performance by 43.55% over baseline methods. These results confirmed the potential of OAQ for optimizing deep learning models in sensor-based vision applications. Full article
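
The abstract does not give OAQ's exact reshaping rule, so the sketch below only illustrates the underlying observation: clipping weights that lie far outside the bulk of the distribution keeps the dynamic range, and hence the quantization step size, from being dictated by a few outliers. The clipping threshold and the uniform quantizer are assumptions.

```python
import numpy as np

def clip_outliers(w, k=6.0):
    """Clip weights beyond k robust standard deviations of the distribution.

    Outliers inflate the dynamic range and waste quantization levels; clipping
    them trades a small clipping error for much finer resolution on the bulk.
    """
    med = np.median(w)
    mad = np.median(np.abs(w - med)) * 1.4826 + 1e-12
    return np.clip(w, med - k * mad, med + k * mad)

def uniform_quantize(w, n_bits=4):
    """Symmetric uniform quantization to n_bits levels."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

# Toy layer: a Gaussian weight bulk plus a handful of large outliers.
rng = np.random.default_rng(4)
weights = np.concatenate([rng.normal(0, 0.02, 10000), rng.uniform(0.5, 1.0, 5)])
naive = uniform_quantize(weights, 4)
aware = uniform_quantize(clip_outliers(weights), 4)
print(np.mean((weights - naive) ** 2), np.mean((weights - aware) ** 2))
```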

32 pages, 8901 KiB  
Article
Energy Benchmarking Analysis of Multi-Family Housing Unit in Algiers, Algeria
by Marwa Afaifia, Meskiana Boulahia, Kahina Amal Djiar, Nariman Aicha Lamraoui, Amina Naouel Mansouri, Lyna Milat, Sihem Chourouk Serrai and Jacques Teller
Sustainability 2025, 17(9), 4120; https://doi.org/10.3390/su17094120 - 2 May 2025
Viewed by 1299
Abstract
Improving residential energy efficiency is essential for optimizing energy consumption. This article analyzes the electricity and natural gas consumption of a benchmark multi-family housing model in Algiers, based on data from 295 residential units collected over three consecutive years (2022, 2023, and 2024). A comprehensive approach combining data visualization, statistical analysis, a clustering approach, a tariff structure assessment, and an energy performance index is applied to assess residential energy-consumption trends. The findings reveal opposing trends between electricity and natural gas consumption. The electricity demand increased steadily (+15% from 2022 to 2024), particularly in the third trimester (summer), where 40% of the housing unit consumption exceeded 1000 kWh per trimester, indicating a growing reliance on air conditioning. In contrast, natural gas consumption declined significantly, with winter usage dropping by more than 20%, suggesting improved heating efficiency, better thermal insulation, and/or milder weather conditions. The clustering analysis also highlights a shift toward more homogenous consumption profiles, with fewer outliers and a narrower interquartile range, indicating greater energy efficiency across households. The results underscore the need for adaptive energy pricing policies and targeted household awareness programs. They further suggest that incentive-based measures, particularly during peak summer periods, could mitigate demand spikes and enhance energy system resilience. The energy benchmarking approach developed in this study can support decision-makers in adjusting tariff structures according to household energy profiles to improve overall energy efficiency. Full article
(This article belongs to the Section Energy Sustainability)
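
As a minimal, assumption-laden illustration of the kind of spread analysis mentioned above (interquartile range and outlier counts per year), using placeholder consumption figures rather than the study's data:

```python
import numpy as np

def iqr_profile(consumption):
    """Interquartile range and 1.5*IQR outlier count for one year of unit-level data."""
    q1, q3 = np.percentile(consumption, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    n_out = int(np.sum((consumption < lo) | (consumption > hi)))
    return iqr, n_out

# Placeholder trimester electricity consumption (kWh) for 295 units in two years.
rng = np.random.default_rng(5)
kwh_2022 = rng.lognormal(mean=6.6, sigma=0.45, size=295)
kwh_2024 = rng.lognormal(mean=6.75, sigma=0.30, size=295)   # higher level, narrower spread
for year, data in (("2022", kwh_2022), ("2024", kwh_2024)):
    iqr, n_out = iqr_profile(data)
    print(year, round(iqr, 1), "kWh IQR,", n_out, "outlier units")
```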

21 pages, 481 KiB  
Article
Adaptive Cluster-Based Normalization for Robust TOPSIS in Multicriteria Decision-Making
by Vitor Anes and António Abreu
Appl. Sci. 2025, 15(7), 4044; https://doi.org/10.3390/app15074044 - 7 Apr 2025
Viewed by 624
Abstract
In multicriteria decision-making (MCDM), methods such as TOPSIS are essential for evaluating and comparing alternatives across multiple criteria. However, traditional normalization techniques often struggle with datasets containing outliers, large variances, or heterogeneous measurement units, which can lead to skewed or biased rankings. To address these challenges, this paper proposes an adaptive, cluster-based normalization approach, demonstrated through a real-world logistics case study involving the selection of a host city for an international event. The method groups alternatives into clusters based on similarities in criterion values and applies logarithmic normalization within each cluster. This localized strategy reduces the influence of outliers and ensures that scaling adjustments reflect the specific characteristics of each group. In the case study—where cities were evaluated based on cost, infrastructure, safety, and accessibility—the cluster-based normalization method yielded more stable and balanced rankings, even in the presence of significant data variability. By reducing the influence of outliers through logarithmic normalization and allowing predefined cluster profiles to reflect expert judgment, the method improves fairness and adaptability. These features strengthen TOPSIS’s ability to deliver accurate, balanced, and context-aware decisions in complex, real-world scenarios. Full article
(This article belongs to the Special Issue Fuzzy Control Systems: Latest Advances and Prospects)
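
A hedged sketch of the normalization stage described above, with hand-assigned cluster labels standing in for the paper's expert-defined profiles; it shows how logarithmic normalization keeps ordinary alternatives distinguishable when one criterion contains an extreme value, before the normalized matrix would feed a standard TOPSIS ranking.

```python
import numpy as np

def log_normalize(column):
    """Logarithmic normalization of one criterion: damps outliers before scaling to [0, 1]."""
    logs = np.log1p(np.asarray(column, dtype=float))
    return logs / logs.max()

def cluster_log_normalize(X, labels):
    """Apply log normalization per criterion within predefined clusters of similar alternatives."""
    N = np.zeros_like(X, dtype=float)
    for c in np.unique(labels):
        idx = labels == c
        N[idx] = np.apply_along_axis(log_normalize, 0, X[idx])
    return N

# Toy host-city costs with two extreme values: plain max-normalization squashes the
# ordinary cities into a narrow band, while the log version roughly doubles their spread.
cost = np.array([70.0, 65.0, 60.0, 75.0, 900.0, 820.0])
print((cost / cost.max()).round(3))
print(log_normalize(cost).round(3))

# Full decision matrix (cost, infrastructure, safety, accessibility) with an
# expert-style grouping of mid-cost vs. high-cost candidates.
X = np.column_stack([cost, [8, 7, 6, 8, 9, 9], [9, 8, 9, 7, 6, 8], [7, 8, 9, 8, 5, 6]]).astype(float)
labels = np.array([0, 0, 0, 0, 1, 1])
N = cluster_log_normalize(X, labels)   # this matrix would then enter the TOPSIS ranking step
```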

15 pages, 6487 KiB  
Article
SA-Net: Leveraging Spatial Correlations Spatial-Aware Net for Multi-Perspective Robust Estimation Algorithm
by Yuxiang Shao, Longyang Zhou, Xiang Li, Chunsheng Feng and Xinyu Jin
Algorithms 2025, 18(2), 65; https://doi.org/10.3390/a18020065 - 26 Jan 2025
Viewed by 955
Abstract
Robust estimation aims to provide accurate and reliable parameter estimations, particularly when data are affected by noise or outliers. Traditional methods like random sample consensus (RANSAC) struggle with handling outliers because they treat all observations as equally important. A series of advanced deep learning methods have recently emerged, which use deep learning techniques to estimate the probability of each sample being selected, prioritizing higher confidence for observations that are closer to the ground truth model. However, optimizing solely based on proximity to the ground truth model does not guarantee higher-quality estimations. Meanwhile spatial relationships between the data points in the minimum sampled set also influence the accuracy of the final estimated model. To address these issues, we propose Spatial-Aware Net (SA-Net), a dual-branch neural network that integrates both confidence and spatial encodings. SA-Net employs a confidence distribution encoder to learn the confidence distribution and a spatial distribution encoder to capture spatial correlations between point features. By incorporating multi-perspective sampling, the minimum sample set can be selected based on different spatial distributions in the output of the neural network, and applying Chamfer Loss constraints, our approach improves model optimization and effectively mitigates suboptimal solutions. Extensive experiments demonstrate that SA-Net outperforms the state of the art across various real-world robust estimation tasks. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

21 pages, 1074 KiB  
Article
G&G Attack: General and Geometry-Aware Adversarial Attack on the Point Cloud
by Geng Chen, Zhiwen Zhang, Yuanxi Peng, Chunchao Li and Teng Li
Appl. Sci. 2025, 15(1), 448; https://doi.org/10.3390/app15010448 - 6 Jan 2025
Viewed by 962
Abstract
Deep neural networks have been shown to produce incorrect predictions when imperceptible perturbations are introduced into the clean input. This phenomenon has garnered significant attention and extensive research in 2D images. However, related work on point clouds is still in its infancy. Current methods suffer from issues such as generated point outliers and poor attack generalization. Consequently, it is not feasible to rely solely on overall or geometry-aware attacks to generate adversarial samples. In this paper, we integrate adversarial transfer networks with the geometry-aware method to introduce adversarial loss into the attack target. A state-of-the-art autoencoder is employed, and sensitivity maps are utilized. We use the autoencoder to generate a sufficiently deceptive mask that covers the original input, adjusting the critical subset through a geometry-aware trick to distort the point cloud gradient. Our proposed approach is quantitatively evaluated in terms of the attack success rate (ASR), imperceptibility, and transferability. Compared to other baselines on ModelNet40, our method demonstrates an approximately 38% improvement in ASR for black-box transferability query attacks, with an average query count of around 7.84. Comprehensive experimental results confirm the superiority of our method. Full article

27 pages, 1082 KiB  
Article
Cybersecurity Practices and Supply Chain Performance: The Case of Jordanian Banks
by Saleh Fahed Al-Khatib, Yara Yousef Ibrahim and Mohammad Alnadi
Adm. Sci. 2025, 15(1), 1; https://doi.org/10.3390/admsci15010001 - 24 Dec 2024
Viewed by 2122
Abstract
This study explores the impact of cybersecurity practices on supply chain performance in the Jordanian banking sector. A survey was used to obtain data from managers and customers. Data from 40 managers’ and 250 digital banking customers’ surveys were collected, of which 220 were valid to be analyzed using IBM SPSS V26 and PLS-SEM V4; 30 responses were excluded due to invalidity issues such as zero standard deviation and outliers identified using Cook’s distance. This study empirically demonstrates the significant positive impact of cybersecurity practices on Jordanian banking supply chain performance. Specifically, the confidentiality, integrity, and availability dimensions strongly correlate with the banks’ supply chain performance. The results indicate that managers have a high degree of cybersecurity awareness and implementation, emphasizing the significance of regular cybersecurity practice training and discussions. Customers desired improved communication and explanation on cybersecurity issues from their banks despite being generally satisfied with cybersecurity. This study’s significant contribution lies in identifying the actual levels of cybersecurity practices and supply chain performance in the Jordanian banking sector and their interaction from both managers’ and customers’ perspectives. Future investigations into the long-term impacts of cybersecurity investments and the comparative examination of cybersecurity methods across other sectors or locations would benefit greatly from this research’s insightful findings. Practically, the results highlight the value of investing in cutting-edge cybersecurity measures, training staff, and effectively explaining procedures and protocols to clients. All of these measures together improve efficiency, trust, and collaboration throughout the banking supply chain. Full article
(This article belongs to the Special Issue Supply Chain in the New Business Environment)
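
The abstract mentions excluding responses flagged by Cook's distance; a generic version of that screening step, with placeholder survey variables and statsmodels' OLS influence measures rather than the authors' exact model or cutoff, looks like this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder survey data: cybersecurity practice score vs. perceived supply chain performance.
rng = np.random.default_rng(6)
df = pd.DataFrame({"cybersecurity": rng.normal(4.0, 0.5, 250)})
df["performance"] = 0.8 * df["cybersecurity"] + rng.normal(0, 0.3, 250)
df.loc[:4, "performance"] = 1.0            # a few aberrant responses

X = sm.add_constant(df[["cybersecurity"]])
model = sm.OLS(df["performance"], X).fit()
cooks_d = model.get_influence().cooks_distance[0]

# A common rule of thumb flags observations with Cook's distance above 4/n.
keep = cooks_d < 4 / len(df)
clean_df = df[keep]
print(f"{(~keep).sum()} responses flagged as influential outliers")
```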

18 pages, 1624 KiB  
Article
Self-Supervised, Multi-View, Semantics-Aware Anchor Clustering
by Kaibin Wei, Haifeng Li, Qing Liu and Xiongjian Zhang
Electronics 2024, 13(23), 4782; https://doi.org/10.3390/electronics13234782 - 4 Dec 2024
Cited by 1 | Viewed by 1012
Abstract
Data-driven artificial intelligence systems effectively enhance accuracy and robustness by utilizing multi-view learning to aggregate consistent and complementary information from multi-source data. As one of the most important branches of multi-view learning, multi-view anchor clustering greatly reduces the time complexity via learning similarity graphs between anchors and data instead of data-to-data similarities, which has gained widespread attention in data-driven artificial intelligence domains. However, two issues still exist in current methods: (1) They commonly utilize orthogonal regularization to enhance anchor diversity, which may lead to a distorted anchor distribution, e.g., some clusters might have few or even no corresponding anchors. (2) They only utilize view-sharing anchors to aggregate complementary and consistent information between views, which may fail to ensure anchor robustness due to the heterogeneity gap between views. To this end, self-supervised, multi-view, semantics-aware anchor clustering (SMA2C) is proposed, containing multi-view representation alignment, adaptive anchor selection, and global spectral optimization. Specifically, SMA2C devises dual-level contrastive learning on representations and clustering partitioning between views within a deep encoding–decoding architecture to achieve multi-granularity alignment of views with the heterogeneity gap. Meanwhile, SMA2C introduces adaptive anchor selection via filtering outliers and refining clusters to enhance correlations between clusters and anchors, which ensures the diversity and discriminability of anchors. Finally, extensive evaluations across four real-world datasets confirm that SMA2C establishes a new benchmark for multi-view anchor clustering. In particular, SMA2C achieves a 6.75% improvement in accuracy over the second-best result on the HW dataset. Full article
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence)

21 pages, 2515 KiB  
Article
Online Self-Learning-Based Raw Material Proportioning for Rotary Hearth Furnace and Intelligent Batching System Development
by Xianxia Zhang, Lufeng Wang, Shengjie Tang, Chang Zhao and Jun Yao
Appl. Sci. 2024, 14(19), 9126; https://doi.org/10.3390/app14199126 - 9 Oct 2024
Cited by 1 | Viewed by 1303
Abstract
With the increasing awareness of environmental protection, the rotary hearth furnace system has emerged as a key technology that facilitates a win-win situation for both environmental protection and enterprise economic benefits. This is attributed to its high flexibility in raw material utilization, capability of directly supplying blast furnaces, low energy consumption, and high zinc removal rate. However, the complexity of the raw material proportioning process coupled with the rotary hearth furnace system’s reliance on human labor results in a time-consuming and inefficient process. This paper innovatively introduces an intelligent formula method for proportioning raw materials based on online clustering algorithms and develops an intelligent batching system for rotary hearth furnaces. Firstly, the ingredients of raw materials undergo data preprocessing, which involves using the local outlier factor (LOF) method to detect any abnormal values, using Kalman filtering to smooth the data, and performing one-hot encoding to represent the different kinds of raw materials. Afterwards, the affinity propagation (AP) clustering method is used to evaluate past data on the ingredients of raw materials and their ratios. This analysis aims to extract information based on human experience with ratios and create a library of machine learning formulas. The incremental AP clustering algorithm is utilized to learn new ratio data and continuously update the machine learning formula library. To ensure that the formula meets the actual production performance requirements of the rotary hearth furnace, the machine learning formula is fine-tuned based on expert experience. The integration of machine learning and expert experience demonstrates good flexibility and satisfactory performance in the practical application of intelligent formulas for rotary hearth furnaces. An intelligent batching system is developed and executed at a steel plant in China. It shows an excellent user interface and significantly enhances batching efficiency and product quality. Full article
(This article belongs to the Special Issue Data Analysis and Mining: New Techniques and Applications)
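
A compressed, assumption-heavy sketch of the preprocessing and clustering chain named above (LOF screening, one-hot encoding of material type, affinity propagation over historical records); the columns, thresholds, and the way cluster means seed the formula library are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import LocalOutlierFactor
from sklearn.cluster import AffinityPropagation

# Placeholder batching history: material type, assay values, and the ratio actually used.
rng = np.random.default_rng(7)
hist = pd.DataFrame({
    "material": rng.choice(["dust", "sludge", "scale"], size=200),
    "fe_pct": rng.normal(45, 5, 200),
    "zn_pct": rng.normal(3, 0.8, 200),
    "ratio": rng.uniform(0.1, 0.5, 200),
})

# 1) Screen abnormal assay records with the local outlier factor (LOF).
inlier = LocalOutlierFactor(n_neighbors=20).fit_predict(hist[["fe_pct", "zn_pct"]]) == 1
hist = hist[inlier]

# 2) One-hot encode the material type and assemble the feature matrix.
onehot = pd.get_dummies(hist["material"]).to_numpy(dtype=float)
X = np.hstack([onehot, hist[["fe_pct", "zn_pct"]].to_numpy()])

# 3) Group similar historical records; each cluster's mean ratio seeds one formula
#    in the machine-learning formula library, to be fine-tuned by expert rules.
labels = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit_predict(X)
formula_library = hist.groupby(labels)["ratio"].mean()
print(formula_library.head())
```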

15 pages, 1823 KiB  
Article
Enhancing Recommendation Diversity and Novelty with Bi-LSTM and Mean Shift Clustering
by Yuan Yuan, Yuying Zhou, Xuanyou Chen, Qi Xiong and Hector Chimeremeze Okere
Electronics 2024, 13(19), 3841; https://doi.org/10.3390/electronics13193841 - 28 Sep 2024
Cited by 3 | Viewed by 1704
Abstract
In the digital age, personalized recommendation systems have become crucial for information dissemination and user experience. While traditional systems focus on accuracy, they often overlook diversity, novelty, and serendipity. This study introduces an innovative recommendation system model, Time-based Outlier Aware Recommender (TOAR), designed to address the challenges of content homogenization and information bubbles in personalized recommendations. TOAR integrates Neural Matrix Factorization (NeuMF), Bidirectional Long Short-Term Memory Networks (Bi-LSTM), and Mean Shift clustering to enhance recommendation accuracy, novelty, and diversity. The model analyzes temporal dynamics of user behavior and facilitates cross-domain knowledge exchange through feature sharing and transfer learning mechanisms. By incorporating an attention mechanism and unsupervised clustering, TOAR effectively captures important time-series information and ensures recommendation diversity. Experimental results on a news recommendation dataset demonstrate TOAR’s superior performance across multiple metrics, including AUC, precision, NDCG, and novelty, compared to traditional and deep learning-based recommendation models. This research provides a foundation for developing more intelligent and personalized recommendation services that balance accuracy with content diversity. Full article
(This article belongs to the Section Computer Science & Engineering)
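
TOAR's full architecture is beyond a short snippet, but its Mean Shift component, clustering candidate-item embeddings so the final list draws from several clusters and stays diverse, can be illustrated with scikit-learn; the embedding source and the per-cluster selection rule here are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Placeholder item embeddings (e.g., from the model's representation layers)
# for 60 candidate news articles, plus model-predicted relevance scores.
rng = np.random.default_rng(8)
item_emb = np.vstack([rng.normal(c, 0.3, size=(20, 8)) for c in (0.0, 2.0, 4.0)])
relevance = rng.uniform(0, 1, size=60)

labels = MeanShift().fit_predict(item_emb)

# Diversity-aware selection: take the most relevant item from each discovered cluster first.
recommended = []
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    recommended.append(idx[np.argmax(relevance[idx])])
print(recommended)
```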

16 pages, 6078 KiB  
Article
Matchability and Uncertainty-Aware Iterative Disparity Refinement for Stereo Matching
by Junwei Wang, Wei Zhou, Yujun Tang and Hanming Guo
Appl. Sci. 2024, 14(18), 8457; https://doi.org/10.3390/app14188457 - 19 Sep 2024
Viewed by 1276
Abstract
After significant progress in stereo matching, the pursuit of robust and efficient ill-posed-region disparity refinement methods remains challenging. To further improve the performance of disparity refinement, in this paper, we propose the matchability and uncertainty-aware iterative disparity refinement neural network. Firstly, a new matchability and uncertainty decoder (MUD) is proposed to decode the matchability mask and disparity uncertainties, which are used to evaluate the reliability of feature matching and estimated disparity, thereby reducing the susceptibility to mismatched pixels. Then, based on the proposed MUD, we present two modules: the uncertainty-preferred disparity field initialization (UFI) and the masked hidden state global aggregation (MGA) modules. In the UFI, a multi-disparity window scan-and-select method is employed to provide a further initialized disparity field and more accurate initial disparity. In the MGA, the adaptive masked disparity field hidden state is globally aggregated to extend the propagation range per iteration, improving the refinement efficiency. Finally, the experimental results on public datasets show that the proposed model achieves reductions of up to 17.9% in average disparity error and 16.9% in occluded outlier proportion, demonstrating its more practical handling of ill-posed regions. Full article
(This article belongs to the Section Computing and Artificial Intelligence)