Search Results (2,273)

Search Parameters:
Keywords = cost benchmarking

29 pages, 1623 KB  
Article
Techno-Economic Assessment and Process Design Considerations for Industrial-Scale Photocatalytic Wastewater Treatment
by Hongliang Liu and Mingxia Song
Water 2026, 18(2), 221; https://doi.org/10.3390/w18020221 - 14 Jan 2026
Abstract
Industrial deployment of photocatalysis for recalcitrant wastewater treatment remains constrained by economic uncertainty and scale-up limitations. This study first reviews the current technological routes and application status of photocatalytic processes and then addresses the key obstacles through a quantitative techno-economic assessment (TEA) of a full-scale (10,000 m3/d) photocatalytic wastewater treatment plant. A process-level model integrating mass- and energy-balance calculations with equipment sizing was developed for a 280 kW UVA-LED reactor using Pt/TiO2 as the benchmark catalyst. The framework quantifies capital (CAPEX) and operating (OPEX) expenditures and evaluates the overall economic performance of the photocatalytic treatment system. Sensitivity analysis reveals that the catalyst replacement interval and electricity tariffs are the principal economic bottlenecks, whereas improvements in catalyst performance alone provide limited cost leverage. Furthermore, the analysis indicates that supportive policy mechanisms such as carbon-credit incentives and electricity subsidies could substantially enhance economic feasibility. Overall, this work establishes a comprehensive integrated TEA framework for industrial-scale photocatalytic wastewater treatment, offering actionable design parameters and cost benchmarks to guide future commercialization. Full article
(This article belongs to the Section Wastewater Treatment and Reuse)
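The techno-economic assessment described above ultimately reduces CAPEX and OPEX to a levelized cost per cubic metre of treated water. The sketch below is a minimal illustration of that kind of calculation, not the paper's actual model; the function name, plant lifetime, discount rate, and cost figures are all illustrative assumptions.

```python
def levelized_cost_of_treatment(capex, opex_per_year, flow_m3_per_day,
                                lifetime_years=20, discount_rate=0.08):
    """Levelized cost per m3 of treated water, a standard TEA summary metric.

    Illustrative sketch only; the paper's cost model and figures are not
    reproduced here.
    """
    # Capital recovery factor: annualizes CAPEX over the plant lifetime.
    q = (1 + discount_rate) ** lifetime_years
    crf = discount_rate * q / (q - 1)
    annual_cost = capex * crf + opex_per_year
    annual_volume_m3 = flow_m3_per_day * 365
    return annual_cost / annual_volume_m3

# Hypothetical 10,000 m3/d plant with 10 M CAPEX and 2 M/yr OPEX.
cost = levelized_cost_of_treatment(10_000_000, 2_000_000, 10_000)
```

A sensitivity analysis of the kind the authors describe amounts to sweeping inputs such as the electricity-driven OPEX term or the catalyst-replacement cost through a function like this one.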
20 pages, 3743 KB  
Article
Unsupervised Learning-Based Anomaly Detection for Bridge Structural Health Monitoring: Identifying Deviations from Normal Structural Behaviour
by Jabez Nesackon Abraham, Minh Q. Tran, Jerusha Samuel Jayaraj, Jose C. Matos, Maria Rosa Valluzzi and Son N. Dang
Sensors 2026, 26(2), 561; https://doi.org/10.3390/s26020561 - 14 Jan 2026
Abstract
Structural Health Monitoring (SHM) of large-scale civil infrastructure is essential to ensure safety, minimise maintenance costs, and support informed decision-making. Unsupervised anomaly detection has emerged as a powerful tool for identifying deviations in structural behaviour without requiring labelled damage data. The study initially reproduces and implements a state-of-the-art methodology that combines local density estimation through the Cumulative Distance Participation Factor (CDPF) with Semi-parametric Extreme Value Theory (SEVT) for thresholding, which serves as an essential baseline reference for establishing normal structural behaviour and for benchmarking the performance of the proposed anomaly detection framework. Using modal frequencies extracted via Stochastic Subspace Identification from the Z24 bridge dataset, the baseline method effectively identifies structural anomalies caused by progressive damage scenarios. However, its performance is constrained when dealing with subtle or non-linear deviations. To address this limitation, we introduce an innovative ensemble anomaly detection framework that integrates two complementary unsupervised methods: Principal Component Analysis (PCA) and Autoencoder (AE) are dimensionality reduction methods used for anomaly detection. PCA captures linear patterns using variance, while AE learns non-linear representations through data reconstruction. By leveraging the strengths of these techniques, the ensemble achieves improved sensitivity, reliability, and interpretability in anomaly detection. A comprehensive comparison with the baseline approach demonstrates that the proposed ensemble not only captures anomalies more reliably but also provides improved stability to environmental and operational variability. These findings highlight the potential of ensemble-based unsupervised methods for advancing SHM practices. Full article
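The ensemble above fuses a PCA reconstruction-error score with an autoencoder reconstruction-error score. The abstract does not state the fusion rule, so the sketch below uses one common choice, min-max normalization of each detector's scores followed by averaging; the function names are hypothetical.

```python
def minmax(xs):
    """Scale a list of anomaly scores to [0, 1]; constant lists map to 0."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

def ensemble_score(pca_errors, ae_errors):
    """Average the normalized reconstruction errors of the two detectors."""
    return [(p + a) / 2 for p, a in zip(minmax(pca_errors), minmax(ae_errors))]

# Observations whose fused score exceeds a threshold are flagged as anomalies.
scores = ensemble_score([0.1, 0.2, 0.9], [0.05, 0.1, 0.8])
flags = [s > 0.5 for s in scores]
```

Rank-based normalization is an equally common alternative when the two detectors' error scales differ by orders of magnitude.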
44 pages, 3553 KB  
Article
Hybrid HHO–WHO Optimized Transformer-GRU Model for Advanced Failure Prediction in Industrial Machinery and Engines
by Amir R. Ali and Hossam Kamal
Sensors 2026, 26(2), 534; https://doi.org/10.3390/s26020534 - 13 Jan 2026
Abstract
Accurate prediction of failure in industrial machinery and engines is critical for minimizing unexpected downtimes and enabling cost-effective maintenance. Existing predictive models often struggle to generalize across diverse datasets and require extensive hyperparameter tuning, while conventional optimization methods are prone to local optima, limiting predictive performance. To address these limitations, this study proposes a hybrid optimization framework combining Harris Hawks Optimization (HHO) and Wild Horse Optimization (WHO) to fine-tune the hyperparameters of ResNet, Bi-LSTM, Bi-GRU, CNN, DNN, VAE, and Transformer-GRU models. The framework leverages HHO’s global exploration and WHO’s local exploitation to overcome local optima and optimize predictive performance. Following hybrid optimization, the Transformer-GRU model consistently outperformed all other models across four benchmark datasets, including time-to-failure (TTF), intelligent maintenance system (IMS), C-MAPSS FD001, and FD003. On the TTF dataset, mean absolute error (MAE) decreased from 0.72 to 0.15, and root mean square error (RMSE) from 1.31 to 0.23. On the IMS dataset, MAE decreased from 0.04 to 0.01, and RMSE from 0.06 to 0.02. On C-MAPSS FD001, MAE decreased from 11.45 to 9.97, RMSE from 16.02 to 13.56, and score from 410.1 to 254.3. On C-MAPSS FD003, MAE decreased from 11.28 to 9.98, RMSE from 15.33 to 14.57, and score from 352.3 to 320.8. These results confirm that the hybrid HHO–WHO optimized Transformer-GRU framework significantly improves prediction performance, robustness, stability, and generalization, providing a reliable solution for predictive maintenance. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
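The HHO–WHO hybrid described above pairs one algorithm's global exploration with the other's local exploitation. The loop below only illustrates that general alternating pattern on a toy objective; it is not the HHO or WHO update equations, and all names are hypothetical.

```python
import random

def hybrid_search(objective, bounds, iters=200, seed=0):
    """Generic explore/exploit loop in the spirit of hybrid metaheuristics.

    NOT the HHO or WHO update rules: even iterations sample globally
    (exploration), odd iterations perturb the incumbent (exploitation).
    """
    rng = random.Random(seed)
    lo, hi = bounds
    best = rng.uniform(lo, hi)
    best_val = objective(best)
    for i in range(iters):
        if i % 2 == 0:  # exploration: sample anywhere in the search space
            cand = rng.uniform(lo, hi)
        else:           # exploitation: small perturbation around the incumbent
            cand = min(hi, max(lo, best + rng.gauss(0, 0.1 * (hi - lo))))
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# Toy objective with a known minimum at x = 3.
best_x, best_f = hybrid_search(lambda x: (x - 3) ** 2, (-10, 10))
```

In the paper's setting the candidate solutions would be hyperparameter vectors for the Transformer-GRU and the objective a validation-loss evaluation.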
20 pages, 1244 KB  
Article
Learning-Based Cost-Minimization Task Offloading and Resource Allocation for Multi-Tier Vehicular Computing
by Shijun Weng, Yigang Xing, Yaoshan Zhang, Mengyao Li, Donghan Li and Haoting He
Mathematics 2026, 14(2), 291; https://doi.org/10.3390/math14020291 - 13 Jan 2026
Abstract
With the rapid development of 5G technology and the Internet of Vehicles (IoV), vehicles have become smart devices with communication, computing, and storage capabilities. However, limited on-board storage and computing resources often cause large task-processing latency and degrade system QoS as well as user QoE. Meanwhile, in building environmentally harmonious transportation systems and green cities, the energy consumption of in-vehicle data processing has become a new concern. Moreover, because vehicles in the IoV move quickly, traditional GSI-based methods face information uncertainty and are no longer applicable. To address these challenges, we propose a T2VC model. To handle information uncertainty and dynamic offloading caused by vehicle mobility, we propose a MAB-based QEVA-UCB solution that minimizes the system cost, expressed as the weighted sum of latency and power consumption. QEVA-UCB accounts for factors such as task properties, the task arrival queue, offloading decisions, and vehicle mobility, and selects the optimal offloading location to minimize the system cost with latency, energy, and conflict awareness. Extensive simulations verify that, compared with benchmark methods, our approach learns and makes task-offloading decisions faster and more accurately for both latency-sensitive and energy-sensitive vehicle users, and it achieves superior performance in terms of system cost and learning regret. Full article
(This article belongs to the Special Issue Computational Methods in Wireless Communications with Applications)
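QEVA-UCB is described only as MAB-based, so the sketch below shows the classic UCB1 template it presumably builds on, adapted to cost minimization (the exploration bonus is subtracted, giving a lower confidence bound). The target means and all names are illustrative assumptions, not the paper's algorithm.

```python
import math, random

def ucb1_offload(costs_fn, n_targets, rounds=1000, seed=0):
    """Textbook UCB1 over offloading targets, minimizing expected cost.

    `costs_fn(target, rng)` returns a stochastic cost in [0, 1]; this is a
    generic bandit sketch, not the paper's QEVA-UCB.
    """
    rng = random.Random(seed)
    counts = [0] * n_targets
    mean_cost = [0.0] * n_targets
    for t in range(1, rounds + 1):
        if t <= n_targets:   # play each target once to initialize estimates
            arm = t - 1
        else:                # pick the target with the lowest confidence bound
            arm = min(range(n_targets),
                      key=lambda a: mean_cost[a]
                      - math.sqrt(2 * math.log(t) / counts[a]))
        c = costs_fn(arm, rng)
        counts[arm] += 1
        mean_cost[arm] += (c - mean_cost[arm]) / counts[arm]  # running mean
    return counts, mean_cost

# Three hypothetical offloading targets with true mean costs 0.8, 0.5, 0.3.
true_means = [0.8, 0.5, 0.3]
counts, means = ucb1_offload(
    lambda a, r: min(1.0, max(0.0, r.gauss(true_means[a], 0.1))),
    3, rounds=2000)
```

The cheapest target should accumulate most of the plays while the learner's regret grows only logarithmically in the number of rounds.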
29 pages, 2829 KB  
Article
Real-Time Deterministic Lane Detection on CPU-Only Embedded Systems via Binary Line Segment Filtering
by Shang-En Tsai, Shih-Ming Yang and Chia-Han Hsieh
Electronics 2026, 15(2), 351; https://doi.org/10.3390/electronics15020351 - 13 Jan 2026
Abstract
The deployment of Advanced Driver-Assistance Systems (ADAS) in economically constrained markets frequently relies on hardware architectures that lack dedicated graphics processing units. Within such environments, the integration of deep neural networks faces significant hurdles, primarily stemming from strict limitations on energy consumption, the absolute necessity for deterministic real-time response, and the rigorous demands of safety certification protocols. Meanwhile, traditional geometry-based lane detection pipelines continue to exhibit limited robustness under adverse illumination conditions, including intense backlighting, low-contrast nighttime scenes, and heavy rainfall. Motivated by these constraints, this work re-examines geometry-based lane perception from a sensor-level viewpoint and introduces a Binary Line Segment Filter (BLSF) that leverages the inherent structural regularity of lane markings in bird’s-eye-view (BEV) imagery within a computationally lightweight framework. The proposed BLSF is integrated into a complete pipeline consisting of inverse perspective mapping, median local thresholding, line-segment detection, and a simplified Hough-style sliding-window fitting scheme combined with RANSAC. Experiments on a self-collected dataset of 297 challenging frames show that the inclusion of BLSF significantly improves robustness over an ablated baseline while sustaining real-time performance on a 2 GHz ARM CPU-only platform. Additional evaluations on the Dazzling Light and Night subsets of the CULane and LLAMAS benchmarks further confirm consistent gains of approximately 6–7% in F1-score, together with corresponding improvements in IoU. These results demonstrate that interpretable, geometry-driven lane feature extraction remains a practical and complementary alternative to lightweight learning-based approaches for cost- and safety-critical ADAS applications. Full article
(This article belongs to the Special Issue Feature Papers in Electrical and Autonomous Vehicles, Volume 2)
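The pipeline above ends with a sliding-window fit combined with RANSAC. The sketch below shows only the generic RANSAC line-fitting step on 2-D points, not the BLSF or the full pipeline; the point data and names are illustrative.

```python
import random

def ransac_line(points, iters=100, tol=1.0, seed=0):
    """Fit y = m*x + b robustly: repeatedly sample point pairs and keep the
    model with the most inliers. A minimal RANSAC sketch."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical pairs in this simple parameterization
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Lane-like points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]
(m, b), inliers = ransac_line(pts)
```

On a CPU-only platform the attraction of this step is its bounded, deterministic iteration count, which fits the hard real-time budget the paper targets.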
20 pages, 1544 KB  
Article
No Free Lunch in Language Model Bias Mitigation? Targeted Bias Reduction Can Exacerbate Unmitigated LLM Biases
by Shireen Chand, Faith Baca and Emilio Ferrara
AI 2026, 7(1), 24; https://doi.org/10.3390/ai7010024 - 13 Jan 2026
Abstract
Large Language Models (LLMs) inherit societal biases from their training data, potentially leading to harmful outputs. While various techniques aim to mitigate these biases, their effects are typically evaluated only along the targeted dimension, leaving cross-dimensional consequences unexplored. This work provides the first systematic quantification of cross-category spillover effects in LLM bias mitigation. We evaluate four bias mitigation techniques (Logit Steering, Activation Patching, BiasEdit, Prompt Debiasing) across ten models from seven families, measuring impact on racial, religious, profession-, and gender-related biases using the StereoSet benchmark. Across 160 experiments yielding 640 evaluations, we find that targeted interventions cause collateral degradations to model coherence and performance along debiasing objectives in 31.5% of untargeted dimension evaluations. These findings provide empirical evidence that debiasing improvements along one dimension can come at the cost of degradation in others. We introduce a multi-dimensional auditing framework and demonstrate that single-target evaluations mask potentially severe spillover effects, underscoring the need for robust, multi-dimensional evaluation tools when examining and developing bias mitigation strategies to avoid inadvertently shifting or worsening bias along untargeted axes. Full article
54 pages, 2957 KB  
Review
Mamba for Remote Sensing: Architectures, Hybrid Paradigms, and Future Directions
by Zefeng Li, Long Zhao, Yihang Lu, Yue Ma and Guoqing Li
Remote Sens. 2026, 18(2), 243; https://doi.org/10.3390/rs18020243 - 12 Jan 2026
Abstract
Modern Earth observation combines high spatial resolution, wide swath, and dense temporal sampling, producing image grids and sequences far beyond the regime of standard vision benchmarks. Convolutional networks remain strong baselines but struggle to aggregate kilometre-scale context and long temporal dependencies without heavy tiling and downsampling, while Transformers incur quadratic costs in token count and often rely on aggressive patching or windowing. Recently proposed visual state-space models, typified by Mamba, offer linear-time sequence processing with selective recurrence and have therefore attracted rapid interest in remote sensing. This survey analyses how far that promise is realised in practice. We first review the theoretical substrates of state-space models and the role of scanning and serialization when mapping two- and three-dimensional EO data onto one-dimensional sequences. A taxonomy of scan paths and architectural hybrids is then developed, covering centre-focused and geometry-aware trajectories, CNN– and Transformer–Mamba backbones, and multimodal designs for hyperspectral, multisource fusion, segmentation, detection, restoration, and domain-specific scientific applications. Building on this evidence, we delineate the task regimes in which Mamba is empirically warranted—very long sequences, large tiles, or complex degradations—and those in which simpler operators or conventional attention remain competitive. Finally, we discuss green computing, numerical stability, and reproducibility, and outline directions for physics-informed state-space models and remote-sensing-specific foundation architectures. Overall, the survey argues that Mamba should be used as a targeted, scan-aware component in EO pipelines rather than a drop-in replacement for existing backbones, and aims to provide concrete design principles for future remote sensing research and operational practice. Full article
(This article belongs to the Section AI Remote Sensing)
38 pages, 1391 KB  
Article
Trustworthy AI-IoT for Citizen-Centric Smart Cities: The IMTPS Framework for Intelligent Multimodal Crowd Sensing
by Wei Li, Ke Li, Zixuan Xu, Mengjie Wu, Yang Wu, Yang Xiong, Shijie Huang, Yijie Yin, Yiping Ma and Haitao Zhang
Sensors 2026, 26(2), 500; https://doi.org/10.3390/s26020500 - 12 Jan 2026
Abstract
The fusion of Artificial Intelligence and the Internet of Things (AI-IoT, also widely referred to as AIoT) offers transformative potential for smart cities, yet presents a critical challenge: how to process heterogeneous data streams from intelligent sensing, particularly crowd sensing data derived from citizen interactions such as text, voice, and system logs, into reliable intelligence for sustainable urban governance. To address this challenge, we introduce the Intelligent Multimodal Ticket Processing System (IMTPS), a novel AI-IoT smart system. Unlike ad hoc solutions, the novelty of IMTPS resides in its theoretically grounded architecture, which orchestrates Information Theory and Game Theory for efficient, verifiable extraction, and employs Causal Inference and Meta-Learning for robust reasoning, thereby synergistically converting noisy, heterogeneous data streams into reliable governance intelligence. This principled design endows IMTPS with four foundational capabilities essential for modern smart city applications:
- Sustainable and Efficient AI-IoT Operations: Guided by Information Theory, the IMTPS compression module achieves provably efficient semantic-preserving compression, drastically reducing data storage and energy costs.
- Trustworthy Data Extraction: A Game Theory-based adversarial verification network ensures high reliability in extracting critical information, mitigating the risk of model hallucination in high-stakes citizen services.
- Robust Multimodal Fusion: The fusion engine leverages Causal Inference to distinguish true causality from spurious correlations, enabling trustworthy integration of complex, multi-source urban data.
- Adaptive Intelligent System: A Meta-Learning-based retrieval mechanism allows the system to rapidly adapt to new and evolving query patterns, ensuring long-term effectiveness in dynamic urban environments.
We validate IMTPS on a large-scale, publicly released benchmark dataset of 14,230 multimodal records. IMTPS demonstrates state-of-the-art performance, achieving a 96.9% reduction in storage footprint and a 47% decrease in critical data extraction errors. By open-sourcing our implementation, we aim to provide a replicable blueprint for building the next generation of trustworthy and sustainable AI-IoT systems for citizen-centric smart cities. Full article
(This article belongs to the Special Issue AI-IoT for New Challenges in Smart Cities)
17 pages, 2595 KB  
Article
Magnetic Field-Assisted Electro-Fenton System Using Magnetite as a Sustainable Iron Source for Wastewater Treatment
by Evelyn A. Hernández-Rodríguez, Josué D. García-Espinoza, José Treviño-Resendez, Mónica Razo-Negrete, Gustavo Acosta-Santoyo, Luis A. Godínez and Irma Robles
Processes 2026, 14(2), 264; https://doi.org/10.3390/pr14020264 - 12 Jan 2026
Abstract
The Electro-Fenton (EF) process is a promising technology for the sustainable remediation of organic contaminants in complex wastewater. In this study, a weak magnetic field (~150 G) was applied to enhance the performance of an EF system using magnetite (Fe3O4) synthesized by a controlled co-precipitation route as a recyclable solid iron source. The magnetite was characterized by FTIR, SEM/EDS, and XPS, confirming the coexistence of Fe2+/Fe3+ species essential for in situ Fenton-like reactions. Under the selected operating conditions (90 min reaction time), magnetic-field assistance improved methylene blue decolorization from 14.2% to 46.0% at pH 3. FeSO4 was used only as a homogeneous benchmark, whereas the magnetite-based system operated without soluble iron addition, minimizing sludge formation and secondary contamination. These results demonstrate the potential of magnetite-assisted and magnetically enhanced EF systems as a low-cost, sustainable alternative for the treatment of dye-containing industrial wastewater and other complex effluents. Full article
19 pages, 528 KB  
Article
On Cost-Effectiveness of Language Models for Time Series Anomaly Detection
by Ali Yassine, Luca Cagliero and Luca Vassio
Information 2026, 17(1), 72; https://doi.org/10.3390/info17010072 - 12 Jan 2026
Abstract
Detecting anomalies in time series data is crucial across several domains, including healthcare, finance, and automotive. Large Language Models (LLMs) have recently shown promising results by leveraging robust model pretraining. However, fine-tuning LLMs with several billion parameters requires a large number of training samples and significant training costs. Conversely, LLMs under a zero-shot learning setting require lower overall computational costs, but can fall short in handling complex anomalies. In this paper, we explore the use of lightweight language models for Time Series Anomaly Detection, either zero-shot or fine-tuned. Specifically, we leverage lightweight models originally designed for time series forecasting, benchmarking them for anomaly detection against both open-source and proprietary LLMs across different datasets. Our experiments demonstrate that lightweight models (<1 billion parameters) provide a cost-effective solution, achieving performance that is competitive with, and sometimes superior to, that of larger models (>70 billion parameters). Full article
(This article belongs to the Special Issue Deep Learning Approach for Time Series Forecasting)
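Using a forecasting model for anomaly detection, as the paper does with lightweight time-series models, typically means flagging points whose forecast error is large. The sketch below uses a simple moving-average forecaster as a stand-in for the pretrained model; the function name and threshold are illustrative assumptions.

```python
def residual_anomalies(series, window=5, threshold=3.0):
    """Flag points whose absolute forecast error exceeds `threshold`.

    A moving average stands in for the pretrained forecaster; a real
    pipeline would calibrate the threshold on validation data.
    """
    flags = [False] * window  # not enough history for the first points
    for i in range(window, len(series)):
        forecast = sum(series[i - window:i]) / window
        flags.append(abs(series[i] - forecast) > threshold)
    return flags

# A flat signal with one injected spike at index 10.
sig = [1.0] * 20
sig[10] = 9.0
flags = residual_anomalies(sig)
```

Swapping in a stronger forecaster changes only the `forecast` line, which is what makes the zero-shot comparison across model sizes straightforward.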
15 pages, 1363 KB  
Article
Hierarchical Knowledge Distillation for Efficient Model Compression and Transfer: A Multi-Level Aggregation Approach
by Titinunt Kitrungrotsakul and Preeyanuch Srichola
Information 2026, 17(1), 70; https://doi.org/10.3390/info17010070 - 12 Jan 2026
Abstract
The success of large-scale deep learning models in remote sensing tasks has been transformative, enabling significant advances in image classification, object detection, and image–text retrieval. However, their computational and memory demands pose challenges for deployment in resource-constrained environments. Knowledge distillation (KD) alleviates these issues by transferring knowledge from a strong teacher to a student model, which can be compact for efficient deployment or architecturally matched to improve accuracy under the same inference budget. In this paper, we introduce Hierarchical Multi-Segment Knowledge Distillation (HIMS_KD), a multi-stage framework that sequentially distills knowledge from a teacher into multiple assistant models specialized in low-, mid-, and high-level representations, and then aggregates their knowledge into the final student. We integrate feature-level alignment, auxiliary similarity-logit alignment, and supervised loss during distillation. Experiments on benchmark remote sensing datasets (RSITMD and RSICD) show that HIMS_KD improves retrieval performance and enhances zero-shot classification; and when a compact student is used, it reduces deployment cost while retaining strong accuracy. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
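At the core of any distillation stage like those in HIMS_KD is a loss that pushes the student's outputs toward the teacher's softened outputs. The sketch below shows only that standard temperature-scaled KL term, not HIMS_KD's feature-level or similarity-logit alignment; all names are generic.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    m = max(l / T for l in logits)          # subtract max for stability
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

# Identical logits give zero loss; diverging logits give a positive loss.
zero = kd_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
pos = kd_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

In a multi-stage scheme, each assistant would be trained with a loss of this shape against the teacher, and the final student against the aggregated assistants.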
24 pages, 4075 KB  
Article
A Hybrid Formal and Optimization Framework for Real-Time Scheduling: Combining Extended Time Petri Nets with Genetic Algorithms
by Sameh Affi, Imed Miraoui and Atef Khedher
Logistics 2026, 10(1), 17; https://doi.org/10.3390/logistics10010017 - 12 Jan 2026
Abstract
In modern Industry 4.0 environments, real-time scheduling requires both formal correctness guarantees and optimal performance. Background: Traditional approaches fail to integrate formal correctness guarantees with optimization, yielding either suboptimal results or correct results without guarantees; studies indicate that poor scheduling decisions can cause productivity losses of 20–30% and additional operational costs of up to USD 2.5 million per year in medium-scale manufacturing facilities. Methods: This work proposes a hybrid approach that integrates Extended Time Petri Nets (ETPNs) and Finite-State Automata (FSAs) for formal modeling, extending conventional Time Petri Nets with deterministic timing and priority systems, combined with Genetic Algorithms (GAs) to solve the resulting multi-objective optimization problem. The approach handles nondeterminism through suitable priority-resolution mechanisms and uses the GA to search for optimal solutions to highly complex scheduling problems, building on standard real-time scheduling policies such as DM, RM, and EDF. Results: Experimental evaluation on both synthetic and practical case studies verified performance gains of up to 48% over conventional techniques, including 31–48% improvement on synthetic benchmarks, a 24% increase in resource-allocation efficiency, and complete elimination of constraint violations. Conclusions: The proposed hybrid technique represents a substantial advance in real-time scheduling for Industry 4.0, combining formal correctness guarantees, which are essential for safety-related applications, with effective GA-based optimization for efficient, high-performance schedules. Full article
30 pages, 4603 KB  
Article
Joint Optimization of Storage Assignment and Order Batching for Efficient Heterogeneous Robot G2P Systems
by Li Li, Yan Wei, Yanjie Liang and Jin Ren
Sustainability 2026, 18(2), 743; https://doi.org/10.3390/su18020743 - 11 Jan 2026
Abstract
Currently, with the widespread popularization of e-commerce systems, enterprises have increasingly high requirements for the timeliness of order fulfillment. It has become particularly critical to enhance the operational efficiency of heterogeneous robotic “goods-to-person” (G2P) systems in book e-commerce fulfillment, reduce enterprise operational costs, and achieve highly efficient, low-carbon, and sustainable warehouse management. Therefore, this study focuses on determining the optimal storage location assignment strategy and order batching method. By comprehensively considering the characteristics of book e-commerce, such as small-batch, high-frequency orders and diverse SKU requirements, as well as existing system issues including uncoordinated storage assignment and order processing, and differences in the operational efficiency of heterogeneous robots, this study proposes a joint optimization framework for storage location assignment and order batching centered on a multi-objective model. The framework integrates the time costs of robot picking operations, SKU turnover rates, and inter-commodity correlations, introduces the STCSPBC storage strategy to optimize storage location assignment, and designs the SA-ANS algorithm to solve the storage assignment problem. Meanwhile, order batching optimization is based on dynamic inventory data, and the S-O Greedy algorithm is adopted to find solutions with lower picking costs. This achieves the joint optimization of storage location assignment and order batching, improves the system’s picking efficiency, reduces operational costs, and realizes green and sustainable management. Finally, validation via a spatiotemporal network model shows that the proposed joint optimization framework outperforms existing benchmark methods, achieving a 45.73% improvement in average order hit rate, a 48.79% reduction in total movement distance, a 46.59% decrease in operation time, and a 24.04% reduction in conflict frequency. 
Full article
(This article belongs to the Section Sustainable Management)
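The order-batching idea in the abstract above can be illustrated with a toy greedy heuristic. This is a minimal sketch under assumed names and a deliberately simplified cost model (one rack visit per distinct SKU in a batch), not the paper's actual S-O Greedy algorithm:

```python
def batch_cost(batch):
    # Assumed cost model: one rack retrieval per distinct SKU in the batch,
    # so orders sharing SKUs are cheaper to pick together
    return len(set().union(*batch))

def greedy_batch(orders, capacity):
    """Greedily merge each order into the open batch whose picking cost
    grows least; open a new batch when no batch has room."""
    batches = []
    for order in sorted(orders, key=len, reverse=True):
        best, best_delta = None, None
        for b in batches:
            # Capacity counted in order lines (an assumption)
            if sum(len(o) for o in b) + len(order) > capacity:
                continue
            delta = batch_cost(b + [order]) - batch_cost(b)
            if best_delta is None or delta < best_delta:
                best, best_delta = b, delta
        if best is None:
            batches.append([order])
        else:
            best.append(order)
    return batches

# Five hypothetical orders, each a set of SKUs
orders = [{"A", "B"}, {"B", "C"}, {"D"}, {"A", "C"}, {"D", "E"}]
batches = greedy_batch(orders, capacity=6)
print([sorted(set().union(*b)) for b in batches])
```

With shared SKUs, the A/B/C orders collapse into one batch needing three rack visits instead of six, which is the kind of picking-cost saving the joint framework targets.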
41 pages, 4549 KB  
Review
5.8 GHz Microstrip Patch Antennas for Wireless Power Transfer: A Comprehensive Review of Design, Optimization, Applications, and Future Trends
by Yahya Albaihani, Rizwan Akram, El Amjed Hajlaoui, Abdullah M. Almohaimeed, Ziyad M. Almohaimeed and Abdullrab Albaihani
Electronics 2026, 15(2), 311; https://doi.org/10.3390/electronics15020311 - 10 Jan 2026
Abstract
Wireless Power Transfer (WPT) has become a pivotal technology, enabling the battery-free operation of Internet of Things (IoT) and biomedical devices while supporting environmental sustainability. This review provides a comprehensive analysis of microstrip patch antennas (MPAs) operating at the 5.8 GHz Industrial, Scientific, and Medical (ISM) band, emphasizing their advantages over the more commonly used 2.4 GHz band. A detailed and systematic classification framework for MPA architectures is introduced, covering single-element, multi-band, ultra-wideband, array, MIMO, wearable, and rectenna systems. The review examines advanced optimization methodologies, including Defected Ground Structures (DGS), Electromagnetic Bandgap (EBG) structures, Metamaterials (MTM), Machine Learning (ML), and nanomaterials, each contributing to improvements in gain, bandwidth, efficiency, and device miniaturization. Unlike previous surveys, this work offers a performance-benchmarked classification specifically for 5.8 GHz MPAs and provides a quantitative assessment of key trade-offs, such as efficiency versus substrate cost. The review also advocates for a shift toward Power Conversion Efficiency (PCE)-centric co-design strategies. The analysis identifies critical research gaps, particularly the ongoing disparity between simulated and experimental performance. The review concludes by recommending multi-objective optimization, integrated antenna-rectifier co-design to maximize PCE, and the use of advanced materials and computational intelligence to advance next-generation, high-efficiency 5.8 GHz WPT systems. Full article
(This article belongs to the Section Microwave and Wireless Communications)
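The single-element patch architectures surveyed above are conventionally sized with the transmission-line design equations. A minimal sketch for a 5.8 GHz rectangular patch, assuming an FR-4 substrate (εr = 4.4, h = 1.6 mm); the substrate choice and resulting dimensions are illustrative, not taken from the review:

```python
import math

C = 3.0e8  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Rectangular microstrip patch width and length from the standard
    transmission-line model."""
    # Patch width for efficient radiation
    w = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    # Effective permittivity accounting for fringing fields
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    # Equivalent length extension due to fringing at each radiating edge
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / (
        (eps_eff - 0.258) * (w / h + 0.8))
    # Physical length: half a guided wavelength minus fringing on both ends
    length = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dl
    return w, length

# Assumed FR-4 substrate at the 5.8 GHz ISM band
w, l = patch_dimensions(5.8e9, eps_r=4.4, h=1.6e-3)
print(f"W = {w * 1e3:.2f} mm, L = {l * 1e3:.2f} mm")
```

This yields a patch on the order of 16 mm × 12 mm, a useful starting point before the DGS/EBG/metamaterial or ML-driven optimization stages the review classifies.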
41 pages, 80556 KB  
Article
Why ROC-AUC Is Misleading for Highly Imbalanced Data: In-Depth Evaluation of MCC, F2-Score, H-Measure, and AUC-Based Metrics Across Diverse Classifiers
by Mehdi Imani, Majid Joudaki, Ayoub Bagheri and Hamid R. Arabnia
Technologies 2026, 14(1), 54; https://doi.org/10.3390/technologies14010054 - 10 Jan 2026
Abstract
This study re-evaluates ROC-AUC for binary classification under severe class imbalance (<3% positives). Despite its widespread use, ROC-AUC can mask operationally salient differences among classifiers when the costs of false positives and false negatives are asymmetric. Using three benchmarks, credit-card fraud detection (0.17% positives), yeast protein localization (1.35%), and ozone level detection (2.9%), we compare ROC-AUC with the Matthews Correlation Coefficient (MCC), F2-score, H-measure, and PR-AUC. Our empirical analyses span 20 classifier–sampler configurations per dataset, formed by crossing four classifiers (Logistic Regression, Random Forest, XGBoost, and CatBoost) with four oversampling methods (SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN) plus a no-resampling baseline. ROC-AUC exhibits pronounced ceiling effects, yielding high scores even for underperforming models. In contrast, MCC and F2 align more closely with deployment-relevant costs and achieve the highest Kendall’s τ rank concordance across datasets; PR-AUC provides threshold-independent ranking, and the H-measure integrates cost sensitivity. We quantify uncertainty and differences using stratified bootstrap confidence intervals, DeLong’s test for ROC-AUC, and Friedman–Nemenyi critical-difference diagrams, which collectively underscore the limited discriminative value of ROC-AUC in rare-event settings. The findings recommend a shift to a multi-metric evaluation framework: ROC-AUC should not be used as the primary metric in ultra-imbalanced settings; instead, MCC and F2 are recommended as primary indicators, supplemented by PR-AUC and H-measure where ranking granularity and principled cost integration are required. This evidence encourages researchers and practitioners to move beyond sole reliance on ROC-AUC when evaluating classifiers on highly imbalanced data. Full article
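The metrics the abstract favors are computed directly from confusion-matrix counts. A minimal sketch (the two classifiers and their counts are invented for illustration) showing how accuracy sits near its ceiling on rare-event data while MCC and F2 separate the models:

```python
import math

def mcc(tp, fp, tn, fn):
    # Matthews Correlation Coefficient from confusion-matrix counts
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def f_beta(tp, fp, fn, beta=2.0):
    # F-beta score; beta = 2 weights recall twice as heavily as precision
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r) if p + r else 0.0

# Hypothetical rare-event test set: 10,000 samples, 17 positives (~0.17%)
clf_a = dict(tp=15, fp=300, tn=9683, fn=2)  # high recall, many false alarms
clf_b = dict(tp=12, fp=20, tn=9963, fn=5)   # fewer hits, far fewer alarms

for name, c in (("A", clf_a), ("B", clf_b)):
    acc = (c["tp"] + c["tn"]) / 10_000
    print(f"{name}: accuracy={acc:.4f}  MCC={mcc(**c):.3f}  "
          f"F2={f_beta(c['tp'], c['fp'], c['fn']):.3f}")
```

Both classifiers exceed 0.96 accuracy, yet MCC (≈0.20 vs. ≈0.51) and F2 (≈0.20 vs. ≈0.60) clearly rank B ahead of A — the kind of distinction a near-ceiling threshold-free score can blur.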