Search Results (3,970)

Search Parameters:
Keywords = real-world problems

24 pages, 2767 KiB  
Article
UAM Vertiport Network Design Considering Connectivity
by Wentao Zhang and Taesung Hwang
Systems 2025, 13(7), 607; https://doi.org/10.3390/systems13070607 - 18 Jul 2025
Abstract
Urban Air Mobility (UAM) is envisioned to revolutionize urban transportation by improving traffic efficiency and mitigating surface-level congestion. One of the fundamental challenges in implementing UAM systems lies in the optimal siting of vertiports, which requires a delicate balance among infrastructure construction costs, passenger access costs to their assigned vertiports, and the operational connectivity of the resulting vertiport network. This study develops an integrated mathematical model for vertiport location decision, aiming to minimize total system cost while ensuring UAM network connectivity among the selected vertiport locations. To efficiently solve the problem and improve solution quality, a hybrid genetic algorithm is developed by incorporating a Minimum Spanning Tree (MST)-based connectivity enforcement mechanism, a fundamental concept in graph theory that connects all nodes in a given network with minimal total link cost, enhanced by a greedy initialization strategy. The effectiveness of the proposed algorithm is demonstrated through numerical experiments conducted on both synthetic datasets and the real-world transportation network of New York City. The results show that the proposed hybrid methodology not only yields high-quality solutions but also significantly reduces computational time, enabling faster convergence. Overall, this study provides practical insights for UAM infrastructure planning by emphasizing demand-oriented vertiport siting and inter-vertiport connectivity, thereby contributing to both theoretical development and large-scale implementation in complex urban environments. Full article
(This article belongs to the Special Issue Modelling and Simulation of Transportation Systems)
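The MST-based connectivity enforcement described in the abstract can be made concrete with a small sketch. The following is a minimal Kruskal-style construction over an assumed set of selected vertiport sites and a hypothetical symmetric link-cost matrix; it illustrates the graph-theoretic idea only and is not the authors' hybrid genetic algorithm.

```python
# Minimal sketch: Kruskal's MST over a set of selected vertiport sites.
# Hypothetical inputs: `selected` (site indices) and `cost` (symmetric link-cost matrix).
from itertools import combinations

def mst_links(selected, cost):
    """Return the minimum-cost set of links that connects all selected sites."""
    parent = {v: v for v in selected}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    edges = sorted(combinations(selected, 2), key=lambda e: cost[e[0]][e[1]])
    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:            # adding the edge does not create a cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Toy example with 4 candidate sites and symmetric link costs.
cost = [[0, 3, 4, 9],
        [3, 0, 2, 7],
        [4, 2, 0, 5],
        [9, 7, 5, 0]]
print(mst_links([0, 1, 2, 3], cost))   # -> [(1, 2), (0, 1), (2, 3)]
```

In the hybrid algorithm described above, a step of this kind can be used to repair or price the connectivity of each candidate set of vertiports produced by the genetic search.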
21 pages, 571 KiB  
Article
Joint Optimization of Caching and Recommendation with Performance Guarantee for Effective Content Delivery in IoT
by Zhiyong Liu, Hong Shen and Hui Tian
Appl. Sci. 2025, 15(14), 7986; https://doi.org/10.3390/app15147986 - 17 Jul 2025
Abstract
Content caching and recommendation for content delivery over the Internet are two key techniques for improving the content delivery effectiveness determined by delivery efficiency and user satisfaction, which is increasingly important in the booming Internet of Things (IoT). While content caching seeks the “greatest common denominator” among users to reduce end-to-end delay in content delivery, personalized recommendation, on the contrary, emphasizes users’ differentiation to enhance user satisfaction. Existing studies typically address them separately rather than jointly due to their contradictory objectives. They focus mainly on heuristics and deep reinforcement learning methods without the provision of performance guarantees, which are required in many real-world applications. In this paper, we study the problem of joint optimization of caching and recommendation in which recommendation is performed in the cached contents instead of purely according to users’ preferences, as in the existing work. We show the NP-hardness of this problem and present a greedy solution with a performance guarantee by first performing content caching according to user request probability without considering recommendations to maximize the aggregated request probability on cached contents and then recommendations from cached contents to incorporate user preferences for cache hit rate maximization. We prove that this problem has a monotonically increasing and submodular objective function and develop an efficient algorithm that achieves a 1 − 1/e ≈ 0.63 approximation ratio to the optimal solution. Experimental results demonstrate that our algorithm dramatically improves the popular least-recently used (LRU) algorithm. We also show experimental evaluations of hit rate variations by Jensen–Shannon Divergence on different parameter settings of cache capacity and user preference distortion limit, which can be used as a reference for appropriate parameter settings to balance user preferences and cache hit rate for Internet content delivery. Full article
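To make the greedy step and the 1 − 1/e guarantee above more tangible, here is a minimal sketch of marginal-gain greedy selection under a cache-capacity constraint. The request-probability matrix and the aggregated-probability objective are illustrative assumptions; with the paper's combined caching-plus-recommendation objective (monotone and submodular), the same greedy template is what carries the 1 − 1/e bound.

```python
# Minimal sketch: greedy selection under a cache-capacity constraint.
# For a monotone submodular objective, this greedy is the classic
# (1 - 1/e)-approximation.  `request_prob[u][c]` is an assumed matrix of
# user u's probability of requesting content c.

def hit_value(cached, request_prob):
    """Aggregated probability that a user's request can be served from cache."""
    return sum(sum(request_prob[u][c] for c in cached) for u in range(len(request_prob)))

def greedy_cache(request_prob, capacity):
    n_contents = len(request_prob[0])
    cached = set()
    while len(cached) < capacity:
        best, best_gain = None, 0.0
        for c in range(n_contents):
            if c in cached:
                continue
            gain = hit_value(cached | {c}, request_prob) - hit_value(cached, request_prob)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:          # no further improvement possible
            break
        cached.add(best)
    return cached

# Two users, four contents, cache capacity 2.
request_prob = [[0.5, 0.2, 0.2, 0.1],
                [0.1, 0.1, 0.4, 0.4]]
print(greedy_cache(request_prob, capacity=2))   # -> {0, 2}
```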
52 pages, 7424 KiB  
Article
ACIVY: An Enhanced IVY Optimization Algorithm with Adaptive Cross Strategies for Complex Engineering Design and UAV Navigation
by Heming Jia, Mahmoud Abdel-salam and Gang Hu
Biomimetics 2025, 10(7), 471; https://doi.org/10.3390/biomimetics10070471 - 17 Jul 2025
Abstract
The Adaptive Cross Ivy (ACIVY) algorithm is a novel bio-inspired metaheuristic that emulates ivy plant growth behaviors for complex optimization problems. While the original Ivy Optimization Algorithm (IVYA) demonstrates a competitive performance, it suffers from limited inter-individual information exchange, inadequate directional guidance for local optima escape, and abrupt exploration–exploitation transitions. To address these limitations, ACIVY integrates three strategic enhancements: the crisscross strategy, enabling horizontal and vertical crossover operations for improved population diversity; the LightTrack strategy, incorporating positional memory and repulsion mechanisms for effective local optima escape; and the Top-Guided Adaptive Mutation strategy, implementing ranking-based mutation with dynamic selection pools for smooth exploration–exploitation balance. Comprehensive evaluations on the CEC2017 and CEC2022 benchmark suites demonstrate ACIVY’s superior performance against state-of-the-art algorithms across unimodal, multimodal, hybrid, and composite functions. ACIVY achieved outstanding average rankings of 1.25 (CEC2022) and 1.41 (CEC2017 50D), with statistical significance confirmed through Wilcoxon tests. Practical applications in engineering design optimization and UAV path planning further validate ACIVY’s robust performance, consistently delivering optimal solutions across diverse real-world scenarios. The algorithm’s exceptional convergence precision, solution reliability, and computational efficiency establish it as a powerful tool for challenging optimization problems requiring both accuracy and consistency. Full article
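One common reading of the crisscross strategy mentioned above is a pair of operators: horizontal crossover between two individuals and vertical crossover between two dimensions of a single individual. The sketch below follows that generic formulation with assumed coefficient ranges; it is an illustration, not the published ACIVY code.

```python
# Illustrative crisscross operators: horizontal crossover mixes two individuals
# dimension-wise; vertical crossover mixes two dimensions within one individual.
import random

def horizontal_crossover(x1, x2):
    child1, child2 = [], []
    for a, b in zip(x1, x2):
        r1, r2 = random.random(), random.random()
        c = random.uniform(-1, 1)           # assumed expansion coefficient
        child1.append(r1 * a + (1 - r1) * b + c * (a - b))
        child2.append(r2 * b + (1 - r2) * a + c * (b - a))
    return child1, child2

def vertical_crossover(x, d1, d2):
    r = random.random()
    y = list(x)
    y[d1] = r * x[d1] + (1 - r) * x[d2]     # only dimension d1 is altered
    return y

random.seed(0)
print(horizontal_crossover([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
print(vertical_crossover([1.0, 2.0, 3.0], 0, 2))
```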
17 pages, 2421 KiB  
Article
Cross-Receiver Radio Frequency Fingerprint Identification: A Source-Free Adaptation Approach
by Jian Yang, Shaoxian Zhu, Zhongyi Wen and Qiang Li
Sensors 2025, 25(14), 4451; https://doi.org/10.3390/s25144451 - 17 Jul 2025
Abstract
Radio frequency fingerprint identification (RFFI) leverages the unique characteristics of radio signals resulting from inherent hardware imperfections for identification, making it essential for applications in telecommunications, cybersecurity, and surveillance. Despite the advancements brought by deep learning in enhancing RFFI accuracy, challenges persist in model deployment, particularly when transferring RFFI models across different receivers. Variations in receiver hardware can lead to significant performance declines due to shifts in data distribution. This paper introduces the source-free cross-receiver RFFI (SCRFFI) problem, which centers on adapting pre-trained RF fingerprinting models to new receivers without needing access to original training data from other devices, addressing concerns of data privacy and transmission limitations. We propose a novel approach called contrastive source-free cross-receiver network (CSCNet), which employs contrastive learning to facilitate model adaptation using only unlabeled data from the deployed receiver. By incorporating a three-pronged loss function strategy—minimizing information entropy loss, implementing pseudo-label self-supervised loss, and leveraging contrastive learning loss—CSCNet effectively captures the relationships between signal samples, enhancing recognition accuracy and robustness, thereby directly mitigating the impact of receiver variations and the absence of source data. Our theoretical analysis provides a solid foundation for the generalization performance of SCRFFI and is corroborated by extensive experiments on real-world datasets: under realistic noise and channel conditions, CSCNet significantly improves recognition accuracy and robustness, achieving an average improvement of at least 13% over existing methods and, notably, a 47% increase on specific challenging cross-receiver adaptation tasks. Full article
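The three-pronged loss described above can be sketched schematically as follows. This is a hypothetical PyTorch-style combination of an entropy term, a confidence-thresholded pseudo-label term, and an InfoNCE-style contrastive term, under assumed tensor shapes and weights; it is not the CSCNet implementation.

```python
# Schematic combination of the three loss terms described in the abstract.
# Assumed shapes: logits (N, C) for N unlabeled target-receiver samples;
# feats_a / feats_b (N, D) are two augmented embeddings of the same samples.
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

def pseudo_label_loss(logits, threshold=0.9):
    p = F.softmax(logits, dim=1)
    conf, labels = p.max(dim=1)
    mask = conf > threshold                     # keep only confident pseudo-labels
    if mask.sum() == 0:
        return logits.new_zeros(())
    return F.cross_entropy(logits[mask], labels[mask])

def contrastive_loss(feats_a, feats_b, temperature=0.1):
    # InfoNCE-style: matching augmentations are positives, the rest negatives.
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

def adaptation_loss(logits, feats_a, feats_b, w_ent=1.0, w_pl=1.0, w_con=1.0):
    return (w_ent * entropy_loss(logits)
            + w_pl * pseudo_label_loss(logits)
            + w_con * contrastive_loss(feats_a, feats_b))

# Toy check with random tensors.
torch.manual_seed(0)
print(adaptation_loss(torch.randn(8, 5), torch.randn(8, 16), torch.randn(8, 16)))
```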
18 pages, 533 KiB  
Article
Comparative Analysis of Deep Learning Models for Intrusion Detection in IoT Networks
by Abdullah Waqas, Sultan Daud Khan, Zaib Ullah, Mohib Ullah and Habib Ullah
Computers 2025, 14(7), 283; https://doi.org/10.3390/computers14070283 - 17 Jul 2025
Abstract
The Internet of Things (IoT) holds transformative potential in fields such as power grid optimization, defense networks, and healthcare. However, the constrained processing capacities and resource limitations of IoT networks make them especially susceptible to cyber threats. This study addresses the problem of detecting intrusions in IoT environments by evaluating the performance of deep learning (DL) models under different data and algorithmic conditions. We conducted a comparative analysis of three widely used DL models—Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Bidirectional LSTM (biLSTM)—across four benchmark IoT intrusion detection datasets: BoTIoT, CiCIoT, ToNIoT, and WUSTL-IIoT-2021. Each model was assessed under balanced and imbalanced dataset configurations and evaluated using three loss functions (cross-entropy, focal loss, and dual focal loss). By analyzing model efficacy across these datasets, we highlight the importance of generalizability and adaptability to varied data characteristics, which are essential for real-world applications. The results demonstrate that the CNN trained using the cross-entropy loss function consistently outperforms the other models, particularly on balanced datasets. On the other hand, LSTM and biLSTM show strong potential in temporal modeling, but their performance is highly dependent on the characteristics of the dataset. By analyzing the performance of multiple DL models across diverse datasets, this research provides actionable insights for developing secure, interpretable IoT intrusion detection systems. Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
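Because the comparison above hinges partly on the loss function, a short sketch of the standard focal loss next to plain cross-entropy may be helpful. This is the commonly cited formulation with an assumed focusing parameter gamma, not necessarily the exact variant (or the dual focal loss) evaluated in the paper.

```python
# Standard focal loss for multi-class classification: cross-entropy reweighted
# by (1 - p_t)^gamma so that easy, well-classified examples contribute less.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log prob of true class
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()

torch.manual_seed(0)
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])
print(F.cross_entropy(logits, targets))   # plain cross-entropy baseline
print(focal_loss(logits, targets))        # down-weights easy examples
```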
13 pages, 279 KiB  
Article
Generalized Hyers–Ulam Stability of Bi-Homomorphisms, Bi-Derivations, and Bi-Isomorphisms in C*-Ternary Algebras
by Jae-Hyeong Bae and Won-Gil Park
Mathematics 2025, 13(14), 2289; https://doi.org/10.3390/math13142289 - 16 Jul 2025
Abstract
In this paper, we investigate the generalized Hyers–Ulam stability of bi-homomorphisms, bi-derivations, and bi-isomorphisms in C*-ternary algebras. The study of functional equations with a sufficient number of variables can be helpful in solving real-world problems, such as those arising in artificial intelligence. Here, we build on previous research on functional equations with four variables to study functional equations with as many variables as desired. We introduce new bounds for the stability of mappings satisfying generalized bi-additive conditions and demonstrate the uniqueness of approximating bi-isomorphisms. The results contribute to a deeper understanding of ternary algebraic structures and related functional equations, relevant to both pure mathematics and quantum information science. Full article
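For background, the classical Hyers–Ulam stability statement for additive mappings, which results of this kind generalize, and the bi-additivity condition can be written as follows; this is standard material, not the paper's specific theorem.

```latex
% Classical Hyers--Ulam stability for additive mappings (background only).
\[
  \|f(x+y) - f(x) - f(y)\| \le \varepsilon \ \ \forall x, y
  \quad\Longrightarrow\quad
  \exists!\, A \ \text{additive}: \ \|f(x) - A(x)\| \le \varepsilon \ \ \forall x .
\]
% A bi-additive mapping is additive in each argument separately:
\[
  f(x+y,\, z) = f(x,z) + f(y,z), \qquad f(x,\, y+z) = f(x,y) + f(x,z).
\]
```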
16 pages, 944 KiB  
Article
Artificial Intelligence in the Oil and Gas Industry: Applications, Challenges, and Future Directions
by Marcelo dos Santos Póvoas, Jéssica Freire Moreira, Severino Virgínio Martins Neto, Carlos Antonio da Silva Carvalho, Bruno Santos Cezario, André Luís Azevedo Guedes and Gilson Brito Alves Lima
Appl. Sci. 2025, 15(14), 7918; https://doi.org/10.3390/app15147918 - 16 Jul 2025
Abstract
This study aims to provide a comprehensive overview of the application of artificial intelligence (AI) methods to solve real-world problems in the oil and gas sector. The methodology involved a two-step process for analyzing AI applications. In the first step, an initial exploration of scientific articles in the Scopus database was conducted using keywords related to AI and computational intelligence, resulting in a total of 11,296 articles. The bibliometric analysis conducted using VOS Viewer version 1.6.15 software revealed an average annual growth of approximately 15% in the number of publications related to AI in the sector between 2015 and 2024, indicating the growing importance of this technology. In the second step, the research focused on the OnePetro database, widely used by the oil industry, selecting articles with terms associated with production and drilling, such as “production system”, “hydrate formation”, “machine learning”, “real-time”, and “neural network”. The results highlight the transformative impact of AI on production operations, with key applications including optimizing operations through real-time data analysis, predictive maintenance to anticipate failures, advanced reservoir management through improved modeling, image and video analysis for continuous equipment monitoring, and enhanced safety through immediate risk detection. The bibliometric analysis identified a significant concentration of publications at Society of Petroleum Engineers (SPE) events, which accounted for approximately 40% of the selected articles. Overall, the integration of AI into production operations has driven significant improvements in efficiency and safety, and its continued evolution is expected to advance industry practices further and address emerging challenges. Full article
16 pages, 2355 KiB  
Article
Generalising Stock Detection in Retail Cabinets with Minimal Data Using a DenseNet and Vision Transformer Ensemble
by Babak Rahi, Deniz Sagmanli, Felix Oppong, Direnc Pekaslan and Isaac Triguero
Mach. Learn. Knowl. Extr. 2025, 7(3), 66; https://doi.org/10.3390/make7030066 - 16 Jul 2025
Abstract
Generalising deep-learning models to perform well on unseen data domains with minimal retraining remains a significant challenge in computer vision. Even when the target task—such as quantifying the number of elements in an image—stays the same, data quality, shape, or form variations can deviate from the training conditions, often necessitating manual intervention. As a real-world industry problem, we aim to automate stock level estimation in retail cabinets. As technology advances, new cabinet models with varying shapes emerge alongside new camera types. This evolving scenario poses a substantial obstacle to deploying long-term, scalable solutions. To surmount the challenge of generalising to new cabinet models and cameras with minimal amounts of sample images, this research introduces a new solution. This paper proposes a novel ensemble model that combines DenseNet-201 and Vision Transformer (ViT-B/8) architectures to achieve generalisation in stock-level classification. The novelty aspect of our solution comes from the fact that we combine a transformer with a DenseNet model in order to capture both the local, hierarchical details and the long-range dependencies within the images, improving generalisation accuracy with less data. Key contributions include (i) a novel DenseNet-201 + ViT-B/8 feature-level fusion, (ii) an adaptation workflow that needs only two images per class, (iii) a balanced layer-unfreezing schedule, (iv) a publicly described domain-shift benchmark, and (v) a 47 pp accuracy gain over four standard few-shot baselines. Our approach leverages fine-tuning techniques to adapt two pre-trained models to the new retail cabinets (i.e., standing or horizontal) and camera types using only two images per class. Experimental results demonstrate that our method achieves high accuracy rates of 91% on new cabinets with the same camera and 89% on new cabinets with different cameras, significantly outperforming standard few-shot learning methods. Full article
(This article belongs to the Section Data)
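The feature-level fusion idea above can be sketched as follows. The two encoders are lightweight stand-ins with assumed dimensions rather than pretrained DenseNet-201 and ViT-B/8 backbones; the point is only the concatenation of the two feature vectors ahead of a shared classification head.

```python
# Schematic feature-level fusion of a CNN backbone and a transformer backbone.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):                     # stand-in for DenseNet-201
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim))
    def forward(self, x):
        return self.net(x)

class ViTEncoder(nn.Module):                     # stand-in for ViT-B/8
    def __init__(self, out_dim=128, patch=8, dim=64):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, out_dim)
    def forward(self, x):
        tokens = self.patchify(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        return self.head(self.encoder(tokens).mean(dim=1))     # mean-pool tokens

class FusionClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn, self.vit = CNNEncoder(), ViTEncoder()
        self.classifier = nn.Linear(128 + 128, n_classes)
    def forward(self, x):
        fused = torch.cat([self.cnn(x), self.vit(x)], dim=1)   # feature-level fusion
        return self.classifier(fused)

model = FusionClassifier()
print(model(torch.randn(2, 3, 64, 64)).shape)    # torch.Size([2, 4])
```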
19 pages, 1006 KiB  
Article
Optimization of Multi-Day Flexible EMU Routing Plan for High-Speed Rail Networks
by Xiangyu Su, Yixiang Yue, Bin Guo and Zanyang Cui
Appl. Sci. 2025, 15(14), 7914; https://doi.org/10.3390/app15147914 - 16 Jul 2025
Abstract
With the continuous expansion and increasing operational complexity of high-speed railway networks, there is a growing need for more flexible and efficient EMU (Electric Multiple Unit) routing strategies. To address these challenges, in this paper, we propose a multi-day flexible circulation model that minimizes total connection time and deadheading mileage. A multi-commodity network flow model is formulated, incorporating constraints such as first-level maintenance intervals, storage capacity, train coupling/decoupling operations, and train types, with across-day consistency. To solve this complex model efficiently, a heuristic decomposition algorithm is designed to separate the problem into daily service chain generation and EMU assignment. A real-world case study in the Beijing–Baotou high-speed corridor demonstrates the effectiveness of the proposed approach. Compared to a fixed strategy, the flexible strategy reduces EMU usage by one unit, lowers deadheading mileage by up to 16.4%, and improves maintenance workload balance. These results highlight the practical value of flexible EMU deployment for large-scale, multi-day railway operations. Full article
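A toy illustration of the daily service-chain generation step described above is given below; the services, stations, and minimum turnaround time are assumed values, and the greedy rule (extend a chain with the compatible service of least connection time) is a simplification of the paper's heuristic decomposition.

```python
# Illustrative greedy daily service-chain builder for EMU circulation.
MIN_TURNAROUND = 20  # minutes (assumed)

services = [  # (id, dep_station, dep_time, arr_station, arr_time) in minutes
    ("G1", "A", 8 * 60,       "B", 10 * 60),
    ("G2", "B", 10 * 60 + 40, "A", 12 * 60 + 40),
    ("G3", "A", 13 * 60,      "C", 15 * 60),
    ("G4", "B", 11 * 60 + 30, "C", 13 * 60 + 30),
]

def build_chains(services):
    remaining = sorted(services, key=lambda s: s[2])   # by departure time
    chains = []
    while remaining:
        chain = [remaining.pop(0)]
        while True:
            _, _, _, arr_st, arr_t = chain[-1]
            candidates = [s for s in remaining
                          if s[1] == arr_st and s[2] >= arr_t + MIN_TURNAROUND]
            if not candidates:
                break
            nxt = min(candidates, key=lambda s: s[2] - arr_t)  # least connection time
            chain.append(nxt)
            remaining.remove(nxt)
        chains.append([s[0] for s in chain])
    return chains

print(build_chains(services))   # -> [['G1', 'G2', 'G3'], ['G4']]
```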
23 pages, 3820 KiB  
Article
A Fundamental Statistics Self-Learning Method with Python Programming for Data Science Implementations
by Prismahardi Aji Riyantoko, Nobuo Funabiki, Komang Candra Brata, Mustika Mentari, Aviolla Terza Damaliana and Dwi Arman Prasetya
Information 2025, 16(7), 607; https://doi.org/10.3390/info16070607 - 15 Jul 2025
Abstract
The increasing demand for data-driven decision making to maintain the innovation and competitiveness of organizations highlights the need for data science education across academia and industry. At its core is a solid understanding of statistics, which is necessary for conducting a thorough analysis of data and deriving valuable insights. Unfortunately, conventional statistics learning often lacks practice in real-world applications using computer programs, causing a separation between conceptual knowledge of statistical equations and hands-on skills. Integrating statistics learning into Python programming can offer an effective solution to this problem, as Python has become essential in data science implementations, with extensive and versatile libraries. In this paper, we present a self-learning method for fundamental statistics through Python programming for data science studies. Unlike conventional approaches, our method integrates three types of interactive problems—element fill-in-blank problem (EFP), grammar-concept understanding problem (GUP), and value trace problem (VTP)—in the Programming Learning Assistant System (PLAS). This combination allows students to write code, understand concepts, and trace output values while obtaining instant feedback, so that they can improve retention, knowledge, and practical skills in learning statistics using Python programming. For evaluation, we generated 22 instances from source codes on fundamental statistics topics and assigned them to 40 first-year undergraduate students at UPN Veteran Jawa Timur, Indonesia. Statistical analysis methods were used to analyze the students' learning performance. The results show a significant difference (p < 0.05) between the students who solved our problems and those who did not, confirming that the proposed method can effectively assist students in the self-learning of fundamental statistics using Python programming for data science implementations. Full article
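As an illustration of the kind of fundamental-statistics code such exercises are generated from (not one of the paper's 22 instances), a short descriptive-statistics script in Python might look like this.

```python
# Descriptive statistics of a small sample, the kind of fundamental code that
# fill-in-blank, concept-understanding, and value-trace problems can be built from.
import statistics as st

data = [4, 8, 6, 5, 3, 7, 8, 9]

mean = st.mean(data)        # arithmetic mean
median = st.median(data)    # middle value
mode = st.mode(data)        # most frequent value
var = st.variance(data)     # sample variance (n - 1 denominator)
std = st.stdev(data)        # sample standard deviation

print(f"mean={mean:.2f} median={median} mode={mode} var={var:.2f} std={std:.2f}")
```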
15 pages, 1617 KiB  
Article
A Stochastic Optimization Model for Multi-Airport Flight Cooperative Scheduling Considering CvaR of Both Travel and Departure Time
by Wei Cong, Zheng Zhao, Ming Wei and Huan Liu
Aerospace 2025, 12(7), 631; https://doi.org/10.3390/aerospace12070631 - 14 Jul 2025
Abstract
By assuming that both travel and departure times are normally distributed variables, a multi-objective stochastic optimization model for the multi-airport flight cooperative scheduling problem (MAFCSP) with CvaR of travel and departure time is first proposed. Herein, conflicts between flights from different airports at the same waypoint can be avoided by simultaneously assigning an optimal route to each flight between the airport and waypoint and determining its practical departure time. Furthermore, several real-world constraints, including the safe interval between any two aircraft at the same waypoint and the maximum allowable delay for each flight, have been incorporated into the proposed model. The primary objective is the minimization of both total carbon emissions and delay times for all flights across all airports. A feasible set of non-dominated solutions was obtained using an NSGA-II based on a two-stage heuristic approach. Finally, we present a case study of four airports and three waypoints in the Beijing–Tianjin–Hebei region of China to validate the proposed model. Full article
(This article belongs to the Special Issue Flight Performance and Planning for Sustainable Aviation)
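Because travel and departure times are modeled as normally distributed, their CVaR (conditional value-at-risk) has a standard closed form; the sketch below evaluates that formula for an assumed mean, standard deviation, and confidence level, as an illustration rather than the paper's model.

```python
# CVaR (expected shortfall) of a normally distributed delay: for X ~ N(mu, sigma^2)
# and confidence level alpha, CVaR_alpha = mu + sigma * phi(z_alpha) / (1 - alpha),
# where z_alpha is the alpha-quantile of the standard normal and phi its density.
import math
from statistics import NormalDist

def normal_cvar(mu, sigma, alpha=0.95):
    z = NormalDist().inv_cdf(alpha)                        # standard-normal quantile
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard-normal density
    return mu + sigma * phi / (1 - alpha)

# Assumed example: mean travel time of 20 min with a 4 min standard deviation.
print(f"CVaR at alpha=0.95: {normal_cvar(20, 4):.2f} min")
```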
28 pages, 1051 KiB  
Article
Probabilistic Load-Shedding Strategy for Frequency Regulation in Microgrids Under Uncertainties
by Wesley Peres, Raphael Paulo Braga Poubel and Rafael Alipio
Symmetry 2025, 17(7), 1125; https://doi.org/10.3390/sym17071125 - 14 Jul 2025
Abstract
This paper proposes a novel integer-mixed probabilistic optimal power flow (IM-POPF) strategy for frequency regulation in islanded microgrids under uncertain operating conditions. Existing load-shedding approaches face critical limitations: continuous frameworks fail to reflect the discrete nature of actual load disconnections, while deterministic models inadequately capture the stochastic behavior of renewable generation and load variations. The proposed approach formulates load shedding as an integer optimization problem where variables are categorized as integer (load disconnection decisions at specific nodes) and continuous (voltages, power generation, and steady-state frequency), better reflecting practical power system operations. The key innovation combines integer load-shedding optimization with efficient uncertainty propagation through Unscented Transformation, eliminating the computational burden of Monte Carlo simulations while maintaining accuracy. Load and renewable uncertainties are modeled as normally distributed variables, and probabilistic constraints ensure operational limits compliance with predefined confidence levels. The methodology integrates Differential Evolution metaheuristics with Unscented Transformation for uncertainty propagation, requiring only 137 deterministic evaluations compared to 5000 for Monte Carlo methods. Validation on an IEEE 33-bus radial distribution system configured as an islanded microgrid demonstrates significant advantages over conventional approaches. Results show 36.5-fold computational efficiency improvement while achieving 95.28% confidence level compliance for frequency limits, compared to only 50% for deterministic methods. The integer formulation requires minimal additional load shedding (21.265%) compared to continuous approaches (20.682%), while better aligning with the discrete nature of real-world operational decisions. The proposed IM-POPF framework successfully minimizes total load shedding while maintaining frequency stability under uncertain conditions, providing a computationally efficient solution for real-time microgrid operation. Full article
(This article belongs to the Special Issue Symmetry and Distributed Power System)
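The Unscented Transformation step mentioned above propagates a Gaussian input through a nonlinear function with a small deterministic set of sigma points instead of thousands of Monte Carlo samples. The sketch below shows the standard scaled construction; the toy nonlinearity and parameter values are assumptions standing in for the power-flow evaluation.

```python
# Standard unscented transformation: propagate mean m and covariance P of a
# Gaussian input through a nonlinear function g using 2n+1 sigma points.
import numpy as np

def unscented_transform(g, m, P, alpha=1.0, beta=2.0, kappa=1.0):
    n = len(m)
    lam = alpha**2 * (n + kappa) - n              # kappa = 3 - n is a common choice
    S = np.linalg.cholesky((n + lam) * P)         # matrix square root
    sigma = np.vstack([m, m + S.T, m - S.T])      # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([g(x) for x in sigma])
    mean = wm @ y
    cov = sum(w * np.outer(yi - mean, yi - mean) for w, yi in zip(wc, y))
    return mean, cov

# Toy nonlinearity standing in for a power-flow / frequency evaluation.
g = lambda x: np.array([x[0] ** 2 + np.sin(x[1])])
m = np.array([1.0, 0.5])
P = np.diag([0.04, 0.09])
print(unscented_transform(g, m, P))
```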
20 pages, 1392 KiB  
Article
The Environmental Impact of Inland Empty Container Movements Within Two-Depot Systems
by Alaa Abdelshafie, May Salah and Tomaž Kramberger
Appl. Sci. 2025, 15(14), 7848; https://doi.org/10.3390/app15147848 - 14 Jul 2025
Abstract
Inefficient inland repositioning of empty containers between depots remains a persistent challenge in container logistics, contributing significantly to unnecessary truck movements, elevated operational costs, and increased CO2 emissions. Reflecting the importance of this problem, a large body of relevant literature has appeared. The objective of this paper is to track the empty container flow between ports, empty depots, inland terminals, and customer premises. Additionally, it aims to simulate and assess CO2 emissions, capturing the dynamic interactions between different agents. In this study, agent-based modeling (ABM) was proposed to simulate the empty container movements with an emphasis on inland transportation. ABM is an emerging approach that is increasingly used to simulate complex economic systems and artificial market behaviours. NetLogo was used to incorporate real-world geographic data, quantify CO2 emissions based on truckload status, and evaluate other operational aspects. BehaviorSpace was also utilized to systematically conduct multiple simulation experiments, varying parameters to analyze different scenarios. The results of the study show that customer demand frequency plays a crucial role in system efficiency, affecting container availability and logistical tension. Full article
(This article belongs to the Special Issue Green Transportation and Pollution Control)
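A very small sketch of the emission-accounting idea (CO2 charged per kilometre, with loaded and empty trips weighted differently) is shown below; the emission factors, distances, and trip list are illustrative assumptions, not values from the study.

```python
# Toy CO2 accounting for truck moves between two depots: emissions are charged
# per kilometre, with an assumed lower factor for empty (lighter) trips.
EMISSION_KG_PER_KM = {"loaded": 0.9, "empty": 0.7}   # illustrative factors only

trips = [  # (origin, destination, distance_km, load_status)
    ("port",    "depot_A",  40, "loaded"),
    ("depot_A", "port",     40, "empty"),
    ("depot_A", "depot_B",  25, "empty"),    # empty repositioning between depots
    ("depot_B", "customer", 15, "loaded"),
]

total = sum(km * EMISSION_KG_PER_KM[status] for _, _, km, status in trips)
empty = sum(km * EMISSION_KG_PER_KM[s] for _, _, km, s in trips if s == "empty")
print(f"total CO2: {total:.1f} kg, share from empty moves: {empty / total:.0%}")
```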
24 pages, 19550 KiB  
Article
TMTS: A Physics-Based Turbulence Mitigation Network Guided by Turbulence Signatures for Satellite Video
by Jie Yin, Tao Sun, Xiao Zhang, Guorong Zhang, Xue Wan and Jianjun He
Remote Sens. 2025, 17(14), 2422; https://doi.org/10.3390/rs17142422 - 12 Jul 2025
Abstract
Atmospheric turbulence severely degrades high-resolution satellite videos through spatiotemporally coupled distortions, including temporal jitter, spatial-variant blur, deformation, and scintillation, thereby constraining downstream analytical capabilities. Restoring turbulence-corrupted videos poses a challenging ill-posed inverse problem due to the inherent randomness of turbulent fluctuations. While existing turbulence mitigation methods for long-range imaging demonstrate partial success, they exhibit limited generalizability and interpretability in large-scale satellite scenarios. Inspired by refractive-index structure constant (Cn²) estimation from degraded sequences, we propose a physics-informed turbulence signature (TS) prior that explicitly captures spatiotemporal distortion patterns to enhance model transparency. Integrating this prior into a lucky imaging framework, we develop a Physics-Based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) to disentangle atmospheric disturbances from satellite videos. The framework employs deformable attention modules guided by turbulence signatures to correct geometric distortions, iterative gated mechanisms for temporal alignment stability, and adaptive multi-frame aggregation to address spatially varying blur. Comprehensive experiments on synthetic and real-world turbulence-degraded satellite videos demonstrate TMTS’s superiority, achieving 0.27 dB PSNR and 0.0015 SSIM improvements over the DATUM baseline while maintaining practical computational efficiency. By bridging turbulence physics with deep learning, our approach provides both performance enhancements and interpretable restoration mechanisms, offering a viable solution for operational satellite video processing under atmospheric disturbances. Full article
20 pages, 568 KiB  
Article
Non-Parametric Inference for Multi-Sample of Geometric Processes with Application to Multi-System Repair Process Modeling
by Ömer Altındağ
Mathematics 2025, 13(14), 2260; https://doi.org/10.3390/math13142260 - 12 Jul 2025
Abstract
The geometric process is a significant monotonic stochastic process widely used in the fields of applied probability, particularly in the failure analysis of repairable systems. For repairable systems modeled by a geometric process, accurate estimation of model parameters is essential. The inference problem for geometric processes has been well-studied in the case of single-sample data. However, multi-sample data may arise when the repair processes of multiple systems are observed simultaneously. This study addresses the non-parametric inference problem for geometric processes based on multi-sample data. Several non-parametric estimators are proposed using the linear regression method, and their asymptotic properties are established. In addition, test statistics are introduced to assess sample homogeneity and to evaluate the significance of the trend observed in the process. The performance of the proposed estimators is evaluated through a comprehensive simulation study under small-sample settings. An artificial data analysis is conducted to model the repair processes of multiple repairable systems using the geometric process. Furthermore, a real-world dataset consisting of multi-sample failure data from two shared memory processors of the Blue Mountain supercomputer is analyzed to demonstrate the practical applicability of the method in multi-sample failure data analysis. Full article
(This article belongs to the Section D1: Probability and Statistics)
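The linear-regression estimator that underlies non-parametric inference for a geometric process can be sketched for the single-sample case as follows; the simulated data and ratio value are assumptions for illustration, and the multi-sample extensions and test statistics of the paper are not shown.

```python
# Non-parametric estimation of the geometric-process ratio `a` from one sample:
# if X_k = Y_k / a**(k-1) with i.i.d. Y_k, then ln X_k = ln Y_k - (k - 1) ln a,
# so the slope of a simple linear regression of ln X_k on (k - 1) estimates -ln a.
import math
import random

def estimate_ratio(x):
    t = list(range(len(x)))                      # k - 1 = 0, 1, 2, ...
    y = [math.log(v) for v in x]
    tm, ym = sum(t) / len(t), sum(y) / len(y)
    slope = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
             / sum((ti - tm) ** 2 for ti in t))
    return math.exp(-slope)                      # estimate of a

# Simulate a geometric process with a = 1.05 (deteriorating system), toy data only.
random.seed(1)
a = 1.05
x = [random.expovariate(1.0) / a ** k for k in range(200)]
print(f"estimated ratio a = {estimate_ratio(x):.3f}")   # close to 1.05
```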