Search Results (68)

Search Parameters:
Keywords = FIFO

19 pages, 1501 KiB  
Article
Re-Designing Business Process Models for Enhancing Sustainability in Spinach Production Through Lean Tools with Digital Transformation
by Juan Diego Guerra, Greisy Palomino, Orkun Yildiz, Iliana Araceli Macassi and Jose C. Alvarez
Sustainability 2025, 17(13), 5673; https://doi.org/10.3390/su17135673 - 20 Jun 2025
Viewed by 518
Abstract
This study addresses rising sustainability demands in the agro-industry by examining how data-driven approaches can reduce inefficiencies, waste, and poor resource use in spinach production. It investigates the impact of Total Productive Maintenance (TPM), First-In–First-Out (FIFO), process standardization, and circular economy practices—enhanced through digital transformation—on operational efficiency in a Peruvian agro-industrial firm. An exploratory case study was conducted using pilot implementations, direct observation, and quantitative analysis. Statistical tools, including Holt–Winters forecasting, were applied to assess the effectiveness of the interventions. Digital technologies supported data collection, traceability, and decision-making. The integration of digital tools with lean and circular practices supports sustainable agro-industrial supply chains, contributing to food security and socio-economic resilience. This research offers a holistic, data-driven framework that aligns operational excellence with sustainability and digital innovation. Findings are based on a single case, limiting their generalizability. Broader applications and long-term effects warrant further study. Practitioners should adopt systems-thinking approaches integrating digital, lean, and circular strategies. Future research should explore scalability, cost-efficiency, and policy support mechanisms.
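The Holt–Winters forecasting mentioned in this abstract can be sketched with a minimal additive-seasonality implementation. This is illustrative only; the smoothing parameters and the toy demand series are assumptions, not values from the study.

```python
def holt_winters_additive(series, season_len, alpha=0.3, beta=0.1, gamma=0.2, horizon=4):
    """Additive Holt-Winters: smooth level, trend and seasonal components."""
    # Initialise level from the first season, trend from the first two seasons.
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len]) - sum(series[:season_len])) / season_len**2
    seasonal = [series[i] - level for i in range(season_len)]
    for t, y in enumerate(series):
        s = seasonal[t % season_len]
        last_level = level
        level = alpha * (y - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % season_len] = gamma * (y - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + seasonal[(len(series) + h) % season_len]
            for h in range(horizon)]

# Toy demand with a clear 4-period seasonal cycle (assumed data)
demand = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]
print(holt_winters_additive(demand, season_len=4))
```

In practice a library implementation (e.g. statsmodels) would fit the smoothing parameters rather than fixing them by hand.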

13 pages, 1936 KiB  
Protocol
Rapid and Efficient DNA Extraction Protocol from Peruvian Native Cotton (Gossypium barbadense L.) Lambayeque, Peru
by Luis Miguel Serquén Lopez, Herry Lloclla Gonzales, Wilmer Enrique Vidaurre Garcia, Ricardo Leonidas de Jesus Velez Chicoma and Mendoza Cornejo Greta
Methods Protoc. 2025, 8(3), 50; https://doi.org/10.3390/mps8030050 - 7 May 2025
Viewed by 676
Abstract
Efficient extraction of high-quality DNA from plants is a critical challenge in molecular research, especially in species such as Gossypium barbadense L., native to Peru, due to the presence of inhibitors such as polysaccharides and phenolic compounds. This study presents a modified CTAB-based protocol with silica columns that is designed to overcome these limitations without the need for liquid nitrogen or expensive reagents. Native cotton samples were collected in Lambayeque, Peru, and processed using a simplified procedure that optimizes the purity and concentration of the extracted DNA. Eight cultivars of G. barbadense L. with colored fibers (cream, fifo, light brown, dark brown, orange-brown, reddish, fine reddish, and white) were evaluated, yielding DNA with A260/A280 ratios between 2.14 and 2.19 and A260/A230 ratios between 1.8 and 3.14; these values are higher than those obtained with the classical CTAB method. DNA quality was validated by PCR amplification using ISSR and RAPD molecular markers, which yielded clear and well-defined banding patterns. Furthermore, the extracted DNA was suitable for advanced applications, such as Sanger sequencing, by which high-quality electropherograms were obtained. The results demonstrate that the proposed protocol is an efficient, economical, and adaptable alternative for laboratories with limited resources, allowing the extraction of high-quality DNA from Gossypium barbadense L. and other plant species. This simplified approach facilitates the development of genetic and biotechnological research, contributing to the knowledge and valorization of the genetic resources of Peruvian native cotton.
(This article belongs to the Section Molecular and Cellular Biology)

23 pages, 3872 KiB  
Article
A Deep Reinforcement Learning and Graph Convolution Approach to On-Street Parking Search Navigation
by Xiaohang Zhao and Yangzhi Yan
Sensors 2025, 25(8), 2389; https://doi.org/10.3390/s25082389 - 9 Apr 2025
Cited by 1 | Viewed by 751
Abstract
Efficient parking distribution is crucial for urban traffic management; nevertheless, variable demand and spatial disparities pose considerable obstacles. Current research emphasizes local optimization but neglects the fundamental challenges of real-time parking allocation, resulting in inefficiencies within intricate metropolitan settings. This research delineates two key issues: (1) a dynamic imbalance between supply and demand, characterized by considerable fluctuations in parking demand over time and across different locations, rendering static allocation solutions inefficient; (2) spatial resource optimization, aimed at maximizing the efficiency of limited parking spots to improve overall system performance and user satisfaction. We present a Multi-Agent Reinforcement Learning (MARL) framework that incorporates adaptive optimization and intelligent collaboration for dynamic parking allocation to tackle these difficulties. A reinforcement learning-driven temporal decision mechanism modifies parking assignments according to real-time data, whilst a Graph Neural Network (GNN)-based spatial model elucidates inter-parking relationships to enhance allocation efficiency. Experiments utilizing actual parking data from Melbourne illustrate that MARL substantially surpasses conventional methods (FIFO, SIRO) in managing demand variability and optimizing resource distribution. A thorough quantitative investigation confirms the strength and flexibility of the suggested method in various urban contexts.
(This article belongs to the Section Intelligent Sensors)
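The FIFO and SIRO (service in random order) baselines named in the abstract above differ mainly in how waiting time is distributed across customers. A small single-server simulation contrasts the two disciplines; the arrival and service rates are toy assumptions, not the paper's Melbourne data.

```python
import random

def simulate(arrivals, services, discipline, seed=0):
    """Single-server queue; `discipline` chooses which waiting job starts next."""
    rng = random.Random(seed)
    n = len(arrivals)
    waiting, waits, t, i = [], [], 0.0, 0
    while len(waits) < n:
        while i < n and arrivals[i] <= t:   # admit everyone who has arrived
            waiting.append(i)
            i += 1
        if not waiting:                     # idle server: jump to next arrival
            t = arrivals[i]
            continue
        if discipline == "FIFO":
            k = waiting.pop(0)
        else:                               # SIRO: pick a waiting job at random
            k = waiting.pop(rng.randrange(len(waiting)))
        waits.append(t - arrivals[k])       # waiting time before service starts
        t += services[k]
    return waits

rng = random.Random(42)
arrivals = sorted(rng.uniform(0.0, 100.0) for _ in range(200))
services = [rng.expovariate(2.5) for _ in range(200)]
for disc in ("FIFO", "SIRO"):
    w = simulate(arrivals, services, disc)
    mean = sum(w) / len(w)
    var = sum((x - mean) ** 2 for x in w) / len(w)
    print(disc, round(mean, 2), round(var, 2))
```

Typically the mean waits are close while SIRO shows a larger wait-time variance, which is why it serves as a contrasting baseline.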

18 pages, 1821 KiB  
Article
Embedded Streaming Hardware Accelerators Interconnect Architectures and Latency Evaluation
by Cristian-Tiberius Axinte, Andrei Stan and Vasile-Ion Manta
Electronics 2025, 14(8), 1513; https://doi.org/10.3390/electronics14081513 - 9 Apr 2025
Viewed by 591
Abstract
In the age of hardware accelerators, increasing pressure is applied on computer architects and hardware engineers to improve the balance between the cost and benefits of specialized computing units, in contrast to more general-purpose architectures. The first part of this study presents the embedded Streaming Hardware Accelerator (eSAC) architecture. This architecture can reduce the idle time of specialized logic. The remainder of this paper explores the integration of an eSAC into a Central Processing Unit (CPU) core embedded inside a System-on-Chip (SoC) design, using the AXI-Stream protocol specification. The three evaluated architectures are the Tightly Coupled Streaming, Protocol Adapter FIFO, and Direct Memory Access (DMA) Streaming architectures. When comparing the tightly coupled architecture with the one including the DMA, the experiments in this paper show an almost 3× decrease in frame latency when using the DMA. Nevertheless, this comes at the price of an increase in FPGA resource utilization as follows: LUT (2.5×), LUTRAM (3×), FF (3.4×), and BRAM (1.2×). Four different test scenarios were run for the DMA architecture, showcasing the best and worst practices for data organization. The evaluation results highlight that poor data organization can lead to a more than 7× increase in latency. The CPU model was selected as the newly released MicroBlaze-V softcore processor. The designs presented herein successfully operate on a popular low-cost Field-Programmable Gate Array (FPGA) development board at 100 MHz. Block diagrams, FPGA resource utilization, and latency metrics are presented. Finally, based on the evaluation results, possible improvements are discussed.
(This article belongs to the Section Computer Science & Engineering)

21 pages, 1329 KiB  
Article
Solving Logistical Challenges in Raw Material Reception: An Optimization and Heuristic Approach Combining Revenue Management Principles with Scheduling Techniques
by Reinaldo Gomes, Ruxanda Godina Silva and Pedro Amorim
Mathematics 2025, 13(6), 919; https://doi.org/10.3390/math13060919 - 10 Mar 2025
Cited by 1 | Viewed by 664
Abstract
The cost of transportation of raw materials is a significant part of the procurement costs in the forestry industry. As a result, routing and scheduling techniques were introduced to the transportation of raw materials from extraction sites to transformation mills. However, little to no attention has been given to date to the material reception process at the mill. Another factor that motivated this study was the formation of large waiting queues at the mill gates and docks. Queues increase the reception time and associated costs. This work presents the development of a scheduling and reception system for deliveries at a mill. The scheduling system is based on Trucking Appointment Systems (TAS), commonly used at maritime ports, and on revenue management concepts. The developed system allocates each delivery to a timeslot and to an unloading dock using revenue management concepts. Each delivery is segmented according to its priority. Higher-segment deliveries have priority when there are multiple candidates to be allocated for one timeslot. The developed scheduling system was tested on a set of 120 daily deliveries at a Portuguese paper pulp mill and led to a reduction of 66% in the daily reception cost when compared to a first-in, first-out (FIFO) allocation approach. The average waiting time was also significantly reduced, especially in the case of high-priority trucks.
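The priority-segmented timeslot allocation described above can be sketched against a FIFO baseline. The toy deliveries, dock capacity, and priority-weighted waiting cost below are assumptions for illustration, not the paper's model.

```python
# Each delivery: (id, arrival_slot, priority); higher priority = more urgent
deliveries = [(0, 0, 1), (1, 0, 3), (2, 0, 2), (3, 1, 3), (4, 1, 1), (5, 1, 2)]
SLOT_CAPACITY = 2   # unloading docks available per timeslot (assumed)

def allocate(order):
    """Greedily assign deliveries to the earliest slot with free dock capacity."""
    load, assignment = {}, {}
    for d_id, arrival, _prio in order:
        slot = arrival
        while load.get(slot, 0) >= SLOT_CAPACITY:
            slot += 1                      # slot full: push to the next one
        load[slot] = load.get(slot, 0) + 1
        assignment[d_id] = slot
    return assignment

def cost(assignment):
    # waiting time (slots) weighted by the delivery's priority segment
    return sum((assignment[d] - a) * p for d, a, p in deliveries)

fifo = allocate(sorted(deliveries, key=lambda d: d[1]))            # arrival order
prio = allocate(sorted(deliveries, key=lambda d: (d[1], -d[2])))   # priority first
print(cost(fifo), cost(prio))
```

Serving higher-priority deliveries first shifts the unavoidable waiting onto low-priority trucks, lowering the weighted cost, which is the qualitative effect the paper reports.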

16 pages, 302 KiB  
Article
Understanding Suicide Stigma in Fly-In/Fly-Out Workers: A Thematic Analysis of Attitudes Towards Suicide, Help-Seeking and Help-Offering
by Jordan Jackson and Victoria Ross
Int. J. Environ. Res. Public Health 2025, 22(3), 395; https://doi.org/10.3390/ijerph22030395 - 7 Mar 2025
Viewed by 1273
Abstract
Background: Suicide is estimated to be the fourth leading cause of death globally, with those working in male-dominated industries such as mining and construction at higher risk than the general population. Research suggests this is due (in part) to stigma towards mental health. No research exists that has sought to understand the attitudes underpinning this stigma in the fly-in/fly-out (FIFO) industry. The current study, set in Australia, is the first of its kind to examine what specific stigmatised attitudes of FIFO workers exist towards suicide, help-seeking, and help-offering. Methods: Using convenience sampling, FIFO workers (n = 138) completed an online self-report survey. Results: General thematic analysis identified four major themes. Most salient was that fear of negative consequences for employment was a primary barrier to help-seeking and help-offering. Participants also expressed lack of trust in leadership and workplace mental health culture, lack of knowledge and confidence in responding to suicidality disclosure, and fear of negative reactions as barriers to help-seeking and help-offering behaviours. Conclusions: These findings present new and valuable insights into why FIFO workers are reluctant to seek or offer help for suicidality and have important implications for addressing systematic inadequacies within the sector that hinder disclosure of suicidal ideation and access to vital services.
(This article belongs to the Special Issue Mental Health and Wellbeing in High-Risk Occupational Groups)
18 pages, 2433 KiB  
Article
Biogas Production Modelling Based on a Semi-Continuous Feeding Operation in a Municipal Wastewater Treatment Plant
by Derick Lima, Li Li and Gregory Appleby
Energies 2025, 18(5), 1065; https://doi.org/10.3390/en18051065 - 22 Feb 2025
Viewed by 648
Abstract
Anaerobic digestion is a common method for treating sewage sludge in municipal wastewater treatment plants (WWTPs). However, modelling this process can be very challenging due to the complexity of biochemical reactions. This paper presents a novel methodology that estimates biogas production from sewage sludge by considering the semi-continuous sludge-feeding process of the digester. In most WWTPs, the sewage sludge treatment operates in a dynamic process; therefore, using a time-dependent tool that represents this dynamic process is essential for accurately representing biogas production. The biogas production results from the proposed model are compared against the historical data for a large-scale WWTP located in Sydney, Australia. The proposed model shows great accuracy and follows the historical data trend very precisely. The average biogas production based on historical data for 2020, 2021, and 2022 was 37,337 m3/d, 31,695 m3/d, and 23,350 m3/d, whereas for the proposed model, it was 37,960 m3/d, 30,465 m3/d, and 23,080 m3/d. Over the three-year period, the average biogas production was 30,794 m3/d for historical data and 30,503 m3/d for the proposed model, which shows a great level of accuracy (R2 of 0.85 and average error of 4.64%) on the results of the proposed model and WWTP historical data.
(This article belongs to the Special Issue Energy from Agricultural and Forestry Biomass Waste)

13 pages, 3092 KiB  
Article
Modelling Systems with a Finite-Capacity Queue: A Theoretical Investigation into the Design and Operation of Real Systems
by Serban Raicu, Dorinela Costescu and Mihaela Popa
AppliedMath 2025, 5(1), 17; https://doi.org/10.3390/appliedmath5010017 - 13 Feb 2025
Viewed by 735
Abstract
This study investigates M/M/n:(m/FIFO) systems with a limited queue capacity (incorporating both “waiting and rejection”). This category of systems can be considered mixed-service systems: they operate as queuing systems for customers admitted to the system awaiting service, and as rejection (loss) systems for customers who are denied entry when the system is full (when all servers and the buffer capacity are occupied). The correlation between the system size and a set of performance measures is analysed for the given arrival and service rates. The system size is determined based on a threshold rate of rejected customers. The correlation between the buffer size and the utilisation factor has direct relevance in the design of real systems (e.g., when the dynamics of the arrival rate can be estimated, it provides a solution for phasing the building of physical waiting places for a specific service capacity). In addition, the analysis of customer rejection probability and average waiting time as a function of the effective utilisation factor could yield practical insights for designing and operating real systems. The second part of this study presents a model for optimising the size of a multi-server system with a finite queue capacity. Initially, the number of servers is determined, assuming that the existing situation does not allow for an increase in the buffer capacity. Then, the case in which both server and buffer capacities become decision variables is presented. The operating losses (which are more straightforward to measure than the related costs) are used as an optimisation criterion.
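The M/M/n queue with a finite buffer of size m has a standard birth-death steady state, from which the rejection probability and the mean number waiting follow directly. A sketch of that textbook computation (the parameter values are arbitrary):

```python
from math import factorial

def mmn_m_metrics(lam, mu, n, m):
    """Steady-state of an M/M/n queue with finite buffer m (capacity n + m).

    Returns (rejection probability, mean number waiting in the buffer)."""
    a = lam / mu                       # offered load in Erlangs
    rho = lam / (n * mu)               # per-server utilisation
    # Unnormalised probabilities: Erlang terms while servers fill,
    # then a geometric tail while the buffer fills.
    unnorm = [a**k / factorial(k) for k in range(n + 1)]
    unnorm += [unnorm[n] * rho**(k - n) for k in range(n + 1, n + m + 1)]
    Z = sum(unnorm)
    p = [u / Z for u in unnorm]
    p_reject = p[n + m]                                # arrival finds system full
    Lq = sum((k - n) * p[k] for k in range(n + 1, n + m + 1))
    return p_reject, Lq

pr, lq = mmn_m_metrics(lam=4.0, mu=1.0, n=5, m=3)
print(round(pr, 4), round(lq, 4))
```

Sweeping m against a target rejection rate reproduces the kind of buffer-sizing trade-off the abstract describes.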

20 pages, 899 KiB  
Article
Boundary-Aware Concurrent Queue: A Fast and Scalable Concurrent FIFO Queue on GPU Environments
by Md. Sabbir Hossain Polak, David A. Troendle and Byunghyun Jang
Appl. Sci. 2025, 15(4), 1834; https://doi.org/10.3390/app15041834 - 11 Feb 2025
Viewed by 1090
Abstract
This paper presents Boundary-Aware Concurrent Queue (BACQ), a high-performance queue designed for modern GPUs, which focuses on high concurrency in massively parallel environments. BACQ operates at the warp level, leveraging intra-warp locality to improve throughput. A key to BACQ’s design is its ability to replace conflicting accesses to shared data with independent accesses to private data. It uses a ticket-based system to ensure fair ordering of operations and supports infinite growth of the head and tail across its ring buffer. The leader thread of each warp coordinates enqueue and dequeue operations, broadcasting offsets for intra-warp synchronization. BACQ dynamically adjusts operation priorities based on the queue’s state, especially as it approaches boundary conditions such as overfilling the buffer. It also uses a virtual caching layer for intra-warp communication, reducing memory latency. Rigorous benchmarking results show that BACQ outperforms the BWD (Broker Queue Work Distributor), the fastest known GPU queue, by more than 2× while preserving FIFO semantics. The paper demonstrates BACQ’s superior performance through real-world empirical evaluations.
(This article belongs to the Special Issue Data Structures for Graphics Processing Units (GPUs))
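The ticket-based ordering over a ring buffer with unbounded head and tail growth can be illustrated with a sequential sketch. This models only the FIFO semantics, not the warp-level GPU concurrency or the boundary-aware priority adjustment.

```python
class TicketRingQueue:
    """FIFO ring buffer whose head/tail tickets grow without bound;
    the physical slot for a ticket is ticket % capacity."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0       # next ticket to dequeue
        self.tail = 0       # next ticket to enqueue

    def enqueue(self, item):
        if self.tail - self.head == self.capacity:
            return False                       # full: the boundary condition
        self.buf[self.tail % self.capacity] = item
        self.tail += 1
        return True

    def dequeue(self):
        if self.head == self.tail:
            return None                        # empty
        item = self.buf[self.head % self.capacity]
        self.head += 1
        return item

q = TicketRingQueue(4)
for x in range(6):
    q.enqueue(x)            # items 4 and 5 are rejected: buffer full
print([q.dequeue() for _ in range(5)])   # → [0, 1, 2, 3, None]
```

On a GPU, head and tail become atomically fetched tickets and the warp leader batches them, but the modular-slot arithmetic is the same.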

21 pages, 666 KiB  
Article
An Innovative Priority Queueing Strategy for Mitigating Traffic Congestion in Complex Networks
by Ganhua Wu
Mathematics 2025, 13(3), 495; https://doi.org/10.3390/math13030495 - 2 Feb 2025
Cited by 2 | Viewed by 944
Abstract
Optimizing transportation in both natural and engineered systems, particularly within complex network environments, has become a pivotal area of research. Traditional methods for mitigating congestion primarily focus on routing strategies that utilize first-in-first-out (FIFO) queueing disciplines to determine the processing order of packets in buffer queues. However, these approaches often fail to explore the benefits of incorporating priority mechanisms directly within the routing decision-making processes, leaving significant room for improvement in congestion management. This study introduces an innovative generalized priority queueing (GPQ) strategy, specifically designed as an enhancement to existing FIFO-based routing methods. It is important to note that GPQ is not a new queue scheduling algorithm (e.g., deficit round robin (DRR) or weighted fair queuing (WFQ)), which typically manage multiple queues in broader queue management scenarios. Instead, GPQ integrates a dynamic priority-based mechanism into the routing layer, allowing the routing function to adaptively prioritize packets within a single buffer queue based on network conditions and packet attributes. By focusing on the routing strategy itself, GPQ improves the process of selecting packets for forwarding, thereby optimizing congestion management across the network. The effectiveness of the GPQ strategy is evaluated through extensive simulations on single-layer, two-layer, and dynamic networks. The results demonstrate significant improvements in key performance metrics, such as network throughput and average packet delay, when compared to traditional FIFO-based routing methods. These findings underscore the versatility and robustness of the GPQ strategy, emphasizing its capability to enhance network efficiency across diverse topologies and configurations. By addressing the inherent limitations of FIFO-based routing strategies and proposing a generalized yet scalable enhancement, this study makes a notable contribution to network optimization. The GPQ strategy provides a practical and adaptable solution for improving transportation efficiency in complex networks, bridging the gap between conventional routing techniques and emerging demands for dynamic congestion management.
(This article belongs to the Section E1: Mathematics and Computer Science)
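The single-buffer priority selection that GPQ adds to the routing layer can be sketched as follows. The scoring rule here (packet age plus a weight on remaining hops) is an assumption for illustration, not the paper's exact priority function.

```python
class SingleBufferGPQ:
    """One buffer per node; the routing layer pops the highest-priority packet
    instead of the oldest one (FIFO would always pop the oldest)."""
    def __init__(self, age_weight=1.0, hop_weight=2.0):
        self.buffer = []                 # (arrival_time, hops_remaining, packet)
        self.age_weight, self.hop_weight = age_weight, hop_weight

    def push(self, now, hops_remaining, packet):
        self.buffer.append((now, hops_remaining, packet))

    def pop(self, now):
        if not self.buffer:
            return None
        # Assumed score: older packets and packets with a long way to go win.
        score = lambda e: self.age_weight * (now - e[0]) + self.hop_weight * e[1]
        best = max(self.buffer, key=score)
        self.buffer.remove(best)
        return best[2]

q = SingleBufferGPQ()
q.push(0, 1, "A")      # older packet, almost at its destination
q.push(1, 4, "B")      # newer packet with many hops remaining
print(q.pop(now=2))    # → B  (score 1*1 + 2*4 = 9 beats A's 1*2 + 2*1 = 4)
```

Setting `hop_weight=0` and breaking ties by age recovers plain FIFO, which is why GPQ can be described as a generalization of FIFO-based routing.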

32 pages, 12626 KiB  
Article
Strategies for Workplace EV Charging Management
by Natascia Andrenacci, Antonino Genovese and Giancarlo Giuli
Energies 2025, 18(2), 421; https://doi.org/10.3390/en18020421 - 19 Jan 2025
Viewed by 1260
Abstract
Electric vehicles (EVs) help reduce transportation emissions. A user-friendly charging infrastructure and efficient charging processes can promote their wider adoption. Low-power charging is effective for short-distance travel, especially when vehicles are parked for extended periods, like during daily commutes. These idle times present opportunities to improve coordination between EVs and service providers to meet charging needs. The present study examines strategies for coordinated charging in workplace parking lots to minimize the impact on the power grid while maximizing the satisfaction of charging demand. Our method utilizes a heuristic approach for EV charging, focusing on event logic that considers arrival and departure times and energy requirements. We compare various charging management methods in a workplace parking lot against a first-in-first-out (FIFO) strategy. Using real data on workplace parking lot usage, the study found that efficient electric vehicle charging in a parking lot can be achieved either through optimized scheduling with a single high-power charger, requiring user cooperation, or by installing multiple chargers with alternating sockets. Compared to FIFO charging, the implemented strategies allow for a reduction in the maximum charging power of between 30 and 40%, a charging demand satisfaction rate of 99%, and a minimum SOC of 83%.
(This article belongs to the Special Issue Future Smart Energy for Electric Vehicle Charging)
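The peak-power benefit of coordinated charging can be sketched with a toy comparison: an uncoordinated charge-on-arrival baseline (a simplification of the FIFO comparison above) versus spreading each car's energy over its whole stay. The car data and the 7 kW per-socket limit are assumptions.

```python
# Toy workplace lot: (arrival_h, departure_h, energy_kWh)
cars = [(8, 17, 18), (8, 16, 12), (9, 17, 24), (9, 15, 10)]
P_MAX = 7.0   # kW per connected vehicle (assumed charger limit)

def profile(cars, rate_fn, hours=24):
    """Hourly site power when each car charges at rate_fn(arr, dep, kwh)."""
    grid = [0.0] * hours
    for arr, dep, kwh in cars:
        rate = rate_fn(arr, dep, kwh)
        end = arr + kwh / rate               # car stops once its energy is met
        for h in range(hours):
            overlap = max(0.0, min(h + 1, end) - max(h, arr))
            grid[h] += rate * overlap        # average power drawn in hour h
    return grid

fifo = profile(cars, lambda a, d, k: P_MAX)          # full power on arrival
sched = profile(cars, lambda a, d, k: k / (d - a))   # spread over the stay
print(round(max(fifo), 1), round(max(sched), 1))     # peak site power, kW
```

Spreading demand over each car's dwell time flattens the aggregate profile, which is the effect behind the 30–40% peak reduction reported in the abstract.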

16 pages, 1091 KiB  
Article
A Hybrid Honey Badger Algorithm to Solve Energy-Efficient Hybrid Flow Shop Scheduling Problems
by M. Geetha, R. Chandra Guru Sekar and M. K. Marichelvam
Processes 2025, 13(1), 174; https://doi.org/10.3390/pr13010174 - 9 Jan 2025
Cited by 1 | Viewed by 1302
Abstract
A well-planned schedule is essential to any organization’s growth. Thus, it is important for the literature to cover a more comprehensive range of scheduling problems. In this paper, energy-efficient hybrid flow shop (EEHFS) scheduling problems are considered. Researchers have developed several techniques to deal with EEHFS scheduling problems, and several metaheuristics have been proposed recently. The Honey Badger Algorithm (HBA) is one of the most recent algorithms proposed to solve various optimization problems. The objective of the present work is to solve EEHFS scheduling problems using a Hybrid Honey Badger Algorithm (HHBA) to reduce the makespan (Cmax) and total energy cost (TEC). In the HHBA, the constructive NEH heuristic is incorporated into the Honey Badger Algorithm. The suggested algorithm’s performance was verified using an actual industrial scheduling problem, and the company’s existing schedule was compared with that produced by the HHBA; the HHBA could potentially result in an 8% decrease in total energy cost. The proposed algorithm was then applied to solve 54 random benchmark problems, and its results were compared with the FIFO dispatching rule, the NEH heuristic, and metaheuristics addressed in the literature: the simulated annealing (SA) algorithm, the genetic algorithm (GA), the particle swarm optimization (PSO) algorithm, the HBA, and Ant Colony Optimization (ACO). Average percentage deviation (APD) was the performance measure used to compare the algorithms. The APD of the proposed HHBA was zero, indicating that the proposed HHBA is more effective in solving EEHFS scheduling problems.
(This article belongs to the Section Process Control and Monitoring)
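The NEH constructive heuristic mentioned above is a standard flow shop method: sort jobs by decreasing total processing time, then insert each job at the position that minimises the partial-sequence makespan. A minimal sketch on a toy instance (the processing times are assumed; hybrid flow shop details such as parallel machines per stage are omitted):

```python
def makespan(seq, proc):
    """Completion time of the last job on the last machine for sequence `seq`."""
    m = len(proc[0])
    done = [0] * m                    # completion time of the previous job per machine
    for j in seq:
        t = 0
        for k in range(m):
            t = max(done[k], t) + proc[j][k]   # wait for machine k and for stage k-1
            done[k] = t
    return done[-1]

def neh(proc):
    """NEH: longest-total-work-first ordering with best-position insertion."""
    order = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = []
    for j in order:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, proc))
    return seq

# jobs x machines processing times (toy instance)
proc = [[3, 4, 6], [5, 2, 3], [2, 6, 4], [4, 3, 5]]
best = neh(proc)
print(best, makespan(best, proc))
```

FIFO dispatching corresponds to evaluating the jobs in their given order, which is why NEH is a natural baseline improvement over it.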

14 pages, 809 KiB  
Article
Enhancing Radiologist Efficiency with AI: A Multi-Reader Multi-Case Study on Aortic Dissection Detection and Prioritization
by Martina Cotena, Angela Ayobi, Colin Zuchowski, Jacqueline C. Junn, Brent D. Weinberg, Peter D. Chang, Daniel S. Chow, Jennifer E. Soun, Mar Roca-Sogorb, Yasmina Chaibi and Sarah Quenet
Diagnostics 2024, 14(23), 2689; https://doi.org/10.3390/diagnostics14232689 - 28 Nov 2024
Cited by 1 | Viewed by 1936
Abstract
Background and Objectives: Acute aortic dissection (AD) is a life-threatening condition in which early detection can significantly improve patient outcomes and survival. This study evaluates the clinical benefits of integrating a deep learning (DL)-based application for the automated detection and prioritization of AD on chest CT angiographies (CTAs) with a focus on the reduction in the scan-to-assessment time (STAT) and interpretation time (IT). Materials and Methods: This retrospective Multi-Reader Multi-Case (MRMC) study compared AD detection with and without artificial intelligence (AI) assistance. The ground truth was established by two U.S. board-certified radiologists, while three additional expert radiologists served as readers. Each reader assessed the same CTAs in two phases: assessment unaided by AI assistance (pre-AI arm) and, after a 1-month washout period, assessment aided by device outputs (post-AI arm). STAT and IT metrics were compared between the two arms. Results: This study included 285 CTAs (95 per reader, per arm) with a mean patient age of 58.5 years ±14.7 (SD), of which 52% were male and 37% had a prevalence of AD. AI assistance significantly reduced the STAT for detecting 33 true positive AD cases from 15.84 min (95% CI: 13.37–18.31 min) without AI to 5.07 min (95% CI: 4.23–5.91 min) with AI, representing a 68% reduction (p < 0.01). The IT also reduced significantly from 21.22 s (95% CI: 19.87–22.58 s) without AI to 14.17 s (95% CI: 13.39–14.95 s) with AI (p < 0.05). Conclusions: The integration of a DL-based algorithm for AD detection on chest CTAs significantly reduces both the STAT and IT. By prioritizing urgent cases, the AI-assisted approach outperforms the standard First-In, First-Out (FIFO) workflow.

18 pages, 402 KiB  
Article
Shock Model of K/N: G Repairable Retrial System Based on Discrete PH Repair Time
by Xiaoyun Yu, Linmin Hu and Zebin Hu
Axioms 2024, 13(12), 814; https://doi.org/10.3390/axioms13120814 - 21 Nov 2024
Viewed by 677
Abstract
A discrete time modeling method is employed in this paper to analyze and evaluate the reliability of a discrete time K/N: G repairable retrial system with Bernoulli shocks and two-stage repair. Lifetime and shocks are two factors that lead to component failure, and both of them can lead to the simultaneous failure of multiple components. When the repairman is busy, the newly failed component enters retrial orbit and retries in accordance with the first-in-first-out (FIFO) rule to obtain the repair. The repairman provides two-stage repair for failed components, all of which require basic repair and some of which require optional repair. The discrete PH distribution controls the repair times for two stages. Based on discrete time stochastic model properties, priority rules are defined when multiple events occur simultaneously. The state transition probability matrix and state set analysis are used to evaluate the system performance indexes. Numerical experiments are used to illustrate the main performance indexes of the developed discrete time model, and the impact of each parameter variation on the system indexes is examined.
(This article belongs to the Special Issue Mathematical Modeling, Simulations and Applications)

16 pages, 1194 KiB  
Article
CAL: Core-Aware Lock for the big.LITTLE Multicore Architecture
by Shiqiang Nie, Yingming Liu, Jie Niu and Weiguo Wu
Appl. Sci. 2024, 14(15), 6449; https://doi.org/10.3390/app14156449 - 24 Jul 2024
Viewed by 1459
Abstract
The concept of “all cores are created equal” has been popular for several decades due to its simplicity and effectiveness in CPU (Central Processing Unit) design: the more cores a CPU has, the higher the performance, and the higher the power consumption. However, power saving is also a key goal for servers in data centers and for embedded devices (e.g., mobile phones). The big.LITTLE multicore architecture, which combines high-performance (big) cores and power-efficient (little) cores, has been developed by ARM (Advanced RISC Machine) and Intel to trade off performance and power efficiency. On this heterogeneous computing architecture, traditional lock algorithms, which were designed for homogeneous architectures, no longer work optimally and suffer performance problems caused by the difference between big and little cores. In our preliminary experiment, we observed that, in the big.LITTLE multicore architecture, all these lock algorithms exhibit sub-optimal performance. The FIFO-based (First In First Out) locks experience throughput degradation, while the performance of competition-based locks falls into two categories: one is big-core-friendly, so its tail latency increases significantly; the other is little-core-friendly, where not only does tail latency increase but throughput is also degraded. Motivated by this observation, we propose a Core-Aware Lock for the big.LITTLE multicore architecture named CAL, which gives each core an equal opportunity to access the critical section. The core idea of CAL is to take the slowdown ratio as the metric by which to reorder the lock requests of big and little cores. Evaluations on benchmarks and a real-world application, LevelDB, confirm that CAL achieves its fairness goals in heterogeneous computing architecture without sacrificing big-core performance. Compared to several traditional lock algorithms, CAL’s fairness increased by up to 67%, and its throughput is 26% higher than FIFO-based locks and 53% higher than competition-based locks. In addition, the tail latency of CAL is always kept at a low level.
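The slowdown-ratio reordering idea can be sketched in a few lines. Both the 2× little-core slowdown factor and the scoring rule below are illustrative assumptions, not the paper's measured values or exact policy.

```python
# Pending lock requests: (core_type, wait_time_us)
SLOWDOWN = {"big": 1.0, "little": 2.0}   # assumed relative slowdown of little cores

def next_holder(requests):
    """Grant the lock to the request with the largest normalised slowdown,
    so big and little cores make comparable progress through the lock."""
    return max(requests, key=lambda r: r[1] * SLOWDOWN[r[0]])

pending = [("big", 4.0), ("little", 3.0), ("big", 5.0)]
print(next_holder(pending))    # → ("little", 3.0): 3*2 = 6 beats 5 and 4
```

A plain FIFO lock would grant by arrival order regardless of core type; weighting waits by the slowdown factor is what lets a fairness-oriented lock favour the core that has effectively waited longest.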
