Search Results (1,362)

Search Parameters:
Keywords = mean time to failure

20 pages, 1376 KB  
Article
CNC Milling Optimization via Intelligent Algorithms: An AI-Based Methodology
by Emilia Campean and Grigore Pop
Machines 2026, 14(1), 89; https://doi.org/10.3390/machines14010089 - 11 Jan 2026
Abstract
Artificial intelligence (AI) is increasingly integrated into manufacturing processes, transforming conventional production such as CNC (Computer Numerical Control) machining. This study analyzes how large language models (LLMs), exemplified by ChatGPT, behave when tasked with G-code optimization for improving the surface quality and productivity of automotive metal parts, with emphasis on systematically documenting the failure modes and limitations that emerge when general-purpose AI encounters specialized manufacturing domains. Although conventional software programming remains essential for highly regulated sectors, free AI tools will see increasing use owing to their cost-effectiveness, adaptability, and continuous innovation, provided that sufficient technical expertise is available in-house. The experiment involved milling three identical parts using a Haas VF-3 SS CNC machine. The G-code was generated by SolidCam and optimized by ChatGPT according to user-specified criteria, with the aim of improving the quality of the part's surface and increasing productivity. The measurements were performed using an ISR C-300 Portable Surface Roughness Tester and Axiom Too 3D measuring equipment. The experiment revealed that while the AI-generated code achieved a 37% reduction in cycle time (from 2.39 to 1.45 min) and significantly improved surface roughness (Ra, the arithmetic mean deviation of the evaluated profile, decreased from 0.68 µm to 0.11 µm, an 84% improvement), it critically eliminated the pocket-milling operation, resulting in a non-conforming part. The AI optimization also removed essential safety features, including tool length compensation (G43/H codes) and return-to-safe-position commands (G28), which required manual intervention to prevent tool breakage and part damage. Critical analysis revealed that the ChatGPT failures stemmed from three factors: (1) a token-minimization bias in LLM training that led to removal of the longest code block (31% of the total code), (2) a lack of semantic understanding of machining geometry, and (3) the absence of manufacturing safety constraints in the AI model. This study demonstrates that current free AI tools such as ChatGPT can identify optimization opportunities but lack the contextual understanding and manufacturing safety protocols necessary for autonomous CNC programming in production environments, highlighting both the potential and the limitations of free AI software for CNC programming. Full article
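The manual intervention the abstract describes (catching the removal of G43/H and G28 before running the part) lends itself to a mechanical guard. A minimal sketch of such a check, assuming nothing beyond the two command families named in the abstract; the helper name, patterns, and sample programs are illustrative, not from the paper:

```python
import re

# Safety-critical commands the abstract reports the AI removed:
# G43 with an H offset (tool length compensation) and G28 (return to safe position).
REQUIRED_PATTERNS = {
    "tool length compensation": re.compile(r"\bG43\b.*\bH\d+", re.IGNORECASE),
    "return to safe position": re.compile(r"\bG28\b", re.IGNORECASE),
}

def missing_safety_commands(gcode: str) -> list[str]:
    """Return the names of required safety commands absent from a G-code program."""
    lines = gcode.splitlines()
    return [
        name
        for name, pattern in REQUIRED_PATTERNS.items()
        if not any(pattern.search(line) for line in lines)
    ]

# Hypothetical before/after programs (not the paper's actual G-code).
original = "G90 G54\nG43 H01 Z50.\nG01 X10. Y10. F500.\nG28 G91 Z0.\nM30"
optimized = "G90 G54\nG01 X10. Y10. F500.\nM30"  # AI stripped G43/H and G28

assert missing_safety_commands(original) == []
assert set(missing_safety_commands(optimized)) == {
    "tool length compensation", "return to safe position"}
```

A check like this flags only the presence of commands, not their correctness, so it complements rather than replaces the expert review the study argues for.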

11 pages, 516 KB  
Article
Avoiding Post-DMEK IOP Elevation: Insights from a Standardized Surgical Approach
by Stephanie D. Grabitz, Anna L. Engel, Mohammad Al Hariri, Adrian Gericke, Norbert Pfeiffer and Joanna Wasielica-Poslednik
J. Clin. Med. 2026, 15(2), 521; https://doi.org/10.3390/jcm15020521 - 8 Jan 2026
Viewed by 147
Abstract
Background: Descemet membrane endothelial keratoplasty (DMEK) is the most frequently performed keratoplasty procedure in many countries. One of the most common early complications is an elevation of intraocular pressure (IOP). The aim of this study was to characterize early postoperative IOP behavior following DMEK performed with 10% sulfur hexafluoride (SF6) tamponade and to determine the frequency and timing of required IOP-lowering interventions within the first 48 h. Methods: We retrospectively reviewed postoperative outcomes of 116 consecutive DMEK procedures between May and December 2024 at the University Medical Center in Mainz, Germany. No specific exclusion criteria were applied. All surgeries included a surgical iridectomy at the 6 o’clock position, 10% SF6 tamponade, and maintenance of a mid-normal IOP at the end of surgery. Postoperative assessments included IOP measured using Goldmann applanation tonometry, the percentage of gas fill in the anterior chamber evaluated at the slit lamp, and the need for IOP-lowering interventions as determined by the on-call resident at 3, 24, and 48 h after surgery. IOP-lowering interventions consisted of venting in cases of elevated IOP, gas fill > 90%, and/or suspected angle closure or pupillary block, as well as intravenous or oral acetazolamide in cases of moderate IOP elevation with a lower gas fill and a patent iridectomy. If a single intervention was insufficient, a combined approach was used. Results: A total of 116 eyes from 98 patients (62 female, mean age 73.0 ± 9.8 years) were analyzed. DMEK was combined with cataract surgery in 41 eyes, and 4 eyes underwent phakic DMEK. Postoperatively, all iridectomies remained patent, and no cases of pupillary block occurred. Mean IOP and gas fill were within normal limits and declined steadily during the first 48 h. IOP-lowering procedures were performed in 11 eyes (9.5%), including venting (n = 3), acetazolamide administration (n = 7), and a combination of both (n = 1).
There was no difference between DMEK and triple-DMEK regarding postoperative gas fill, IOP, or the need for IOP-lowering interventions. Mean postoperative IOP was significantly higher, and IOP-lowering interventions were more frequent in glaucoma vs. non-glaucoma patients. Re-bubbling was performed in 12 eyes (10.3%). Two cases of primary graft failure (1.7%) were recorded. Conclusions: In our patient cohort, a standardized surgical approach incorporating a surgical iridectomy at the 6 o’clock position, 10% SF6 tamponade, and maintaining a mid-normal IOP at the end of surgery effectively prevented pupillary block. We recommend early postoperative assessment of IOP and percent gas fill to promptly identify and manage impending IOP elevation, which is particularly important in patients with glaucoma. Full article
(This article belongs to the Special Issue Clinical Diagnosis and Management of Corneal Diseases)

24 pages, 1914 KB  
Article
ServiceGraph-FM: A Graph-Based Model with Temporal Relational Diffusion for Root-Cause Analysis in Large-Scale Payment Service Systems
by Zhuoqi Zeng and Mengjie Zhou
Mathematics 2026, 14(2), 236; https://doi.org/10.3390/math14020236 - 8 Jan 2026
Viewed by 82
Abstract
Root-cause analysis (RCA) in large-scale microservice-based payment systems is challenging due to complex failure propagation along service dependencies, limited availability of labeled incident data, and heterogeneous service topologies across deployments. We propose ServiceGraph-FM, a pretrained graph-based model for RCA, where “foundation” denotes a self-supervised graph encoder pretrained on large-scale production cluster traces and then adapted to downstream diagnosis. ServiceGraph-FM introduces three components: (1) masked graph autoencoding pretraining to learn transferable service-dependency embeddings for cross-topology generalization; (2) a temporal relational diffusion module that models anomaly propagation as graph diffusion on dynamic service graphs (i.e., Laplacian-governed information flow with learnable edge propagation strengths); and (3) a causal attention mechanism that leverages multi-hop path signals to better separate likely causes from correlated downstream effects. Experiments on the Alibaba Cluster Trace and synthetic PayPal-style topologies show that ServiceGraph-FM outperforms state-of-the-art baselines, improving Top-1 accuracy by 23.7% and Top-3 accuracy by 18.4% on average, and reducing mean time to detection by 31.2%. In zero-shot deployment on unseen architectures, the pretrained model retains 78.3% of its fully fine-tuned performance, indicating strong transferability for practical incident management. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
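The "Laplacian-governed information flow" that the ServiceGraph-FM abstract describes can be illustrated with explicit-Euler steps of heat diffusion on a toy service graph. The topology and edge weights below are invented for illustration, and the paper's learnable propagation strengths are replaced by fixed constants:

```python
import numpy as np

# Toy 3-service dependency graph; symmetric edge weights stand in for the
# paper's learnable propagation strengths (illustrative values only).
W = np.array([
    [0.0, 0.8, 0.5],
    [0.8, 0.0, 0.3],
    [0.5, 0.3, 0.0],
])
D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # combinatorial graph Laplacian

def diffuse(x: np.ndarray, steps: int, alpha: float = 0.1) -> np.ndarray:
    """Explicit Euler integration of the heat equation dx/dt = -L x."""
    for _ in range(steps):
        x = x - alpha * (L @ x)
    return x

anomaly = np.array([1.0, 0.0, 0.0])   # anomaly signal observed at service 0
spread = diffuse(anomaly, steps=5)

# Because the Laplacian's rows sum to zero, total signal mass is conserved,
# and the source remains strongest while neighbors pick up signal in
# proportion to their edge weights.
assert np.isclose(spread.sum(), 1.0)
assert spread[0] > spread[1] > spread[2] > 0
```

This captures only the diffusion mechanics; the paper's contribution layers masked-graph pretraining and causal attention on top of this kind of propagation model.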
13 pages, 1105 KB  
Article
Impact of Diabetes Mellitus on Disease Severity and Mortality in Acute Pancreatitis: A Retrospective Single-Center Cohort Study
by Bayram İnan, Ahmet Akbay, Beril Turan Erdoğan, Çağdaş Erdoğan, İhsan Ateş and Osman Ersoy
J. Clin. Med. 2026, 15(2), 505; https://doi.org/10.3390/jcm15020505 - 8 Jan 2026
Viewed by 87
Abstract
Background: Diabetes mellitus (DM) is a condition that may increase the severity of acute pancreatitis (AP) through chronic inflammation and disturbances in immune responses. However, the independent effect of DM on clinical outcomes in AP has not yet been fully elucidated. Methods: In this retrospective cohort study, 492 patients diagnosed with acute pancreatitis at the Gastroenterology Clinic of Ankara Bilkent City Hospital between January 2022 and March 2025 were included. Patients were divided into two groups based on the presence of diabetes, and outcomes were compared using statistical methods. Results: Of the total 492 patients (mean age 58.6 ± 17.2 years; 50.2% female) included, 98 (19.9%) had DM. Moderate-to-severe AP occurred in 67.3% of diabetic versus 37.8% of non-diabetic patients (p < 0.0001), and severe disease developed more frequently in the diabetic group (6.1% vs. 1.0%, p = 0.0057). Systemic complications were significantly more common in patients with diabetes (45.9% vs. 26.9%, p = 0.0004). Hospital mortality was higher among patients with diabetes (9.2% vs. 4.6%, p = 0.0344), and Kaplan–Meier analysis demonstrated numerically lower overall survival in patients with diabetes (log-rank p = 0.095), with early divergence in survival curves. Cox proportional hazards analysis confirmed diabetes as an independent predictor of in-hospital mortality (adjusted HR 2.64, 95% CI 1.17–5.97; p = 0.019). After adjustment for confounders, diabetes remained independently associated with the development of moderate/severe pancreatitis (adjusted OR 2.00, 95% CI 1.24–3.22; p = 0.004). Diabetes also independently predicted in-hospital mortality (adjusted OR 3.36, 95% CI 1.35–8.34; p = 0.009), along with APACHE II score. ROC analysis demonstrated that adding diabetes mellitus to the APACHE II score significantly improved mortality prediction compared with APACHE II alone (AUC 0.785 vs. 0.724). 
The retrospective and single-center design of this study may limit its generalizability and create potential selection bias. There were insufficient data on the type of diabetes, its duration, and glycemic control (e.g., HbA1c), and therefore, we could not assess these factors, all of which may influence risk estimates. Although the survival curves showed early divergence, the borderline log-rank significance (p = 0.095) highlights the limited statistical power to detect long-term survival differences in this cohort. Conclusions: DM is associated with substantially increased severity and in-hospital mortality in AP, primarily through an elevated risk of systemic organ failure. Incorporation of diabetes status into early severity stratification may improve prognostic accuracy and guide closer monitoring and timely interventions in this high-risk population. Full article
(This article belongs to the Section Gastroenterology & Hepatopancreatobiliary Medicine)

29 pages, 7782 KB  
Article
A Hybrid Machine Learning Model for Dynamic Level Detection of Lead-Acid Battery Electrolyte Using a Flat-Plate Capacitive Sensor
by Shuai Huang, Weikang Zhang, Weiwei Zhang, Zhihui Ni, Lifeng Bian, Jiawen Liu, Peng Yue and Peng Xu
Sensors 2026, 26(2), 361; https://doi.org/10.3390/s26020361 - 6 Jan 2026
Viewed by 141
Abstract
Abnormal electrolyte levels can lead to failures in lead-acid batteries. The capacitive method, a non-invasive liquid level inspection technique, can be applied to the nondestructive detection of electrolyte level abnormalities in lead-acid batteries. However, because of the high viscosity of the sulfuric acid in lead-acid batteries, residual liquid films readily adhere to the tube walls during rapid liquid level drops, causing significant dynamic measurement errors in capacitive methods. To eliminate the dynamic measurement errors caused by residual liquid film adhesion, this study proposes a hybrid deep learning model, Poly-LSTM, which combines polynomial feature generation with a Long Short-Term Memory (LSTM) network. First, polynomial features are generated to explicitly capture the complex nonlinear and coupling effects in the sensor inputs. The LSTM network then processes these features to model their temporal dependencies. Finally, the temporal information encoded by the LSTM is used to generate accurate liquid level predictions. Experimental results show that this method outperforms the comparative models in liquid level estimation accuracy. At a rapid drop rate of 0.12 mm/s, the mean absolute error (MAE) is 0.5319 mm, the root mean square error (RMSE) is 0.7180 mm, and the mean absolute percentage error (MAPE) is 0.1320%. Full article
(This article belongs to the Section Physical Sensors)
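The polynomial feature generation step that precedes the LSTM can be sketched as a degree-bounded expansion that includes the cross terms modeling coupling between channels. A minimal sketch; the two-channel sensor reading and its interpretation are assumptions for illustration, not taken from the paper:

```python
from itertools import combinations_with_replacement

def poly_features(x: list[float], degree: int = 2) -> list[float]:
    """Expand raw sensor inputs into all monomials up to `degree`,
    including cross terms that capture coupling between channels."""
    feats = [1.0]  # bias term
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(len(x)), d):
            term = 1.0
            for i in combo:
                term *= x[i]
            feats.append(term)
    return feats

# Hypothetical two-channel reading: a capacitance value (pF) and a level
# drop rate (mm/s); channel names are illustrative only.
sample = [12.5, 0.12]
expanded = poly_features(sample)
# Layout for two inputs at degree 2: [1, x0, x1, x0^2, x0*x1, x1^2]
assert len(expanded) == 6
assert expanded[4] == 12.5 * 0.12   # the coupling (cross) term
```

In the paper's pipeline, vectors like `expanded` would then be fed per time step into the LSTM, which models their temporal dependencies.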

19 pages, 778 KB  
Article
GALR: Graph-Based Root Cause Localization and LLM-Assisted Recovery for Microservice Systems
by Wenya Zhang, Zhi Yang, Fang Peng, Le Zhang, Yiting Chen and Ruibo Chen
Electronics 2026, 15(1), 243; https://doi.org/10.3390/electronics15010243 - 5 Jan 2026
Viewed by 196
Abstract
With the rapid evolution of cloud-native platforms, microservice-based systems have become increasingly large-scale and complex, making fast and accurate root cause localization and recovery a critical challenge. Runtime signals in such systems are inherently multimodal—combining metrics, logs, and traces—and are intertwined through deep, dynamic service dependencies, which often leads to noisy alerts, ambiguous fault propagation paths, and brittle, manually curated recovery playbooks. To address these issues, we propose GALR, a graph- and LLM-based framework for root cause localization and recovery in microservice-based business middle platforms. GALR first constructs a multimodal service call graph by fusing time-series metrics, structured logs, and trace-derived topology, and employs a GAT-based root cause analysis module with temporal-aware edge attention to model failure propagation. On top of this, an LLM-based node enhancement mechanism infers anomaly, normal, and uncertainty scores from log contexts and injects them into node representations and attention bias terms, improving robustness under noisy or incomplete signals. Finally, GALR integrates a retrieval-augmented LLM agent that retrieves similar historical cases and generates executable recovery strategies, with consistency checking against expert-standard playbooks to ensure safety and reproducibility. Extensive experiments on three representative microservice datasets demonstrate that GALR consistently achieves superior Top-k accuracy and mean reciprocal rank for root cause localization, while the retrieval-augmented agent yields substantially more accurate and actionable recovery plans compared with graph-only and LLM-only baselines, providing a practical closed-loop solution from anomaly perception to recovery execution. Full article
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)

72 pages, 3613 KB  
Article
Natural-Language Mediation Versus Numerical Aggregation in Multi-Stakeholder AI Governance: Capability Boundaries and Architectural Requirements
by Alexandre P. Uchoa, Carlo E. T. Oliveira, Claudia L. R. Motta and Daniel Schneider
Computers 2026, 15(1), 24; https://doi.org/10.3390/computers15010024 - 5 Jan 2026
Viewed by 243
Abstract
This study investigates whether a large language model (LLM) can perform governance-style mediation among multiple stakeholders when preferences are expressed only in categorical natural language. Building on prior conceptual work proposing an advisory governance layer for AI systems, we designed a controlled experiment comparing a language-based mediator with a numerical baseline (Borda count) across 1024 synthetic stakeholder scenarios, each executed ten times (10,240 paired decisions). Results show only 31% agreement with Borda, revealing distinct decision logic that produces equity-biased outcomes (68% improved fairness, ~25% Gini reduction, 38% higher minimum utility) at the cost of efficiency (14–20% lower mean utility). Stability analysis identified three reliability zones—stable (39%), middle (28%), and knife-edge (33%)—enabling risk-proportionate oversight. Qualitative analysis revealed that equity bias emerges from opaque pattern-matching followed by post hoc rationalization rather than systematic application of governance principles, with frequent semantic-grounding failures even in stable cases. These findings demonstrate that language-based mediation diverges fundamentally from numerical aggregation, suitable for advisory deliberation but requiring human oversight for value verification and factual accuracy. Full article
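The numerical baseline used in the comparison, the Borda count, awards each option n-1 points for a first-place ranking, n-2 for second, and so on, then sums across stakeholders. A minimal sketch; the three-stakeholder scenario and option names are hypothetical, not drawn from the paper's 1024 synthetic scenarios:

```python
def borda(rankings: list[list[str]]) -> dict[str, int]:
    """Aggregate preference rankings with the Borda count: position i in a
    ranking of n options earns n - 1 - i points."""
    options = rankings[0]
    n = len(options)
    scores = {opt: 0 for opt in options}
    for ranking in rankings:
        for position, opt in enumerate(ranking):
            scores[opt] += n - 1 - position
    return scores

# Hypothetical three-stakeholder scenario over three policy options.
ballots = [
    ["strict", "moderate", "lenient"],   # regulator
    ["moderate", "lenient", "strict"],   # operator
    ["moderate", "strict", "lenient"],   # user advocate
]
scores = borda(ballots)
assert scores == {"strict": 3, "moderate": 5, "lenient": 1}
assert max(scores, key=scores.get) == "moderate"
```

The study's 31% agreement figure means the LLM mediator picked the same winner as this kind of aggregation in fewer than a third of paired decisions.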

13 pages, 518 KB  
Article
Asymptotic Analysis of a Thresholding Method for Sparse Models with Application to Network Delay Detection
by Evgeniy Melezhnikov, Oleg Shestakov and Evgeniy Stepanov
Mathematics 2026, 14(1), 148; https://doi.org/10.3390/math14010148 - 30 Dec 2025
Viewed by 163
Abstract
This paper explores a stochastic model of noisy observations with a sparse true signal structure. Such models arise in a wide range of applications, including signal processing, anomaly detection, and performance monitoring in telecommunication networks. As a motivating example, we consider round-trip time (RTT) data, which characterize the transit time of network packets, where rare, anomalously large values correspond to localized network congestion or failures. The focus is on the asymptotic properties of the mean-square risk associated with thresholding procedures. Upper bounds are obtained for the mean-square risk when using the theoretically optimal threshold. In addition, a central limit theorem and a strong law of large numbers are established for the empirical risk estimate. The results provide a theoretical basis for assessing the effectiveness of thresholding methods in localizing rare anomalous components in noisy data. Full article
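A standard instance of the thresholding procedures this paper analyzes is soft thresholding with the universal threshold sqrt(2 log n) * sigma for sparse signals in Gaussian noise. The abstract does not state which estimator is used, so the sketch below is generic, and the RTT-like data and known noise level sigma = 1.0 are assumptions:

```python
import math

def soft_threshold(y: list[float], t: float) -> list[float]:
    """Soft thresholding: shrink each observation toward zero by t,
    zeroing anything with magnitude below t."""
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in y]

# Centered RTT-like observations (ms): baseline noise around zero plus one
# rare, anomalously large value standing in for a congested path.
y = [0.3, -0.8, 0.5, 9.0, -0.2, 0.7, -0.4, 0.1]
sigma = 1.0
t = sigma * math.sqrt(2 * math.log(len(y)))   # universal threshold, ~2.04 for n = 8

estimate = soft_threshold(y, t)
assert sum(1 for v in estimate if v != 0.0) == 1   # only the anomaly survives
assert estimate[3] == 9.0 - t
```

The paper's contribution concerns the asymptotics of the mean-square risk of such procedures (upper bounds under the optimal threshold, plus limit theorems for the empirical risk), not the thresholding rule itself.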

17 pages, 3550 KB  
Article
Edge Intelligence-Based Rail Transit Equipment Inspection System
by Lijia Tian, Hongli Zhao, Li Zhu, Hailin Jiang and Xinjun Gao
Sensors 2026, 26(1), 236; https://doi.org/10.3390/s26010236 - 30 Dec 2025
Viewed by 303
Abstract
The safe operation of rail transit systems relies heavily on the efficient and reliable maintenance of their equipment, as any malfunction or abnormal operation may pose serious risks to transportation safety. Traditional manual inspection methods are often characterized by high costs, low efficiency, and susceptibility to human error. To address these limitations, this paper presents a rail transit equipment inspection system based on Edge Intelligence (EI) and 5G technology. The proposed system adopts a cloud–edge–end collaborative architecture that integrates Computer Vision (CV) techniques to automate inspection tasks; specifically, a fine-tuned YOLOv8 model is employed for object detection of personnel and equipment, while a ResNet-18 network is utilized for equipment status classification. By implementing an ETSI MEC-compliant framework on edge servers (NVIDIA Jetson AGX Orin), the system enhances data processing efficiency and network performance, while further strengthening security through the use of a 5G private network that isolates critical infrastructure data from the public internet, and improving robustness via distributed edge nodes that eliminate single points of failure. The proposed solution has been deployed and evaluated in real-world scenarios on Beijing Metro Line 6. Experimental results demonstrate that the YOLOv8 model achieves a mean Average Precision (mAP@0.5) of 92.7% ± 0.4% for equipment detection, and the ResNet-18 classifier attains 95.8% ± 0.3% accuracy in distinguishing normal and abnormal statuses. Compared with a cloud-centric architecture, the EI-based system reduces the average end-to-end latency for anomaly detection tasks by 45% (28.5 ms vs. 
52.1 ms) and significantly lowers daily bandwidth consumption by approximately 98.1% (from 40.0 GB to 0.76 GB) through an event-triggered evidence upload strategy involving images and short video clips, highlighting its superior real-time performance, security, robustness, and bandwidth efficiency. Full article
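The latency and bandwidth improvements reported above can be re-derived from the raw numbers quoted in the abstract; a quick consistency check:

```python
# Raw measurements quoted in the abstract.
latency_cloud, latency_edge = 52.1, 28.5        # ms, end-to-end anomaly detection
bandwidth_before, bandwidth_after = 40.0, 0.76  # GB/day

latency_reduction = (latency_cloud - latency_edge) / latency_cloud
bandwidth_reduction = (bandwidth_before - bandwidth_after) / bandwidth_before

assert round(latency_reduction * 100) == 45         # matches the reported 45%
assert round(bandwidth_reduction * 100, 1) == 98.1  # matches the reported 98.1%
```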

15 pages, 1304 KB  
Article
Vericiguat Therapy Is Associated with Reverse Myocardial Remodeling in Chronic Heart Failure with Reduced Ejection Fraction
by Tine Bajec, Neža Žorž, Sabina Ugovšek, Gregor Zemljič, Andraž Cerar, Sabina Frljak, Renata Okrajšek, Petra Girandon Sušanj, Miran Šebeštjen, Bojan Vrtovec and Gregor Poglajen
J. Cardiovasc. Dev. Dis. 2026, 13(1), 17; https://doi.org/10.3390/jcdd13010017 - 29 Dec 2025
Viewed by 266
Abstract
Background and aims: Vericiguat lowers cardiovascular death or heart-failure hospitalization in recently worsened heart failure with reduced ejection fraction (HFrEF), but its effects on cardiac remodeling are less well characterized. Our aim was to evaluate whether the addition of vericiguat to guideline-directed medical therapy (GDMT) promotes reverse myocardial remodeling in patients with HFrEF and recent worsening. Methods: We conducted a prospective, non-randomized, single-center study enrolling 34 consecutive patients with HFrEF who had experienced recent worsening and were on stable GDMT for at least 3 months prior to decompensation. Clinical, biochemical, and echocardiographic assessments were performed at baseline and at 6 months. Results: A total of 24 patients completed the 6-month follow-up (mean age 63 ± 9 years; 92% male), 96% of whom were in New York Heart Association (NYHA) class III or IV. After 6 months of vericiguat therapy, right ventricular systolic function improved significantly, with an increase in tricuspid annular plane systolic excursion (TAPSE) from 18.5 ± 4.3 mm to 21.4 ± 4.8 mm (p = 0.003). Left ventricular systolic function improved, with a numerical increase in left ventricular ejection fraction (LVEF) (30.1 ± 5.9% to 32.2 ± 10.5%; p = 0.122) and a significant increase in left ventricular outflow tract velocity-time integral (LVOT VTI) (14.8 ± 3.7 cm to 16.1 ± 3.8 cm; p = 0.011). Functional improvements were accompanied by structural remodeling, including reductions in right ventricular internal diameter in diastole (RVIDd) (40.5 ± 5.8 mm to 37.9 ± 6.9 mm; p = 0.002) and left ventricular end-systolic volume (LVESV) (144.0 ± 40.3 mL to 132.4 ± 61.0 mL; p = 0.031). N-terminal pro-B-type natriuretic peptide (NT-proBNP) levels also decreased significantly (median 1829.0 ng/mL to 1241.0 ng/mL; p = 0.03). 
Conclusions: In patients with HFrEF and recent worsening, the addition of vericiguat to GDMT may be associated with reverse myocardial remodeling. Full article
(This article belongs to the Special Issue Heart Failure: Clinical Diagnostics and Treatment, 2nd Edition)

15 pages, 3365 KB  
Article
Lightweight YOLO-Based Online Inspection Architecture for Cup Rupture Detection in the Strip Steel Welding Process
by Yong Qin and Shuai Zhao
Machines 2026, 14(1), 40; https://doi.org/10.3390/machines14010040 - 29 Dec 2025
Viewed by 214
Abstract
Cup rupture failures in strip steel welds can lead to strip breakage, resulting in unplanned downtime of high-speed continuous rolling mills and scrap steel losses. Manual visual inspection suffers from a high false positive rate and cannot meet the production cycle time requirements. This paper proposes a lightweight online cup rupture visual inspection method based on an improved YOLOv10 algorithm. The backbone feature extraction network is replaced with ShuffleNetV2 to reduce the model’s parameter count and computational complexity. An ECA attention mechanism is incorporated into the backbone network to enhance the model’s focus on cup rupture micro-cracks. A Slim-Neck design is adopted, utilizing a dual optimization with GSConv and VoV-GSCSP, significantly improving the balance between real-time performance and accuracy. Based on the results, the optimized model achieves a precision of 98.8% and a recall of 99.2%, with a mean average precision (mAP) of 99.5%—an improvement of 0.2 percentage points over the baseline. The model has a computational load of 4.4 GFLOPs and a compact size of only 3.24 MB, approximately half that of the original model. On embedded devices, it achieves a real-time inference speed of 122 FPS, which is about 2.5, 11, and 1.8 times faster than SSD, Faster R-CNN, and YOLOv10n, respectively. Therefore, the lightweight model based on the improved YOLOv10 not only enhances detection accuracy but also significantly reduces computational cost and model size, enabling efficient real-time cup rupture detection in industrial production environments on embedded platforms. Full article
(This article belongs to the Section Advanced Manufacturing)

10 pages, 1106 KB  
Article
Usefulness of Lateral Arm Free Flap in Heel Reconstructions After Malignant Skin Tumor Excision: An Observational Study
by Soyeon Jung, Sodam Yi and Seokchan Eun
J. Clin. Med. 2026, 15(1), 192; https://doi.org/10.3390/jcm15010192 - 26 Dec 2025
Viewed by 151
Abstract
Background/Objectives: Heel reconstruction is a complex procedure that requires soft tissue reconstruction resistant to weight, pressure, and shear stress. Various flap reconstruction methods have been reported; among them, free fasciocutaneous flaps have advantages in terms of function and aesthetics, but also have challenges due to the longer operation time required and the possibility of failure. The primary aim of this study was to examine the functional outcomes of heel reconstruction using free lateral arm fasciocutaneous flaps after wide excision of heel skin cancer. Methods: Between January 2014 and December 2020, eight patients underwent wide excision of skin cancer and reconstruction of the heel with a lateral arm free flap. Perioperative clinical data and postoperative outcomes, including flap survival, complications, Lower Extremity Functional Scale (LEFS) score, and American Orthopaedic Foot and Ankle Society scale (AOFAS) score, were analyzed from clinical records. Functional assessments were performed at a minimum of 12 months postoperatively (mean 18.3 months, range 12–24 months) by a single blinded examiner who was not involved in the surgical procedures. Both preoperative and postoperative LEFS and AOFAS scores were recorded for comparison. Results: The mean size of the skin and soft tissue defect was 32 cm2, the mean duration of surgery was 179 (range: 160–215) minutes, and the mean duration of hospital stay after surgery was 17 (range: 14–19) days, with a mean follow-up period of 48 (range: 33–59) months. Among the eight patients, two had diabetes mellitus (25%), one had peripheral neuropathy (12.5%), and none had clinically significant peripheral vasculopathy. All flaps survived, with one congestive episode. Satisfactory aesthetic and functional results were observed in all patients. The mean preoperative LEFS score was 28 (SD ± 6.1), which improved significantly to a postoperative mean of 57 (SD ± 8.3). 
Similarly, the mean preoperative AOFAS score was 45 (SD ± 5.8), improving to a postoperative mean of 61 (SD ± 6.2). Minor donor site complications included hypertrophic scarring in two patients (25%) and transient sensory changes in the lateral arm region in three patients (38%), all of which resolved with conservative management. Conclusions: This research suggests that the lateral arm free flap can be considered a reliable option in heel reconstruction, resulting in acceptable functional and aesthetic outcomes. It provides excellent durability, with solid bony union and good contour in small to moderate-sized heel defect cases. Full article

18 pages, 1215 KB  
Perspective
Managing the Uncertainty of “Precision” While Navigating Goals of Care: A Framework for Collaborative Interpretation of Complex Genomic Testing Results in Critically-Ill Neonates
by DonnaMaria E. Cortezzo, Katharine Press Callahan, Bimal P. Chaudhari, Elliott M. Weiss, Monica Hsiung Wojcik, Krishna Acharya, Amy B. Schlegel, Kevin M. Sullivan and Jessica T. Fry
Children 2026, 13(1), 34; https://doi.org/10.3390/children13010034 - 26 Dec 2025
Abstract
Each year, many neonates are born with genetic diagnoses that carry a range of prognoses. As the types and availability of genetic testing have expanded, neonatal intensive care units (NICUs) have served as “launching points” for their clinical application. Broad genetic testing has both improved diagnostic precision and expanded uncertainty. Genetic information may be explicitly uncertain, as in the case of a variant of unknown significance (VUS). But it is also frequently uncertain whether/how the information relates to a patient’s phenotype or what it may mean for a child’s future. Even without ambiguity in the diagnosis or prognosis, the significance within a clinical and familial context may be less certain. Applying the information to clinical care is complex and may engender confusion among clinicians and families as readily as it offers guidance. Since genetic testing results can impact management and, at times, end-of-life decisions, misunderstanding and misapplication of genetic results pose a significant risk. We describe a hypothetical case of an infant with congenital hypotonia and respiratory failure. The family, after discussions with the care team about medically appropriate care paths, is navigating goals of care and considering tracheostomy placement for chronic mechanical ventilation. They consent to rapid genome sequencing in hopes of better understanding the etiology and severity of the neuromuscular condition. We explore three possible scenarios following different genomic results. With each, we discuss how the results may impact decision-making about the best plan of care. We propose a framework for navigating discussions about genetic testing results with families of critically ill children. We illustrate the importance of a multidisciplinary approach with collaboration between neonatology, genetics, and palliative care. 
By employing the strengths of each subspecialty, providers can manage the inherent uncertainty in genetic testing results, help determine the meaning of the results to the family in the context of their child’s medical care, and enhance the care and support of critically ill neonates and their families. Full article
(This article belongs to the Special Issue Pediatric Palliative Care and Pain Management)

26 pages, 2436 KB  
Article
ETA-Hysteresis-Based Reinforcement Learning for Continuous Multi-Target Hunting of Swarm USVs
by Nur Hamid and Haitham Saleh
Appl. Syst. Innov. 2026, 9(1), 7; https://doi.org/10.3390/asi9010007 - 25 Dec 2025
Abstract
Swarm unmanned surface vehicles (USVs) have been increasingly explored for maritime defense and security operations, particularly in scenarios requiring the rapid detection and interception of multiple attackers. Reliable target detection and stable defender–target assignment are crucial for ensuring quick responses and preventing mission failure. A key challenge in such missions lies in the assignment of targets among multiple defenders, where frequent reassignment can cause instability and inefficiency. This paper proposes a novel ETA-hysteresis-guided reinforcement learning (RL) framework for continuous multi-target hunting with swarm USVs. The approach integrates estimated time of arrival (ETA)-based task allocation with a dual-threshold hysteresis mechanism to balance responsiveness and stability in multi-target assignments. The ETA module provides an efficient criterion for selecting the most suitable defender–target pair, while hysteresis prevents oscillatory reassignments triggered by marginal changes in ETA values. The framework is trained and evaluated in a 3D-simulated environment with multiple continuous targets under both static and dynamic water conditions. Experimental results demonstrate that the proposed method achieves substantial, measurable improvements over basic MAPPO and MAPPO-LSTM, including faster convergence (+20–30%), higher interception rates (+9.5% to +20.9%), and reduced mean time-to-capture (by 9.4–19.0%), while maintaining competitive path smoothness and energy efficiency. The findings highlight the potential of integrating time-aware assignment strategies with reinforcement learning to enable robust, scalable, and stable swarm USV operations for maritime security applications. Full article
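The abstract describes the dual-threshold hysteresis mechanism only in prose. One plausible reading can be sketched as a per-step assignment rule; the function name, the 20% switch gain, and the 120 s release ETA below are all illustrative assumptions, not the paper's implementation:

```python
def assign_target(current, etas, switch_gain=0.20, release_eta=120.0):
    """Hypothetical dual-threshold assignment rule for one defender.

    etas maps target id -> estimated time of arrival (seconds; lower is better).
    Threshold 1 (release): if the held target's ETA exceeds `release_eta`,
    the defender abandons it and takes the globally best target.
    Threshold 2 (switch): otherwise it reassigns only when a rival's ETA is
    at least `switch_gain` (relative) better than the held target's, so
    marginal ETA fluctuations cannot trigger oscillatory reassignment.
    """
    best = min(etas, key=etas.get)
    if current is None or current not in etas or etas[current] > release_eta:
        return best                                # no viable held target: take best ETA
    if etas[best] <= (1.0 - switch_gain) * etas[current]:
        return best                                # clearly better rival: reassign
    return current                                 # dead band: hold assignment
```

The dead band between "equal ETA" and "20% better ETA" is what trades a little responsiveness for assignment stability, matching the abstract's stated balance.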

40 pages, 10484 KB  
Article
Comparative Assessment of Eight Satellite Precipitation Products over the Complex Terrain of the Lower Yarlung Zangpo Basin: Performance Evaluation and Topographic Influence Analysis
by Anqi Tan, Ming Li, Heng Liu, Liangang Chen, Tao Wang, Wei Wang and Yong Shi
Remote Sens. 2026, 18(1), 63; https://doi.org/10.3390/rs18010063 - 24 Dec 2025
Abstract
Real-time precipitation monitoring through satellite remote sensing represents a critical technological frontier for operational hydrology in data-scarce mountainous regions. Following a comprehensive evaluation of reanalysis precipitation products in the downstream Yarlung Zangpo watershed, this investigation advances understanding by systematically assessing eight satellite-based precipitation retrieval algorithms against ground truth observations from 18 meteorological stations (2014–2022). Multi-temporal performance analysis employed statistical metrics including correlation analysis, root mean square error, mean absolute error, and bias assessment to characterize algorithm reliability across annual, monthly, and seasonal scales. Representative monthly spatial analysis (January, April, July) and comprehensive 12 month × 18 station heatmap visualization revealed pronounced seasonal performance variations and elevation-dependent error patterns. Satellite retrieval algorithms demonstrated systematic underestimation tendencies, with observational precipitation averaging 2358 mm/yr, substantially exceeding remote sensing estimates across six of eight products. IMERG_EarlyRun and IMERG_LateRun achieved optimal performance with annual correlation coefficients of 0.41/0.37 and minimal bias (relative bias: −3.0%/1.4%), substantially outperforming other products. Unexpectedly, IMERG_FinalRun exhibited severe deterioration (correlation: 0.37, relative bias: −73.8%) compared to Early/Late Run products despite comprehensive gauge adjustment, indicating critical limitations of statistical correction procedures in data-sparse mountainous environments. Temporal analysis revealed substantial year-to-year performance variability across all products, with algorithm accuracy strongly modulated by annual precipitation characteristics and underlying meteorological conditions. 
Station-level assessment demonstrated that 100% of stations showed underestimation for IMERG_FinalRun versus balanced patterns for IMERG_EarlyRun/LateRun (53% underestimation, 47% overestimation), confirming systematic gauge-adjustment failures. Supplementary terrain–precipitation analysis indicated GSMaP_MVK_G shows superior spatial pattern representation, while IMERG_LateRun excels in capturing temporal variations, suggesting multi-product integration strategies for comprehensive monitoring. Comparative assessment with previous reanalysis evaluation establishes that satellite products offer superior real-time availability but exhibit greater temporal variability compared to model-based approaches’ consistent performance. IMERG_EarlyRun and IMERG_LateRun are recommended for operational real-time applications, GSMaP_MVK_G for terrain-sensitive spatial analysis, and reanalysis products for seasonal assessment, while IMERG_FinalRun and FY2 require substantial improvement before deployment in high-altitude watershed management systems. Full article
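The verification metrics named in the abstract (Pearson correlation, root mean square error, mean absolute error, relative bias) have conventional definitions; the following self-contained sketch uses those standard formulas and is not code from the study:

```python
import math

def precipitation_metrics(obs, est):
    """Conventional verification metrics for paired precipitation series.

    obs, est: equal-length lists of gauge-observed and satellite-estimated
    precipitation. Returns (Pearson r, RMSE, MAE, relative bias in %);
    a negative relative bias indicates satellite underestimation, as
    reported for most products in the study.
    """
    n = len(obs)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_e = sum((e - me) ** 2 for e in est)
    r = cov / math.sqrt(var_o * var_e)                       # Pearson correlation
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    mae = sum(abs(e - o) for o, e in zip(obs, est)) / n
    rel_bias = 100.0 * (sum(est) - sum(obs)) / sum(obs)      # % over/underestimation
    return r, rmse, mae, rel_bias
```

For example, an estimate series at half the observed totals yields a relative bias of −50% even when the correlation is perfect, which is why the study reports bias and correlation separately.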
