Search Results (185)

Search Parameters:
Keywords = manual injection

22 pages, 1274 KB  
Article
A Predictive Approach for the Early Reliability Assessment in Embedded Systems Using Code and Trace Embeddings via Machine Learning
by Felipe Restrepo-Calle, Enrique Abma Romero and Sergio Cuenca-Asensi
Electronics 2026, 15(3), 543; https://doi.org/10.3390/electronics15030543 - 27 Jan 2026
Abstract
Radiation-induced transient faults pose a growing challenge for safety-critical embedded systems, yet traditional radiation testing and large-scale statistical fault injection (SFI) remain costly and impractical during early design stages. This paper presents a predictive approach for early reliability assessment that replaces handcrafted feature engineering with automatically learned vector representations of source code and execution traces. We derive multiple embeddings for traces and source code, and use them as inputs to a family of regression models, including ensemble methods and linear baselines, to build predictive models for reliability. Experimental evaluation shows that embedding-based models outperform prior approaches, reducing the mean absolute percentage error (MAPE) from 6.24% to 2.14% for correct executions (unACE), from 20.95% to 10.40% for Hangs, and from 49.09% to 37.69% for silent data corruptions (SDC) after excluding benchmarks with SDC below 1%. These results show that source code and trace embeddings can serve as effective estimators for expensive fault injection campaigns, enabling early-stage reliability assessment in radiation-exposed embedded systems without requiring any manual feature engineering. This capability provides a practical foundation for supporting design-space exploration during early development phases. Full article
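For readers unfamiliar with the reported metric, the mean absolute percentage error (MAPE) cited above has the standard definition; the sketch below computes it in Python over hypothetical per-benchmark SDC rates (the data values are illustrative, not the paper's).

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Hypothetical per-benchmark SDC rates: fault-injection ground truth vs. model estimate.
measured  = [0.031, 0.120, 0.074]   # fraction of injected faults causing SDC
predicted = [0.028, 0.105, 0.081]
print(f"MAPE = {mape(measured, predicted):.2f}%")
```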

31 pages, 706 KB  
Article
Applying Action Research to Developing a GPT-Based Assistant for Construction Cost Code Verification in State-Funded Projects in Vietnam
by Quan T. Nguyen, Thuy-Binh Pham, Hai Phong Bui and Po-Han Chen
Buildings 2026, 16(3), 499; https://doi.org/10.3390/buildings16030499 - 26 Jan 2026
Viewed by 64
Abstract
Cost code verification in state-funded construction projects remains a labor-intensive and error-prone task, particularly given the structural heterogeneity of project estimates and the prevalence of malformed codes, inconsistent units of measurement (UoMs), and locally modified price components. This study evaluates a deterministic GPT-based assistant designed to automate Vietnam's regulatory cost verification. The assistant was developed and iteratively refined across four Action Research cycles. The system also enforces strict rule sequencing and dataset grounding via Python-governed computations. Rather than relying on probabilistic or semantic reasoning, the system performs strictly deterministic checks on code validity, UoM alignment, and unit price conformity for material (MTR), labor (LBR), and machinery (MCR) components against the provincial unit price books (UPBs). Deterministic equality is evaluated either on raw numerical values or on values transformed through explicitly declared, rule-governed operations, preserving auditability without introducing tolerance-based or inferential reasoning. A dedicated exact-match mechanism, activated only when a code is invalid, enables the recovery of typographical errors only when a project item's full price vector exactly matches a normative entry. Using twenty real construction estimates (16,100 rows) and twelve controlled error-injection cases, the study demonstrates that the assistant executes verification steps with high reliability across diverse spreadsheet structures, avoiding ambiguity and maintaining full auditability. Deterministic extraction and normalization routines facilitate robust handling of displaced headers, merged cells, and non-standard labeling, while structured reporting provides line-by-line traceability aligned with professional verification workflows. Practitioner feedback confirms that the system reduces manual tracing effort, improves evaluation consistency, and supports documentation compliance while preserving human judgment. This research contributes a framework for large language model (LLM)-orchestrated verification, demonstrating how Action Research can align AI tools with domain expectations. Furthermore, it establishes a methodology for deploying LLMs in safety-critical and regulation-driven environments. Limitations—including narrow diagnostic scope, unlisted quotation exclusion, single-province UPB compliance, and sensitivity to extreme spreadsheet irregularities—define directions for future deterministic extensions. Overall, the findings illustrate how tightly constrained LLM configurations can augment, rather than replace, professional cost verification practices in public-sector construction. Full article
(This article belongs to the Special Issue Knowledge Management in the Building and Construction Industry)
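As an illustration of the deterministic, tolerance-free checking described in the abstract, here is a minimal Python sketch; the UPB structure, field names, and price values are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): deterministic verification of one
# estimate row against a provincial unit price book (UPB). Codes, UoMs, and
# prices below are invented examples.
UPB = {
    "AB.11213": {"uom": "m3", "mtr": 152_000, "lbr": 48_500, "mcr": 12_300},
}

def verify_row(code: str, uom: str, mtr: float, lbr: float, mcr: float) -> list[str]:
    findings = []
    entry = UPB.get(code)
    if entry is None:
        return [f"invalid code: {code}"]   # exact-match recovery would run here
    if uom != entry["uom"]:
        findings.append(f"UoM mismatch: {uom} != {entry['uom']}")
    for part, declared in (("mtr", mtr), ("lbr", lbr), ("mcr", mcr)):
        if declared != entry[part]:        # strict equality, no tolerance bands
            findings.append(f"{part.upper()} price mismatch: {declared} != {entry[part]}")
    return findings or ["OK"]

print(verify_row("AB.11213", "m3", 152_000, 48_500, 12_300))  # ['OK']
```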

20 pages, 592 KB  
Review
Detection of Feigned Impairment of the Shoulder Due to External Incentives: A Comprehensive Review
by Nahum Rosenberg
Diagnostics 2026, 16(2), 364; https://doi.org/10.3390/diagnostics16020364 - 22 Jan 2026
Viewed by 251
Abstract
Background: Feigned restriction of shoulder joint movement for secondary gain is clinically relevant and may misdirect care, distort disability determinations, and inflate system costs. Distinguishing feigning from structural pathology and from functional or psychosocial presentations is difficult because pain is subjective, performance varies, and no single sign or test is definitive. This comprehensive review hypothesizes that the systematic integration of clinical examination, objective biomechanical and neurophysiological testing, and emerging technologies can substantially improve detection accuracy and provide defensible medicolegal documentation. Methods: PubMed and reference lists were searched within a prespecified time frame (primarily 2015–2025, with foundational earlier works included when conceptually essential) using terms related to shoulder movement restriction, malingering/feigning, symptom validity, effort testing, functional assessment, and secondary gain. Evidence was synthesized narratively, emphasizing objective or semi-objective quantification of motion and effort (goniometry, dynamometry, electrodiagnostics, kinematic sensing, and imaging). Results: Detection is best approached as a stepwise, multidimensional evaluation. First-line clinical assessment focuses on reproducible incongruence: non-anatomic patterns, internal inconsistencies, distraction-related improvement, and mismatch between claimed disability and observed function. Repeated examinations and documentation strengthen inference. Instrumented strength testing improves quantification beyond manual testing but remains effort-dependent; repeat-trial variability and atypical agonist–antagonist co-activation can indicate submaximal performance without proving intent. Imaging primarily tests plausibility by confirming lesions or highlighting discordance between claimed limitation and minimal pathology, while recognizing that normal imaging does not exclude pain. Diagnostic anesthetic injections and electrodiagnostics can clarify pain-mediated restriction or exclude neuropathic weakness but require cautious interpretation. Motion capture and inertial sensors can document compensatory strategies and context-dependent normalization, yet validated standalone thresholds are limited. Conclusions: Feigned shoulder impairment cannot be confirmed by any single test. The desirable strategy combines structured assessment of inconsistencies with objective biomechanical and neurophysiologic measurements, interpreted within the whole clinical context and rigorously documented; however, prospective validation is still needed before routine implementation. Full article

17 pages, 9299 KB  
Article
Research and Realization of an OCT-Guided Robotic System for Subretinal Injections
by Yunyao Li, Sujian Wu and Guohua Shi
Actuators 2026, 15(1), 53; https://doi.org/10.3390/act15010053 - 13 Jan 2026
Viewed by 274
Abstract
For retinal degenerative diseases, advanced therapies such as gene therapy and retinal stem cell therapy have emerged as promising treatments, which are often delivered through subretinal injection. However, clinical subretinal injection remains challenging due to the extremely high precision requirements, lack of depth information, and the physiological limitations of manual operation, often leading to complications such as hypotony and globe atrophy. To address these challenges, this study proposes a novel ophthalmic surgical robotic system designed for high-precision subretinal injections. The robotic system incorporates a remote center of motion mechanism in its mechanical structure and employs a master–slave control system to achieve motion scaling. A microscope-integrated optical coherence tomography device provides real-time microscopic imaging and depth information. The design and performance of the proposed system are validated through simulations and experiments. Precision tests demonstrate that the system achieves an overall positioning accuracy of less than 30 μm, with injection positioning accuracy under 20 μm. Subretinal injection experiments conducted on artificial eye models further validate the clinical feasibility of the robotic system. Full article
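The master–slave motion scaling mentioned above can be illustrated with a minimal sketch; the 50:1 scaling factor and the per-update step clamp are assumed values, not the paper's parameters.

```python
import numpy as np

# Toy illustration of master-slave motion scaling: millimeter-scale hand motion
# is downscaled to micrometer-scale needle motion at the slave side.
SCALE = 1.0 / 50.0        # hypothetical 50:1 motion-scaling factor
STEP_LIMIT_UM = 20.0      # hypothetical per-update slave step clamp (um)

def slave_increment(master_delta_mm: np.ndarray) -> np.ndarray:
    """Map a master-side displacement (mm) to a slave-side step (um)."""
    step_um = master_delta_mm * 1000.0 * SCALE   # mm -> um, then scale down
    norm = np.linalg.norm(step_um)
    if norm > STEP_LIMIT_UM:                     # clamp to suppress sudden jumps
        step_um *= STEP_LIMIT_UM / norm
    return step_um

print(slave_increment(np.array([1.0, 0.5, 0.0])))  # ~[17.9, 8.9, 0.0] um
```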

19 pages, 778 KB  
Article
GALR: Graph-Based Root Cause Localization and LLM-Assisted Recovery for Microservice Systems
by Wenya Zhang, Zhi Yang, Fang Peng, Le Zhang, Yiting Chen and Ruibo Chen
Electronics 2026, 15(1), 243; https://doi.org/10.3390/electronics15010243 - 5 Jan 2026
Viewed by 368
Abstract
With the rapid evolution of cloud-native platforms, microservice-based systems have become increasingly large-scale and complex, making fast and accurate root cause localization and recovery a critical challenge. Runtime signals in such systems are inherently multimodal—combining metrics, logs, and traces—and are intertwined through deep, dynamic service dependencies, which often leads to noisy alerts, ambiguous fault propagation paths, and brittle, manually curated recovery playbooks. To address these issues, we propose GALR, a graph- and LLM-based framework for root cause localization and recovery in microservice-based business middle platforms. GALR first constructs a multimodal service call graph by fusing time-series metrics, structured logs, and trace-derived topology, and employs a GAT-based root cause analysis module with temporal-aware edge attention to model failure propagation. On top of this, an LLM-based node enhancement mechanism infers anomaly, normal, and uncertainty scores from log contexts and injects them into node representations and attention bias terms, improving robustness under noisy or incomplete signals. Finally, GALR integrates a retrieval-augmented LLM agent that retrieves similar historical cases and generates executable recovery strategies, with consistency checking against expert-standard playbooks to ensure safety and reproducibility. Extensive experiments on three representative microservice datasets demonstrate that GALR consistently achieves superior Top-k accuracy and mean reciprocal rank for root cause localization, while the retrieval-augmented agent yields substantially more accurate and actionable recovery plans compared with graph-only and LLM-only baselines, providing a practical closed-loop solution from anomaly perception to recovery execution. Full article
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)
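The mean reciprocal rank (MRR) reported for root cause localization is the standard ranking metric; below is a minimal sketch with hypothetical service names and rankings.

```python
def mean_reciprocal_rank(ranked_lists, true_roots):
    """MRR over incidents: reciprocal rank of the true root-cause service,
    contributing 0 if it is absent from the ranked candidate list."""
    total = 0.0
    for ranking, root in zip(ranked_lists, true_roots):
        total += 1.0 / (ranking.index(root) + 1) if root in ranking else 0.0
    return total / len(true_roots)

# Hypothetical localization output for three incidents.
rankings = [["svc-pay", "svc-db"], ["svc-db", "svc-auth"], ["svc-cache", "svc-pay"]]
truth    = ["svc-pay", "svc-auth", "svc-db"]
print(mean_reciprocal_rank(rankings, truth))  # (1 + 1/2 + 0) / 3 = 0.5
```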

19 pages, 5120 KB  
Article
Research on the Multi-Layer Optimal Injection Model of CO2-Containing Natural Gas with Minimum Wellhead Gas Injection Pressure and Layered Gas Distribution Volume Requirements as Optimization Goals
by Biao Wang, Yingwen Ma, Yuchen Ji, Jifei Yu, Xingquan Zhang, Ruiquan Liao, Wei Luo and Jihan Wang
Processes 2026, 14(1), 151; https://doi.org/10.3390/pr14010151 - 1 Jan 2026
Viewed by 294
Abstract
The separate-layer gas injection technology is a key means to improve the effect of refined gas injection development. Currently, the measurement and adjustment of separate injection wells primarily rely on manual experience and automatic measurement via instrument traversal, resulting in a long duration, low efficiency, and a low qualification rate for injection allocation across multi-layer intervals. Given the differing CO2-containing natural gas injection rates across intervals, this paper establishes a coupled flow model of the separate-layer gas injection wellbore, gas distributor, and formation based on the energy and mass conservation equations for wellbore pipe flow, and develops a solution method for determining gas nozzle sizes across multi-layer intervals. Based on the maximum allowable gas nozzle size, an optimization method for multi-layer collaborative allocation of separate injection wells is established, with minimum wellhead injection pressure and layered injection allocation as the optimization objectives, and the opening of the gas distributor for each layer as the optimization variable. Taking Well XXX as an example, the optimization process of allocation schemes under different gas allocation requirements is simulated. The research shows that the proposed model and method have high calculation accuracy, and that the formulated allocation schemes have strong adaptability and minor injection allocation errors, providing a scientific decision-making method for formulating refined allocation schemes for separate-layer gas injection wells, with significant theoretical and practical value for promoting the refined development of oilfields. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)

20 pages, 1652 KB  
Article
Classification of Point Cloud Data in Road Scenes Based on PointNet++
by Jingfeng Xue, Bin Zhao, Chunhong Zhao, Yueru Li and Yihao Cao
Sensors 2026, 26(1), 153; https://doi.org/10.3390/s26010153 - 25 Dec 2025
Viewed by 522
Abstract
Point cloud data, with its rich information and high-precision geometric details, holds significant value for urban road infrastructure surveying and management. To overcome the limitations of manual classification, this study employs deep learning techniques for automated point cloud feature extraction and classification, achieving high-precision object recognition in road scenes. By integrating the Princeton ModelNet40, ShapeNet, and Sydney Urban Objects datasets, we extracted 3D spatial coordinates from the Sydney Urban Objects Dataset and organized labeled point cloud files to build a comprehensive dataset reflecting real-world road scenarios. To address noise and occlusion-induced data gaps, three augmentation strategies were implemented: (1) Farthest Point Sampling (FPS): Preserves critical features while mitigating overfitting. (2) Random Z-axis rotation, translation, and scaling: Enhances model generalization. (3) Gaussian noise injection: Improves training sample realism. The PointNet++ framework was enhanced by integrating a point-filling method into the preprocessing module. Model training and prediction were conducted using its Multi-Scale Grouping (MSG) and Single-Scale Grouping (SSG) schemes. The model achieved an average training accuracy of 86.26% (peak single-instance accuracy: 98.54%; best category accuracy: 93.15%) and a test set accuracy of 97.41% (category accuracy: 84.50%). This study demonstrates successful road scene point cloud classification, providing valuable insights for point cloud data processing and related research. Full article
(This article belongs to the Section Sensing and Imaging)
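The augmentation strategies listed in the abstract are standard point cloud operations; here is a compact NumPy sketch under assumed parameter ranges (the rotation, scale, and translation bounds are illustrative, not the paper's settings).

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Iteratively pick the point farthest from those already chosen."""
    chosen = [np.random.randint(len(points))]
    dist = np.full(len(points), np.inf)
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return points[chosen]

def augment(points: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Random Z-axis rotation, scaling, translation, and Gaussian noise injection."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
    pts = points @ rot.T
    pts = pts * np.random.uniform(0.8, 1.2)                     # random scaling
    pts = pts + np.random.uniform(-0.1, 0.1, size=(1, 3))       # random translation
    return pts + np.random.normal(0.0, sigma, size=pts.shape)   # Gaussian noise

cloud = np.random.rand(4096, 3)
print(augment(farthest_point_sampling(cloud, 1024)).shape)      # (1024, 3)
```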

11 pages, 547 KB  
Article
Genetic Influence on Extended-Release Naltrexone Treatment Outcomes in Patients with Opioid Use Disorder: An Exploratory Study
by Farid Juya, Kristin Klemmetsby Solli, Ann-Christin Sannes, Bente Weimand, Johannes Gjerstad, Lars Tanum and Jon Mordal
Brain Sci. 2026, 16(1), 23; https://doi.org/10.3390/brainsci16010023 - 24 Dec 2025
Viewed by 322
Abstract
Background/Objectives: The variation in treatment outcomes of extended-release naltrexone (XR-NTX), including the potential role of genetic factors, is poorly understood. This study aimed to explore the potential association between the catechol-O-methyltransferase (COMT) rs4680 and mu-opioid receptor (OPRM1) rs1799971 genotypes and XR-NTX treatment outcomes in patients with opioid use disorder (OUD), specifically focusing on treatment retention, relapse to opioids, number of days of opioid use, and opioid cravings. Methods: This was a 24-week, open-label, prospective, exploratory clinical study involving patients with OUD who chose treatment with monthly injections of intramuscular XR-NTX. Men and women aged 18–65 years with OUD according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, were included. The participants were interviewed using the European Addiction Severity Index. Survival analyses and linear mixed models were used to analyze the data. Results: Of the 162 participants included in this study, 138 (21% female) initiated treatment with XR-NTX, with 88 genotyped for COMT rs4680 and 86 for OPRM1 rs1799971. Heterozygous Met/Val carriers of COMT rs4680 were less likely to relapse to opioids compared with those with the COMT rs4680 Met/Met genotype. No significant association was observed for the OPRM1 polymorphism. Conclusions: Patients with the COMT rs4680 Met/Val genotype exhibit a reduced risk of relapse to opioids and may therefore derive greater benefit from XR-NTX treatment compared with those with the COMT rs4680 Met/Met genotype. Future studies should be conducted with a larger number of participants and possibly include other genetic variants and treatment outcomes. The trial is registered at ClinicalTrials.gov (#NCT03647774) and the EU Clinical Trial Register (#2017-004706-18). Full article
(This article belongs to the Section Molecular and Cellular Neuroscience)

39 pages, 94444 KB  
Article
From Capture–Recapture to No Recapture: Efficient SCAD Even After Software Updates
by Kurt A. Vedros, Aleksandar Vakanski, Domenic J. Forte and Constantinos Kolias
Sensors 2026, 26(1), 118; https://doi.org/10.3390/s26010118 - 24 Dec 2025
Viewed by 403
Abstract
Side-Channel-based Anomaly Detection (SCAD) offers a powerful and non-intrusive means of detecting unauthorized behavior in IoT and cyber–physical systems. It leverages signals that emerge from physical activity—such as electromagnetic (EM) emissions or power consumption traces—as passive indicators of software execution integrity. This capability is particularly critical in IoT/IIoT environments, where large fleets of deployed devices are at heightened risk of firmware tampering, malicious code injection, and stealthy post-deployment compromise. However, its deployment remains constrained by the costly and time-consuming need to re-fingerprint whenever a program is updated or modified, as fingerprinting involves a precision-intensive manual capturing process for each execution path. To address this challenge, we propose a generative modeling framework that synthesizes realistic EM signals for newly introduced or updated execution paths. Our approach utilizes a Conditional Wasserstein Generative Adversarial Network with Gradient Penalty (CWGAN-GP) framework trained on real EM traces that are conditioned on Execution State Descriptors (ESDs) that encode instruction sequences, operands, and register values. Comprehensive evaluations at instruction-level granularity demonstrate that our approach generates synthetic signals that faithfully reproduce the distinctive features of real EM emissions—achieving 85–92% similarity to real emanations. The inclusion of ESD conditioning further improves fidelity, reducing the similarity distance by ∼13%. To gauge SCAD utility, we train a basic semi-supervised detector on the synthetic signals and find ROC-AUC results within ±1% of detectors trained on real EM data across varying noise conditions. Furthermore, the proposed 1DCNNGAN model (a CWGAN-GP variant) achieves faster training and reduced memory requirements compared with the previously leading ResGAN. Full article
(This article belongs to the Special Issue Internet of Things Cybersecurity)
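The CWGAN-GP named above builds on the standard WGAN-GP critic objective (Gulrajani et al., 2017); writing the conditioning vector c for the paper's ESD encoding, the usual form is given below. This is the textbook objective, not necessarily the authors' exact variant.

```latex
\mathcal{L}_D \;=\; \mathbb{E}_{\tilde{x}\sim P_g}\!\big[D(\tilde{x}\mid c)\big]
\;-\; \mathbb{E}_{x\sim P_r}\!\big[D(x\mid c)\big]
\;+\; \lambda\,\mathbb{E}_{\hat{x}\sim P_{\hat{x}}}\!\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x}\mid c)\rVert_2 - 1\big)^2\Big],
\qquad \hat{x} = \epsilon x + (1-\epsilon)\tilde{x},\;\; \epsilon \sim U[0,1]
```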

14 pages, 828 KB  
Article
Rates of Spawning and Mortality Using Contrasting Methods for Culling Pacific Crown-of-Thorns Starfish, Acanthaster cf. solaris
by Morgan S. Pratchett, Ciemon F. Caballes, Leighton T. Levering, Deborah Burn, Josie F. Chandler, Alec S. Leitman and Peter C. Doll
Biology 2025, 14(12), 1720; https://doi.org/10.3390/biology14121720 - 1 Dec 2025
Viewed by 681
Abstract
Timely, concerted, and persistent culling (or manual removal) is required to effectively manage population irruptions of crown-of-thorns starfish (CoTS; Acanthaster spp.). However, there are concerns that handling and culling gravid starfish may induce spawning. This study explicitly tested the frequency and timing of spawning for Pacific CoTS (Acanthaster cf. solaris) injected with either bile salts (10 mL of 8 g·L⁻¹ Bile Salts No. 3) or vinegar (20 mL of 4% acetic acid, with 10 mL injected into each of two non-adjacent arms), up to 48 h after treatment, while also considering three distinct experimental controls (handling controls, injection controls, and spawning controls). This study showed that male CoTS often spawn within 24 h after different culling treatments. However, the incidence of spawning by male starfish injected with vinegar (70%) was nearly twice that of male starfish injected with bile salts (36.4%). In contrast, there were no instances of spawning by female CoTS following handling or injections of bile salts and vinegar. Variation in the incidence of spawning between culling treatments is largely attributable to differences in the rate of mortality, whereby CoTS injected with bile salts (n = 23) consistently died within 24 h and therefore had limited opportunity to spawn. Meanwhile, CoTS injected with vinegar generally died >24 h post-treatment, and many had not died even after 48 h. This suggests that, where available, bile salts (rather than vinegar) should be used when culling Acanthaster cf. solaris, especially during reproductive periods. However, sustained culling effort is still the most direct and effective way to suppress the local densities and reproductive capacity of CoTS. Full article
(This article belongs to the Section Marine and Freshwater Biology)

34 pages, 14464 KB  
Article
Modular IoT Architecture for Monitoring and Control of Office Environments Based on Home Assistant
by Yevheniy Khomenko and Sergii Babichev
IoT 2025, 6(4), 69; https://doi.org/10.3390/iot6040069 - 17 Nov 2025
Cited by 1 | Viewed by 1703
Abstract
Cloud-centric IoT frameworks remain dominant; however, they introduce major challenges related to data privacy, latency, and system resilience. Existing open-source solutions often lack standardized principles for scalable, local-first deployment and do not adequately integrate fault tolerance with hybrid automation logic. This study presents a practical and extensible local-first IoT architecture designed for full operational autonomy using open-source components. The proposed system features a modular, layered design that includes device, communication, data, management, service, security, and presentation layers. It integrates MQTT, Zigbee, REST, and WebSocket protocols to enable reliable publish–subscribe and request–response communication among heterogeneous devices. A hybrid automation model combines rule-based logic with lightweight data-driven routines for context-aware decision-making. The implementation uses Proxmox-based virtualization with Home Assistant as the core automation engine and operates entirely offline, ensuring privacy and continuity without cloud dependency. The architecture was deployed in a real-world office environment and evaluated under workload and fault-injection scenarios. Results demonstrate stable operation with MQTT throughput exceeding 360,000 messages without packet loss, automatic recovery from simulated failures within three minutes, and energy savings of approximately 28% compared to baseline manual control. Compared to established frameworks such as FIWARE and IoT-A, the proposed approach achieves enhanced modularity, local autonomy, and hybrid control capabilities, offering a reproducible model for privacy-sensitive smart environments. Full article
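The MQTT publish–subscribe pattern at the core of such a local-first deployment can be sketched with paho-mqtt (1.x-style client API); the broker host, topic names, and payload schema below are illustrative assumptions, not the paper's configuration.

```python
# Minimal publish/subscribe sketch against a local broker using paho-mqtt.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # e.g. office/desk1/state {'temp_c': 22.4}
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()                        # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect("homeassistant.local", 1883)   # local-first: broker on the LAN, no cloud
client.subscribe("office/+/state")
client.publish("office/desk1/state", json.dumps({"temp_c": 22.4}))
client.loop_forever()
```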

41 pages, 6244 KB  
Article
A Holistic Framework for Optimizing CO2 Storage: Reviewing Multidimensional Constraints and Application of Automated Hierarchical Spatiotemporal Discretization Algorithm
by Ismail Ismail, Sofianos Panagiotis Fotias and Vassilis Gaganis
Energies 2025, 18(22), 5926; https://doi.org/10.3390/en18225926 - 11 Nov 2025
Viewed by 599
Abstract
Climate change mitigation demands scalable, technologically mature solutions capable of addressing emissions from hard-to-abate sectors. Carbon Capture and Storage (CCS) offers one of the few ready pathways for deep decarbonization by capturing CO2 at large point sources and securely storing it in deep geological formations. The long-term viability of CCS depends on well control strategies/injection schedules that maximize storage capacity, maintain containment integrity, ensure commercial deliverability and remain economically viable. However, current practice still relies heavily on manual, heuristic-based well scheduling, which struggles to optimize storage capacity while minimizing by-products such as CO2 recycling within the high-dimensional space of interdependent technical, commercial, operational, economic and regulatory constraints. This study makes two contributions: (1) it systematically reviews, maps and characterizes these multidimensional constraints, framing them as an integrated decision space for CCS operations, and (2) it introduces an industry-ready optimization framework—Automated Optimization of Well control Strategies through Dynamic Time–Space Discretization—which couples reservoir simulation with constraint-embedded, hierarchical refinement in space and time. Using a modified genetic algorithm, injection schedules evolve from coarse to fine resolution, accelerating convergence while preserving robustness. Applied to a heterogeneous saline aquifer model, the method was tested under both engineering and financial objectives. Compared to an industry-standard manual schedule, optimal solutions increased net stored CO2 by 14% and reduced recycling by 22%, raising retention efficiency to over 95%. Under financial objectives, the framework maintained these technical gains while increasing cumulative cash flow by 23%, achieved through leaner, smoother injection profiles that minimize costly by-products. The results confirm that the framework’s robustness, scalability and compatibility with commercial simulators make it a practical pathway to enhance CCS performance and accelerate deployment at scale. Full article
(This article belongs to the Section B3: Carbon Emission and Utilization)
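The coarse-to-fine schedule refinement described above can be illustrated with a toy evolutionary loop; this is a sketch of the general idea under an invented objective, not the authors' modified genetic algorithm or constraint set.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(schedule: np.ndarray) -> float:
    """Invented objective: reward injected volume, penalize abrupt rate changes
    (a stand-in for recycling/containment penalties in the real framework)."""
    return schedule.sum() - 5.0 * np.abs(np.diff(schedule)).sum()

def evolve(pop: np.ndarray, generations: int) -> np.ndarray:
    """Truncation selection plus Gaussian mutation over injection-rate vectors."""
    for _ in range(generations):
        scores = np.array([fitness(s) for s in pop])
        parents = pop[np.argsort(scores)[-(len(pop) // 2):]]
        children = np.clip(parents + rng.normal(0.0, 0.05, parents.shape), 0.0, 1.0)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(s) for s in pop])]

pop = rng.uniform(0.0, 1.0, size=(20, 4))   # coarse schedule: 4 control periods
for _ in range(2):                          # hierarchical refinement: 4 -> 8 -> 16
    best = evolve(pop, generations=30)
    refined = np.repeat(best, 2)            # each coarse step splits into two
    pop = np.clip(refined + rng.normal(0.0, 0.05, (20, refined.size)), 0.0, 1.0)
print(evolve(pop, generations=30))          # best fine-grained schedule found
```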

18 pages, 4036 KB  
Article
Precise Control of Micropipette Flow Rate for Fluorescence Imaging in In Vivo Micromanipulation
by Ruimin Li, Shaojie Fu, Zijian Guo, Jinyu Qiu, Yuzhu Liu, Mengya Liu, Qili Zhao and Xin Zhao
Sensors 2025, 25(21), 6647; https://doi.org/10.3390/s25216647 - 30 Oct 2025
Viewed by 963
Abstract
Precise regulation of micropipette outlet flow is critical for fluorescence imaging in in vivo micromanipulation. In such procedures, a micropipette with a micro-sized opening is driven by gas pressure to deliver internal solution into the in vivo environment. The outlet flow rate needs to be precisely regulated to ensure a uniform and stable fluorescence distribution. However, conventional manual pressure injection methods face inherent limitations, including insufficient precision and poor reproducibility. Existing commercial microinjection systems lack a quantitative relationship between pressure and flow rate, and existing calibration methods in the field of microfluidics suffer from limited flow-rate measurement resolution, constraining the establishment of a precise pressure–flow quantitative relationship. To address these challenges, we developed a closed-loop pressure regulation system with 1 Pa-level control resolution and established a quantitative calibration of the pressure–flow relationship using a droplet-based method. The calibration revealed a linear relationship with a mean pressure–flow gain of 4.846 × 10⁻¹⁷ m³·s⁻¹·Pa⁻¹ (R² > 0.99). Validation results demonstrated that the system achieved the target outlet flow rate with a flow control error of less than 10 fL/s. Finally, application results in a brain-slice environment confirmed its capability to maintain stable fluorescence imaging, with fluorescence intensity fluctuations around 1.3%. These results demonstrated that the proposed approach provides stable, precise, and reproducible flow regulation under physiologically relevant conditions, thereby offering a valuable tool for in vivo micromanipulation and detection. Full article
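Given the linear pressure–flow relationship reported above, converting a target flow rate into a pressure setpoint is a one-line calculation; the gain is the calibrated value quoted in the abstract, while the target flow below is a hypothetical example.

```python
K_GAIN = 4.846e-17      # m^3 / (s * Pa): outlet flow per unit pressure (from the abstract)
FL_PER_S = 1e-18        # one femtoliter per second, in m^3/s

def pressure_for_flow(q_target: float) -> float:
    """Pressure setpoint (Pa) that yields the desired outlet flow (m^3/s)."""
    return q_target / K_GAIN

print(f"{pressure_for_flow(50 * FL_PER_S):.2f} Pa")   # 50 fL/s -> ~1.03 Pa
```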

24 pages, 5577 KB  
Review
Intelligent Batch Harvesting of Trellis-Grown Fruits with Application to Kiwifruit Picking Robots
by Yuxin Yang, Mei Zhang, Wei Ma and Yongsong Hu
Agronomy 2025, 15(11), 2499; https://doi.org/10.3390/agronomy15112499 - 28 Oct 2025
Viewed by 1133
Abstract
This study aims to help researchers quickly understand the latest research status of kiwifruit picking robots to expand their research ideas. The centralized picking of kiwifruit is confronted with challenges such as high labor intensity and labor shortage. A series of social issues, including the decline in the agricultural population and population aging, has further increased the cost of its harvest. Therefore, replacing manual operations with intelligent picking robots is an effective solution. This paper, through literature review and organization, analyzes and evaluates the performance characteristics of various current kiwifruit picking robots. It summarizes the key technologies of kiwifruit picking robots from the aspects of robot vision systems, mechanical arms, and end effectors. At the same time, it conducts an in-depth analysis of the problems existing in automatic kiwifruit harvesting technology in modern agriculture. Finally, it is concluded that future research should address kiwifruit cluster recognition algorithms, picking efficiency, damage, cost, and universality to enhance the operational performance and market promotion potential of kiwifruit picking robots. The significance of this review lies in addressing the imminent labor crisis in agricultural production and steering agriculture toward intelligent and precise transformation. Its contributions are reflected in greatly advancing robotic technology in complex agricultural settings, generating substantial technical achievements, injecting new vitality into related industries and academic fields, and ultimately delivering sustainable economic benefits and stable agricultural supply to society. Full article
(This article belongs to the Special Issue Digital Twins in Precision Agriculture)

14 pages, 6970 KB  
Article
Rehearsal-Free Continual Learning for Emerging Unsafe Behavior Recognition in Construction Industry
by Tao Wang, Saisai Ye, Zimeng Zhai, Weigang Lu and Cunling Bian
Sensors 2025, 25(21), 6525; https://doi.org/10.3390/s25216525 - 23 Oct 2025
Viewed by 686
Abstract
In the realm of Industry 5.0, the incorporation of Artificial Intelligence (AI) in overseeing workers, machinery, and industrial systems is essential for fostering a human-centric, sustainable, and resilient industry. Despite technological advancements, the construction industry remains largely labor intensive, with site management and interventions predominantly reliant on manual judgments, leading to inefficiencies and various challenges. This research emphasizes identifying unsafe behaviors and risks within construction environments by employing AI. Given the continuous emergence of unsafe behaviors that require caution, it is imperative to adapt to these novel categories while retaining the knowledge of existing ones. Although deep convolutional neural networks have shown excellent performance in behavior recognition, they traditionally function as predefined multi-way classifiers, which exhibit limited flexibility in accommodating emerging unsafe behavior classes. Addressing this issue, this study proposes a versatile and efficient recognition model capable of expanding the range of unsafe behaviors while maintaining the recognition of both new and existing categories. Adhering to the continual learning paradigm, this method integrates two types of complementary prompts into the pre-trained model: task-invariant prompts that encode knowledge shared across tasks, and task-specific prompts that adapt the model to individual tasks. These prompts are injected into specific layers of the frozen backbone to guide learning without requiring a rehearsal buffer, enabling effective recognition of both new and previously learned unsafe behaviors. Additionally, this paper introduces a benchmark dataset, Split-UBR, specifically constructed for continual unsafe behavior recognition on construction sites. To rigorously evaluate the proposed model, we conducted comparative experiments using average accuracy and forgetting as metrics, and benchmarked against state-of-the-art continual learning baselines. Results on the Split-UBR dataset demonstrate that our method achieves superior performance in terms of both accuracy and reduced forgetting across all tasks, highlighting its effectiveness in dynamic industrial environments. Full article
(This article belongs to the Section Intelligent Sensors)
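The complementary-prompt mechanism described above can be sketched as learnable tokens prepended to a frozen transformer block's input; the dimensions, prompt lengths, and task count below are assumptions for illustration, not the authors' configuration.

```python
# Toy sketch of complementary prompts: task-invariant and task-specific tokens
# are prepended to the token sequence entering a frozen backbone layer.
import torch
import torch.nn as nn

d_model, n_inv, n_spec, n_tasks = 256, 4, 4, 3
task_invariant = nn.Parameter(torch.randn(n_inv, d_model) * 0.02)   # shared across tasks
task_specific = nn.ParameterList(
    [nn.Parameter(torch.randn(n_spec, d_model) * 0.02) for _ in range(n_tasks)]
)

def inject_prompts(x: torch.Tensor, task_id: int) -> torch.Tensor:
    """x: (batch, seq, d_model) token embeddings entering a frozen block."""
    b = x.size(0)
    inv = task_invariant.unsqueeze(0).expand(b, -1, -1)
    spec = task_specific[task_id].unsqueeze(0).expand(b, -1, -1)
    return torch.cat([inv, spec, x], dim=1)   # prompts prepended as extra tokens

tokens = torch.randn(2, 16, d_model)
print(inject_prompts(tokens, task_id=0).shape)   # torch.Size([2, 24, 256])
```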