Search Results (1,905)

Search Parameters:
Keywords = real-time integrity assessment

25 pages, 2100 KiB  
Article
Flexible Demand Side Management in Smart Cities: Integrating Diverse User Profiles and Multiple Objectives
by Nuno Souza e Silva and Paulo Ferrão
Energies 2025, 18(15), 4107; https://doi.org/10.3390/en18154107 - 2 Aug 2025
Abstract
Demand Side Management (DSM) plays a crucial role in modern energy systems, enabling more efficient use of energy resources and contributing to the sustainability of the power grid. This study examines DSM strategies within a multi-environment context encompassing residential, commercial, and industrial sectors, with a focus on diverse appliance types that exhibit distinct operational characteristics and user preferences. Initially, a single-objective optimization approach using Genetic Algorithms (GAs) is employed to minimize the total energy cost under a real Time-of-Use (ToU) pricing scheme. This heuristic method allows for the effective scheduling of appliance operations while factoring in their unique characteristics such as power consumption, usage duration, and user-defined operational flexibility. This study extends the optimization problem to a multi-objective framework that incorporates the minimization of CO2 emissions under a real annual energy mix while also accounting for user discomfort. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is utilized for this purpose, providing a Pareto-optimal set of solutions that balances these competing objectives. The inclusion of multiple objectives ensures a comprehensive assessment of DSM strategies, aiming to reduce environmental impact and enhance user satisfaction. Additionally, this study monitors the Peak-to-Average Ratio (PAR) to evaluate the impact of DSM strategies on load balancing and grid stability. It also analyzes the impact of considering different periods of the year with the associated ToU hourly schedule and CO2 emissions hourly profile. A key innovation of this research is the integration of detailed, category-specific metrics that enable the disaggregation of costs, emissions, and user discomfort across residential, commercial, and industrial appliances. 
This granularity enables stakeholders to implement tailored strategies that align with specific operational goals and regulatory compliance. Also, the emphasis on a user discomfort indicator allows us to explore the flexibility available in such DSM mechanisms. The results demonstrate the effectiveness of the proposed multi-objective optimization approach in achieving significant cost savings that may reach 20% for industrial applications, while the order of magnitude of the trade-offs involved in terms of emissions reduction, improvement in discomfort, and PAR reduction is quantified for different frameworks. The outcomes not only underscore the efficacy of applying advanced optimization frameworks to real-world problems but also point to pathways for future research in smart energy management. This comprehensive analysis highlights the potential of advanced DSM techniques to enhance the sustainability and resilience of energy systems while also offering valuable policy implications. Full article
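The multi-objective core of the approach above is the non-domination test used to build the Pareto-optimal set. A minimal sketch follows, with invented (cost, emissions, discomfort) tuples standing in for scored appliance schedules; NSGA-II itself adds selection, crossover, and crowding-distance steps not shown here.

```python
# Hypothetical sketch of the Pareto step in a multi-objective DSM scheduler.
# Objective values (cost, CO2 emissions, discomfort) per candidate schedule
# are assumed inputs; NSGA-II would evolve these candidates over generations.

def dominates(a, b):
    """True if solution a is no worse than b in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of (cost, emissions, discomfort) tuples."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

candidates = [
    (100.0, 50.0, 0.2),   # cheap schedule, high emissions
    (120.0, 30.0, 0.1),   # pricier, cleaner, more comfortable
    (130.0, 55.0, 0.3),   # dominated by the first candidate
]
front = pareto_front(candidates)
```

The first two candidates trade cost against emissions, so both survive; the third is worse in all three objectives and is filtered out.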

23 pages, 2029 KiB  
Systematic Review
Exploring the Role of Industry 4.0 Technologies in Smart City Evolution: A Literature-Based Study
by Nataliia Boichuk, Iwona Pisz, Anna Bruska, Sabina Kauf and Sabina Wyrwich-Płotka
Sustainability 2025, 17(15), 7024; https://doi.org/10.3390/su17157024 - 2 Aug 2025
Abstract
Smart cities are technologically advanced urban environments where interconnected systems and data-driven technologies enhance public service delivery and quality of life. These cities rely on information and communication technologies, the Internet of Things, big data, cloud computing, and other Industry 4.0 tools to support efficient city management and foster citizen engagement. Often referred to as digital cities, they integrate intelligent infrastructures and real-time data analytics to improve mobility, security, and sustainability. Ubiquitous sensors, paired with Artificial Intelligence, enable cities to monitor infrastructure, respond to residents’ needs, and optimize urban conditions dynamically. Given the increasing significance of Industry 4.0 in urban development, this study adopts a bibliometric approach to systematically review the application of these technologies within smart cities. Utilizing major academic databases such as Scopus and Web of Science, the research aims to identify the primary Industry 4.0 technologies implemented in smart cities, assess their impact on infrastructure, economic systems, and urban communities, and explore the challenges and benefits associated with their integration. The bibliometric analysis covered publications from 2016 to 2023, the period in which urban researchers’ interest in the technologies of the new industrial revolution emerged. The aim is to contribute to a deeper understanding of how smart cities evolve through the adoption of advanced technological frameworks. Research indicates that IoT and AI are the most commonly used tools in urban spaces, particularly in smart mobility and smart environments. Full article

26 pages, 1567 KiB  
Article
A CDC–ANFIS-Based Model for Assessing Ship Collision Risk in Autonomous Navigation
by Hee-Jin Lee and Ho Namgung
J. Mar. Sci. Eng. 2025, 13(8), 1492; https://doi.org/10.3390/jmse13081492 - 1 Aug 2025
Abstract
To improve collision risk prediction in high-traffic coastal waters and support real-time decision-making in maritime navigation, this study proposes a regional collision risk prediction system integrating the Computed Distance at Collision (CDC) method with an Adaptive Neuro-Fuzzy Inference System (ANFIS). Unlike Distance at Closest Point of Approach (DCPA), which depends on the position of Global Positioning System (GPS) antennas, Computed Distance at Collision (CDC) directly reflects the actual hull shape and potential collision point. This enables a more realistic assessment of collision risk by accounting for the hull geometry and boundary conditions specific to different ship types. The system was designed and validated using ship motion simulations involving bulk and container ships across varying speeds and crossing angles. The CDC method was used to define collision, almost-collision, and near-collision situations based on geometric and hydrodynamic criteria. Subsequently, the FIS–CDC model was constructed using the ANFIS by learning patterns in collision time and distance under each condition. A total of four input variables—ship speed, crossing angle, remaining time, and remaining distance—were used to infer the collision risk index (CRI), allowing for a more nuanced and vessel-specific assessment than traditional CPA-based indicators. Simulation results show that the time to collision decreases with higher speeds and increases with wider crossing angles. The bulk carrier exhibited a wider collision-prone angle range and a greater sensitivity to speed changes than the container ship, highlighting differences in maneuverability and risk response. The proposed system demonstrated real-time applicability and accurate risk differentiation across scenarios. This research contributes to enhancing situational awareness and proactive risk mitigation in Maritime Autonomous Surface Ship (MASS) and Vessel Traffic System (VTS) environments. 
Future work will focus on real-time CDC optimization and extending the model to accommodate diverse ship types and encounter geometries. Full article
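Purely for illustration, a hand-rolled stand-in for the four-input risk inference can be sketched by normalizing ship speed, crossing angle, remaining time, and remaining distance and combining them with assumed equal weights. The actual FIS–CDC model learns fuzzy membership functions and rules from simulation data, so none of the weights or scales below come from the paper.

```python
# Illustrative only: a hand-rolled stand-in for the ANFIS-based collision risk
# index (CRI). All normalization bounds and weights are assumptions.

def collision_risk_index(speed_kn, crossing_angle_deg, t_remaining_s, d_remaining_m,
                         max_speed=30.0, max_time=600.0, max_dist=5000.0):
    """Return a 0..1 risk score: higher speed, smaller remaining time/distance,
    and near-head-on geometry all raise the risk."""
    s = min(speed_kn / max_speed, 1.0)
    # risk is taken to peak when the crossing angle is small (near head-on)
    g = 1.0 - min(crossing_angle_deg, 180.0) / 180.0
    t = 1.0 - min(t_remaining_s / max_time, 1.0)
    d = 1.0 - min(d_remaining_m / max_dist, 1.0)
    weights = (0.25, 0.25, 0.25, 0.25)   # assumed equal weighting
    return sum(w * x for w, x in zip(weights, (s, g, t, d)))
```

With all else equal, shrinking the remaining distance raises the index, which is the qualitative behavior the learned model captures with far more nuance.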
22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. The experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing other models. The experimental results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure. Full article
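The reported precision, recall, F1, and IoU follow the standard pixel-count definitions; a quick sketch with made-up counts (not the paper's confusion matrix):

```python
# Sketch of the segmentation evaluation metrics reported above, computed from
# pixel-level true-positive / false-positive / false-negative counts.
# The counts are invented example values.

def segmentation_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)   # intersection over union for the damage class
    return precision, recall, f1, iou

p, r, f1, iou = segmentation_metrics(tp=900, fp=50, fn=100)
```

Note that IoU is always the strictest of the four, which is why papers often quote it alongside F1.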

16 pages, 914 KiB  
Article
APTIMA mRNA vs. DNA-Based HPV Assays: Analytical Performance Insights from a Resource-Limited South African Setting
by Varsetile Varster Nkwinika, Kelvin Amoh Amissah, Johnny Nare Rakgole, Moshawa Calvin Khaba, Cliff Abdul Magwira and Ramokone Lisbeth Lebelo
Int. J. Mol. Sci. 2025, 26(15), 7450; https://doi.org/10.3390/ijms26157450 - 1 Aug 2025
Abstract
Cervical cancer remains a major health burden among women in sub-Saharan Africa, where screening is often limited. Persistent high-risk human papillomavirus (HR-HPV) infection is the principal cause, highlighting the need for accurate molecular diagnostics. This cross-sectional study evaluated the analytical performance of one mRNA assay, APTIMA® HPV assay (APTIMA mRNA), and two DNA-based assays, the Abbott RealTime High Risk HPV assay (Abbott DNA) and Seegene Allplex™ II HPV28 assay (Seegene DNA), in 527 cervical samples from a South African tertiary hospital, focusing on 14 shared HR-HPV genotypes. Seegene DNA yielded the highest detection rate (53.7%), followed by Abbott DNA (48.2%) and APTIMA mRNA (45.2%). APTIMA mRNA showed a strong agreement with Abbott DNA (87.9%, κ = 0.80), 89.9% sensitivity, 91.2% NPV, and the highest accuracy (AUC = 0.8804 vs. 0.8681). The agreement between APTIMA mRNA and Seegene DNA was moderate (83.4%, κ = 0.70), reflecting target differences. Many DNA-positive/mRNA-negative cases likely represent transient infections, though some may be latent with reactivation potential, warranting a follow-up. In resource-constrained settings, prioritizing transcriptionally active infections through mRNA testing may enhance screening efficiency and reduce burden. Scalable, cost-effective assays with strong clinical utility are essential for broadening access and improving cervical cancer prevention. Further studies should assess the integration of mRNA testing into longitudinal screening algorithms. Full article
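The agreement figures quoted above (percent agreement and Cohen's κ) come from a 2×2 table of paired assay calls; the sketch below uses illustrative counts, not the study's data.

```python
# Sketch of the inter-assay agreement statistics: observed percent agreement
# and Cohen's kappa from a 2x2 table of paired positive/negative HPV calls.
# The counts are invented for illustration.

def cohens_kappa(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
    n = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
    observed = (both_pos + both_neg) / n
    # chance agreement from each assay's marginal positive rate
    a_pos = (both_pos + a_pos_b_neg) / n
    b_pos = (both_pos + a_neg_b_pos) / n
    expected = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return observed, (observed - expected) / (1 - expected)

agreement, kappa = cohens_kappa(both_pos=220, a_pos_b_neg=30,
                                a_neg_b_pos=25, both_neg=252)
```

κ discounts the agreement expected by chance, which is why ~90% raw agreement maps to κ ≈ 0.8 ("strong") rather than 0.9.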

20 pages, 3582 KiB  
Article
Design and Development of a Real-Time Pressure-Driven Monitoring System for In Vitro Microvasculature Formation
by Gayathri Suresh, Bradley E. Pearson, Ryan Schreiner, Yang Lin, Shahin Rafii and Sina Y. Rabbany
Biomimetics 2025, 10(8), 501; https://doi.org/10.3390/biomimetics10080501 - 1 Aug 2025
Abstract
Microfluidic platforms offer a powerful approach for ultimately replicating vascularization in vitro, enabling precise microscale control and manipulation of physical parameters. Despite these advances, the real-time ability to monitor and quantify mechanical forces—particularly pressure—within microfluidic environments remains constrained by limitations in cost and compatibility across diverse device architectures. Our work presents an advanced experimental module for quantifying pressure within a vascularizing microfluidic platform. Equipped with an integrated Arduino microcontroller and image monitoring, the system facilitates real-time remote monitoring to access temporal pressure and flow dynamics within the device. This setup provides actionable insights into the hemodynamic parameters driving vascularization in vitro. In-line pressure sensors, interfaced through I2C communication, are employed to precisely record inlet and outlet pressures during critical stages of microvasculature tubulogenesis. Flow measurements are obtained by analyzing changes in reservoir volume over time (dV/dt), correlated with the change in pressure over time (dP/dt). This quantitative assessment of various pressure conditions in a microfluidic platform offers insights into their impact on microvasculature perfusion kinetics. Data acquisition can help inform and finetune functional vessel network formation and potentially enhance the durability, stability, and reproducibility of engineered in vitro platforms for organoid vascularization in regenerative medicine. Full article
(This article belongs to the Section Biomimetic Design, Constructions and Devices)
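The dV/dt flow estimate described above amounts to finite differences over timestamped reservoir readings; a minimal sketch with invented volumes (not measurements from the paper's device):

```python
# Sketch of the flow estimate: flow is the rate of change of reservoir volume,
# approximated by finite differences between consecutive timestamped readings.
# Times and volumes below are made-up example data.

def flow_rates(times_s, volumes_ul):
    """Approximate dV/dt (uL/s) between consecutive readings."""
    return [(v1 - v0) / (t1 - t0)
            for (t0, v0), (t1, v1) in zip(zip(times_s, volumes_ul),
                                          zip(times_s[1:], volumes_ul[1:]))]

times = [0, 60, 120, 180]                # seconds
volumes = [500.0, 470.0, 441.5, 414.0]   # reservoir draining over time
rates = flow_rates(times, volumes)       # negative: volume decreasing
```

Pairing these rates with the in-line pressure readings over the same intervals gives the dP/dt-vs-dV/dt correlation the abstract mentions.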

23 pages, 2888 KiB  
Review
Machine Learning in Flocculant Research and Application: Toward Smart and Sustainable Water Treatment
by Caichang Ding, Ling Shen, Qiyang Liang and Lixin Li
Separations 2025, 12(8), 203; https://doi.org/10.3390/separations12080203 - 1 Aug 2025
Abstract
Flocculants are indispensable in water and wastewater treatment, enabling the aggregation and removal of suspended particles, colloids, and emulsions. However, the conventional development and application of flocculants rely heavily on empirical methods, which are time-consuming, resource-intensive, and environmentally problematic due to issues such as sludge production and chemical residues. Recent advances in machine learning (ML) have opened transformative avenues for the design, optimization, and intelligent application of flocculants. This review systematically examines the integration of ML into flocculant research, covering algorithmic approaches, data-driven structure–property modeling, high-throughput formulation screening, and smart process control. ML models—including random forests, neural networks, and Gaussian processes—have successfully predicted flocculation performance, guided synthesis optimization, and enabled real-time dosing control. Applications extend to both synthetic and bioflocculants, with ML facilitating strain engineering, fermentation yield prediction, and polymer degradability assessments. Furthermore, the convergence of ML with IoT, digital twins, and life cycle assessment tools has accelerated the transition toward sustainable, adaptive, and low-impact treatment technologies. Despite its potential, challenges remain in data standardization, model interpretability, and real-world implementation. This review concludes by outlining strategic pathways for future research, including the development of open datasets, hybrid physics–ML frameworks, and interdisciplinary collaborations. By leveraging ML, the next generation of flocculant systems can be more effective, environmentally benign, and intelligently controlled, contributing to global water sustainability goals. Full article
(This article belongs to the Section Environmental Separations)
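As a toy illustration of the real-time dosing control the review describes, a proportional rule can nudge flocculant dose toward a turbidity setpoint. The reviewed systems learn this mapping with ML models; the gain, setpoint, and limits below are assumptions, not values from any reviewed work.

```python
# Illustrative stand-in for ML-driven flocculant dosing control: a simple
# proportional rule steering dose toward an effluent-turbidity setpoint.
# Gain, setpoint, and dose limits are assumed values.

def adjust_dose(current_dose_mg_l, turbidity_ntu, setpoint_ntu=5.0, gain=0.2,
                min_dose=0.0, max_dose=50.0):
    """Raise dose when effluent turbidity is above target, lower it when below,
    clamped to the allowed dosing range."""
    new_dose = current_dose_mg_l + gain * (turbidity_ntu - setpoint_ntu)
    return max(min_dose, min(max_dose, new_dose))

dose = adjust_dose(current_dose_mg_l=10.0, turbidity_ntu=12.0)  # above target
```

A learned controller replaces the fixed gain with a model of how dose, water chemistry, and mixing conditions jointly affect turbidity.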

27 pages, 1832 KiB  
Review
Breaking the Traffic Code: How MaaS Is Shaping Sustainable Mobility Ecosystems
by Tanweer Alam
Future Transp. 2025, 5(3), 94; https://doi.org/10.3390/futuretransp5030094 - 1 Aug 2025
Abstract
Urban areas are facing increasing traffic congestion, pollution, and infrastructure strain. Traditional urban transportation systems are often fragmented. They require users to plan, pay, and travel across multiple disconnected services. Mobility-as-a-Service (MaaS) integrates these services into a single digital platform, simplifying access and improving the user experience. This review critically examines the role of MaaS in fostering sustainable mobility ecosystems. MaaS aims to enhance user-friendliness, service variety, and sustainability by adopting a customer-centric approach to transportation. The findings reveal that successful MaaS systems consistently align with multimodal transport infrastructure, equitable access policies, and strong public-private partnerships. MaaS enhances the management of routes and traffic, effectively mitigating delays and congestion while concurrently reducing energy consumption and fuel usage. In this study, the authors examine MaaS as a new mobility paradigm for a sustainable transportation system in smart cities, observing the challenges and opportunities associated with its implementation. To assess the environmental impact, a sustainability index is calculated based on the use of different modes of transportation. Significant findings indicate that MaaS systems are proliferating in both quantity and complexity, increasingly integrating capabilities such as real-time multimodal planning, dynamic pricing, and personalized user profiles. Full article
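The abstract mentions a sustainability index computed from the use of different transport modes but does not give its formula, so the following share-weighted sketch is a generic stand-in with assumed per-mode scores.

```python
# Generic stand-in for a mode-share sustainability index (the paper's exact
# formula is not given). Per-mode scores are assumed, higher = more sustainable.

MODE_SCORES = {
    "walking": 1.0,
    "cycling": 0.95,
    "public_transit": 0.75,
    "shared_car": 0.4,
    "private_car": 0.15,
}

def sustainability_index(mode_shares):
    """mode_shares: dict mode -> fraction of trips (normalized by total)."""
    total = sum(mode_shares.values())
    return sum(MODE_SCORES[m] * s for m, s in mode_shares.items()) / total

idx = sustainability_index({"walking": 0.2, "public_transit": 0.5,
                            "private_car": 0.3})
```

Shifting share from private cars toward transit or active modes raises the index, which is the direction MaaS integration is meant to push.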

40 pages, 4775 KiB  
Article
Optimal Sizing of Battery Energy Storage System for Implicit Flexibility in Multi-Energy Microgrids
by Andrea Scrocca, Maurizio Delfanti and Filippo Bovera
Appl. Sci. 2025, 15(15), 8529; https://doi.org/10.3390/app15158529 - 31 Jul 2025
Abstract
In the context of urban decarbonization, multi-energy microgrids (MEMGs) are gaining increasing relevance due to their ability to enhance synergies across multiple energy vectors. This study presents a block-based MILP framework developed to optimize the operations of a real MEMG, with a particular focus on accurately modeling the structure of electricity and natural gas bills. The objective is to assess the added economic value of integrating a battery energy storage system (BESS) under the assumption it is employed to provide implicit flexibility—namely, bill management, energy arbitrage, and peak shaving. Results show that under assumed market conditions, tariff schemes, and BESS costs, none of the analyzed BESS configurations achieve a positive net present value. However, a 2 MW/4 MWh BESS yields a 3.8% reduction in annual operating costs compared to the base case without storage, driven by increased self-consumption (+2.8%), reduced thermal energy waste (–6.4%), and a substantial decrease in power-based electricity charges (–77.9%). The performed sensitivity analyses indicate that even with a significantly higher day-ahead market price spread, the BESS is not sufficiently incentivized to perform pure energy arbitrage and that the effectiveness of a time-of-use power-based tariff depends not only on the level of price differentiation but also on the BESS size. Overall, this study provides insights into the role of BESS in MEMGs and highlights the need for electricity bill designs that better reward the provision of implicit flexibility by storage systems. Full article
(This article belongs to the Special Issue Innovative Approaches to Optimize Future Multi-Energy Systems)
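The net-present-value test underlying the economic conclusion can be sketched directly; the capex, savings, and discount rate below are illustrative, not the paper's cost assumptions.

```python
# Sketch of the NPV check implied above: a BESS investment pays off only if
# discounted annual savings exceed the upfront cost. All numbers are invented.

def npv(capex, annual_saving, rate, years):
    """Net present value of a storage investment with constant annual savings."""
    return -capex + sum(annual_saving / (1 + rate) ** t
                        for t in range(1, years + 1))

# e.g. a battery at 1.2 M EUR with 90 k EUR/year savings over 10 years at 5%
value = npv(capex=1_200_000, annual_saving=90_000, rate=0.05, years=10)
```

With these assumed figures the discounted savings fall well short of the capex, mirroring the paper's finding that no analyzed configuration reaches a positive NPV from implicit flexibility alone.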

29 pages, 3400 KiB  
Article
Synthetic Data Generation for Machine Learning-Based Hazard Prediction in Area-Based Speed Control Systems
by Mariusz Rychlicki and Zbigniew Kasprzyk
Appl. Sci. 2025, 15(15), 8531; https://doi.org/10.3390/app15158531 - 31 Jul 2025
Abstract
This work focuses on the possibilities of generating synthetic data for machine learning in hazard prediction in area-based speed monitoring systems. The purpose of the research conducted was to develop a methodology for generating realistic synthetic data to support the design of a continuous vehicle speed monitoring system to minimize the risk of traffic accidents caused by speeding. The SUMO traffic simulator was used to model driver behavior in the analyzed area and within a given road network. Data from OpenStreetMap and field measurements from over a dozen speed detectors were integrated. Preliminary tests were carried out to record vehicle speeds. Based on these data, several simulation scenarios were run and compared to real-world observations using average speed, the percentage of speed limit violations, root mean square error (RMSE), and percentage compliance. A new metric, the Combined Speed Accuracy Score (CSAS), has been introduced to assess the consistency of simulation results with real-world data. For this study, a basic hazard prediction model was developed using LoRaWAN sensor network data and environmental contextual variables, including time, weather, location, and accident history. The research results in a method for evaluating and selecting the simulation scenario that best represents reality and drivers’ propensities to exceed speed limits. The results and findings demonstrate that it is possible to produce synthetic data with a level of agreement exceeding 90% with real data. Thus, it was shown that it is possible to generate synthetic data for machine learning in hazard prediction for area-based speed control systems using traffic simulators. Full article
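Two of the validation metrics named above, RMSE and percentage compliance, can be computed as follows; the paper's combined CSAS metric is not specified here, so it is omitted, and the speeds are invented example data.

```python
# Sketch of two simulation-validation metrics from the text: RMSE between
# simulated and observed mean speeds, and the share of detectors where the
# simulation lands within a tolerance band. Speeds are made-up values.
import math

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim))

def pct_compliance(sim, obs, tolerance=0.1):
    """Percent of detectors where simulation is within +/-10% of observation."""
    hits = sum(1 for s, o in zip(sim, obs) if abs(s - o) <= tolerance * o)
    return 100.0 * hits / len(sim)

simulated = [48.0, 52.5, 61.0, 45.0]   # km/h per detector
observed  = [50.0, 50.0, 60.0, 55.0]
error = rmse(simulated, observed)
comp = pct_compliance(simulated, observed)
```

Scoring each candidate SUMO scenario this way, and keeping the one closest to field measurements, is the scenario-selection idea the methodology formalizes.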

20 pages, 6694 KiB  
Article
Spatiotemporal Assessment of Benzene Exposure Characteristics in a Petrochemical Industrial Area Using Mobile-Extraction Differential Optical Absorption Spectroscopy (Me-DOAS)
by Dong keun Lee, Jung-min Park, Jong-hee Jang, Joon-sig Jung, Min-kyeong Kim, Jaeseok Heo and Duckshin Park
Toxics 2025, 13(8), 655; https://doi.org/10.3390/toxics13080655 - 31 Jul 2025
Abstract
Petrochemical complexes are spatially expansive and host diverse emission sources, making accurate monitoring of volatile organic compounds (VOCs) challenging using conventional two-dimensional methods. This study introduces Mobile-extraction Differential Optical Absorption Spectroscopy (Me-DOAS), a real-time, three-dimensional remote sensing technique for assessing benzene emissions in the Ulsan petrochemical complex, South Korea. A vehicle-mounted Me-DOAS system conducted monthly measurements throughout 2024, capturing data during four daily intervals to evaluate diurnal variation. Routes included perimeter loops and grid-based transects within core industrial zones. The highest benzene concentrations were observed in February (mean: 64.28 ± 194.69 µg/m3; geometric mean: 5.13 µg/m3), with exceedances of the national annual standard (5 µg/m3) in several months. Notably, nighttime and early morning sessions showed elevated levels, suggesting contributions from nocturnal operations and meteorological conditions such as atmospheric inversion. A total of 179 exceedances (≥30 µg/m3) were identified, predominantly in zones with benzene-handling activities. Correlation analysis revealed a significant relationship between high concentrations and specific emission sources. These results demonstrate the utility of Me-DOAS in capturing spatiotemporal emission dynamics and support its application in exposure risk assessment and industrial emission control. The findings provide a robust framework for targeted management strategies and call for integration with source apportionment and dispersion modeling tools. Full article
(This article belongs to the Section Air Pollution and Health)
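The wide gap between the reported mean (64.28 µg/m³) and geometric mean (5.13 µg/m³) is typical of skewed concentration data dominated by plume spikes; a sketch with invented values shows the same effect.

```python
# Sketch of the summary statistics used above: arithmetic vs geometric mean of
# skewed concentration data, and exceedance counting against a threshold.
# Sample concentrations are invented for illustration.
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def count_exceedances(xs, threshold=30.0):
    return sum(1 for x in xs if x >= threshold)

conc = [2.0, 3.5, 4.0, 5.5, 180.0]   # ug/m3; one plume spike skews the mean
```

One spike drags the arithmetic mean far above the geometric mean, which is why the study reports both alongside the exceedance count.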

16 pages, 1803 KiB  
Article
Degradation of Poliovirus Sabin 2 Genome After Electron Beam Irradiation
by Dmitry D. Zhdanov, Anastasia N. Shishparenok, Yury Y. Ivin, Anastasia A. Kovpak, Anastasia N. Piniaeva, Igor V. Levin, Sergei V. Budnik, Oleg A. Shilov, Roman S. Churyukin, Lubov E. Agafonova, Alina V. Berezhnova, Victoria V. Shumyantseva and Aydar A. Ishmukhametov
Vaccines 2025, 13(8), 824; https://doi.org/10.3390/vaccines13080824 - 31 Jul 2025
Abstract
Objectives: Most antiviral vaccines are created by inactivating the virus using chemical methods. The inactivation and production of viral vaccine preparations after the irradiation of viruses with accelerated electrons have a number of significant advantages. Determining the integrity of the genome of the resulting viral particles is necessary to assess the quality and degree of inactivation after irradiation. Methods: This work was performed on the Sabin 2 poliovirus model. To determine the most sensitive and most radiation-resistant parts, the poliovirus genome was divided into 20 segments. After irradiation at temperatures of 25 °C, 2–8 °C, −20 °C, or −70 °C, the amplification intensity of these segments was measured in real time. Results: The best correlation between the amplification cycle and the irradiation dose at all temperatures was observed for the left portion of segment 3D. Consequently, this section of the poliovirus genome is the least resistant to the action of accelerated electrons and is the most representative for determining genome integrity. The weakest dose dependence was observed for the right portion of VP1, which therefore cannot be used to determine genome integrity during inactivation. The electrochemical approach was also employed for a comparative assessment of viral RNA integrity before and after irradiation. An increase in the irradiation dose was accompanied by an increase in signals indicating the electrooxidation of RNA heterocyclic bases. The increase in peak current intensity of viral RNA electrochemical signals confirmed the breaking of viral RNA strands during irradiation. The shorter the RNA fragments, the greater the peak current intensities, as shorter fragments leave the heterocyclic bases more accessible to electrooxidation on the electrode. Conclusions: These results are necessary for characterizing the integrity of the viral genome for the purpose of creating antiviral vaccines. Full article
(This article belongs to the Special Issue Recent Scientific Development of Poliovirus Vaccines)
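The dose-response readout described above, correlating the amplification cycle (Ct) of a genome segment with irradiation dose, can be sketched with Pearson's r; the doses and Ct values below are invented, not the study's measurements.

```python
# Sketch of the dose-response analysis: a strong linear correlation between
# irradiation dose and amplification cycle (Ct) marks a radiation-sensitive
# genome segment. Data below are invented example values.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

doses_kgy = [0, 5, 10, 15, 20, 25]
ct_values = [18.0, 19.1, 20.3, 21.2, 22.6, 23.5]  # later Ct = more degradation
r = pearson_r(doses_kgy, ct_values)
```

A segment whose Ct tracks dose this tightly is the most informative integrity marker; a flat or noisy relationship (as reported for the right portion of VP1) is uninformative.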

11 pages, 378 KiB  
Entry
The Application of Viscoelastic Testing in Patient Blood Management
by Mordechai Hershkop, Behnam Rafiee and Mark T. Friedman
Encyclopedia 2025, 5(3), 110; https://doi.org/10.3390/encyclopedia5030110 - 31 Jul 2025
Definition
Patient blood management (PBM) is a multidisciplinary approach aimed at improving patient outcomes through targeted anemia treatment that minimizes allogeneic blood transfusions, employs blood conservation techniques, and avoids inappropriate use of blood product transfusions. Viscoelastic testing (VET) techniques, such as thromboelastography (TEG) and rotational thromboelastometry (ROTEM), have led to significant advancements in PBM. These techniques offer real-time whole-blood assessment of hemostatic function, giving the clinician a more complete picture of hemostasis than conventional coagulation tests (CCTs), such as the prothrombin time (PT) and the activated partial thromboplastin time (aPTT), which assess only plasma-based coagulation. VET does this by mapping the complex processes of clot formation, stability, and breakdown (i.e., fibrinolysis). Because whole-blood coagulation is assessed in real time during hemorrhage, hemostasis can be achieved through targeted transfusion therapy. This approach helps fulfill an objective of PBM by reducing unnecessary transfusions. However, challenges remain that limit broader adoption of VET, particularly in hospital settings; chief among these are the lack of standardization and the high cost of the devices. This discussion highlights the potential of VET application in PBM to guide blood-clotting therapies and improve outcomes in patients with coagulopathies from various causes that result in hemorrhage. It also highlights the limitations of implementing these technologies so that appropriate measures can be taken toward their wider integration into clinical use.
(This article belongs to the Section Medicine & Pharmacology)

19 pages, 2442 KiB  
Article
Monitoring C. vulgaris Cultivations Grown on Winery Wastewater Using Flow Cytometry
by Teresa Lopes da Silva, Thiago Abrantes Silva, Bruna Thomazinho França, Belina Ribeiro and Alberto Reis
Fermentation 2025, 11(8), 442; https://doi.org/10.3390/fermentation11080442 (registering DOI) - 31 Jul 2025
Abstract
Winery wastewater (WWW), if released untreated, poses a serious environmental threat due to its high organic load. In this study, Chlorella vulgaris was cultivated in diluted WWW to assess its suitability as a culture medium. Two outdoor cultivation systems—a 270 L raceway and a 40 L bubble column—were operated over 33 days using synthetic medium (control) and WWW. A flow cytometry (FC) protocol was implemented to monitor key physiological parameters in near-real time, including cell concentration, membrane integrity, chlorophyll content, cell size, and internal complexity. At the end of cultivation, the bubble column yielded the highest cell concentrations: 2.85 × 106 cells/mL (control) and 2.30 × 106 cells/mL (WWW), though with lower proportions of intact cells (25% and 31%, respectively). Raceway cultures showed lower cell concentrations: 1.64 × 106 (control) and 1.54 × 106 cells/mL (WWW), but higher membrane integrity (76% and 36% for control and WWW cultures, respectively). On average, cells grown in the bubble column had a 22% larger radius than those in the raceway, favouring sedimentation. Heterotrophic cells were more abundant in WWW cultures due to the presence of organic carbon, indicating the biomass's potential for use as animal feed. This study demonstrates that FC is a powerful, real-time tool for monitoring microalgae physiology and optimising cultivation in complex effluents like WWW.

13 pages, 564 KiB  
Article
Enhanced Semantic Retrieval with Structured Prompt and Dimensionality Reduction for Big Data
by Donghyeon Kim, Minki Park, Jungsun Lee, Inho Lee, Jeonghyeon Jin and Yunsick Sung
Mathematics 2025, 13(15), 2469; https://doi.org/10.3390/math13152469 - 31 Jul 2025
Abstract
The exponential increase in textual data generated across sectors such as healthcare, finance, and smart manufacturing has intensified the need for effective Big Data analytics. Large language models (LLMs) have become critical tools because of their advanced language processing capabilities. However, their static nature limits their ability to incorporate real-time and domain-specific knowledge. Retrieval-augmented generation (RAG) addresses these limitations by enriching LLM outputs through external content retrieval. Nevertheless, traditional RAG systems remain inefficient, often exhibiting high retrieval latency, redundancy, and diminished response quality when scaled to large datasets. This paper proposes an innovative structured RAG framework specifically designed for large-scale Big Data analytics. The framework transforms unstructured partial prompts into structured, semantically coherent partial prompts, leveraging element-specific embedding models and dimensionality reduction techniques such as principal component analysis. To further improve retrieval accuracy and computational efficiency, we introduce a multi-level filtering approach integrating semantic constraints and redundancy elimination. In the experiments, the proposed method was compared with structured-format RAG. After generating prompts with the two methods, silhouette scores were computed to assess the quality of the embedding clusters. The proposed method outperformed the baseline, improving clustering quality by 32.3%. These results demonstrate the effectiveness of the framework in enhancing LLMs for accurate, diverse, and efficient decision-making in complex Big Data environments.
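The silhouette-score evaluation described in this abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes synthetic stand-in embeddings and uses scikit-learn's `PCA`, `KMeans`, and `silhouette_score` to show how clustering quality of prompt embeddings can be compared before and after dimensionality reduction:

```python
# Illustrative sketch (synthetic data, not the paper's code): compare the
# silhouette score of raw vs. PCA-reduced "prompt embeddings".
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic stand-in embeddings: 3 well-separated clusters in 384-dim space
centers = rng.normal(0, 5, size=(3, 384))
emb = np.vstack([c + rng.normal(0, 1.0, size=(100, 384)) for c in centers])

def cluster_quality(x: np.ndarray, k: int = 3) -> float:
    """Cluster with k-means and return the silhouette score in [-1, 1]."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)
    return silhouette_score(x, labels)

raw = cluster_quality(emb)
reduced = cluster_quality(PCA(n_components=16, random_state=0).fit_transform(emb))
print(f"silhouette raw={raw:.3f}  pca={reduced:.3f}")
```

Reducing dimensionality tends to discard noise directions, which can tighten clusters and raise the silhouette score; the reported 32.3% improvement would be measured by an analogous comparison on the actual prompt embeddings.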
(This article belongs to the Special Issue Big Data Analysis, Computing and Applications)
