Search Results (17,201)

Search Parameters:
Keywords = additive model system

28 pages, 1063 KiB  
Article
A Digital Identity Blockchain Ecosystem: Linking Government-Certified and Uncertified Tokenized Objects
by Juan-Carlos López-Pimentel, Javier Gonzalez-Sanchez and Luis Alberto Morales-Rosales
Appl. Sci. 2025, 15(15), 8577; https://doi.org/10.3390/app15158577 - 1 Aug 2025
Abstract
This paper presents a novel digital identity ecosystem built upon a hierarchical structure of Blockchain tokens, where both government-certified and uncertified tokens can coexist to represent various attributes of an individual’s identity. At the core of this system is the government, which functions as a trusted authority capable of creating entities and issuing a unique, non-replicable digital identity token for each one. Entities are the exclusive owners of their identity tokens and can attach additional tokens—such as those issued by the government, educational institutions, or financial entities—to form a verifiable, token-based digital identity tree. This model accommodates a flexible identity framework that enables decentralized yet accountable identity construction. Our contributions include the design of a digital identity system (supported by smart contracts) that enforces uniqueness through state-issued identity tokens while supporting user-driven identity formation. The model differentiates between user types and certifies tokens according to their source, enabling a scalable and extensible structure. We also analyze the economic, technical, and social feasibility of deploying this system, including a breakdown of transaction costs for key stakeholders such as governments, end-users, and institutions like universities. Considering the benefits of blockchain, implementing a digital identity ecosystem in this technology is economically viable for all involved stakeholders. Full article
(This article belongs to the Special Issue Advanced Blockchain Technology and Its Applications)
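The token-tree idea in the abstract above can be illustrated with a minimal, hypothetical Python sketch: a government-minted root identity token to which certified and uncertified sub-tokens are attached. The class and attribute names are invented for illustration; this is not the authors' smart-contract code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IdentityToken:
    """A token representing one attribute of an identity."""
    name: str
    issuer: str
    certified: bool          # True if issued by a trusted authority
    children: List["IdentityToken"] = field(default_factory=list)

    def attach(self, token: "IdentityToken") -> None:
        """Attach a sub-token (e.g. a diploma) under this token."""
        self.children.append(token)

    def certified_subtree(self) -> List[str]:
        """Collect the names of all certified tokens in this tree."""
        names = [self.name] if self.certified else []
        for child in self.children:
            names.extend(child.certified_subtree())
        return names

# The government mints the unique root identity token.
root = IdentityToken("national-id", issuer="government", certified=True)
# The owner attaches certified and uncertified tokens beneath it.
root.attach(IdentityToken("bsc-degree", issuer="university", certified=True))
root.attach(IdentityToken("gym-membership", issuer="gym", certified=False))

print(root.certified_subtree())   # ['national-id', 'bsc-degree']
```

A verifier walking the tree can thus distinguish state-certified attributes from self-attached ones, which is the coexistence property the ecosystem relies on.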

19 pages, 1214 KiB  
Article
Temporal Multi-Query Subgraph Matching in Cybersecurity
by Min Lu, Qianzhen Zhang and Xianqiang Zhu
Technologies 2025, 13(8), 335; https://doi.org/10.3390/technologies13080335 - 1 Aug 2025
Abstract
Regarding attack scenarios as query graphs and conducting subgraph matching on the data system is an important approach to identify and detect cyber threats. However, existing subgraph matching methods are not suitable for detecting time-evolving attacks since they either focus on single-query graphs or ignore the temporal constraints between multiple queries. In this paper, we model the time-evolving attack detection as a novel temporal multi-query subgraph matching problem and propose an efficient algorithm to address this problem. We first give a compact representation of the temporal query graph by merging all queries into one. Based on the temporal query graph, we propose a concise auxiliary data structure to maintain partial solutions. In addition, we employ a query matching tree to generate an efficient matching order and enumerate matchings based on the order. Extensive experiments over real-world datasets confirm the effectiveness and efficiency of our approach. Full article
(This article belongs to the Section Information and Communication Technologies)
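The core operation the abstract builds on — matching a query graph against a data graph under a temporal constraint — can be sketched with `networkx`. This toy example filters the data graph to a time window before running standard subgraph isomorphism; the event names and timestamps are invented, and the paper's actual multi-query algorithm and auxiliary structures are far more elaborate.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Data graph: system events with timestamps on edges.
G = nx.DiGraph()
G.add_edge("proc_a", "file_x", t=1)
G.add_edge("file_x", "proc_b", t=3)
G.add_edge("proc_b", "socket", t=7)
G.add_edge("proc_b", "file_y", t=20)

# Query graph: a two-step attack pattern (read, then exfiltrate).
Q = nx.DiGraph()
Q.add_edge("q1", "q2")
Q.add_edge("q2", "q3")

def matches_in_window(data, query, t_start, t_end):
    """Match the query only against edges inside a time window."""
    window = nx.DiGraph((u, v) for u, v, d in data.edges(data=True)
                        if t_start <= d["t"] <= t_end)
    gm = isomorphism.DiGraphMatcher(window, query)
    return gm.subgraph_is_isomorphic()

print(matches_in_window(G, Q, 0, 10))   # True: full chain inside the window
print(matches_in_window(G, Q, 10, 30))  # False: only one edge remains
```

The same pattern may or may not match depending on the window, which is exactly why ignoring temporal constraints misses time-evolving attacks.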
13 pages, 709 KiB  
Article
Differential Effects of Green Space Typologies on Congenital Anomalies: Data from the Korean National Health Insurance Service (2008–2013)
by Ji-Eun Lee, Kyung-Shin Lee, Youn-Hee Lim, Soontae Kim, Nami Lee and Yun-Chul Hong
Healthcare 2025, 13(15), 1886; https://doi.org/10.3390/healthcare13151886 - 1 Aug 2025
Abstract
Background/Objectives: Urban green space has been increasingly recognized as a determinant of maternal and child health. This study investigated the association between prenatal exposure to different types of green space and the risk of congenital anomalies in South Korea. Methods: We analyzed data from the National Health Insurance Service (N = 142,422). Green space exposure was measured at the area level and categorized into grassland and forest; statistical analysis was performed using generalized estimating equations and generalized additive models to analyze the associations. Additionally, subgroup and sensitivity analyses were performed. Results: GEE analysis showed that a 10% increase in the proportion of grassland in a residential district was associated with a reduced risk of nervous system (adjusted odds ratio [aOR]: 0.77, 95% confidence interval [CI]: 0.63–0.94) and genitourinary system anomalies (aOR: 0.83, 95% CI: 0.71–0.97). The subgroup analysis results showed significance only for male infants, but the difference between the sexes was not significant. In the quartile-based analysis, we found a slightly significant p-value for trend for the effect of forests on digestive system anomalies, but the trend was toward increasing risk. In a sensitivity analysis with different exposure classifications, the overall and nervous system anomalies in built green space showed that the risk decreased as green space increased compared to that in the lowest quartile. Conclusions: Our results highlight the importance of spatial environmental factors during pregnancy and suggest that different types of green spaces differentially impact the offspring’s early health outcomes. This study suggests the need for built environment planning as part of preventive maternal and child health strategies. Full article
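The adjusted odds ratios reported above come from log-odds regression coefficients. As a small worked example of that conversion (pure standard library; the coefficient and standard error below are illustrative values chosen to reproduce the reported nervous-system result, not numbers taken from the paper):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a log-odds coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values that reproduce the reported
# nervous-system result (aOR 0.77, 95% CI 0.63-0.94).
or_, lo, hi = odds_ratio_ci(beta=-0.261, se=0.102)
print(f"aOR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # aOR 0.77 (95% CI 0.63-0.94)
```

An aOR below 1 with a confidence interval excluding 1, as here, is what lets the authors call the grassland association protective.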

22 pages, 3015 KiB  
Article
Determining Early Warning Thresholds to Detect Tree Mortality Risk in a Southeastern U.S. Bottomland Hardwood Wetland
by Maricar Aguilos, Jiayin Zhang, Miko Lorenzo Belgado, Ge Sun, Steve McNulty and John King
Forests 2025, 16(8), 1255; https://doi.org/10.3390/f16081255 - 1 Aug 2025
Abstract
Prolonged inundations are altering coastal forest ecosystems of the southeastern US, causing extensive tree die-offs and the development of ghost forests. This hydrological stressor also alters carbon fluxes, threatening the stability of coastal carbon sinks. This study was conducted to investigate the interactions between hydrological drivers and ecosystem responses by analyzing daily eddy covariance flux data from a wetland forest in North Carolina, USA, spanning 2009–2019. We analyzed temporal patterns of net ecosystem exchange (NEE), gross primary productivity (GPP), and ecosystem respiration (RE) under both flooded and non-flooded conditions and evaluated their relationships with observed tree mortality. Generalized Additive Modeling (GAM) revealed that groundwater table depth (GWT), leaf area index (LAI), NEE, and net radiation (Rn) were key predictors of mortality transitions (R2 = 0.98). Elevated GWT induces root anoxia; declining LAI reduces productivity; elevated NEE signals physiological breakdown; and higher Rn may amplify evapotranspiration stress. Receiver Operating Characteristic (ROC) analysis revealed critical early warning thresholds for tree mortality: GWT = 2.23 cm, LAI = 2.99, NEE = 1.27 g C m−2 d−1, and Rn = 167.54 W m−2. These values offer a basis for forecasting forest mortality risk and guiding early warning systems. Our findings highlight the dominant role of hydrological variability in ecosystem degradation and offer a threshold-based framework for early detection of mortality risks. This approach provides insights into managing coastal forest resilience amid accelerating sea level rise. Full article
(This article belongs to the Special Issue Water and Carbon Cycles and Their Coupling in Forest)
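The ROC-derived early-warning thresholds mentioned above are typically found by maximizing Youden's J statistic (TPR − FPR) over candidate cutoffs. A minimal sketch with `scikit-learn` on synthetic data — the groundwater-table values below are invented stand-ins, not the study's flux-tower measurements:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic stand-in: groundwater table depth (cm) for surviving
# vs. dying stands; dying stands sit in wetter conditions.
gwt_alive = rng.normal(loc=-5.0, scale=3.0, size=200)
gwt_dead = rng.normal(loc=4.0, scale=3.0, size=200)

y_true = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = mortality
scores = np.concatenate([gwt_alive, gwt_dead])

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)            # Youden's J statistic
print(f"warning threshold: GWT > {thresholds[best]:.2f} cm")
```

Applying the same procedure to each predictor (GWT, LAI, NEE, Rn) yields one actionable cutoff per variable, which is how threshold-based early-warning systems are usually assembled.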

18 pages, 2724 KiB  
Article
Uncertainty-Aware Earthquake Forecasting Using a Bayesian Neural Network with Elastic Weight Consolidation
by Changchun Liu, Yuting Li, Huijuan Gao, Lin Feng and Xinqian Wu
Buildings 2025, 15(15), 2718; https://doi.org/10.3390/buildings15152718 - 1 Aug 2025
Abstract
Effective earthquake early warning (EEW) is essential for disaster prevention in the built environment, enabling a rapid structural response, system shutdown, and occupant evacuation to mitigate damage and casualties. However, most current EEW systems lack rigorous reliability analyses of their predictive outcomes, limiting their effectiveness in real-world scenarios—especially for on-site warnings, where data are limited and time is critical. To address these challenges, we propose a Bayesian neural network (BNN) framework based on Stein variational gradient descent (SVGD). By performing Bayesian inference, we estimate the posterior distribution of the parameters, thus outputting a reliability analysis of the prediction results. In addition, we incorporate a continual learning mechanism based on elastic weight consolidation, allowing the system to adapt quickly without full retraining. Our experiments demonstrate that our SVGD-BNN model significantly outperforms traditional peak displacement (Pd)-based approaches. In a 3 s time window, the Pearson correlation coefficient R increases by 9.2% and the residual standard deviation SD decreases by 24.4% compared to a variational inference (VI)-based BNN. Furthermore, the prediction variance generated by the model can effectively reflect the uncertainty of the prediction results. The continual learning strategy reduces the training time by 133–194 s, enhancing the system’s responsiveness. These features make the proposed framework a promising tool for real-time, reliable, and adaptive EEW—supporting disaster-resilient building design and operation. Full article
(This article belongs to the Section Building Structures)

13 pages, 994 KiB  
Article
Evaluation of the Metabolomics Profile in Charcot–Marie–Tooth (CMT) Patients: Novel Potential Biomarkers
by Federica Murgia, Martina Cadeddu, Jessica Frau, Giancarlo Coghe, Lorefice Lorena, Alessandro Vannelli, Maria Rita Murru, Martina Spada, Antonio Noto, Luigi Atzori and Eleonora Cocco
Metabolites 2025, 15(8), 520; https://doi.org/10.3390/metabo15080520 - 1 Aug 2025
Abstract
Background: Charcot–Marie–Tooth (CMT) is a group of inherited diseases impairing the peripheral nervous system. CMT originates from genetic variants that affect proteins fundamental for the myelination of peripheral nerves and survival. Moreover, environmental and humoral factors can impact disease development and evolution. Currently, no therapy is available. Metabolomics is an emerging field of biomedical research that enables the development of novel biomarkers for neurodegenerative diseases by targeting metabolic pathways or metabolites. This study aimed to evaluate the metabolomics profile of CMT disease by comparing patients with healthy individuals. Methods: A total of 22 CMT patients (CMT) were included in this study and were demographically matched with 26 healthy individuals (C). Serum samples were analyzed through Nuclear Magnetic Resonance spectroscopy, and multivariate and univariate statistical analyses were subsequently applied. Results: A supervised model showed a clear separation (R2X = 0.3; R2Y = 0.7; Q2 = 0.4; p-value = 0.0004) between the two classes of subjects, and nine metabolites were found to be significantly different (2-hydroxybutyrate, 3-hydroxybutyrate, 3-methyl-2-oxovalerate, choline, citrate, glutamate, isoleucine, lysine, and methyl succinate). The combined ROC curve showed an AUC of 0.94 (CI: 0.9–1). Additional altered metabolic pathways were also identified within the disease context. Conclusion: This study represents a promising starting point, demonstrating the efficacy of metabolomics in evaluating CMT patients and identifying novel potential disease biomarkers. Full article
(This article belongs to the Section Endocrinology and Clinical Metabolic Research)

16 pages, 4891 KiB  
Article
Effects of Performance Variations in Key Components of CRTS I Slab Ballastless Track on Structural Response Following Slab-Replacement Operations
by Wentao Wu, Hongyao Lu, Yuelei He and Haitao Xia
Materials 2025, 18(15), 3621; https://doi.org/10.3390/ma18153621 - 1 Aug 2025
Abstract
Slab-replacement operations are crucial for restoring deteriorated CRTS I slab ballastless tracks to operational standards. This study investigates the structural implications of the operation by evaluating the strength characteristics and material properties of track components both prior to and following replacement. Apparent strength was measured using rebound hammer tests on three categories of slabs: retained, deteriorated, and newly installed track slabs. In addition, samples of old and new filling resins were collected and tested to determine their elastic moduli. These empirical data were subsequently used to develop a refined finite-element model that captures both pre- and post-replacement conditions. Under varying temperature loads, disparities in component performance were found to significantly affect stress distribution. Specifically, before replacement, deteriorated track slabs exhibited 10.74% lower strength compared to adjacent retained slabs, whereas, after replacement, new slabs showed a 25.26% increase in strength over retained ones. The elastic modulus of old filling resin was measured at 5.19 kN/mm, 35.13% below the minimum design requirement, while the new resin reached 10.48 kN/mm, exceeding the minimum by 31.00%. Although the slab-replacement operation enhances safety by addressing structural deficiencies, it may also create new weak points in adjacent areas, where insufficient stiffness results in stress concentrations and potential damage. This study offers critical insights for optimizing maintenance strategies and improving the long-term performance of ballastless track systems. Full article
(This article belongs to the Section Construction and Building Materials)

22 pages, 14333 KiB  
Article
A Transient Combustion Study in a Brick Kiln Using Natural Gas as Fuel by Means of CFD
by Sergio Alonso-Romero, Jorge Arturo Alfaro-Ayala, José Eduardo Frias-Chimal, Oscar A. López-Núñez, José de Jesús Ramírez-Minguela and Roberto Zitzumbo-Guzmán
Processes 2025, 13(8), 2437; https://doi.org/10.3390/pr13082437 - 1 Aug 2025
Abstract
A brick kiln was experimentally studied to measure the transient temperature of hot gases and the compressive strength of the bricks, using pine wood as fuel, in order to evaluate the thermal performance of the actual system. In addition, a transient combustion model based on computational fluid dynamics (CFD) was used to simulate the combustion of natural gas in the brick kiln as a hypothetical case, with the aim of investigating the potential benefits of fuel switching. The theoretical stoichiometric combustion of both pine wood and natural gas was employed to compare the mole fractions and the adiabatic flame temperature. Also, the transient hot gas temperatures obtained from the experimental wood-fired kiln were compared with those from the simulated natural gas-fired kiln. Furthermore, numerical simulations were carried out to obtain the transient hot gas temperature and NOx emissions under stoichiometric, fuel-rich, and excess-air conditions. The results of CO2 mole fractions from stoichiometric combustion demonstrate that natural gas may represent a cleaner alternative for use in brick kilns, due to a 44.08% reduction in emissions. Contour plots of transient hot gas temperature, velocity, and CO2 emissions inside the kiln are presented. Moreover, the time-dependent emissions of CO2, H2O, and CO at the kiln outlet are shown. It can be concluded that the presence of CO mole fractions at the kiln outlet suggests that the transient combustion process could be further improved. The low firing efficiency of the bricks and the low thermal efficiency obtained are attributed to uneven temperature distributions inside the kiln. Moreover, hot gas temperature and NOx emissions were found to be higher under stoichiometric conditions than under fuel-rich or excess-air conditions. Therefore, this work could be useful for improving the thermal–hydraulic and emissions performance of brick kilns, as well as for future kiln design improvements. Full article
(This article belongs to the Special Issue Numerical Simulation of Flow and Heat Transfer Processes)

48 pages, 2506 KiB  
Article
Enhancing Ship Propulsion Efficiency Predictions with Integrated Physics and Machine Learning
by Hamid Reza Soltani Motlagh, Seyed Behbood Issa-Zadeh, Md Redzuan Zoolfakar and Claudia Lizette Garay-Rondero
J. Mar. Sci. Eng. 2025, 13(8), 1487; https://doi.org/10.3390/jmse13081487 - 31 Jul 2025
Abstract
This research develops a dual physics-based machine learning system to forecast fuel consumption and CO2 emissions for a 100 m oil tanker across six operational scenarios: Original, Paint, Advanced Propeller, Fin, Bulbous Bow, and Combined. The combination of hydrodynamic calculations with Monte Carlo simulations provides a solid foundation for training machine learning models, particularly in cases where dataset restrictions are present. The XGBoost model demonstrated superior performance compared to Support Vector Regression, Gaussian Process Regression, Random Forest, and Shallow Neural Network models, achieving near-zero prediction errors that closely matched physics-based calculations. The physics-based analysis demonstrated that the Combined scenario, which combines hull coatings with bulbous bow modifications, produced the largest fuel consumption reduction (5.37% at 15 knots), followed by the Advanced Propeller scenario. The results demonstrate that user inputs (e.g., engine power: 870 kW, speed: 12.7 knots) match the Advanced Propeller scenario, followed by Paint, which indicates that advanced propellers or hull coatings would optimize efficiency. The obtained insights help ship operators modify their operational parameters and designers select essential modifications for sustainable operations. The model maintains its strength at low speeds, where fuel consumption is minimal, making it applicable to other oil tankers. The hybrid approach provides a new tool for maritime efficiency analysis, yielding interpretable results that support International Maritime Organization objectives, despite starting with a limited dataset. The model requires additional research to enhance its predictive accuracy using larger datasets and real-time data collection, which will aid in achieving global environmental stewardship. Full article
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)
31 pages, 5560 KiB  
Article
Design of Reconfigurable Handling Systems for Visual Inspection
by Alessio Pacini, Francesco Lupi and Michele Lanzetta
J. Manuf. Mater. Process. 2025, 9(8), 257; https://doi.org/10.3390/jmmp9080257 - 31 Jul 2025
Abstract
Industrial Vision Inspection Systems (VISs) often struggle to adapt to the increasing variability of modern manufacturing due to the inherent rigidity of their hardware architectures. Although the Reconfigurable Manufacturing System (RMS) paradigm was introduced in the early 2000s to overcome these limitations, designing such reconfigurable machines remains a complex, expert-dependent, and time-consuming task. This is primarily due to the lack of structured methodologies and the reliance on trial-and-error processes. In this context, this study proposes a novel theoretical framework to facilitate the design of fully reconfigurable handling systems for VISs, with a particular focus on fixture design. The framework is grounded in Model-Based Definition (MBD), embedding semantic information directly into the 3D CAD models of the inspected product. As an additional contribution, a general hardware architecture for the inspection of axisymmetric components is presented. This architecture integrates an anthropomorphic robotic arm, Numerically Controlled (NC) modules, and adaptable software and hardware components to enable automated, software-driven reconfiguration. The proposed framework and architecture were applied in an industrial case study conducted in collaboration with a leading automotive half-shaft manufacturer. The resulting system, implemented across seven automated cells, successfully inspected over 200 part types from 12 part families and detected more than 60 defect types, with a cycle time below 30 s per part. Full article

18 pages, 723 KiB  
Article
A Machine Learning-Based Model for Predicting High Deficiency Risk Ships in Port State Control: A Case Study of the Port of Singapore
by Ming-Cheng Tsou
J. Mar. Sci. Eng. 2025, 13(8), 1485; https://doi.org/10.3390/jmse13081485 - 31 Jul 2025
Abstract
This study developed a model to predict ships with high deficiency risk under Port State Control (PSC) through machine learning techniques, particularly the Random Forest algorithm. The study utilized actual ship inspection data from the Port of Singapore, comprehensively considering various operational and safety indicators of ships, including but not limited to flag state, ship age, past deficiencies, and detention history. By analyzing these factors in depth, this research enhances the efficiency and accuracy of PSC inspections, provides decision support for port authorities, and offers strategic guidance for shipping companies to comply with international safety standards. During the research process, I first conducted detailed data preprocessing, including data cleaning and feature selection, to ensure the effectiveness of model training. Using the Random Forest algorithm, I identified key factors influencing the detention risk of ships and established a risk prediction model accordingly. The model validation results indicated that factors such as ship age, tonnage, company performance, and flag state significantly affect whether a ship exhibits a high deficiency rate. In addition, this study explored the potential and limitations of applying the Random Forest model in predicting high deficiency risk under PSC, and proposed future research directions, including further model optimization and the development of real-time prediction systems. By achieving these goals, I hope to provide valuable experience for other global shipping hubs, promote higher international maritime safety standards, and contribute to the sustainable development of the global shipping industry. This research not only highlights the importance of machine learning in the maritime domain but also demonstrates the potential of data-driven decision-making in improving ship safety management and port inspection efficiency. It is hoped that this study will inspire more maritime practitioners and researchers to explore advanced data analytics techniques to address the increasingly complex challenges of global shipping. Full article
(This article belongs to the Topic Digital Technologies in Supply Chain Risk Management)
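The modeling workflow the abstract describes — train a Random Forest on ship attributes, then read off which features drive deficiency risk — can be sketched in a few lines of `scikit-learn`. The data below is synthetic and the feature names merely echo those listed in the abstract; this is not the Port of Singapore dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 1000
# Hypothetical inspection records (synthetic stand-ins).
ship_age = rng.uniform(0, 40, n)            # years
past_deficiencies = rng.poisson(3, n)       # count from prior inspections
detained_before = rng.integers(0, 2, n)     # 0/1 detention history

# Simulated label: older ships with a worse history are riskier.
risk = 0.04 * ship_age + 0.3 * past_deficiencies + 1.0 * detained_before
high_risk = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

X = np.column_stack([ship_age, past_deficiencies, detained_before])
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, high_risk)

# Feature importances give the ranking of risk factors.
for name, imp in zip(["ship_age", "past_deficiencies", "detained_before"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In practice one would add the categorical factors the study uses (flag state, company performance) via encoding, and validate on held-out inspections rather than training data.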
21 pages, 5466 KiB  
Article
Evaluation of Bending Stress and Shape Recovery Behavior Under Cyclic Loading in PLA 4D-Printed Lattice Structures
by Maria Pia Desole, Annamaria Gisario and Massimiliano Barletta
Appl. Sci. 2025, 15(15), 8540; https://doi.org/10.3390/app15158540 - 31 Jul 2025
Abstract
This study aims to analyze the bending behavior of polylactic acid (PLA) structures made by fusion deposition modeling (FDM) technology. The investigation analyzed chiral structures such as lozenge and clepsydra, as well as geometries with wavy patterns such as roller and Es, in addition to a honeycomb structure. All geometries have a relative density of 50%. After being subjected to three-point bending tests, the capacity to spring back with respect to the bending angle and the shape recovery of the structures were measured. The roller and lozenge structures demonstrated the best performance, with shape recovery assessed through three consecutive hot water immersion cycles. The lozenge structure exhibits 25% higher energy absorption than the roller, but the latter ensures better replicability and shape stability. Additionally, the roller absorbs 15% less energy than the lozenge, which experiences a 27% decrease in absorption between the first and second cycle. This work provides new insights into the bending-based energy absorption and recovery behavior of PLA metamaterials, relevant for applications in adaptive and energy-dissipating systems. Full article

19 pages, 2913 KiB  
Article
Radiation Mapping: A Gaussian Multi-Kernel Weighting Method for Source Investigation in Disaster Scenarios
by Songbai Zhang, Qi Liu, Jie Chen, Yujin Cao and Guoqing Wang
Sensors 2025, 25(15), 4736; https://doi.org/10.3390/s25154736 - 31 Jul 2025
Abstract
Structural collapses caused by accidents or disasters could create unexpected radiation shielding, resulting in sharp gradients within the radiation field. Traditional radiation mapping methods often fail to accurately capture these complex variations, making the rapid and precise localization of radiation sources a significant challenge in emergency response scenarios. To address this issue, based on standard Gaussian process regression (GPR) models that primarily utilize a single Gaussian kernel to reflect the inverse-square law in free space, a novel multi-kernel Gaussian process regression (MK-GPR) model is proposed for high-fidelity radiation mapping in environments with physical obstructions. MK-GPR integrates two additional kernel functions with adaptive weighting: one models the attenuation characteristics of intervening materials, and the other captures the energy-dependent penetration behavior of radiation. To validate the model, gamma-ray distributions in complex, shielded environments were simulated using GEometry ANd Tracking 4 (Geant4). Compared with conventional methods, including linear interpolation, nearest-neighbor interpolation, and standard GPR, MK-GPR demonstrated substantial improvements in key evaluation metrics, such as MSE, RMSE, and MAE. Notably, the coefficient of determination (R2) increased to 0.937. For practical deployment, the optimized MK-GPR model was deployed to an RK-3588 edge computing platform and integrated into a mobile robot equipped with a NaI(Tl) detector. Field experiments confirmed the system’s ability to accurately map radiation fields and localize gamma sources. When combined with SLAM, the system achieved localization errors of 10 cm for single sources and 15 cm for dual sources. These results highlight the potential of the proposed approach as an effective and deployable solution for radiation source investigation in post-disaster environments. Full article
(This article belongs to the Section Navigation and Positioning)
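The multi-kernel idea described in the abstract can be sketched in a few lines: a weighted sum of a smooth squared-exponential kernel (free-space inverse-square falloff) and a rougher exponential kernel (standing in for sharp attenuation behind obstructions), plugged into the standard GP posterior mean. The kernel choices, length scales, and fixed weights below are illustrative assumptions, not the paper's actual MK-GPR formulation.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # squared-exponential kernel: smooth free-space falloff
    d = np.subtract.outer(a, b)
    return np.exp(-0.5 * (d / ls) ** 2)

def matern12(a, b, ls=1.0):
    # exponential (Matern-1/2) kernel: tolerates sharp gradients,
    # standing in for attenuation behind shielding (hypothetical choice)
    d = np.abs(np.subtract.outer(a, b))
    return np.exp(-d / ls)

def mk_kernel(a, b, w=(0.6, 0.4)):
    # weighted sum of kernels (weights fixed here; MK-GPR adapts them)
    return w[0] * rbf(a, b) + w[1] * matern12(a, b, ls=0.3)

def gpr_predict(x_train, y_train, x_test, noise=1e-6):
    # standard GP regression posterior mean with the combined kernel
    K = mk_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = mk_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    return Ks @ alpha

# toy 1-D dose profile with a sharp drop behind a shield
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 1.0 / (1.0 + x) ** 2     # inverse-square-like falloff
y[3:] *= 0.2                 # abrupt attenuation past x = 1.5
pred = gpr_predict(x, y, np.array([0.25, 1.75]))
print(pred.round(3))
```

A single smooth RBF kernel tends to blur the step at the shield boundary; the added short-length-scale exponential term lets the posterior follow it more closely.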

24 pages, 624 KiB  
Systematic Review
Integrating Artificial Intelligence into Perinatal Care Pathways: A Scoping Review of Reviews of Applications, Outcomes, and Equity
by Rabie Adel El Arab, Omayma Abdulaziz Al Moosa, Zahraa Albahrani, Israa Alkhalil, Joel Somerville and Fuad Abuadas
Nurs. Rep. 2025, 15(8), 281; https://doi.org/10.3390/nursrep15080281 (registering DOI) - 31 Jul 2025
Abstract
Background: Artificial intelligence (AI) and machine learning (ML) have been reshaping maternal, fetal, neonatal, and reproductive healthcare by enhancing risk prediction, diagnostic accuracy, and operational efficiency across the perinatal continuum. However, no comprehensive synthesis has yet been published. Objective: To conduct a scoping review of reviews of AI/ML applications spanning reproductive, prenatal, postpartum, neonatal, and early child-development care. Methods: We searched PubMed, Embase, the Cochrane Library, Web of Science, and Scopus through April 2025. Two reviewers independently screened records, extracted data, and assessed methodological quality using AMSTAR 2 for systematic reviews, ROBIS for bias assessment, SANRA for narrative reviews, and JBI guidance for scoping reviews. Results: Thirty-nine reviews met our inclusion criteria. In preconception and fertility treatment, convolutional neural network-based platforms can identify viable embryos and key sperm parameters with over 90 percent accuracy, and machine-learning models can personalize follicle-stimulating hormone regimens to boost mature oocyte yield while reducing overall medication use. Digital sexual-health chatbots have enhanced patient education, pre-exposure prophylaxis adherence, and safer sexual behaviors, although data-privacy safeguards and bias mitigation remain priorities. During pregnancy, advanced deep-learning models can segment fetal anatomy on ultrasound images with more than 90 percent overlap with expert annotations and can detect anomalies with sensitivity exceeding 93 percent. Predictive biometric tools can estimate gestational age to within one week and fetal weight to within approximately 190 g. In the postpartum period, AI-driven decision-support systems and conversational agents can facilitate early screening for depression and guide follow-up care. Wearable sensors enable remote monitoring of maternal blood pressure and heart rate to support timely clinical intervention. Within neonatal care, the Heart Rate Observation (HeRO) system has reduced mortality among very low-birth-weight infants by roughly 20 percent, and additional AI models can predict neonatal sepsis, retinopathy of prematurity, and necrotizing enterocolitis with area-under-the-curve values above 0.80. From an operational standpoint, automated ultrasound workflows deliver biometric measurements at about 14 milliseconds per frame, and dynamic scheduling in IVF laboratories lowers staff workload and per-cycle costs. Home-monitoring platforms for pregnant women are associated with 7–11 percent reductions in maternal mortality and preeclampsia incidence. Despite these advances, most evidence derives from retrospective, single-center studies with limited external validation. Low-resource settings, especially in Sub-Saharan Africa, remain under-represented, and few AI solutions are fully embedded in electronic health records. Conclusions: AI holds transformative promise for perinatal care but will require prospective multicenter validation, equity-centered design, robust governance, transparent fairness audits, and seamless electronic health record integration to translate these innovations into routine practice and improve maternal and neonatal outcomes. Full article

19 pages, 15300 KiB  
Article
Proactive Scheduling and Routing of MRP-Based Production with Constrained Resources
by Jarosław Wikarek and Paweł Sitek
Appl. Sci. 2025, 15(15), 8522; https://doi.org/10.3390/app15158522 (registering DOI) - 31 Jul 2025
Abstract
This research addresses the challenges of proactive scheduling and routing in manufacturing systems governed by the Material Requirements Planning (MRP) method. Such systems often face capacity constraints, difficulties in resource balancing, and limited traceability of component requirements. The lack of seamless integration between customer orders and production tasks, combined with the manual and time-consuming nature of schedule adjustments, highlights the need for an automated and optimized scheduling method. We propose a novel optimization-based approach that combines mixed-integer linear programming (MILP) with a proprietary procedure for reducing the size of the modeled problem, generating feasible and/or optimal production schedules. The model incorporates dynamic routing, partial resource utilization, limited additional resources (e.g., tools, workers), technological breaks, and time quantization. Key results include determining order feasibility, identifying unfulfilled order components, minimizing costs, shortening deadlines, and assessing feasibility in the absence of available resources. By automating the generation of data from MRP/ERP systems, constructing an optimization model, and exporting the results back to the MRP/ERP structure, this method improves decision-making and competes with expensive Advanced Planning and Scheduling (APS) systems. The proposed solution, which integrates MILP-based optimization with the proprietary PT (data transformation) and PR (model-size reduction) procedures, not only increases operational efficiency but also enables demand-source tracking and offers a scalable, economical alternative for modern production environments. Experimental results demonstrate significant reductions in production costs (up to 25%) and lead times (more than 50%). Full article
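As a minimal illustration of the kind of MILP formulation the abstract describes, the toy model below routes two orders to two capacity-constrained machines so that each order runs exactly once and no machine exceeds its hour budget. The costs, processing times, and capacities are made-up numbers, and `scipy.optimize.milp` stands in for the paper's MILP solver and proprietary PT/PR procedures.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy data (not from the paper): 2 orders, 2 machines.
cost  = np.array([[3, 5],    # cost of running order i on machine j
                  [4, 2]])
hours = np.array([[2, 2],    # processing time of order i on machine j
                  [3, 3]])
cap   = np.array([5, 2])     # machine capacity in hours

n_orders, n_machines = cost.shape
c = cost.ravel()             # binary decision vars x[i, j], flattened row-major

# each order must be assigned to exactly one machine
A_assign = np.kron(np.eye(n_orders), np.ones(n_machines))
assign = LinearConstraint(A_assign, 1, 1)

# total workload on each machine must not exceed its capacity
A_cap = np.zeros((n_machines, n_orders * n_machines))
for j in range(n_machines):
    for i in range(n_orders):
        A_cap[j, i * n_machines + j] = hours[i, j]
capacity = LinearConstraint(A_cap, -np.inf, cap)

res = milp(c, constraints=[assign, capacity],
           integrality=np.ones_like(c),   # all variables integer...
           bounds=Bounds(0, 1))           # ...and bounded to {0, 1}
plan = res.x.reshape(n_orders, n_machines).round().astype(int)
print(plan, res.fun)
```

Here machine 1 is too small for order 1 (3 h against a 2 h capacity), so the solver routes both orders to machine 0 even though order 1 would be cheaper on machine 1; this is the capacity-driven rerouting behavior the full model handles at production scale.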
