Search Results (5,675)

Search Parameters:
Keywords = intelligent decision

40 pages, 4344 KiB  
Review
Digital Cardiovascular Twins, AI Agents, and Sensor Data: A Narrative Review from System Architecture to Proactive Heart Health
by Nurdaulet Tasmurzayev, Bibars Amangeldy, Baglan Imanbek, Zhanel Baigarayeva, Timur Imankulov, Gulmira Dikhanbayeva, Inzhu Amangeldi and Symbat Sharipova
Sensors 2025, 25(17), 5272; https://doi.org/10.3390/s25175272 - 24 Aug 2025
Abstract
Cardiovascular disease remains the world’s leading cause of mortality, yet everyday care still relies on episodic, symptom-driven interventions that detect ischemia, arrhythmias, and remodeling only after tissue damage has begun, limiting the effectiveness of therapy. A narrative review synthesized 183 studies published between 2016 and 2025, located through PubMed, MDPI, Scopus, IEEE Xplore, and Web of Science. The review examines CVD diagnostics built on digital cardiovascular twins, which collect data from wearable IoT devices (electrocardiography (ECG), photoplethysmography (PPG), and mechanocardiography), clinical records, laboratory biomarkers, and genetic markers. It then examines how these data are integrated with artificial intelligence (AI): machine learning and deep learning, including graph and transformer networks, for interpreting multi-dimensional data streams and creating prognostic models; generative AI, medical large language models (LLMs), and autonomous agents for decision support, personalized alerts, and treatment scenario modeling; and cloud and edge computing for data processing. This multi-layered architecture enables the detection of silent pathologies long before clinical manifestations, transforming continuous observations into actionable recommendations and shifting cardiology from reactive treatment to predictive and preventive care. Evidence converges on four layers: sensors streaming multimodal clinical and environmental data; hybrid analytics that integrate hemodynamic models with deep, graph, and transformer learning while Bayesian and Kalman filters manage uncertainty; decision support delivered by domain-tuned medical LLMs and autonomous agents; and prospective simulations that trial pacing or pharmacotherapy before bedside use, closing the prediction-intervention loop. This stack flags silent pathology weeks in advance and steers proactive, personalized prevention. It also lays the groundwork for software-as-a-medical-device ecosystems and new regulatory guidance for trustworthy AI-enabled cardiovascular care.
(This article belongs to the Section Biomedical Sensors)
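The four-layer stack above leans on Kalman filtering to manage uncertainty in streaming sensor data. As a hedged illustration, a minimal one-dimensional Kalman filter smoothing a noisy heart-rate stream might look like the sketch below; the noise parameters and data are illustrative assumptions, not values from the review.

```python
# Minimal 1-D Kalman filter smoothing a noisy wearable heart-rate stream
# (illustrative sketch; process noise q and measurement noise r are
# assumptions, not taken from the reviewed paper).
def kalman_smooth(measurements, q=0.01, r=4.0):
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                  # predict: state assumed constant, add process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update toward measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy_hr = [72, 75, 71, 90, 74, 73, 76, 72]  # beats per minute
print(kalman_smooth(noisy_hr))
```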
21 pages, 2893 KiB  
Article
Intelligent Fault Diagnosis System for Running Gear of High-Speed Trains
by Shuai Yang, Guoliang Gao, Ziyang Wang, Shengfeng Zeng, Yikai Ouyang and Guanglei Zhang
Sensors 2025, 25(17), 5269; https://doi.org/10.3390/s25175269 - 24 Aug 2025
Abstract
Conventional rail transit train running gear fault diagnosis mainly depends on routine maintenance inspections and manual judgment. However, these approaches lack robustness under complex operational environments and elevated noise levels, rendering them inadequate for real-time performance and the rigorous accuracy standards demanded by modern rail transit systems. Furthermore, many existing deep learning–based methods suffer from inherent limitations in feature extraction or incur prohibitive computational costs when processing multivariate time series data. This study represents one of the early efforts to introduce the TimesNet time series modeling framework into the domain of fault diagnosis for rail transit train running gear. By utilizing an innovative multi-period decomposition strategy and a mechanism for reshaping one-dimensional data into two-dimensional tensors, the framework enables advanced temporal-spatial representation of time series data. Algorithm validation is performed on both a high-speed train running gear bearing fault dataset and a multi-mode gearbox fault diagnosis dataset under variable working conditions. The TimesNet model exhibits outstanding diagnostic performance on both datasets, achieving a diagnostic accuracy of 91.7% on the high-speed train bearing fault dataset. Embedded deployment experiments demonstrate that single-sample inference is completed within 70.3 ± 5.8 ms, thereby satisfying the real-time monitoring requirement (<100 ms) with a 100% success rate over 50 consecutive tests. The two-dimensional reshaping approach inherent to TimesNet markedly enhances the capacity of the model to capture intrinsic periodic structures within multivariate time series data, presenting a novel paradigm for the intelligent fault diagnosis of complex mechanical systems in train running gear. The integrated human–machine interaction system implements a comprehensive closed-loop process encompassing detection, diagnosis, and decision-making, thereby laying a robust foundation for the continued development of train running gear predictive maintenance technologies.
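The core TimesNet idea, folding a 1D series into a 2D tensor along its dominant periods, can be sketched in a few lines. This is a simplified illustration of the published technique, not the authors' implementation; the top-k selection and truncation details are assumptions.

```python
import numpy as np

# Sketch of the multi-period 1D -> 2D reshaping behind TimesNet: find
# dominant periods via FFT, then fold the series into a 2-D view whose
# rows are consecutive cycles of that period.
def fold_by_dominant_period(x, top_k=1):
    spectrum = np.abs(np.fft.rfft(x))
    spectrum[0] = 0                        # ignore the DC component
    freqs = np.argsort(spectrum)[-top_k:]  # indices of strongest frequencies
    views = []
    for f in freqs:
        period = max(1, len(x) // f)       # cycles over the window -> period length
        n_cycles = len(x) // period
        views.append(x[: n_cycles * period].reshape(n_cycles, period))
    return views

t = np.arange(256)
signal = np.sin(2 * np.pi * t / 32) + 0.1 * np.random.randn(256)
for v in fold_by_dominant_period(signal):
    print(v.shape)   # e.g. (8, 32): 8 cycles of a period-32 pattern
```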
20 pages, 964 KiB  
Article
Circuit Design in Biology and Machine Learning. II. Anomaly Detection
by Steven A. Frank
Entropy 2025, 27(9), 896; https://doi.org/10.3390/e27090896 - 24 Aug 2025
Abstract
Anomaly detection is a well-established field in machine learning, identifying observations that deviate from typical patterns. The principles of anomaly detection could enhance our understanding of how biological systems recognize and respond to atypical environmental inputs. However, this approach has received limited attention in analyses of cellular and physiological circuits. This study builds on machine learning techniques—such as dimensionality reduction, boosted decision trees, and anomaly classification—to develop a conceptual framework for biological circuits. One problem is that machine learning circuits tend to be unrealistically large for use by cellular and physiological systems. I therefore focus on minimal circuits inspired by machine learning concepts, reduced to the cellular scale. Through illustrative models, I demonstrate that small circuits can provide useful classification of anomalies. The analysis also shows how principles from machine learning—such as temporal and atemporal anomaly detection, multivariate signal integration, and hierarchical decision-making cascades—can inform hypotheses about the design and evolution of cellular circuits. This interdisciplinary approach enhances our understanding of cellular circuits and highlights the universal nature of computational strategies across biological and artificial systems.
(This article belongs to the Special Issue Mathematical Modeling in Systems Biology, 2nd Edition)
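As a toy illustration of the kind of minimal temporal anomaly-detection circuit the paper argues for, consider a two-node scheme: a slow node that integrates recent input (analogous to a slowly relaxing molecular concentration) and a fast comparator that flags large deviations. The rate constant and threshold below are assumptions for illustration, not values from the paper.

```python
# Toy "minimal circuit" for temporal anomaly detection: a slow node
# tracks the typical input level; a fast comparator flags deviations.
def minimal_anomaly_circuit(signal, decay=0.1, threshold=3.0):
    memory = signal[0]             # slow node: running estimate of typical input
    flags = []
    for u in signal:
        deviation = abs(u - memory)
        flags.append(deviation > threshold)   # fast node: comparator output
        memory += decay * (u - memory)        # slow relaxation toward current input
    return flags

stream = [1.0, 1.2, 0.9, 1.1, 6.0, 1.0, 1.1]  # one atypical spike
print(minimal_anomaly_circuit(stream))         # flags the spike at index 4
```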
22 pages, 6754 KiB  
Article
Railway Intrusion Risk Quantification with Track Semantic Segmentation and Spatiotemporal Features
by Shanping Ning, Feng Ding, Bangbang Chen and Yuanfang Huang
Sensors 2025, 25(17), 5266; https://doi.org/10.3390/s25175266 - 24 Aug 2025
Abstract
Foreign object intrusion in railway perimeter areas poses significant risks to train operation safety. To address the limitation of current visual detection technologies, which focus heavily on target identification while lacking quantitative risk assessment, this paper proposes a railway intrusion risk quantification method integrating track semantic segmentation and spatiotemporal features. An improved BiSeNetV2 network is employed to accurately extract track regions, while physically constrained risk zones are constructed based on railway structure gauge standards. The lateral spatial distance of intruding objects is precisely calculated using track gauge prior knowledge. A lightweight detection architecture is designed, adopting ShuffleNetV2 as the backbone to reduce computational complexity, with an incorporated Dilated Transformer module to enhance global context awareness and sparse feature extraction, significantly improving detection accuracy for small-scale objects. The comprehensive risk assessment formula integrates object category weights, lateral risk coefficients in intrusion zones, longitudinal distance decay factors, and dynamic velocity compensation. Experimental results demonstrate that the proposed method achieves 84.9% mean average precision (mAP) on our proprietary dataset, outperforming baseline models by 3.3%. By combining lateral distance detection with multidimensional risk indicators, the method enables quantitative intrusion risk assessment and graded early warning, providing data-driven decision support for active train protection systems and substantially enhancing intelligent safety protection capabilities.
(This article belongs to the Section Intelligent Sensors)
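The risk formula itself is not given in the abstract, but its stated structure (category weight, lateral-zone coefficient, longitudinal decay, velocity compensation) suggests a composite score along the lines below. Everything in this sketch, from the functional forms to the constants and category names, is an assumption for illustration, not the paper's formula.

```python
import math

# Hedged sketch of a composite intrusion-risk score: category weight x
# lateral-zone coefficient x longitudinal distance decay x velocity
# compensation. All values are invented for illustration.
CATEGORY_WEIGHT = {"person": 1.0, "livestock": 0.8, "debris": 0.5}

def intrusion_risk(category, lateral_zone_coeff, longitudinal_m,
                   speed_mps, decay_scale=200.0, v_gain=0.05):
    decay = math.exp(-longitudinal_m / decay_scale)  # farther down-track -> lower risk
    velocity_comp = 1.0 + v_gain * speed_mps         # fast-approaching objects riskier
    return CATEGORY_WEIGHT[category] * lateral_zone_coeff * decay * velocity_comp

# Person 50 m down-track, inside the high-risk lateral zone, moving at 1.5 m/s:
print(round(intrusion_risk("person", 0.9, 50.0, 1.5), 3))
```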
24 pages, 429 KiB  
Systematic Review
Application of Artificial Intelligence in Inborn Errors of Immunity Identification and Management: Past, Present, and Future: A Systematic Review
by Ivan Taietti, Martina Votto, Marta Colaneri, Matteo Passerini, Jessica Leoni, Gian Luigi Marseglia, Amelia Licari and Riccardo Castagnoli
J. Clin. Med. 2025, 14(17), 5958; https://doi.org/10.3390/jcm14175958 - 23 Aug 2025
Abstract
Background: Inborn errors of immunity (IEI) are mainly genetically driven disorders that affect immune function and present with highly heterogeneous clinical manifestations, ranging from severe combined immunodeficiency (SCID) to adult-onset immune dysregulatory diseases. This clinical heterogeneity, coupled with limited awareness and the absence of a universal diagnostic test, makes early and accurate diagnosis challenging. Although genetic testing methods such as whole-exome and genome sequencing have improved detection, they are often expensive, complex, and require functional validation. Recently, artificial intelligence (AI) tools have emerged as promising means of enhancing diagnostic accuracy and clinical decision-making for IEI. Methods: We conducted a systematic review of four major databases (PubMed, Scopus, Web of Science, and Embase) to identify peer-reviewed studies published in English on the application of AI techniques in the diagnosis and treatment of IEI across pediatric and adult populations. Twenty-three retrospective/prospective studies and clinical trials were included. Results: AI methodologies demonstrated high diagnostic accuracy, improved detection of pathogenic mutations, and enhanced prediction of clinical outcomes. AI tools effectively integrated and analyzed electronic health records (EHRs) and clinical, immunological, and genetic data, thereby accelerating the diagnostic process and supporting personalized treatment strategies. Conclusions: AI technologies show significant promise for the early detection and management of IEI by reducing diagnostic delays and healthcare costs. While they offer substantial benefits, limitations such as data bias and methodological inconsistencies among studies must be addressed to ensure broader clinical applicability.
(This article belongs to the Special Issue Inborn Errors of Immunity: Advances in Diagnosis and Treatment)
15 pages, 3154 KiB  
Article
Transformer-Based HER2 Scoring in Breast Cancer: Comparative Performance of a Foundation and a Lightweight Model
by Yeh-Han Wang, Min-Hsiang Chang, Hsin-Hsiu Tsai, Chun-Jui Chien and Jian-Chiao Wang
Diagnostics 2025, 15(17), 2131; https://doi.org/10.3390/diagnostics15172131 - 23 Aug 2025
Abstract
Background/Objectives: Human epidermal growth factor receptor 2 (HER2) scoring is critical for modern breast cancer therapies, especially with emerging indications of antibody–drug conjugates for HER2-low tumors. However, inter-observer agreement remains limited in borderline cases. Automatic artificial intelligence-based scoring has the potential to improve diagnostic consistency and scalability. This study aimed to develop two transformer-based models for HER2 scoring of breast cancer whole-slide images (WSIs) and compare their performance. Methods: We adapted a large-scale foundation model (Virchow) and a lightweight model (TinyViT). Both were trained using patch-level annotations and integrated into a WSI scoring pipeline. Performance was evaluated on a clinical test set (n = 66), including clinical decision tasks and inference efficiency. Results: Both models achieved substantial agreement with pathologist reports (linear weighted kappa: 0.860 for Virchow, 0.825 for TinyViT). Virchow showed slightly higher WSI-level accuracy than TinyViT, whereas TinyViT reduced inference times by 60%. In three binary clinical tasks, both models demonstrated diagnostic performance comparable to pathologists, particularly in identifying HER2-low tumors for antibody–drug conjugate (ADC) therapy. A continuous scoring framework demonstrated a strong correlation between the two models (Pearson’s r = 0.995) and aligned with human assessments. Conclusions: Both transformer-based artificial intelligence models achieved human-level accuracy for automated HER2 scoring with interpretable outputs. While the foundation model offers marginally higher accuracy, the lightweight model provides practical advantages for clinical deployment. In addition, continuous scoring may provide a more granular HER2 quantification, especially in borderline cases. This could support a new interpretive paradigm for HER2 assessment aligned with the evolving indications of ADCs.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
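The agreement metric the paper reports, linear weighted kappa on the ordinal HER2 scale (0, 1+, 2+, 3+), is straightforward to compute. The sketch below uses scikit-learn on made-up labels encoded 0-3, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Linear weighted kappa on ordinal HER2 scores (0, 1+, 2+, 3+ encoded
# as 0-3). The labels below are invented for illustration.
pathologist = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
model       = [0, 1, 2, 3, 2, 2, 0, 3, 1, 1]
print(cohen_kappa_score(pathologist, model, weights="linear"))
```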
39 pages, 7455 KiB  
Review
A Comparative Review of Large Language Models in Engineering with Emphasis on Chemical Engineering Applications
by Khoo-Teck Leong, Tin Sin Lee, Soo-Tueen Bee, Chi Ma and Yuan-Yuan Zhang
Processes 2025, 13(9), 2680; https://doi.org/10.3390/pr13092680 - 23 Aug 2025
Abstract
This review provides a comprehensive overview of the evolution and application of artificial intelligence (AI) and large language models (LLMs) in engineering, with a specific focus on chemical engineering. The review traces the historical development of LLMs, from early rule-based systems and statistical models like N-grams to the transformative introduction of neural networks and transformer architecture. It examines the pivotal role of models like BERT and the GPT series in advancing natural language processing and enabling sophisticated applications across various engineering disciplines. For example, GPT-3 (175B parameters) demonstrates up to 87.7% accuracy in structured information extraction, while GPT-4 introduces multimodal reasoning with estimated token limits exceeding 32k. The review synthesizes recent research on the use of LLMs in software, mechanical, civil, and electrical engineering, highlighting their impact on automation, design, and decision-making. A significant portion is dedicated to the burgeoning applications of LLMs in chemical engineering, including their use as educational tools and in process simulation and modelling, reaction optimization, and molecular design. The review delves into specific case studies on distillation column and reactor design, showcasing how LLMs can assist in generating initial parameters and optimizing processes while also underscoring the necessity of validating their outputs against traditional methods. Finally, the review addresses the challenges and future considerations of integrating LLMs into engineering workflows, emphasizing the need for domain-specific adaptations, ethical guidelines, and robust validation frameworks.
45 pages, 6665 KiB  
Review
AI-Driven Digital Twins in Industrialized Offsite Construction: A Systematic Review
by Mohammadreza Najafzadeh and Armin Yeganeh
Buildings 2025, 15(17), 2997; https://doi.org/10.3390/buildings15172997 - 23 Aug 2025
Abstract
The increasing adoption of industrialized offsite construction (IOC) offers substantial benefits in efficiency, quality, and sustainability, yet presents persistent challenges related to data fragmentation, real-time monitoring, and coordination. This systematic review investigates the transformative role of artificial intelligence (AI)-enhanced digital twins (DTs) in addressing these challenges within IOC. Employing a hybrid review methodology combining scientometric mapping and qualitative content analysis, 52 relevant studies were analyzed to identify technological trends, implementation barriers, and emerging research themes. The findings reveal that AI-driven DTs enable dynamic scheduling, predictive maintenance, real-time quality control, and sustainable lifecycle management across all IOC phases. Seven thematic application clusters are identified, including logistics optimization, safety management, and data interoperability, supported by a layered architectural framework and key enabling technologies. This study contributes to the literature by providing an early synthesis that integrates the technical, organizational, and strategic dimensions of AI-driven DT implementation in the IOC context. It distinguishes DT applications in IOC from those in onsite construction and expands AI’s role beyond conventional data analytics toward agentive, autonomous decision-making. The proposed future research agenda offers strategic directions such as the development of DT maturity models, lifecycle-spanning integration strategies, scalable AI agent systems, and cost-effective DT solutions for small and medium enterprises.
(This article belongs to the Section Construction Management, and Computers & Digitization)
23 pages, 1377 KiB  
Article
High-Value Patents Recognition with Random Forest and Enhanced Fire Hawk Optimization Algorithm
by Xiaona Yao, Huijia Li and Sili Wang
Biomimetics 2025, 10(9), 561; https://doi.org/10.3390/biomimetics10090561 - 23 Aug 2025
Abstract
High-value patents are a key indicator of new product development, the emergence of innovative technology, and a source of innovation incentives. Multiple studies have shown that patent value exhibits a significantly skewed distribution, with only about 10% of patents having high value. Identifying high-value patents from a large volume of patent data in advance has become a crucial problem that needs to be addressed urgently. However, current machine learning methods often rely on manual hyperparameter tuning, which is time-consuming and prone to suboptimal results. Existing optimization algorithms also suffer from slow convergence and local optima issues, limiting their effectiveness on complex patent datasets. In this paper, machine learning and intelligent optimization algorithms are combined to process and analyze the patent data. The Fire Hawk Optimization Algorithm (FHO) is a nature-inspired intelligence algorithm proposed in recent years, modeled on the way Fire Hawks capture prey by spreading fires. This paper proposes the Enhanced Fire Hawk Optimizer (EFHO), which combines four strategies, namely adaptive tent chaotic mapping, hunting prey, adding inertial weight, and an enhanced flee strategy, to address the weaknesses of the original FHO. Benchmark tests demonstrate EFHO’s superior convergence speed, accuracy, and robustness across standard optimization benchmarks. As a representative real-world application, EFHO is employed to optimize Random Forest hyperparameters for high-value patent recognition. While other intelligent optimizers could be applied, EFHO effectively overcomes common issues like slow convergence and local optima trapping. Compared to other classification methods, the EFHO-optimized Random Forest achieves superior accuracy and classification stability. This study fills a research gap in effective hyperparameter tuning for patent recognition and demonstrates EFHO’s practical value on real-world patent datasets.
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 3rd Edition)
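Since EFHO itself is not publicly available, the sketch below shows only the surrounding fitness loop that any such metaheuristic would wrap when tuning Random Forest hyperparameters: each candidate configuration is scored by cross-validated accuracy. Plain random sampling stands in for the Fire Hawk search dynamics, and the dataset and search space are invented for illustration.

```python
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Generic fitness loop for metaheuristic hyperparameter tuning of a
# Random Forest; random sampling stands in for the EFHO search step.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

def fitness(params):
    clf = RandomForestClassifier(n_estimators=params["n_estimators"],
                                 max_depth=params["max_depth"],
                                 random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()  # cross-validated accuracy

best = max(
    ({"n_estimators": random.choice([50, 100, 200]),
      "max_depth": random.choice([4, 8, 16, None])} for _ in range(10)),
    key=fitness,
)
print(best, round(fitness(best), 3))
```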
21 pages, 749 KiB  
Article
A Blockchain-Enabled Decentralized Autonomous Access Control Scheme for Data Sharing
by Kunyang Li, Heng Pan, Yaoyao Zhang, Bowei Zhang, Ying Xing, Yuyang Zhan, Gaoxu Zhao and Xueming Si
Mathematics 2025, 13(17), 2712; https://doi.org/10.3390/math13172712 - 22 Aug 2025
Abstract
With the rapid development of artificial intelligence, multi-party collaboration based on data sharing has become an inevitable trend. However, in practical applications, shared data often originate from multiple providers, so achieving secure and efficient data sharing while protecting the rights and interests of each data provider is a key challenge. Existing access control methods have shortcomings in multi-owner data scenarios: most rely on centralized management, which makes it difficult to resolve conflicts caused by inconsistent permission policies among multiple owners, and they suffer from poor consistency of permission management, low security, and a lack of protection for the autonomous will of each owner. To this end, our paper proposes a fine-grained decentralized autonomous access control scheme based on blockchain, which comprises three core stages: formulation, deployment, and execution of access control policies. In the policy formulation stage, the scheme constructs a multi-owner data policy matrix and introduces a benefit function based on a Stackelberg game to balance conflicting attributes and form a unified access policy. In the smart contract-based policy deployment stage, all data owners vote on the access control policy by calculating their own benefits, reaching a consensus on joint decision-making. Finally, in the policy execution and joint authorization phase, a decentralized authorization method based on threshold cryptography distributes access keys to the owners, ensuring that access is granted only after authorization from a sufficient number of owners; this preserves each owner's ultimate control and enables fine-grained access control. We verified the feasibility of the scheme through case analysis and experiments.
(This article belongs to the Special Issue Advances in Blockchain and Intelligent Computing)
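The joint-authorization step, releasing a key only when enough owners cooperate, is the classic setting for (t, n) threshold secret sharing. The sketch below implements textbook Shamir sharing over a prime field as one way to realize it; the paper's exact construction may differ, and the parameters are illustrative.

```python
import random

# Textbook (t, n) Shamir secret sharing: any t of n shares reconstruct
# the key; fewer reveal nothing. Illustrative parameters.
P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the sharing polynomial at x = 0.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

key = 123456789
shares = split(key, n=5, t=3)          # five owners, any three can authorize
print(reconstruct(shares[:3]) == key)  # True
```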
24 pages, 2604 KiB  
Article
Small Object Detection in Agriculture: A Case Study on Durian Orchards Using EN-YOLO and Thermal Fusion
by Ruipeng Tang, Tan Jun, Qiushi Chu, Wei Sun and Yili Sun
Plants 2025, 14(17), 2619; https://doi.org/10.3390/plants14172619 - 22 Aug 2025
Abstract
Durian is a major tropical crop in Southeast Asia, but its yield and quality are severely impacted by a range of pests and diseases. Manual inspection remains the dominant detection method but suffers from high labor intensity, low accuracy, and difficulty in scaling. To address these challenges, this paper proposes EN-YOLO, a novel enhanced YOLO-based deep learning model that integrates the EfficientNet backbone and multimodal attention mechanisms for precise detection of durian pests and diseases. The model removes redundant feature layers and introduces a large-span residual edge to preserve key spatial information. Furthermore, a multimodal input strategy—incorporating RGB, near-infrared and thermal imaging—is used to enhance robustness under variable lighting and occlusion. Experimental results on real orchard datasets demonstrate that EN-YOLO outperforms YOLOv8 (You Only Look Once version 8), YOLOv5-EB (You Only Look Once version 5—Efficient Backbone), and Fieldsentinel-YOLO in detection accuracy, generalization, and small-object recognition. It achieves a 95.3% counting accuracy and shows superior performance in ablation and cross-scene tests. The proposed system also supports real-time drone deployment and integrates an expert knowledge base for intelligent decision support. This work provides an efficient, interpretable, and scalable solution for automated pest and disease management in smart agriculture.
(This article belongs to the Special Issue Plant Protection and Integrated Pest Management)
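A common way to realize the multimodal input strategy described above is early fusion: registering the RGB, near-infrared, and thermal frames and stacking them into one multi-channel tensor for the detector's first convolution. The sketch below assumes this approach; the shapes and normalization are illustrative, not the authors' pipeline.

```python
import numpy as np

# Illustrative early fusion of registered RGB, near-infrared, and
# thermal frames into one multi-channel detector input.
h, w = 640, 640
rgb     = np.random.rand(h, w, 3)   # stand-ins for registered, normalized frames
nir     = np.random.rand(h, w, 1)
thermal = np.random.rand(h, w, 1)

fused = np.concatenate([rgb, nir, thermal], axis=-1)  # (640, 640, 5)
print(fused.shape)  # a 5-channel tensor fed to the detector's first conv layer
```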
20 pages, 1538 KiB  
Review
Application of Digital Twin Technology in Smart Agriculture: A Bibliometric Review
by Rajesh Gund, Chetan M. Badgujar, Sathishkumar Samiappan and Sindhu Jagadamma
Agriculture 2025, 15(17), 1799; https://doi.org/10.3390/agriculture15171799 - 22 Aug 2025
Abstract
Digital twin technology is reshaping modern agriculture. Digital twins are virtual replicas of real-world farming systems that are continuously updated with real-time data, and they are revolutionizing the monitoring, simulation, and optimization of agricultural processes. The literature on agricultural digital twins is multidisciplinary, growing rapidly, and often fragmented across disciplines, and it lacks well-curated documentation. A bibliometric analysis, which includes thematic content analysis and science mapping, reveals research trends, gaps, the thematic landscape, and key contributors in this continuously evolving and emerging field. Therefore, in this study, we conducted a bibliometric review, collecting bibliometric data via keyword search strategies on popular scientific databases. The data were further screened, processed, analyzed, and visualized using bibliometric tools to map research trends, landscapes, collaborations, and themes. Key findings show that publications have grown exponentially since 2018, with an annual growth rate of 27.2%. The major contributing countries were China, the USA, the Netherlands, Germany, and India. We observed a collaboration network with distinct geographic clusters, with strong intra-European ties and more localized efforts in China and the USA. The analysis identified seven major research theme clusters revolving around precision farming, Internet of Things integration, artificial intelligence, cyber–physical systems, controlled-environment agriculture, sustainability, and food system applications. We observed that core technologies, such as sensors, artificial intelligence, and data analytics, have been extensively explored, while identifying gaps in other research areas. Emerging interests include climate resilience, renewable-energy integration, and supply-chain optimization. The observed transition from task-specific tools to integrated, system-level approaches underlines the growing need for adaptive, data-driven decision support. By outlining research trends and identifying strategic research gaps, this review offers insights into leveraging digital twins to improve productivity, sustainability, and resilience in global agriculture.
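The reported 27.2% annual growth rate is a compound rate of the form (end/start)^(1/years) - 1. The short computation below illustrates the formula with invented publication counts, not the review's data:

```python
# Compound annual growth rate from yearly publication counts.
# The counts are invented purely to illustrate the computation.
start_count, end_count, years = 40, 215, 7   # e.g. 2018 -> 2025
cagr = (end_count / start_count) ** (1 / years) - 1
print(f"annual growth rate: {cagr:.1%}")     # ~27.2% for these made-up counts
```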
31 pages, 1508 KiB  
Review
Human-Centered AI in Placemaking: A Review of Technologies, Practices, and Impacts
by Pedro J. S. Cardoso and João M. F. Rodrigues
Appl. Sci. 2025, 15(17), 9245; https://doi.org/10.3390/app15179245 - 22 Aug 2025
Abstract
Artificial intelligence (AI) for placemaking holds the potential to revolutionize how we conceptualize, design, and manage urban spaces to create more vibrant, resilient, and people-centered cities. In this context, integrating Human-Centered AI (HCAI) into public infrastructure presents an exciting opportunity to reimagine the role of urban amenities and furniture in shaping inclusive, responsive, and technologically enhanced public spaces. This review examines the state-of-the-art in HCAI for placemaking, focusing on some of the main factors that must be analyzed to guide future technological research and development, such as (a) AI-driven tools for community engagement in the placemaking process, including sentiment analysis, participatory design platforms, and virtual reality simulations; (b) AI sensors and image recognition technology for analyzing user behaviors within public spaces to inform evidence-based urban design decisions; (c) the role of HCAI in enhancing community engagement in the placemaking process, focusing on tools and approaches that facilitate more inclusive and participatory design practices; and (d) the utilization of AI in analyzing and understanding user behaviors within public spaces, highlighting how these insights can inform more responsive and user-centric design decisions. The review identifies current innovations, implementation challenges, and emerging opportunities at the intersection of artificial intelligence, urban design, and human experience.
28 pages, 1314 KiB  
Systematic Review
Bioengineering Support in the Assessment and Rehabilitation of Low Back Pain
by Giustino Varrassi, Matteo Luigi Giuseppe Leoni, Ameen Abdulhasan Al-Alwany, Piercarlo Sarzi Puttini and Giacomo Farì
Bioengineering 2025, 12(9), 900; https://doi.org/10.3390/bioengineering12090900 - 22 Aug 2025
Abstract
Low back pain (LBP) remains one of the most prevalent and disabling musculoskeletal conditions globally, with profound social, economic, and healthcare implications. The rising incidence and chronic nature of LBP highlight the need for more objective, personalized, and effective approaches to assessment and rehabilitation. In this context, bioengineering has emerged as a transformative field, offering novel tools and methodologies that enhance the understanding and management of LBP. This narrative review examines current bioengineering applications in both diagnostic and therapeutic domains. For assessment, technologies such as wearable inertial sensors, three-dimensional motion capture systems, surface electromyography, and biomechanical modeling provide real-time, quantitative insights into posture, movement patterns, and muscle activity. On the therapeutic front, innovations including robotic exoskeletons, neuromuscular electrical stimulation, virtual reality-based rehabilitation, and tele-rehabilitation platforms are increasingly being integrated into multimodal treatment protocols. These technologies support precision medicine by tailoring interventions to each patient’s biomechanical and functional profile. Furthermore, the incorporation of artificial intelligence into clinical workflows enables automated data analysis, predictive modeling, and decision support systems, while future directions such as digital twin technology hold promise for personalized simulation and outcome forecasting. While these advancements are promising, further validation in large-scale, real-world settings is required to ensure safety, efficacy, and equitable accessibility. Ultimately, bioengineering provides a multidimensional, data-driven framework that has the potential to significantly improve the assessment, rehabilitation, and overall management of LBP.
(This article belongs to the Special Issue Low-Back Pain: Assessment and Rehabilitation Research)
18 pages, 961 KiB  
Review
Blending Characterization for Effective Management in Mining Operations
by Matias Saavedra, Nathalie Risso, Moe Momayez, Ricardo Nunes, Victor Tenorio and Jinhong Zhang
Minerals 2025, 15(9), 891; https://doi.org/10.3390/min15090891 - 22 Aug 2025
Abstract
Ore blending plays a critical role in ensuring feed consistency and optimizing downstream processes in the mining industry. Despite its importance, effective blending remains challenging due to ore variability and operational constraints. This review focuses exclusively on modern, data-driven blending methodologies, with particular emphasis on the application of data science and machine learning (ML) in predicting key process variables and supporting real-time decision-making. It discusses core challenges such as data quality, feature engineering, and model generalization, alongside enabling technologies including sensor integration, automation platforms, and real-time data acquisition systems. By consolidating the recent literature and highlighting emerging trends, this work outlines future directions for advancing intelligent blending systems and underscores the importance of standardized, high-quality data in the development of robust digital solutions for mineral processing.