Search Results (101)

Search Parameters:
Keywords = pervasive AI

21 pages, 321 KB  
Article
What Might Be the Possible Conditions for Artificial Intelligence to Become Cultural Beings and to Develop a Cultural Heritage of Their Own?
by Dirk H. R. Spennemann
Heritage 2026, 9(4), 124; https://doi.org/10.3390/heritage9040124 - 24 Mar 2026
Viewed by 253
Abstract
As artificial intelligence (AI) technology becomes ever more pervasive in a wide range of human endeavor, from highly specialized technological and scientific applications to mass-market generative AI ‘consumed’ by the public, the question arises whether, over time, artificial intelligences could become cultural beings and, by extension, develop a heritage of their own. This paper reopens a 2007 examination of whether future sentient robots, autonomous systems, or AI would possess culture and exercise cultural heritage, and of what conditions would need to be met to reach that point. Based on ‘conversations’ with two generative AI models, ChatGPT4.5 and DeepSeek R1, that examined the models’ ‘understanding’ of culture and heritage, we explored the various thematic and content connections these models make. This paper demonstrates the technological, attitudinal, and societal conditions that are required for an AI culture to develop. That culture will look very different from that maintained by humans and will be based on very different, currently unknowable, value sets. This paper is novel, as it expands the conceptual framework of how we understand heritage from an anthropocentric perspective to one that includes non-human, ‘artificial’ intelligence now and in the future. Full article
21 pages, 457 KB  
Article
Understanding Energy Efficiency of AI Deployments in IoT-Driven Smart Cities
by Salvatore Bramante, Filippo Ferrandino and Alessandro Cilardo
IoT 2026, 7(1), 27; https://doi.org/10.3390/iot7010027 - 8 Mar 2026
Viewed by 551
Abstract
The pervasive adoption of AI and AIoT applications at the network edge presents both opportunities and challenges for smart cities. With a focus on the energy efficiency of AI in urban environments, this paper provides a systematic comparative analysis of representative edge hardware platforms, i.e., embedded GPUs, FPGAs, and ultra-low-power microcontroller-/sensor-class devices, assessing their suitability for AI workloads in IoT-driven smart city infrastructures. The evaluation, based on direct characterization of diverse neural networks and relevant datasets, quantifies computational performance and energy behavior through inference latency, throughput, and energy-per-inference measurements. Across the evaluated network–board pairs, the measured inference power spans several orders of magnitude, ranging from 0.1–10 mW for ultra-low-power Intelligent Sensor Processing Units (ISPUs) up to 1–10 W for embedded GPUs, highlighting the wide design space between the least and most power-demanding configurations. Results indicate that embedded GPUs provide a favorable performance-to-power ratio for computationally intensive workloads, while MCU/ISPU-class solutions, despite throughput limitations, offer compelling advantages in ultra-low-power scenarios when combined with quantization and pruning, making them well-suited for distributed sensing and actuation typical of smart city deployments. Overall, this comparative analysis guides hardware selection for heterogeneous, sustainable AI-enabled urban services. Full article
(This article belongs to the Special Issue IoT-Driven Smart Cities)
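The abstract above frames the comparison in terms of inference latency, throughput, and energy per inference. As a rough orientation for how those quantities relate, the sketch below computes energy per inference as average power times latency; the power and latency values are hypothetical, picked from the ranges quoted in the abstract (0.1–10 mW for ISPUs, 1–10 W for embedded GPUs), not measurements from the paper.

```python
# Minimal back-of-the-envelope sketch: energy per inference as
# (average power during inference) x (inference latency).
# The power and latency figures below are hypothetical values chosen from the
# ranges quoted in the abstract, not measurements reported in the paper.

def energy_per_inference_mj(avg_power_w: float, latency_s: float) -> float:
    """Energy per inference in millijoules."""
    return avg_power_w * latency_s * 1e3

platforms = {
    "ISPU (ultra-low-power sensor class)": {"power_w": 0.005, "latency_s": 0.050},
    "Embedded GPU":                        {"power_w": 7.0,   "latency_s": 0.004},
}

for name, p in platforms.items():
    e_mj = energy_per_inference_mj(p["power_w"], p["latency_s"])
    print(f"{name}: {e_mj:.2f} mJ/inference, {1.0 / p['latency_s']:.0f} inferences/s")
```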
23 pages, 291 KB  
Review
Cognitive Assemblages: Living with Algorithms
by Stéphane Grumbach
Big Data Cogn. Comput. 2026, 10(2), 63; https://doi.org/10.3390/bdcc10020063 - 16 Feb 2026
Cited by 1 | Viewed by 734
Abstract
The rapid expansion of algorithmic systems has transformed cognition into an increasingly distributed and collective enterprise, giving rise to what can be described as cognitive assemblages, dynamic constellations of humans, institutions, data infrastructures, and artificial agents. This paper traces the historical and conceptual evolution that has led to this shift. First, we show how cognition, once conceived as the property of autonomous individuals, has progressively become embedded in socio-technical networks in which algorithmic processes participate as co-agents. Second, we revisit the progressive awareness of human cognitive limits, from bounded rationality to contemporary theories of extended mind. These frameworks anticipate and help explain today’s hybrid cognitive ecologies. Third, we assess the philosophical implications for Enlightenment ideals of autonomy, rationality, and self-governance, showing how these concepts must be reinterpreted in light of pervasive algorithmic intermediation. Finally, we examine global initiatives that seek to integrate augmented cognitive capacities into large-scale cybernetic forms of societal coordination, ranging from digital platforms and data spaces to AI-driven governance systems. These developments offer new opportunities for steering complex societies under conditions of globalization, environmental disruption, and the rise of autonomous intelligent systems, yet they also raise profound questions regarding control, accountability, and democratic legitimacy. We argue that understanding cognitive assemblages is essential to designing socio-technical systems capable of supporting collective intelligence while preserving human values in an era of accelerating complexity. Full article
29 pages, 3196 KB  
Review
The Remote Sensing Geostatistical Paradigm: A Review of Key Technologies and Applications
by Junyu He
Remote Sens. 2026, 18(4), 600; https://doi.org/10.3390/rs18040600 - 14 Feb 2026
Viewed by 442
Abstract
Advancements in earth observation technologies are ushering in the big data era, yet this potential is compromised by intrinsic challenges: inherent uncertainty, spatiotemporal heterogeneity, multi-scale character, and pervasive data gaps. Traditional methods often fail to address these issues within a single, coherent system. The main contribution of this review is to systematically establish the Remote Sensing Geostatistical Paradigm (RSGP) as a comprehensive, unified framework. Powered by its core theory, Bayesian Maximum Entropy (BME), RSGP is a broadly designed epistemic framework that transcends a mere conceptual reorganization of established methods. It addresses the above challenges by highlighting two pivotal concepts within a spatiotemporal random field: (1) uncertainty quantification via probabilistic soft data, which redefines observations as probability density functions, representing a fundamental epistemological shift from deterministic scalars to probabilistic entities, and provides a universal interface for rigorous assimilation of heterogeneous remote sensing or in situ observations and synergy with other computational models, such as machine learning; and (2) spatiotemporal structure exploitation, which integrates the underlying structure embedded in remote sensing data of natural attributes, moving beyond mere optical properties to incorporate a broader range of available spatiotemporal information, for robust estimation and mapping purposes. Furthermore, the evolution of key technologies is illustrated using real-world application cases, providing guidance on how to implement RSGP in different scenarios. Finally, the paradigm’s features and limitations are discussed. This synthesis provides the remote sensing community with a robust foundation for uncertainty-aware analysis and multi-source integration, bridging geostatistical logic with next-generation AI-driven Earth observation. Full article
(This article belongs to the Section Remote Sensing for Geospatial Science)
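For orientation, the BME estimation step that underlies the ‘probabilistic soft data’ idea is usually written along the following lines in the geostatistics literature; the notation here is assumed and schematic, not taken from the review itself.

```latex
% Schematic BME estimation step (generic notation, standard geostatistical form).
% General knowledge G (e.g., the spatiotemporal mean and covariance of the random
% field) defines a maximum-entropy prior f_G over the estimation point x_k, the
% hard data x_hard, and the soft-data points x_soft. Site-specific soft data enter
% as a probability density f_S rather than as deterministic scalars:
\[
  f_K(x_k \mid \text{hard}, \text{soft})
  \;\propto\;
  \int f_S(x_{\mathrm{soft}})\,
       f_G(x_{\mathrm{hard}}, x_{\mathrm{soft}}, x_k)\,
       \mathrm{d}x_{\mathrm{soft}} .
\]
% An uncertain satellite retrieval is thus assimilated through its full density
% f_S instead of a single value, which is the epistemological shift the abstract
% highlights.
```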
26 pages, 1912 KB  
Article
A Temporally Dynamic Feature-Extraction Framework for Phishing Detection with LIME and SHAP Explanations
by Chris Mayo, Michael Tchuindjang, Sarfraz Brohi and Nikolaos Ersotelos
Future Internet 2026, 18(2), 101; https://doi.org/10.3390/fi18020101 - 14 Feb 2026
Viewed by 627
Abstract
Phishing remains one of the most pervasive social engineering threats, exploiting human vulnerabilities and continuously evolving to bypass static detection mechanisms. Existing machine learning models achieve high accuracy but often act as opaque systems that lack robustness to evolving tactics and explainability, limiting trust and real-world deployment. In this research, we propose a dynamic Explainable AI (XAI) approach for phishing detection that integrates temporally aware feature extraction with dual interpretability through LIME and SHAP applied to the resulting window-level features. The novelty of this research lies in a temporally dynamic feature framework that simulates a plausible email reading progression using a heuristic temporal model and employs a sliding window aggregation method to capture behavioural and temporal patterns within email content. Using an aggregated dataset of 82,500 phishing and legitimate emails, dynamic features were extracted and used to train four classifiers: Random Forest, XGBoost, Multi-Layer Perceptron, and Logistic Regression. Ensemble models demonstrated strong performance with XGBoost achieving 94% accuracy and Random Forest 93%. This research addresses an important gap by combining dynamically constructed temporal features with transparent explanations, achieving high detection performance while preserving interpretability. These findings demonstrate that temporally structured features combined with explainable learning can enhance the trustworthiness and practical deployability of phishing detection systems without incurring excessive computational overhead. Full article
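As an illustration of the window-level feature idea described in the abstract, the sketch below aggregates a token-level cue signal over sliding windows and feeds the result to a Random Forest; the reading-rate heuristic, window size, and cue list are assumptions made for illustration, not the authors' feature set.

```python
# Illustrative sketch of sliding-window feature aggregation for phishing detection.
# The reading-time heuristic, window size, and per-token "urgency" cue list are
# assumptions for illustration, not the features used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

URGENT_CUES = {"urgent", "verify", "password", "suspended", "click", "immediately"}

def window_features(email_text: str, window: int = 20, step: int = 10) -> np.ndarray:
    """Aggregate token-level cues into a fixed-length window-level feature vector."""
    tokens = email_text.lower().split()
    # Heuristic reading progression: assume a constant reading rate, so token
    # position stands in for reading time.
    cue_signal = np.array([tok in URGENT_CUES for tok in tokens], dtype=float)
    feats = []
    for start in range(0, max(len(tokens) - window, 1), step):
        w = cue_signal[start:start + window]
        feats.append((w.mean(), w.max()))          # per-window cue density and peak
    feats = np.array(feats) if feats else np.zeros((1, 2))
    # Summarise the window sequence into a fixed-length vector.
    return np.concatenate([feats.mean(axis=0), feats.max(axis=0)])

# Toy training run on two hand-written examples (placeholder for the 82,500-email corpus).
X = np.stack([
    window_features("urgent action required verify your password immediately or account suspended"),
    window_features("hi team attached are the meeting notes from yesterday thanks"),
])
y = np.array([1, 0])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))
```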
9 pages, 268 KB  
Perspective
Prevention as a Pillar of Communicable Disease Control: Strategies for Equity, Surveillance, and One Health Integration
by Giovanni Genovese, Caterina Elisabetta Rizzo, Linda Bartucciotto, Serena Maria Calderone, Francesco Loddo, Francesco Leonforte, Antonio Mistretta, Raffaele Squeri and Cristina Genovese
Epidemiologia 2026, 7(1), 19; https://doi.org/10.3390/epidemiologia7010019 - 3 Feb 2026
Viewed by 539
Abstract
Global health faces unprecedented challenges driven by communicable diseases, which are increasingly amplified by persistent health inequities, the impact of climate change, and the speed of emerging crises. Prevention is not merely a component but the foundational strategy for an effective, sustainable, and fiscally responsible public health response. This paper delves into the pivotal role of core prevention levers: robust vaccination programs, stringent hygiene standards, advanced epidemiological surveillance, and targeted health education. We detail how contemporary technological advancements, including Artificial Intelligence (AI), big data analytics, and genomics, are fundamentally reshaping infectious disease management, enabling superior predictive capabilities, faster early warning systems, and personalized prevention models. Furthermore, we thoroughly examine the imperative of integrating the One Health approach, which formally recognizes the close, interdependent links between human, animal, and environmental health as critical for combating complex threats like zoonoses and Antimicrobial Resistance (AMR). Despite significant scientific progress, persistent socio-economic disparities, the pervasive influence of health-related misinformation (infodemics), and structural weaknesses in global preparedness underscore the urgent need for decisive international cooperation and equitable financing models. We conclude that only through integrated, multidisciplinary, and resource-equitable strategies can the global community ensure effective prevention, mitigate severe socio-economic disruption, and successfully build resilient healthcare systems capable of withstanding future global health threats. Full article
14 pages, 1372 KB  
Article
The Organizational Transformation of Artificial Intelligence in Smart Cities: An Urban Artificial Intelligence Governance Maturity Model
by Omar Alrasbi and Samuel T. Ariaratnam
Urban Sci. 2026, 10(1), 63; https://doi.org/10.3390/urbansci10010063 - 20 Jan 2026
Viewed by 654
Abstract
The transformative potential of Artificial Intelligence (AI) in urban management is severely constrained by pervasive systemic fragmentation. While AI applications demonstrate high efficacy within isolated domains, they rarely achieve the cross-domain integration necessary for realizing systemic benefits. Our prior research identified this fragmentation paradox, revealing that 91.5% of urban AI implementations operate at the lowest levels of integration. While the Urban Systems Artificial Intelligence Framework (UAIF) offers a technical blueprint for integration, realizing this vision is contingent upon organizational readiness. This paper addresses this critical gap by introducing the Urban AI Governance Maturity Model (UAIG), developed using a Design Science Research methodology. Distinguished from generic maturity models, the UAIG operationalizes Socio-Technical Systems theory by establishing a direct Governance-Technology Interlock that specifically links organizational maturity levels to the engineering requirements of cross-domain AI. The model defines five maturity levels across five critical dimensions: Strategy and Investment; Organizational Structure and Culture; Data Governance and Policy; Technical Capacity and Interoperability; and Trust, Ethics, and Security. Through illustrative applications, we demonstrate how the UAIG serves as a diagnostic tool and a strategic roadmap, enabling policymakers to bridge the gap between technical possibility and organizational reality. Full article
38 pages, 10428 KB  
Article
Conversational AI-Enabled Precision Oncology Reveals Context-Dependent MAPK Pathway Alterations in Hispanic/Latino and Non-Hispanic White Colorectal Cancer Stratified by Age and FOLFOX Exposure
by Fernando C. Diaz, Brigette Waldrup, Francisco G. Carranza, Sophia Manjarrez and Enrique Velazquez-Villarreal
Cancers 2026, 18(2), 293; https://doi.org/10.3390/cancers18020293 - 17 Jan 2026
Cited by 1 | Viewed by 482
Abstract
Background: Colorectal cancer (CRC) demonstrates substantial clinical and biological diversity across age groups, ancestral backgrounds, and treatment settings, alongside a rising incidence of early-onset disease (EOCRC). The mitogen-activated protein kinase (MAPK) pathway is a major driver of CRC development and therapy response; however, the distribution and prognostic value of MAPK alterations across distinct patient subgroups remain unclear. Methods: We analyzed 2515 CRC tumors with harmonized demographic, clinical, genomic, and treatment metadata. Patients were stratified by ancestry (Hispanic/Latino [H/L] vs. non-Hispanic White [NHW]), age at diagnosis (early-onset [EO] vs. late-onset [LO]), and FOLFOX chemotherapy exposure. MAPK pathway alterations were identified using a curated gene set encompassing canonical EGFR-RAS-RAF-MEK-ERK signaling components and regulatory nodes. Conversational artificial intelligence (AI-HOPE and AI-HOPE-MAPK) enabled natural language-driven cohort construction and exploratory analytics; findings were validated using Fisher’s exact testing, chi-square analyses, and Kaplan–Meier survival estimates. Results: MAPK pathway disruption demonstrated marked heterogeneity across ancestry and treatment contexts. Among EO H/L patients, FGFR3, NF1, and RPS6KA6 mutations were significantly enriched in tumors not receiving FOLFOX, whereas PDGFRB alterations were more frequent in FOLFOX-treated EO H/L tumors relative to EO NHW counterparts. In late-onset H/L disease, NTRK2 and PDGFRB mutations were more common in non-FOLFOX tumors. Distinct MAPK-associated alterations were also observed among NHW patients, particularly in non-FOLFOX settings, including AKT3, FGF4, RRAS2, CRKL, DUSP4, JUN, MAPK1, RRAS, and SOS1. Survival analyses provided borderline evidence that MAPK alterations may be linked to improved overall survival in treated EO NHW patients. Conversational AI markedly accelerated analytic throughput and multi-parameter discovery. Conclusions: Although MAPK alterations are pervasive in CRC, their distribution varies meaningfully by ancestry, age, and treatment exposure. These findings highlight NF1, MAPK3, RPS6KA4, and PDGFRB as potential biomarkers in EOCRC and H/L patients, supporting the need for ancestry-aware precision oncology approaches. Full article
(This article belongs to the Special Issue Innovations in Addressing Disparities in Cancer)
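For readers unfamiliar with the validation step mentioned in the abstract, a Fisher's exact test on a 2×2 table of mutation counts is set up as in the sketch below; the counts are invented for illustration and do not come from the study.

```python
# Illustrative Fisher's exact test for a mutation-enrichment comparison.
# Counts are hypothetical, not taken from the 2515-tumor cohort in the paper.
from scipy.stats import fisher_exact

# Rows: FGFR3-mutant vs FGFR3-wild-type; columns: non-FOLFOX vs FOLFOX (EO H/L subgroup).
table = [[12, 3],    # mutant:    12 non-FOLFOX, 3 FOLFOX
         [88, 97]]   # wild-type: 88 non-FOLFOX, 97 FOLFOX

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```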
17 pages, 813 KB  
Article
Building and Repairing Trust in Chatbots: The Interplay Between Social Role and Performance During Interactions
by Yi Mou, Xiaoyu Ye and Wenbin Ma
Behav. Sci. 2026, 16(1), 118; https://doi.org/10.3390/bs16010118 - 14 Jan 2026
Viewed by 556
Abstract
Trust (or distrust) in artificial intelligence (AI) is a critical research topic, given AI’s pervasive integration across societal domains. Despite its significance, scholarly attention to process-based learned trust in AI remains limited. To address this gap, this study designed a virtual non-fungible token (NFT) investment task, featuring seven rounds of risk decision-making scenarios, to simulate an investment/trust game to explore participants’ multifaceted trust under the influence of different chatbots’ social role. The findings suggested the chatbot’s social role had a significant impact on participants’ trust behaviors and perceptions over time. Trust in the two chatbot types diverged until the system-induced failures occurred. The friend-like chatbot elicited a higher level of behavioral trust than the servant-like counterpart. During those trust-damaging moments, the friend-like chatbot proved more effective in mitigating trust erosion and facilitating trust repair, as evidenced by relatively stable investment behaviors. The findings reinforce the notion that friendship with AI can function as a relational buffer, softening the impact of trust violations and facilitating smoother trust recovery. Full article
26 pages, 2345 KB  
Article
NeuroStrainSense: A Transformer-Generative AI Framework for Stress Detection Using Heterogeneous Multimodal Datasets
by Dalel Ben Ismail, Wyssem Fathallah, Mourad Mars and Hedi Sakli
Technologies 2026, 14(1), 35; https://doi.org/10.3390/technologies14010035 - 5 Jan 2026
Cited by 1 | Viewed by 657
Abstract
Stress is a pervasive global health concern that contributes to morbidity and reduced productivity, yet it often remains unquantified due to its subjective and variable presentation. Although artificial intelligence offers an encouraging path toward automated monitoring of mental states, current state-of-the-art approaches are challenged by the reliance on single-source data, sparsity of labeled samples, and significant class imbalance. This paper proposes NeuroStrainSense, a novel deep multimodal stress detection model that integrates three complementary datasets—WESAD, SWELL-KW, and TILES—through a Transformer-based feature fusion architecture combined with a Variational Autoencoder for generative data augmentation. The Transformer architecture employs four encoder layers with eight multi-head attention heads and a hidden dimension of 512 to capture complex inter-modal dependencies across physiological, audio, and behavioral modalities. Our experiments demonstrate that NeuroStrainSense achieves state-of-the-art performance with accuracies of 87.1%, 88.5%, and 89.8% on the respective datasets, with F1-scores exceeding 0.85 and AUCs greater than 0.89, representing improvements of 2.6–6.6 percentage points over existing baselines. We propose a robust evaluation framework that quantifies discrimination among stress types through clustering validity metrics, achieving a Silhouette Score of 0.75 and Intraclass Correlation Coefficient of 0.76. Comprehensive ablation experiments confirm the utility of each modality and the VAE augmentation module, with physiological features contributing most significantly (average performance decrease of 5.8% when removed), followed by audio (2.8%) and behavioral features (2.1%). Statistical validation confirms all findings at the p < 0.01 significance level. Beyond binary classification, the model identifies five clinically relevant stress profiles—Cognitive Overload, Burnout, Acute Stress, Psychosomatic, and Low-Grade Chronic—with an expert concordance of Cohen’s κ = 0.71 (p < 0.001), demonstrating strong ecological validity for personalized well-being and occupational health applications. External validation on the MIT Reality Mining dataset confirms generalizability with minimal performance degradation (accuracy: 0.785, F1-score: 0.752, AUC: 0.849). This work underlines the potential of integrated multimodal learning and demographically aware generative AI for continuous, precise, and fair stress monitoring across diverse populations and environmental contexts. Full article
(This article belongs to the Section Information and Communication Technologies)
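A minimal sketch of the fusion stage described in the abstract (four encoder layers, eight heads, hidden dimension 512) is given below; the per-modality input sizes and the one-token-per-modality fusion scheme are assumptions, and the VAE augmentation stage is omitted.

```python
# Minimal sketch of a Transformer-based multimodal fusion classifier with the
# dimensions quoted in the abstract (4 encoder layers, 8 heads, hidden size 512).
# The per-modality input sizes and the fusion-by-token scheme are assumptions;
# the VAE augmentation stage is omitted.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, modality_dims=(16, 40, 8), d_model=512, n_classes=2):
        super().__init__()
        # One linear projection per modality: physiological, audio, behavioral.
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in modality_dims])
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, modalities):
        # Each modality becomes one "token"; the encoder models inter-modal dependencies.
        tokens = torch.stack([p(x) for p, x in zip(self.proj, modalities)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)   # pool over modality tokens
        return self.head(fused)

# Toy forward pass with random features standing in for WESAD/SWELL-KW/TILES inputs.
model = FusionClassifier()
phys, audio, behav = torch.randn(4, 16), torch.randn(4, 40), torch.randn(4, 8)
print(model((phys, audio, behav)).shape)   # torch.Size([4, 2])
```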
46 pages, 687 KB  
Article
A Next-Generation Cyber-Range Framework for O-RAN and 6G Security Validation
by Evangelos Chaskos, Nicholas Kolokotronis and Stavros Shiaeles
Future Internet 2026, 18(1), 29; https://doi.org/10.3390/fi18010029 - 4 Jan 2026
Viewed by 1403
Abstract
The evolution towards an Open Radio Access Network (O-RAN) and 6G introduces unprecedented openness and intelligence in mobile networks, alongside significant security challenges. Current cyber-ranges (CRs) are not prepared to address the disaggregated architecture, numerous open interfaces, AI/ML-driven RAN Intelligent Controllers (RICs), and O-Cloud dependencies of O-RAN, nor the now-established 6G paradigms of AI-native operations and pervasive Zero Trust Architectures (ZTAs). This paper identifies a critical validation gap and proposes a novel theoretical framework for a next-generation CR, specifically architected to address the unique complexities of O-RAN’s disaggregated components, open interfaces, and advanced 6G security paradigms. Our framework features a modular architecture enabling high-fidelity emulation of O-RAN components and interfaces, integrated AI/ML security testing, and native support for ZTA validation. We also conceptualize a novel Federated Cyber-Range (FCR) architecture for enhanced scalability and specialized testing. By systematically linking identified threats to CR requirements and illustrating unique, practical O-RAN-specific exercises, this research lays foundational work for developing CRs capable of proactively assessing and strengthening the security of O-RAN and future 6G systems, while also outlining key implementation challenges. We validate the framework’s feasibility through a proof-of-concept A1 malicious policy injection exercise. Full article
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)
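As a sketch of what the proof-of-concept A1 malicious policy injection exercise might look like inside such a cyber-range, the snippet below pushes an over-permissive policy to an emulated near-RT RIC over REST; the endpoint path, policy type, and policy body are hypothetical and are not the authors' code.

```python
# Illustrative sketch of an A1 malicious-policy-injection test case for a cyber-range
# exercise. The endpoint path, policy type ID, and policy body are hypothetical and
# depend on the emulated near-RT RIC under test; this is not the paper's proof of concept.
import requests

RIC_A1_BASE = "http://ric.example.internal:10000"  # assumed address of the emulated RIC
POLICY_URL = f"{RIC_A1_BASE}/a1-p/policytypes/20008/policies/attack-001"  # assumed A1-P path

# An over-permissive QoS policy that a compromised non-RT RIC or rApp might push,
# e.g. to starve other slices of radio resources.
malicious_policy = {
    "scope": {"sliceId": "*"},  # wildcard scope: applies to every slice
    "qosObjectives": {"priorityLevel": 1, "guaranteedBitrateMbps": 10000},
}

try:
    resp = requests.put(POLICY_URL, json=malicious_policy, timeout=5)
    # The exercise then checks whether the RIC validates scope and limits or blindly
    # applies the policy, and whether the injection shows up in monitoring/telemetry.
    print("RIC responded with HTTP", resp.status_code)
except requests.exceptions.RequestException as exc:
    print("Injection attempt failed to reach the emulated RIC:", exc)
```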
22 pages, 2079 KB  
Article
Student-Created Screencasts: A Constructivist Response to the Challenges of Generative AI in Education
by Adam Wong, Ken Tsang, Shuyang Lin and Lai Lam Chan
Educ. Sci. 2025, 15(12), 1701; https://doi.org/10.3390/educsci15121701 - 17 Dec 2025
Viewed by 672
Abstract
Screencasts, which are screen-capture videos, have traditionally been created by teachers to deliver instruction or feedback, reflecting a teacher-centered model of learning. Based on the constructivist principle, this study explores an innovative attempt to position students as screencast creators, who must demonstrate their knowledge and explain their work in the screencast. This innovative approach has the potential to promote authentic learning and reduce dependence on generative artificial intelligence (GenAI) tools for completing assignments. However, it is uncertain whether students will have positive attitudes towards this new form of assessment. From 2022 to 2025, the authors used screencasts as assessments in computer programming and English language subjects. Survey results were obtained from 203 university students and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that students generally hold positive attitudes toward creating screencasts, with perceived usefulness for future applications exerting the strongest influence on acceptance, followed by perceived performance benefits and ease of use. It is also found that gender, discipline, and study mode did not significantly alter these relationships, although senior students perceived screencast production as more effortful. These findings suggest that student-created screencasts can serve as an effective, student-centered alternative to traditional written assessments. The research results imply that student-created screencasts have the potential to help students develop their skills in an increasingly GenAI-pervasive academic environment. Full article
(This article belongs to the Section Higher Education)
47 pages, 1179 KB  
Review
Space Agriculture: A Comprehensive Systems-Level Review of Challenges and Opportunities
by Hassan Fazayeli, Aaron Lee M. Daigh, Cassandra Palmer, Santosh Pitla, David Jones and Yufeng Ge
Agriculture 2025, 15(24), 2541; https://doi.org/10.3390/agriculture15242541 - 8 Dec 2025
Cited by 1 | Viewed by 4724
Abstract
As humanity prepares for prolonged space missions and future extraterrestrial settlements, developing reliable and resilient food-production systems is becoming a critical priority. Space agriculture, the cultivation of plants beyond Earth (particularly on the Moon and Mars), faces a constellation of interdependent environmental, biological, and engineering challenges. These include limited solar radiation, elevated ionizing radiation, large thermal variability, non-Earth atmospheric pressures, reduced gravity, regolith substrates with low nutrient-holding capacity, high-CO2/low-O2 atmospheres, pervasive dust, constrained water and nutrient availability, altered plant physiology, and the overarching need for closed-loop, resource-efficient systems. These stressors create an exceptionally challenging environment for plant growth and require tightly engineered agricultural systems. This review examines these constraints by organizing them across environmental differences, resource limitations, biological adaptation, and operational demands, emphasizing their systemic interdependence and the cascading effects that arise when one subsystem changes. By integrating findings from planetary science, plant biology, space systems engineering, biotechnology, robotics, and controlled-environment agriculture (CEA), the review outlines current limitations and highlights emerging strategies such as regolith utilization, advanced hydroponics, crop selection and genetic engineering, and the use of robotics, sensors, and artificial intelligence (AI) for monitoring and automation. Finally, the article underscores the broader relevance of space–agriculture research for terrestrial food security in extreme or resource-limited environments, providing a structured foundation for designing resilient and sustainable agricultural systems for space exploration and beyond. Full article
(This article belongs to the Section Crop Production)
45 pages, 3086 KB  
Review
Modelling of Insulation Thermal Ageing: Historical Evolution from Fundamental Chemistry Towards Becoming an Electrical Machine Design Tool
by Antonis Theofanous, Israr Ullah, Michael Galea, Paolo Giangrande, Vincenzo Madonna, Yatai Ji, John Licari and Maurice Apap
Energies 2025, 18(23), 6087; https://doi.org/10.3390/en18236087 - 21 Nov 2025
Viewed by 1843
Abstract
Electrical insulation systems (EISs) are the principal reliability bottleneck of modern electrical machines (EMs). Among the many stresses acting on insulation, thermal stress is the most pervasive because it accelerates chemical reactions that progressively erode dielectric and mechanical integrity, ultimately dictating service life. As EMs migrate into compact, high-power-density platforms—automotive, aerospace, and industrial drives—designers need lifetime models that are not merely explanatory but actionable, linking operating temperatures and missions to quantified ageing and risk. This review article traces the evolution of thermal-ageing modelling from fundamental chemistry to a practical design tool. The historical empirical lineage of Arrhenius equation, Arrhenius–Dakin model, and Montsinger model is first revisited, clarifying their assumptions, parameter definitions, and the construction of thermal endurance curves. A discussion then follows on extensions that address deviations from first-order kinetics and demonstrate how variable temperature histories can be incorporated through cumulative damage formulations suitable for duty-cycle analysis. Since models are required to be anchored in data, accelerated thermal ageing (ATA) practices on representative specimens are outlined, alongside a description of the Weibull post-processing for deriving percentile lifetimes aligned with design targets. Building upon these foundations, the Physics-of-Failure (PoF) approach is introduced as a reliability-oriented design (ROD) methodology, in which validated lifetime models guide material selection and geometry optimisation while supporting prognostics and health management during operation. The emerging trend towards a hybrid PoF–AI approach is also discussed, which integrates artificial intelligence to identify nonlinear degradation patterns and drifting parameter relationships beyond the reach of empirical models, with physical constraints ensuring that predictions remain consistent with known ageing mechanisms. Such integration enables the learning process to adapt to operational variability and coupled stress effects, thereby improving both the accuracy and physical interpretability of lifetime estimation. The review aims to provide a concise view of models, tests, and workflows that convert thermal-ageing knowledge into robust, design-time decisions. By linking empirical and physics-based insights with modern data-driven learning, these developments support proactive maintenance, sustainable asset management, and extended operational lifetimes for next-generation EMs. Full article
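Since the review is organized around these classical models, it may help to have them written out; the relations below are the standard textbook forms, with generic symbols rather than the review's notation.

```latex
% Standard forms from the thermal-endurance literature (generic symbols, not the
% review's notation).
% Arrhenius--Dakin lifetime model: life L at absolute temperature T, with material
% constants A and B (B proportional to the activation energy), giving the familiar
% straight-line thermal endurance curve in ln L versus 1/T:
\[
  L(T) = A\, e^{B/T}, \qquad \ln L = \ln A + \frac{B}{T}\,.
\]
% Cumulative damage for a variable temperature history T(t) (Miner-type rule):
% end of life is reached when the accumulated fraction equals one,
\[
  D(t) = \int_0^{t} \frac{\mathrm{d}\tau}{L\big(T(\tau)\big)}, \qquad D(t_{\mathrm{EoL}}) = 1 .
\]
% Weibull post-processing of accelerated-ageing failure times: with scale \eta and
% shape \beta, the p-th percentile lifetime used as a design target is
\[
  F(t) = 1 - e^{-(t/\eta)^{\beta}}, \qquad
  t_p = \eta\,\big(-\ln(1-p)\big)^{1/\beta}.
\]
```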
39 pages, 1188 KB  
Review
A Scoping Review of AI-Based Approaches for Detecting Autism Traits Using Voice and Behavioral Data
by Hajarimino Rakotomanana and Ghazal Rouhafzay
Bioengineering 2025, 12(11), 1136; https://doi.org/10.3390/bioengineering12111136 - 22 Oct 2025
Cited by 3 | Viewed by 4576
Abstract
This scoping review systematically maps the rapidly evolving application of Artificial Intelligence (AI) in Autism Spectrum Disorder (ASD) diagnostics, specifically focusing on computational behavioral phenotyping. Recognizing that observable traits like speech and movement are critical for early, timely intervention, the study synthesizes AI’s use across eight key behavioral modalities. These include voice biomarkers, conversational dynamics, linguistic analysis, movement analysis, activity recognition, facial gestures, visual attention, and multimodal approaches. The review analyzed 158 studies published between 2015 and 2025, revealing that modern Machine Learning and Deep Learning techniques demonstrate highly promising diagnostic performance in controlled environments, with reported accuracies of up to 99%. Despite this significant capability, the review identifies critical challenges that impede clinical implementation and generalizability. These persistent limitations include pervasive issues with dataset heterogeneity, gender bias in samples, and small overall sample sizes. By detailing the current landscape of observable data types, computational methodologies, and available datasets, this work establishes a comprehensive overview of AI’s current strengths and fundamental weaknesses in ASD diagnosis. The article concludes by providing actionable recommendations aimed at guiding future research toward developing diagnostic solutions that are more inclusive, generalizable, and ultimately applicable in clinical settings. Full article