Search Results (835)

Search Parameters:
Keywords = information interpretation technology

15 pages, 756 KB  
Article
Modelling Cyber Resilience in SMEs as a Socio-Technical System: A Systemic Approach to Adaptive Digital Risk Management
by Alona Bahmanova and Natalja Lace
Systems 2026, 14(2), 151; https://doi.org/10.3390/systems14020151 (registering DOI) - 31 Jan 2026
Abstract
Small and medium-sized enterprises (SMEs) increasingly rely on digital technologies in everyday operations, often without sufficient resources or structured mechanisms to manage the cyber risks that accompany this dependence. As digitalization deepens, cyber incidents in SMEs are shaped not only by technical vulnerabilities but also by human behavior and organizational practices. However, much of the existing research still approaches cyber resilience through fragmented technological or managerial lenses. This study takes a conceptual and theory-driven approach to examine cyber resilience in SMEs as a socio-technical system. Building on systems theory and adaptive management, the analysis develops a systemic model of adaptive digital risk management through a structured conceptual process: systematic exploration of interdisciplinary literature, analytical synthesis of recurring conceptual patterns, and system-level model construction. Cyber resilience is interpreted as a dynamic capability that develops over time, especially in digital environments characterized by increasing automation and evolving forms of human–technology interaction. The study contributes to cyber resilience research by offering a system-oriented perspective and provides SMEs with a conceptual basis for strengthening adaptive approaches to digital risk management. Full article
(This article belongs to the Section Systems Practice in Social Science)
25 pages, 428 KB  
Review
A Review of Power Grid Frameworks for Planning Under Uncertainty
by Tai Zhang, Stefan Borozan and Goran Strbac
Energies 2026, 19(3), 741; https://doi.org/10.3390/en19030741 - 30 Jan 2026
Viewed by 31
Abstract
Power-system planning is being reshaped by rapid decarbonisation, electrification, and digitalisation, which collectively amplify uncertainty in demand, generation, technology adoption, and policy pathways. This review critically synthesises three principal optimisation paradigms used to plan under uncertainty in power systems: scenario-based stochastic optimisation, set-based robust optimisation (including adaptive and distributionally robust variants), and minimax-regret decision models. The review is positioned to address a recurrent limitation of many uncertainty-planning surveys, namely the separation between “method reviews” and “technology reviews”, and the consequent lack of decision-operational guidance for planners and system operators. The central contribution is a decision-centric framework that operationalises method selection through two explicit dimensions. The first is an information posture, which formalises what uncertainty information is credible and usable in practice (probabilistic, set-based, or probability-free scenario representations). The second is a flexibility posture, which formalises the availability, controllability, and timing of operational recourse enabled by smart-grid technologies. These postures are connected to modelling templates, data requirements, tractability implications, and validation/stress-testing needs. Smart-grid technologies are integrated not as an appended catalogue but as explicit sources of recourse that change the economics of uncertainty and, in turn, shift the relative attractiveness of stochastic, robust, and regret-based planning. Soft Open Points, Coordinated Voltage Control, and Vehicle-to-Grid/Vehicle-to-Building are treated uniformly under this recourse lens, highlighting how device capabilities, control timescales, and implementation constraints map into each paradigm. The paper also increases methodological transparency by describing literature-search, screening, and inclusion principles consistent with a structured narrative review. Practical guidance is provided on modelling choices, uncertainty governance, computational scalability, and institutional adoption constraints, alongside revised comparative tables that embed data credibility, regulatory interpretability, and implementation maturity. Full article
(This article belongs to the Special Issue Optimization and Machine Learning Approaches for Power Systems)
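To make the three paradigms synthesised in the abstract above concrete, a generic single-stage formulation of the minimax-regret criterion (notation ours, not taken from the paper) is:

\[
x^{*} \in \arg\min_{x \in X} \, \max_{s \in S} \left( C(x,s) - \min_{x' \in X} C(x',s) \right)
\]

where $X$ is the set of candidate expansion plans, $S$ a probability-free scenario set, and $C(x,s)$ the total cost of plan $x$ under scenario $s$; the inner difference is the regret of $x$ relative to the plan chosen with perfect hindsight. By contrast, scenario-based stochastic optimisation minimises the expected cost $\mathbb{E}_{s \sim P}[C(x,s)]$ under a credible distribution $P$, and set-based robust optimisation minimises the worst case $\max_{s \in S} C(x,s)$. Which of the three is appropriate is exactly what the review's "information posture" dimension formalises.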
20 pages, 875 KB  
Review
On the Coexistence of Captions and Sign Language as Accessibility Solutions in Educational Settings
by Francesco Pavani and Valerio Leonetti
Audiol. Res. 2026, 16(1), 20; https://doi.org/10.3390/audiolres16010020 - 29 Jan 2026
Viewed by 73
Abstract
Background/Objectives: In mainstream educational settings, deaf and hard-of-hearing (DHH) students may have limited or no access to the spoken lectures and discussions that are central to the hearing majority classroom. Yet, engagement in these educational and social exchanges is fundamental to their learning and inclusion. Two primary visual accessibility solutions can support this need: real-time speech-to-text transcriptions (i.e., captioning) and high-quality sign language interpreting. Their combined use (or coexistence), however, raises concerns of competition between concurrent streams of visual information. This article examines the empirical evidence concerning the effectiveness of using both captioning and sign language simultaneously in educational settings. Specifically, it investigates whether this combined approach leads to better or worse content learning for DHH students, when compared to using either visual accessibility solution in isolation. Methods: A review of all English language studies in peer-reviewed journals until August 2025 was performed. Eligible studies used an experimental design to compare content learning when using sign language and captions together, versus using sign language or captions on their own. Databases Reviewed: EMBASE, PubMed/MEDLINE, and PsycInfo. Results: A total of four studies met the criteria for inclusion. This limited evidence is insufficient to settle the question of coexistence, yet it underscores the potential of captions for content access in education for DHH students, even when sign language is available. Conclusions: The present article reveals the lack of evidence for or against the coexistence of captions and sign language. With the aim of being constructive for future research, the discussion offers considerations on the attentional demands of simultaneous visual accessibility resources, the diversity of DHH learners, and the impact of current and forthcoming technological advancements. Full article
26 pages, 2496 KB  
Systematic Review
Blockchain and Coffee Supply Chain: Implications for Traceability, Efficiency, and Sustainability: A Systematic Literature Review
by Roberto Ruggieri, Camilla Dioguardi, Luca Silvestri, Marco Ruggeri and Fabrizio D’Ascenzo
Sustainability 2026, 18(3), 1290; https://doi.org/10.3390/su18031290 - 27 Jan 2026
Viewed by 359
Abstract
The high organizational complexity of the Global Coffee Supply Chain (GCSC) poses significant challenges in terms of governance and sustainability, such as asymmetric access to information, deforestation, loss of biodiversity, and overproduction, as well as high price volatility and social issues such as workers’ rights and the unequal distribution of value along the supply chain. In this context, the coffee sector could benefit from the adoption of advanced traceability systems such as blockchain (BC), whose implications in the GCSC remain poorly systematized in the literature. Therefore, this research presents a systematic literature review on the application of BC in the GCSC to analyze its efficiency, traceability, and sustainability implications, as well as to identify the main factors that hinder its full implementation. The review included 42 peer-reviewed studies indexed in Scopus, and the results showed that, in terms of efficiency, BC adoption can help improve coordination and reduce information asymmetries along the supply chain, but only in specific contexts, as these benefits depend largely on organizational and infrastructural conditions rather than on the technical characteristics of the technology. With regard to sustainability, the results sometimes appear contradictory, reflecting profound differences in context. The review highlighted that the main obstacles to the effective adoption of BC in the GCSC stem from a combination of constraints, including centralized governance structures, power asymmetries in data management, infrastructure deficiencies in production contexts, and digital exclusion dynamics. Overall, the study highlighted that BC in the coffee sector cannot be considered a stand-alone solution but should be interpreted as a socio-technical infrastructure whose effectiveness depends on many interconnected factors. Full article
22 pages, 1407 KB  
Review
Artificial Intelligence Drives Advances in Multi-Omics Analysis and Precision Medicine for Sepsis
by Youxie Shen, Peidong Zhang, Jialiu Luo, Shunyao Chen, Shuaipeng Gu, Zhiqiang Lin and Zhaohui Tang
Biomedicines 2026, 14(2), 261; https://doi.org/10.3390/biomedicines14020261 - 23 Jan 2026
Viewed by 353
Abstract
Sepsis is a life-threatening syndrome characterized by marked clinical heterogeneity and complex host–pathogen interactions. Although traditional mechanistic studies have identified key molecular pathways, they remain insufficient to capture the highly dynamic, multifactorial, and systems-level nature of this condition. The advent of high-throughput omics technologies—particularly integrative multi-omics approaches encompassing genomics, transcriptomics, proteomics, and metabolomics—has profoundly reshaped sepsis research by enabling comprehensive profiling of molecular perturbations across biological layers. However, the unprecedented scale, dimensionality, and heterogeneity of multi-omics datasets exceed the analytical capacity of conventional statistical methods, necessitating more advanced computational strategies to derive biologically meaningful and clinically actionable insights. In this context, artificial intelligence (AI) has emerged as a powerful paradigm for decoding the complexity of sepsis. By leveraging machine learning and deep learning algorithms, AI can efficiently process ultra-high-dimensional and heterogeneous multi-omics data, uncover latent molecular patterns, and integrate multilayered biological information into unified predictive frameworks. These capabilities have driven substantial advances in early sepsis detection, molecular subtyping, prognosis prediction, and therapeutic target identification, thereby narrowing the gap between molecular mechanisms and clinical application. As a result, the convergence of AI and multi-omics is redefining sepsis research, shifting the field from descriptive analyses toward predictive, mechanistic, and precision-oriented medicine. Despite these advances, the clinical translation of AI-driven multi-omics approaches in sepsis remains constrained by several challenges, including limited data availability, cohort heterogeneity, restricted interpretability and causal inference, high computational demands, difficulties in integrating static molecular profiles with dynamic clinical data, ethical and governance concerns, and limited generalizability across populations and platforms. Addressing these barriers will require the establishment of standardized, multicenter datasets, the development of explainable and robust AI frameworks, and sustained interdisciplinary collaboration between computational scientists and clinicians. Through these efforts, AI-enabled multi-omics research may progress toward reproducible, interpretable, and equitable clinical implementation. Ultimately, the synergy between artificial intelligence and multi-omics heralds a new era of intelligent discovery and precision medicine in sepsis, with the potential to transform both research paradigms and bedside practice. Full article
(This article belongs to the Section Molecular and Translational Medicine)
14 pages, 272 KB  
Article
Emotional Intelligence, Immediate Auditory Memory, and ICT in Primary Education: A Neuroeducational Approach
by Raquel Muñoz-Pradas, Alejandro Romero-Morales, Antonio Palacios-Rodríguez and Mª Victoria Fernández-Scagliusi
Soc. Sci. 2026, 15(2), 58; https://doi.org/10.3390/socsci15020058 - 23 Jan 2026
Viewed by 165
Abstract
This study examines the relationship between Emotional Intelligence (EI) and Immediate Auditory Memory (IAM) in primary-school students aged 10–12 years. Through a neuroeducational perspective, it explores how emotional competencies, particularly emotional meta-knowledge, interact with cognitive retention processes. Standardized instruments were administered to a sample of 175 students from schools in Southern Spain. The findings indicate a positive association between Emotional Clarity—a key subdimension of EI—and IAM, with Emotional Clarity emerging as a modest predictor of auditory retention. No notable associations were observed for Emotional Attention or Emotional Repair. These results suggest that the ability to understand one’s emotions may subtly facilitate the processing and retention of auditory information. From neuroscientific and technological viewpoints, the study highlights the potential benefits of integrating emotional education and digital tools in the classroom to enhance student well-being and cognitive development, while calling for cautious interpretation due to the multifaceted nature of these variables. Full article
(This article belongs to the Special Issue Educational Technology for a Multimodal Society)
46 pages, 4076 KB  
Review
A Review of AI-Driven Engineering Modelling and Optimization: Methodologies, Applications and Future Directions
by Jian-Ping Li, Nereida Polovina and Savas Konur
Algorithms 2026, 19(2), 93; https://doi.org/10.3390/a19020093 - 23 Jan 2026
Viewed by 188
Abstract
Engineering is undergoing a significant transformation driven by the integration of artificial intelligence (AI) into engineering optimization in design, analysis, and operational efficiency across numerous disciplines. This review synthesizes the current landscape of AI-driven optimization methodologies and their impacts on engineering applications. In the literature, several frameworks for AI-based engineering optimization have been identified: (1) machine learning models are trained as objective and constraint functions for optimization problems; (2) machine learning techniques are used to improve the efficiency of optimization algorithms; (3) neural networks approximate complex simulation models such as finite element analysis (FEA) and computational fluid dynamics (CFD), making it possible to optimize complex engineering systems; and (4) machine learning predicts design parameters/initial solutions that are subsequently optimized. Fundamental AI technologies, such as artificial neural networks and deep learning, are examined in this paper, along with commonly used AI-assisted optimization strategies. Representative applications of AI-driven engineering optimization are surveyed across multiple fields, including mechanical and aerospace engineering, civil engineering, electrical and computer engineering, chemical and materials engineering, energy, and management. These studies demonstrate how AI enables significant improvements in computational modelling, predictive analytics, and generative design while effectively handling complex multi-objective constraints. Despite these advancements, challenges remain in areas such as data quality, model interpretability, and computational cost, particularly in real-time environments. Through a systematic analysis of recent case studies and emerging trends, this paper provides a critical assessment of the state of the art and identifies promising research directions, including physics-informed neural networks, digital twins, and human–AI collaborative optimization frameworks. The findings highlight AI’s potential to redefine engineering optimization paradigms, while emphasizing the need for robust, scalable, and ethically aligned implementations. Full article
(This article belongs to the Special Issue AI-Driven Engineering Optimization)
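As an illustration of framework (3) in the abstract above, the following minimal sketch trains a neural surrogate on samples of an expensive simulation and then optimizes over the surrogate. The toy objective, network size, sample count, and bounds are our own illustrative assumptions, not drawn from the paper.

import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Toy stand-in for an expensive FEA/CFD evaluation (illustrative only).
def expensive_simulation(x):
    return (x[0] - 1.0) ** 2 + np.sin(3.0 * x[1]) + 0.1 * x[1] ** 2

# Sample the design space once and train a cheap surrogate on the results.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))
y = np.array([expensive_simulation(x) for x in X])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Optimize over the surrogate instead of the simulator; the candidate
# optimum should be verified with a few true simulation calls.
result = minimize(
    lambda x: float(surrogate.predict(x.reshape(1, -1))[0]),
    x0=np.zeros(2),
    bounds=[(-2.0, 2.0), (-2.0, 2.0)],
)
print(result.x, expensive_simulation(result.x))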
36 pages, 4575 KB  
Article
A PI-Dual-STGCN Fault Diagnosis Model Based on the SHAP-LLM Joint Explanation Framework
by Zheng Zhao, Shuxia Ye, Liang Qi, Hao Ni, Siyu Fei and Zhe Tong
Sensors 2026, 26(2), 723; https://doi.org/10.3390/s26020723 - 21 Jan 2026
Viewed by 167
Abstract
This paper proposes a PI-Dual-STGCN fault diagnosis model based on a SHAP-LLM joint explanation framework to address issues such as the lack of transparency in the diagnostic process of deep learning models and the weak interpretability of diagnostic results. First, PI-Dual-STGCN enhances the interpretability of graph data by introducing physical constraints and constructs a dual-graph architecture based on physical topology graphs and signal similarity graphs. The experimental results show that the dual-graph complementary architecture enhances diagnostic accuracy to 99.22%. Second, a general-purpose SHAP-LLM explanation framework is designed: Explainable AI (XAI) technology is used to analyze the decision logic of the diagnostic model and generate visual explanations, establishing a hierarchical knowledge base that includes performance metrics, explanation reliability, and fault experience. Retrieval-Augmented Generation (RAG) technology is combined with this knowledge base to integrate model performance and Shapley Additive Explanations (SHAP) reliability assessment through the main report prompt, while the sub-report prompt enables detailed fault analysis and repair decision generation. Finally, experiments demonstrate that this approach avoids the uncertainty of directly using large models for fault diagnosis: all fault diagnosis tasks and core explainability tasks are delegated to more mature deep learning algorithms and XAI technology, and the powerful textual reasoning capabilities of large models are leveraged only to process pre-quantified, fact-based textual information (e.g., model performance metrics, SHAP explanation results). This method enhances diagnostic transparency through XAI-generated visual and quantitative explanations of model decision logic while reducing the risk of large-model hallucinations by restricting large models to reasoning over grounded, fact-based textual content rather than direct fault diagnosis, providing verifiable intelligent decision support for industrial fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
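The abstract does not give construction details, but the dual-graph idea can be sketched as two adjacency matrices: one encoding known physical couplings between sensors, the other derived from signal similarity. The edge list, similarity measure, and threshold below are illustrative assumptions, not the paper's definitions.

import numpy as np

# Illustrative sensor windows: 6 channels x 1024 samples.
rng = np.random.default_rng(1)
signals = rng.normal(size=(6, 1024))

# Physical topology graph: adjacency from known sensor placement/couplings.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
A_phys = np.zeros((6, 6))
for i, j in edges:
    A_phys[i, j] = A_phys[j, i] = 1.0

# Signal-similarity graph: thresholded absolute Pearson correlation.
corr = np.abs(np.corrcoef(signals))
A_sim = (corr > 0.3).astype(float)
np.fill_diagonal(A_sim, 0.0)

# Each adjacency would drive one STGCN branch; the two branches' outputs
# are then fused for the final diagnosis in the dual-graph architecture.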
12 pages, 436 KB  
Systematic Review
Transverse Diagnosis and CBCT Technology: A Systematic Review
by Daniel Diez-Rodrigálvarez, Elena Bonilla-Morente and Alberto-José López-Jiménez
J. Clin. Med. 2026, 15(2), 868; https://doi.org/10.3390/jcm15020868 - 21 Jan 2026
Viewed by 138
Abstract
Background: Diagnosis is the fundamental basis for understanding biomechanics in orthodontic treatment and for accurately designing the treatment plan. Traditionally, the sagittal plane has been the primary focus of assessment; however, it is essential to consider the patient in all three spatial planes. Therefore, it is necessary to explore the transverse plane, which is just as crucial as the sagittal and vertical planes. With current technological advances, it is now possible to obtain three-dimensional images of the patient using cone-beam computed tomography (CBCT), allowing evaluation of all planes in a single diagnostic test. This study aimed to assess the diagnostic methods used for transverse analysis and the usefulness of CBCT for this purpose. Material and Methods: To select the studies for this review, we searched the PubMed, Scopus, and Cochrane databases for publications between 1965 and 2021. Our inclusion criteria targeted studies that evaluated the transverse plane using CBCT or CT. We assessed the level of evidence according to the OCEBM classification and evaluated the risk of bias using the QUADAS-2 scale. Results: After reviewing 535 articles, we selected 16 that met the established criteria. These studies compared various diagnostic methods for transverse analysis and their reproducibility indices. We identified the absence of a gold standard for measuring transverse discrepancies and high variability among diagnostic methods as the main limitations. Conclusions: Based on the available evidence, it can be concluded that dental and skeletal transverse discrepancies can be reliably differentiated using the diagnostic techniques evaluated in this study, particularly through CBCT-based assessment. Therefore, the diagnosis of transverse discrepancies should not be considered unclear, as it can be established using objective and measurable criteria. These findings reinforce the clinical value of current diagnostic tools and highlight the importance of accurate three-dimensional interpretation for informed and effective treatment decision-making. Full article
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
22 pages, 1714 KB  
Article
Integrating Machine-Learning Methods with Importance–Performance Maps to Evaluate Drivers for the Acceptance of New Vaccines: Application to AstraZeneca COVID-19 Vaccine
by Jorge de Andrés-Sánchez, Mar Souto-Romero and Mario Arias-Oliva
AI 2026, 7(1), 34; https://doi.org/10.3390/ai7010034 - 21 Jan 2026
Viewed by 213
Abstract
Background: The acceptance of new vaccines under uncertainty—such as during the COVID-19 pandemic—poses a major public health challenge because efficacy and safety information is still evolving. Methods: We propose an integrative analytical framework that combines a theory-based model of vaccine acceptance—the cognitive–affective–normative (CAN) model—with machine-learning techniques (decision tree regression, random forest, and Extreme Gradient Boosting) and SHapley Additive exPlanations (SHAP) integrated into an importance–performance map (IPM) to prioritize determinants of vaccination intention. Using survey data collected in Spain in September 2020 (N = 600), when the AstraZeneca vaccine had not yet been approved, we examine the roles of perceived efficacy (EF), fear of COVID-19 (FC), fear of the vaccine (FV), and social influence (SI). Results: EF and SI consistently emerged as the most influential determinants across modelling approaches. Ensemble learners (random forest and Extreme Gradient Boosting) achieved stronger out-of-sample predictive performance than the single decision tree, while decision tree regression provided an interpretable, rule-based representation of the main decision pathways. Exploiting the local nature of SHAP values, we also constructed SHAP-based IPMs for the full sample and for the low-acceptance segment, enhancing the policy relevance of the prioritization exercise. Conclusions: By combining theory-driven structural modelling with predictive and explainable machine learning, the proposed framework offers a transparent and replicable tool to support the design of vaccination communication strategies and can be transferred to other settings involving emerging health technologies. Full article
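A minimal sketch of the pipeline described above, using synthetic stand-in survey data; the column names, simulated effect sizes, and model settings are our illustrative assumptions, not the authors' code or data.

import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Synthetic stand-in for the N = 600 survey: four CAN-model predictors
# on 1-7 scales and a simulated vaccination-intention outcome.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "EF": rng.uniform(1, 7, 600),  # perceived efficacy
    "FC": rng.uniform(1, 7, 600),  # fear of COVID-19
    "FV": rng.uniform(1, 7, 600),  # fear of the vaccine
    "SI": rng.uniform(1, 7, 600),  # social influence
})
df["intention"] = 0.5 * df["EF"] + 0.3 * df["SI"] - 0.2 * df["FV"] + rng.normal(0, 0.5, 600)

features = ["EF", "FC", "FV", "SI"]
X_train, X_test, y_train, y_test = train_test_split(df[features], df["intention"], random_state=0)
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X_train, y_train)

# SHAP-based importance-performance map: mean |SHAP| per feature gives the
# importance axis; the mean observed item score gives the performance axis.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
importance = np.abs(shap_values).mean(axis=0)
performance = X_test.mean(axis=0).to_numpy()
for name, imp, perf in zip(features, importance, performance):
    print(f"{name}: importance={imp:.3f}, performance={perf:.2f}")

Because SHAP values are local, the same importance axis can be recomputed on any subsample (e.g., a low-acceptance segment), which is how the abstract's segment-specific maps would be obtained.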
19 pages, 2077 KB  
Article
Evaluating Natural Language Processing and Named Entity Recognition for Bioarchaeological Data Reuse
by Alphaeus Lien-Talks
Heritage 2026, 9(1), 35; https://doi.org/10.3390/heritage9010035 - 19 Jan 2026
Viewed by 215
Abstract
Bioarchaeology continues to generate growing volumes of data from finite and often destructively sampled resources, making data reusability critical according to the FAIR principles (Findable, Accessible, Interoperable, Reusable) and the CARE principles (Collective Benefit, Authority to Control, Responsibility and Ethics). However, much valuable information remains trapped in grey literature, particularly PDF-based reports, limiting discoverability and machine processing. This paper explores Natural Language Processing (NLP) and Named Entity Recognition (NER) techniques to improve access to osteoarchaeological and palaeopathological data in grey literature. The research developed and evaluated the Osteoarchaeological and Palaeopathological Entity Search (OPES), a lightweight prototype system designed to extract relevant terms from PDF documents within the Archaeology Data Service archive. Unlike transformer-based Large Language Models, OPES employs interpretable, computationally efficient, and sustainable NLP methods. A structured user evaluation (n = 83) involving students (42), experts (26), and the general public (15) assessed five success criteria: usefulness, time-saving ability, accessibility, reliability, and likelihood of reuse. Results demonstrate that while limitations remain in reliability and expert engagement, NLP and NER show clear potential to increase the FAIRness of osteoarchaeological datasets. The study emphasises the continued need for robust evaluation methodologies in heritage AI applications as new technologies emerge. Full article
(This article belongs to the Special Issue AI and the Future of Cultural Heritage)
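The paper's OPES implementation is not reproduced here, but a dictionary-based phrase matcher illustrates the kind of lightweight, interpretable NER the abstract contrasts with transformer-based LLMs; the term list and sample text are invented examples, not OPES's gazetteer.

import spacy
from spacy.matcher import PhraseMatcher

# Blank English pipeline plus a case-insensitive phrase gazetteer:
# interpretable and cheap compared with a transformer-based model.
nlp = spacy.blank("en")
terms = ["cribra orbitalia", "enamel hypoplasia", "osteoarthritis", "periostitis"]
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("PALAEOPATHOLOGY", [nlp.make_doc(t) for t in terms])

# Text extracted from a PDF report would be fed in here.
doc = nlp("The left femur shows periostitis; cribra orbitalia was noted in both orbits.")
for _, start, end in matcher(doc):
    print(doc[start:end].text)  # -> "periostitis", "cribra orbitalia"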
12 pages, 2457 KB  
Article
Enhancing Patient-Centered Health Technology Assessment: A Modified Delphi Panel for PICOS Scoping in Spinal Muscular Atrophy
by Emanuele Arcà, Adele Barlassina, Adaeze Eze and Valentina Strammiello
J. Mark. Access Health Policy 2026, 14(1), 6; https://doi.org/10.3390/jmahp14010006 - 19 Jan 2026
Viewed by 269
Abstract
Objectives: This study explores the feasibility and value of integrating structured patient input into the PICOS (Population, Intervention, Comparator, Outcome, Study design) scoping process for Joint Clinical Assessments under the EU Health Technology Assessment Regulation. Methods: A modified Delphi panel, led by a steering committee composed of two clinicians, one patient expert, and one policy expert, engaged 12 individuals representing patient organizations across 12 European Member States to reach consensus on PICOS elements for CAR-T therapy in pediatric spinal muscular atrophy. Results: The Delphi process effectively facilitated PICOS consolidation and consensus among the 12 patient experts representing diverse EU contexts. Through 3 iterative rounds integrating quantitative rankings and qualitative feedback, the panel achieved strong agreement on key outcomes, intervention delivery, and study design elements, with population eligibility and comparator selection showing heterogeneity. Patient engagement was central: participants emphasized inclusive eligibility criteria, shared decision-making, and the inclusion of caregiver perspectives. The integration of qualitative insights allowed nuanced interpretation of dissent, distinguishing between genuine disagreement and framing effects, thereby enhancing transparency and scientific validity. Importantly, the process revealed patient priorities for outcomes, treatment burden, and evidence trade-offs, informing both PICOS refinement and future health technology assessment (HTA) strategies. This structured, participatory approach demonstrates the feasibility and value of incorporating patient voices systematically into early-stage EU HTA, fostering robust, credible, and context-sensitive consensus on complex rare-disease interventions. Conclusions: The study demonstrates the potential of consensus-building methodologies to enhance transparency, reduce heterogeneity, and support patient-centered evidence generation and decision-making in HTA. Full article
(This article belongs to the Collection European Health Technology Assessment (EU HTA))
25 pages, 1436 KB  
Article
Entropy-Augmented Forecasting and Portfolio Construction at the Industry-Group Level: A Causal Machine-Learning Approach Using Gradient-Boosted Decision Trees
by Gil Cohen, Avishay Aiche and Ron Eichel
Entropy 2026, 28(1), 108; https://doi.org/10.3390/e28010108 - 16 Jan 2026
Viewed by 258
Abstract
This paper examines whether information-theoretic complexity measures enhance industry-group return forecasting and portfolio construction within a machine-learning framework. Using daily data for 25 U.S. GICS industry groups spanning more than three decades, we augment gradient-boosted decision tree models with Shannon entropy and fuzzy entropy computed from recent return dynamics. Models are estimated at weekly, monthly, and quarterly horizons using a strictly causal rolling-window design and translated into two economically interpretable allocation rules: a maximum-profit strategy and a minimum-risk strategy. Results show that the top-performing strategy, the weekly maximum-profit model augmented with Shannon entropy, achieves an accumulated return exceeding 30,000%, substantially outperforming both the baseline model and the fuzzy-entropy variant. On monthly and quarterly horizons, entropy and fuzzy entropy generate smaller but robust improvements by maintaining lower volatility and better downside protection. Industry allocations display stable and economically interpretable patterns: profit-oriented strategies concentrate primarily in cyclical and growth-sensitive industries such as semiconductors, automobiles, technology hardware, banks, and energy, while minimum-risk strategies consistently favor defensive industries including utilities, food, beverage and tobacco, real estate, and consumer staples. Overall, the results demonstrate that entropy-based complexity measures improve both economic performance and interpretability, yielding industry-rotation strategies that are simultaneously more profitable, more stable, and more transparent. Full article
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
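A minimal sketch of the Shannon-entropy feature described above, computed over a rolling window of daily returns; the window length, bin count, and placeholder return series are our assumptions, not the paper's settings.

import numpy as np

def shannon_entropy(returns, n_bins=10):
    """Shannon entropy (nats) of the binned empirical return distribution."""
    counts, _ = np.histogram(returns, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins so 0 * log(0) never occurs
    return float(-np.sum(p * np.log(p)))

# Placeholder daily return series; in the paper this would be one of the
# 25 GICS industry-group series.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, 500)

# One entropy value per day from the trailing 60-day window, computed
# strictly causally (only past observations enter each window).
window = 60
entropy_feature = np.array([
    shannon_entropy(daily_returns[t - window:t])
    for t in range(window, len(daily_returns))
])
# entropy_feature then joins the return-based features fed to the
# gradient-boosted decision tree models.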
40 pages, 1968 KB  
Article
Large Model in Low-Altitude Economy: Applications and Challenges
by Jinpeng Hu, Wei Wang, Yuxiao Liu and Jing Zhang
Big Data Cogn. Comput. 2026, 10(1), 33; https://doi.org/10.3390/bdcc10010033 - 16 Jan 2026
Viewed by 545
Abstract
The integration of large models and multimodal foundation models into the low-altitude economy is driving a transformative shift, enabling intelligent, autonomous, and efficient operations for low-altitude vehicles (LAVs). This article provides a comprehensive analysis of the role these large models play within the smart integrated lower airspace system (SILAS), focusing on their applications across the four fundamental networks: facility, information, air route, and service. Our analysis yields several key findings, which pave the way for enhancing the application of large models in the low-altitude economy. By leveraging advanced capabilities in perception, reasoning, and interaction, large models are demonstrated to enhance critical functions such as high-precision remote sensing interpretation, robust meteorological forecasting, reliable visual localization, intelligent path planning, and collaborative multi-agent decision-making. Furthermore, we find that the integration of these models with key enabling technologies, including edge computing, sixth-generation (6G) communication networks, and integrated sensing and communication (ISAC), effectively addresses challenges related to real-time processing, resource constraints, and dynamic operational environments. Significant challenges, including sustainable operation under severe resource limitations, data security, network resilience, and system interoperability, are examined alongside potential solutions. Based on our survey, we discuss future research directions, such as the development of specialized low-altitude models, high-efficiency deployment paradigms, advanced multimodal fusion, and the establishment of trustworthy distributed intelligence frameworks. This survey offers a forward-looking perspective on this rapidly evolving field and underscores the pivotal role of large models in unlocking the full potential of the next-generation low-altitude economy. Full article
32 pages, 483 KB  
Review
The Complexity of Communication in Mammals: From Social and Emotional Mechanisms to Human Influence and Multimodal Applications
by Krzysztof Górski, Stanisław Kondracki and Katarzyna Kępka-Borkowska
Animals 2026, 16(2), 265; https://doi.org/10.3390/ani16020265 - 15 Jan 2026
Viewed by 380
Abstract
Communication in mammals constitutes a complex, multimodal system that integrates visual, acoustic, tactile, and chemical signals whose functions extend beyond simple information transfer to include the regulation of social relationships, coordination of behaviour, and expression of emotional states. This article examines the fundamental mechanisms of communication from biological, neuroethological, and behavioural perspectives, with particular emphasis on domesticated and farmed species. Analysis of sensory signals demonstrates that their perception and interpretation are closely linked to the physiology of sensory organs as well as to social experience and environmental context. In companion animals such as dogs and cats, domestication has significantly modified communicative repertoires, ranging from the development of specialised facial musculature in dogs to adaptive diversification of vocalisations in cats. The neurobiological foundations of communication, including the activity of the amygdala, limbic structures, and mirror-neuron systems, provide evidence for homologous mechanisms of emotion recognition across species. The article also highlights the role of communication in shaping social structures and the influence of husbandry conditions on the behaviour of farm animals. In intensive production environments, acoustic, visual, and chemical signals are often shaped or distorted by crowding, noise, and chronic stress, with direct consequences for welfare. Furthermore, the growing importance of multimodal technologies such as Precision Livestock Farming (PLF) and Animal–Computer Interaction (ACI) is discussed, particularly their role in enabling objective monitoring of emotional states and behaviour and supporting individualised care. Overall, the analysis underscores that communication forms the foundation of social functioning in mammals, and that understanding this complexity is essential for ethology, animal welfare, training practices, and the design of modern technologies facilitating human–animal interaction. Full article
(This article belongs to the Section Human-Animal Interactions, Animal Behaviour and Emotion)