Search Results (410)

Search Parameters:
Keywords = AI risk mitigation

44 pages, 11575 KB  
Article
GeoAI-Driven Land Cover Change Prediction Using Copernicus Earth Observation and Geospatial Data for Law-Compliant Territorial Planning in the Aosta Valley (Italy)
by Tommaso Orusa, Duke Cammareri and Davide Freppaz
Land 2026, 15(4), 533; https://doi.org/10.3390/land15040533 - 25 Mar 2026
Abstract
Mapping land cover, monitoring its changes, and simulating future alterations are essential tasks for sustainable land management. These processes enable accurate assessment of environmental impacts, support informed policymaking, and assist in the planning needed to mitigate risks related to urban expansion, deforestation, and climate change. This study proposes a GeoAI-based framework leveraging Multilayer Perceptron (MLP), a class of Artificial Neural Networks (ANNs), to predict land cover changes in the Aosta Valley region (NW Italy). The model uses Copernicus Earth Observation data, specifically Sentinel-1 and Sentinel-2 imagery, and is trained and validated on land cover maps derived from different time periods previously validated with ground truth data. The objective is to provide a predictive tool capable of simulating potential future landscape configurations, supporting proactive regional land use planning including regulatory constraints under the current land use plan. Model performance is evaluated using accuracy metrics. The land cover classification methodology follows established approaches in the scientific literature, adapted to the specific geomorphological characteristics of the Aosta Valley. To explore and visualize potential future land cover transitions, Sankey and chord diagrams are used in combination with zonal statistics and thematic plots. These provide detailed insights into the intensity, direction, and magnitude of landscape dynamics. Training data were stratified-sampled across the study area, covering a diverse set of land cover classes to ensure robustness and generalization of the MLP model. This GeoAI approach offers a scalable and replicable methodology for anticipating land cover dynamics, identifying vulnerable areas, and informing adaptive environmental management strategies at the regional scale, while simultaneously considering the latest urban planning regulations.
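
A rough illustration of the prediction step this abstract describes: a small MLP mapping per-pixel Earth-observation features at one date to the land cover class of the next period. This is a hedged sketch, not the authors' pipeline; the feature count, the six classes, and the random arrays are placeholder assumptions.

```python
# Minimal sketch of next-period land cover prediction with an MLP.
# All data here are synthetic stand-ins for Sentinel-1/Sentinel-2 features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical stratified pixel sample: 12 features per pixel
# (e.g., S2 reflectances + S1 backscatter + terrain), 6 land cover classes.
X = rng.random((10_000, 12))
y = rng.integers(0, 6, size=10_000)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, mlp.predict(X_te)))
```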

21 pages, 1403 KB  
Review
Integrating GLP-1 Receptor Agonists into Modern Stroke Prevention: Evidence, Mechanisms, and Clinical Consideration—A Narrative Review
by Shayan Khan, William Herbst, Farbod Zahedi Tajrishi, Sonali Notani, Alexander Knight, Zina Jamil and Keith C. Ferdinand
Biomedicines 2026, 14(4), 743; https://doi.org/10.3390/biomedicines14040743 - 24 Mar 2026
Abstract
Stroke remains a major cause of morbidity and mortality worldwide. Although reperfusion therapies and secondary prevention have advanced, the global stroke burden continues to rise, driven by increasing rates of hypertension and diabetes mellitus. Type 2 diabetes (T2DM) increases the risk of acute ischemic stroke (AIS) through mechanisms involving chronic hyperglycemia, endothelial dysfunction, inflammation, and accelerated atherogenesis. In recent years, glucagon-like peptide-1 receptor agonists (GLP-1RAs) have emerged as promising agents for cardiovascular and cerebrovascular risk reduction in patients with T2DM. Beyond their glucose-lowering properties, GLP-1RAs improve blood pressure regulation and lipid metabolism, as mentioned in the 2025 AHA Journal guidelines for the prevention, detection, evaluation, and management of high blood pressure in adults. Emerging preclinical and clinical evidence indicates that GLP-1RAs also provide direct neurovascular protection by stabilizing the blood–brain barrier, modulating neuroinflammation, and promoting neuronal survival. These mechanisms may reduce ischemic injury, improve recovery after stroke, and protect against cognitive decline. Major cardiovascular outcome trials have demonstrated significant reductions in major adverse cardiovascular events and, to a lesser degree, non-fatal stroke among patients receiving GLP-1RAs. This narrative review evaluates current evidence on the neurovascular, cardiometabolic, and anti-inflammatory actions of GLP-1RAs and their potential role in mitigating stroke risk and promoting cerebrovascular health. Additionally, it highlights gaps in the literature, explores clinical and guideline implications, and outlines future directions for integrating GLP-1RA therapy into comprehensive stroke prevention and recovery strategies.
(This article belongs to the Special Issue Diabetes: Comorbidities, Therapeutics and Insights (3rd Edition))

21 pages, 1719 KB  
Review
From Tool to Agent: A Semi-Systematic Review of Human–AI Alignment and a Proposed Tiered Healing Ecosystem for Mental Health
by Anran Ma, Jingying Chen and Zhiyi Yang
Healthcare 2026, 14(6), 820; https://doi.org/10.3390/healthcare14060820 - 23 Mar 2026
Abstract
Background: This study aims to systematically analyze the structural transition of AI in mental health, differentiating between passive tools and autonomous agents, and to propose a governance framework to facilitate responsible integration or mitigate integration risks. Methods: Employing a semi-systematic approach, we screened records from IEEE Xplore, PubMed, and ACM DL, ultimately analyzing 61 included studies. We track the transition from the first paradigm, AI-as-Tool (AI-T), to the second paradigm, AI-as-Agent (AI-A). Results: Early empirical evidence suggests that AI-A systems may assist in fostering preliminary working alliances and demonstrate potential for symptom reduction in controlled settings; however, their efficacy cannot currently be equated with, nor serve as a replacement for, standard low-intensity clinical care. Conclusions: To mitigate these risks, we propose the Tiered Human–AI Healing Ecosystem (THHE) for mental health. This framework utilizes dynamic autonomy modulation—automatically restricting AI agency based on real-time risk markers—to manage transitions between AI-led support and human-led care, promoting clinical safety.
(This article belongs to the Special Issue Artificial Intelligence Chatbots and Mental Health)
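
The "dynamic autonomy modulation" idea lends itself to a compact sketch: AI agency is stepped down as a real-time risk marker rises. The tier names, thresholds, and scoring below are hypothetical illustrations, not the THHE specification.

```python
# Hedged sketch of tiered autonomy gating; thresholds are invented for illustration.
from enum import Enum

class Tier(Enum):
    AI_LED_SUPPORT = 1   # agent converses autonomously
    AI_ASSISTED = 2      # agent responds, human reviews
    HUMAN_LED = 3        # agent restricted, clinician takes over

def modulate_autonomy(risk_score: float) -> Tier:
    """Map a real-time risk marker in [0, 1] (e.g., a crisis-language score) to a tier."""
    if risk_score < 0.3:
        return Tier.AI_LED_SUPPORT
    if risk_score < 0.7:
        return Tier.AI_ASSISTED
    return Tier.HUMAN_LED

for score in (0.1, 0.5, 0.9):
    print(f"risk {score:.1f} -> {modulate_autonomy(score).name}")
```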

27 pages, 3391 KB  
Article
AI-Powered Customer Service in Online Retail: Product-Type Differences, Information Asymmetry, and Seller Interventions
by Shuyuan Bai, Xinquan Wang and Jun Xia
J. Theor. Appl. Electron. Commer. Res. 2026, 21(3), 97; https://doi.org/10.3390/jtaer21030097 - 23 Mar 2026
Abstract
The rapid integration of AI customer service in e-commerce raises an important managerial question: Can AI effectively reduce product-related information asymmetry and improve sales performance across different product types? While prior research highlights both the uncertainty-reducing benefits of information and the risks of algorithm aversion, little is known about how AI customer service performs under varying levels of product uncertainty and information asymmetry. Using a difference-in-differences design with fixed effects across time, products, shops, and categories, we examine the impact of replacing customer service with AI on sales outcomes, distinguishing between search and experience goods. We further test how the depth and breadth of product information moderate these effects. Our findings indicate that AI customer service reduces sales for experience goods but not for search goods, unless accompanied by sufficient informational depth and breadth. We argue that this effect arises because AI technically inherits and amplifies the information asymmetry inherent in experience products, while greater informational depth and breadth of product information can mitigate this amplified asymmetry. Additionally, we find that this mitigating effect is more pronounced among products with high return rates. These findings clarify when AI-generated information mitigates product uncertainty and when it exacerbates it. Our results provide actionable guidance for firms seeking to deploy AI strategically in digital commerce environments.
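
The two-way fixed-effects difference-in-differences design described above can be sketched in a few lines. The sketch uses simulated data and invented variable names (treated, post, experience_good); it shows the model family, not the paper's actual estimation.

```python
# Hedged sketch: TWFE DiD of log sales on AI customer-service adoption,
# with a triple interaction for experience goods. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_products, n_periods = 200, 12
df = pd.DataFrame(
    [(p, t) for p in range(n_products) for t in range(n_periods)],
    columns=["product", "period"],
)
df["treated"] = (df["product"] < 100).astype(int)   # shops switching to AI service
df["post"] = (df["period"] >= 6).astype(int)        # after the switch
df["experience_good"] = (df["product"] % 2)
# Built-in "true" effect: AI service lowers sales for treated experience goods.
df["log_sales"] = (
    rng.normal(5, 0.2, len(df))
    - 0.3 * df.treated * df.post * df.experience_good
)

m = smf.ols(
    "log_sales ~ treated:post + treated:post:experience_good"
    " + C(product) + C(period)",
    data=df,
).fit()
print(m.params.filter(like="treated"))  # DiD and triple-difference estimates
```

Note that the main effects of treated and post are absorbed by the product and period fixed effects, so only the interactions enter the formula.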

15 pages, 453 KB  
Article
Healthcare Providers’ Perspectives on Generative Artificial Intelligence (GenAI) Adoption, Adaptation, Assimilation, and Use in the United States
by Obinna O. Oleribe, Marissa Brash, Adati Tarfa, Ricardo Izurieta and Simon D. Taylor-Robinson
Healthcare 2026, 14(6), 775; https://doi.org/10.3390/healthcare14060775 - 19 Mar 2026
Abstract
Background: Generative artificial intelligence (GenAI) is rapidly permeating healthcare; yet, U.S. clinicians still report mixed feelings about its reliability, impact on workflow, and ethical implications. Current data on provider sentiment are needed to guide safe, patient-centered AI implementation in healthcare. Objective: This study aimed to assess U.S. healthcare providers’ perceptions of generative AI adoption, perceived usefulness, training needs, barriers, and strategies for safe integration. Methods: A nationwide, IRB-approved, cross-sectional survey was administered to healthcare professionals using Qualtrics. A convenience sample of clinicians was recruited via professional listservs and e-mail invitations. The 20-page questionnaire captured demographics, GenAI exposure, organizational adoption status, perceived usefulness (5-point scale), barriers, and mitigation strategies. SPSS v27 and Microsoft Excel were used for statistical analysis. Results: Of 130 respondents, 109 completed the core survey (completion rate 83.8%). Participants were 38.5% physicians, 16.5% nurses, 12.8% allied professionals, and 32.2% other providers; 54.2% were women, and 64.8% were ≥50 years. Overall, 86.9% agreed that GenAI is useful in current patient care, rising to 92.9% when asked about future usefulness. Only 42.4% had received formal GenAI training, and just 23.2% reported that their organization had begun adopting AI. The top perceived benefits were improved documentation/clerking (57.0%) and error reduction (49.4%). Dominant barriers included limited AI knowledge (24.7%) and fear of job loss (16.9%). Despite concerns, 72% expressed willingness to support broader GenAI adoption, favoring human oversight (67.1%) and staff training (60.8%) as key safeguards. There were statistically significant findings in perceived AI usefulness by gender (χ2 = 29.2; p < 0.001); organizational adoption of AI (χ2 = 31.6; p = 0.047) and where AI is most useful (χ2 = 101.1; p < 0.001) by qualifications; and support for AI adoption by age (χ2 = 18.0; p = 0.02). Conclusions: U.S. clinicians in our survey viewed GenAI as useful but reported limited training and organizational infrastructure needed for confident use while also expressing concerns regarding data privacy and ethical risk. Education programs and transparent, provider-led implementation strategies may accelerate responsible GenAI assimilation while addressing ethical and workforce concerns. Also, health administrators should use the efficiency gains to improve provider–patient relationships and clinicians’ work–life balance while reducing clinician burnout rates.
(This article belongs to the Section Artificial Intelligence in Healthcare)
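
The group comparisons reported here are chi-square tests of independence. A minimal sketch follows, with a wholly hypothetical gender-by-usefulness contingency table; only the test family matches the abstract, not the data.

```python
# Hedged sketch of the reported test family; the counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: women, men. Columns: usefulness rating (low, neutral, high).
table = np.array([
    [ 4, 10, 45],
    [12, 18, 20],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
```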

12 pages, 428 KB  
Article
Correlation Between Dosimetric Parameters and Hematologic Toxicity in Cervical Cancer Patients Undergoing Intensity-Modulated Pelvic Radiotherapy
by Shuang Zhao, Xi Yang, Lu Zhang, Duan Yang, Xuejiao Yang, Rui Wang, Shuangzheng Jia, Jusheng An and Manni Huang
Cancers 2026, 18(6), 992; https://doi.org/10.3390/cancers18060992 - 19 Mar 2026
Abstract
Objective: This study aimed to elucidate the association between hematologic toxicity (HT) and pelvic bone marrow (PBM) dosimetric parameters in patients with cervical cancer (CC) undergoing radiotherapy (RT) combined with artificial intelligence (AI)-assisted organ at risk (OAR) delineation (Software Copyright Registration Number 2023SR0150365). Accurate delineation of bone marrow (BM) regions and analysis of radiation doses may provide a theoretical foundation for the application of AI in predicting HT. Methods: This retrospective study included 141 patients with CC who received chemotherapy (sequential or concurrent) and/or pelvic volumetric modulated arc therapy (VMAT) at the Department of Gynecology, Cancer Hospital of the Chinese Academy of Medical Sciences, between March 2019 and December 2019. PBM and its subregions (ilium, lower pelvis, lumbosacral spine, and femoral heads) were delineated using AI-based automatic segmentation of CT images. The volumes receiving 10–40 Gy (V10, V20, V30, V40) were calculated, and baseline clinical characteristics were assessed. HT endpoints included grade ≥ 2 (HT2+) and grade ≥ 3 (HT3+) leukopenia, neutropenia, anemia, or thrombocytopenia. Associations between dosimetric parameters and HT were evaluated using logistic regression models. Results: Of the 141 patients, 107 (75.8%) developed HT2+ and 33 (23.4%) developed HT3+. Univariate analysis showed that chemotherapy and age were correlated with HT2+. Multivariate analysis identified femoral head V30, femoral head V40, and chemotherapy as independent predictors of HT3+. Conclusions: This study highlights the potential of AI-based OAR delineation for assessing PBM dosimetric parameters in patients with CC. Optimizing RT to minimize BM dose and volume may mitigate HT and enhance treatment tolerance. In our cohort, receipt of combined neoadjuvant and concurrent chemotherapy (NACT+CCRT) was a stronger predictor of HT than most BM dosimetric parameters, suggesting that the systemic effect of chemotherapy may dominate the hematologic toxicity profile in this setting. Consequently, patients receiving this combined modality treatment are at particularly high risk for HT and warrant close hematologic monitoring.
(This article belongs to the Section Methods and Technologies Development)
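
The association analysis described above (logistic regression of toxicity endpoints on dosimetric parameters) can be sketched as follows. All values are synthetic; the variable names v30, v40, and chemo are illustrative stand-ins, not the study's dataset.

```python
# Hedged sketch: logistic regression of grade >=3 hematologic toxicity
# on femoral head V30/V40 and chemotherapy. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 141
df = pd.DataFrame({
    "v30": rng.uniform(10, 60, n),    # % of volume receiving >= 30 Gy
    "v40": rng.uniform(5, 40, n),
    "chemo": rng.integers(0, 2, n),   # chemotherapy received (0/1)
})
logit_p = -4 + 0.04 * df.v30 + 0.05 * df.v40 + 1.2 * df.chemo
df["ht3"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

m = smf.logit("ht3 ~ v30 + v40 + chemo", data=df).fit(disp=0)
print(np.exp(m.params))  # odds ratios for each predictor
```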

31 pages, 1934 KB  
Review
Artificial Intelligence for Detecting Electoral Disinformation on Social Media: Models, Datasets, and Evaluation
by Félix Díaz, Nhell Cerna, Rafael Liza and Bryan Motta
Information 2026, 17(3), 292; https://doi.org/10.3390/info17030292 - 17 Mar 2026
Abstract
During elections, information manipulation on social media has accelerated the use of artificial intelligence, yet the evidence is difficult to interpret without an integrated view of methods, data, and evaluation. We mapped 557 English-language journal articles from Scopus and Web of Science, combining performance indicators, science mapping, and a focused full-text synthesis of highly cited papers. The literature grows sharply after 2019, peaks in 2025, and shows geographically uneven production, with collaboration structured around a small set of hubs. The thematic structure suggests that, during the pandemic era, infodemic-related research served as a catalyst, intensifying scientific attention to fake news and disinformation and expanding the associated detection and monitoring agendas. In addition, socio-political harm constructs such as hate speech, extremism, and polarization appear as recurrent and structurally central targets, highlighting that election-relevant work often extends beyond veracity assessment toward monitoring discourse risks. Blockchain also emerges as a novel and adjacent integrity theme, aligned with authenticity and provenance-oriented mitigation rather than mainstream detection pipelines. AI for electoral disinformation is not reducible to veracity classification, as influential studies also target automation and coordinated behavior, verification support, diffusion analysis, and estimation frameworks that focus on exposure and impact. Evaluation remains heterogeneous and is often shaped by benchmark settings, making high accuracy values hard to compare and potentially misleading when labeling quality, topic leakage, or context shift are not characterized. Overall, the findings motivate evaluation protocols that align operational objectives with modeling roles and explicitly address robustness to temporal and platform changes, asymmetric error costs during election windows, and representativeness across electoral contexts and languages, while also guiding future work on emerging integrity challenges and governance-relevant deployment settings.
(This article belongs to the Section Artificial Intelligence)

30 pages, 1715 KB  
Article
AI-Based Model for Maintaining Good Healthcare Quality Against Cybersecurity Risks
by Abdullah M. Algarni and Vijey Thayananthan
Systems 2026, 14(3), 315; https://doi.org/10.3390/systems14030315 - 17 Mar 2026
Abstract
Artificial Intelligence (AI) has strong potential in health monitoring systems to support high-quality healthcare while mitigating cybersecurity risks. AI-based solutions for health and wellness applications, particularly for cardiovascular disease monitoring, are being explored to address complex healthcare challenges and improve patient outcomes. The integration of quantum and AI-based techniques is also gaining attention for enhancing future healthcare applications and communication technologies. Purpose: The primary objective is to improve cardiac care by accurately predicting symptoms and mitigating cyber-risks that threaten digital health integrity. By leveraging Integrated Quantum Networks (IQNs) and AI-driven protocols, this research aims to reduce the prevalence/incidence of non-communicable diseases by 50% by 2035 through proactive prevention and superior treatment management. Method: The framework utilizes AI-based techniques, AI-quantum-enhanced sensors, and IQNs to build a secure, proactive monitoring system. This theoretical framework integrates high-precision data collection with robust risk management systems to protect against vulnerabilities in digital health infrastructure. These components work in tandem to ensure that sensitive medical data remain resilient against emerging cyber threats. Anticipated Results and Conclusions: The system is expected to improve cybersecurity resilience, system performance, and energy efficiency (EE), supporting the development of secure and advanced future healthcare applications.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

18 pages, 10950 KB  
Article
A Predictable-Image Solution for Copyright Protection Based on Layer-Wise Relevance Propagation
by Yougyung Park, Sieun Kim and Inwhee Joe
Appl. Sci. 2026, 16(6), 2864; https://doi.org/10.3390/app16062864 - 16 Mar 2026
Abstract
As artificial intelligence (AI) systems are increasingly deployed in real-world applications, concerns regarding the unauthorized use of copyrighted images during model training have become more pronounced. In particular, both generative and discriminative models may implicitly internalize distinctive visual patterns from copyrighted data, leading to potential ethical and legal risks even after data removal. In this study, we propose a practical copyright protection framework, termed the Predictable-Image Solution (PIS), which aims to disrupt the learning of copyrighted visual features during the training process. PIS leverages Layer-wise Relevance Propagation (LRP) to identify image regions that contribute positively to a model’s prediction and selectively modifies these regions using non-copyrighted visual substitutes, such as textures or benign image patterns. By targeting semantically influential regions rather than applying global perturbations, the proposed approach effectively interferes with feature extraction while preserving the perceptual quality and overall visual structure of the original image. Extensive experiments conducted on multiple pre-trained image classification models demonstrate that PIS consistently degrades classification performance on protected images, while maintaining high visual similarity as measured by perceptual metrics. These results indicate that PIS offers an effective, model-agnostic, and visually unobtrusive solution for mitigating unauthorized exploitation of copyrighted images in practical AI training scenarios.
(This article belongs to the Section Computing and Artificial Intelligence)
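
The core mechanism (attribute relevance to input regions, then overwrite the most influential ones with benign content) can be sketched compactly. For brevity, gradient-times-input saliency stands in for full Layer-wise Relevance Propagation, and a tiny untrained CNN stands in for the pre-trained classifiers; the 10% threshold and the random texture are invented assumptions.

```python
# Hedged sketch of relevance-guided region replacement (PIS-style).
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in classifier, untrained
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # hypothetical image
model(image).max().backward()          # gradient of the top logit w.r.t. pixels

# Gradient x input as a simple positive-relevance proxy (LRP stand-in).
relevance = (image.detach() * image.grad).sum(dim=1, keepdim=True)

# Overwrite the ~10% most relevant pixels with a non-copyrighted texture.
threshold = relevance.flatten().quantile(0.90)
texture = torch.rand(1, 3, 32, 32)
mask = (relevance > threshold).expand(-1, 3, -1, -1)
protected = torch.where(mask, texture, image.detach())
print(protected.shape)                 # same shape, selectively patched
```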

37 pages, 984 KB  
Article
Co-Explainers: A Position on Interactive XAI for Human–AI Collaboration as a Harm-Mitigation Infrastructure
by Francisco Herrera, Salvador García, María José del Jesus, Luciano Sánchez and Marcos López de Prado
Mach. Learn. Knowl. Extr. 2026, 8(3), 69; https://doi.org/10.3390/make8030069 - 10 Mar 2026
Abstract
Human–AI collaboration (HAIC) increasingly mediates high-risk decisions in public and private sectors, yet many documented AI harms arise not only from model error but from breakdowns in joint human–AI work: miscalibrated reliance, impaired contestability, misallocated agency, and governance opacity. Conventional explainable AI (XAI) approaches, often delivered as static one-shot artifacts, are poorly matched to these sociotechnical dynamics. This paper is a position paper arguing that explainability should be reframed as a harm-mitigation infrastructure for HAIC: an interactive, iterative capability that supports ongoing sensemaking, safe handoffs of control, governance stakeholder roles and institutional accountability. We introduce co-explainers as a conceptual framework for interactive XAI, in which explanations are co-produced through structured dialogue, feedback, and governance-aware escalation (explain → feedback → update → govern). To ground this position, we synthesize prior harm taxonomies into six HAIC-oriented harm clusters and use them as heuristic design lenses to derive cluster-specific explainability requirements, including uncertainty communication, provenance and logging, contrastive “why/why-not” and counterfactual querying, role-sensitive justification, and recourse-oriented interaction protocols. We emphasize that co-explainers do not “mitigate” sociotechnical harms in isolation; rather, they provide an interface layer that makes harms more detectable, decisions more contestable, and accountability handoffs more operational under realistic constraints such as sealed models, dynamic updates, and value pluralism. We conclude with an agenda for evaluating co-explainers and aligning interactive XAI with governance frameworks in real-world HAIC deployments.
(This article belongs to the Section Learning)
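
The explain → feedback → update → govern loop named in the abstract can be made concrete as a minimal dialogue protocol. Every class and method name below is hypothetical; the sketch only shows the control flow, with a simple log standing in for the provenance requirement.

```python
# Hedged sketch of a co-explainer interaction loop (invented API).
from dataclasses import dataclass, field

@dataclass
class CoExplainer:
    log: list = field(default_factory=list)  # provenance / accountability trail

    def explain(self, decision: str) -> str:
        return f"Decision '{decision}': top factors, with uncertainty bounds."

    def update(self, question: str) -> str:
        return f"Contrastive answer to: {question}"  # why / why-not / counterfactual

    def govern(self, unresolved: bool) -> str:
        return "escalate to accountable human" if unresolved else "close case"

    def dialogue(self, decision: str, questions: list, unresolved: bool) -> str:
        self.log.append(self.explain(decision))
        for q in questions:                   # feedback: stakeholder queries
            self.log.append(self.update(q))
        return self.govern(unresolved)        # governance-aware escalation

print(CoExplainer().dialogue("deny claim", ["why not approve?"], unresolved=True))
```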

29 pages, 2895 KB  
Article
From Virtual Substitution to Phygital Extension: A Strategic Framework for the Tourism Metaverse in Thailand
by Thawatphong Phithak, Kanokwan Rattanakhiriphan and Sorachai Kamollimsakul
Tour. Hosp. 2026, 7(3), 77; https://doi.org/10.3390/tourhosp7030077 - 9 Mar 2026
Abstract
The global tourism industry is entering a phygital era, prompting renewed examination of the metaverse as an extension rather than a substitute for physical travel. This study investigates how metaverse technology operates across the Phygital Customer Journey within the Thai tourism context. Drawing on in-depth interviews with 12 experts from academic, multimedia development, and policy sectors, the data were analyzed using reflexive thematic analysis. The findings indicate that the metaverse assumes its most structurally significant role during the pre-trip phase. Immersive previews were described as recalibrating perceived risk by enabling advance assessment of accessibility, spatial configuration, and environmental conditions prior to commitment. This staged risk-calibration process operates through three interrelated mechanisms: Sensory Bridging, Psychological Risk Mitigation, and Physical Feasibility Testing, which are particularly relevant for secondary tourism destinations and demographic aging contexts. Building on these patterns, the study advances a four-layer architectural framework as an interpretive synthesis. Within this framework, the metaverse functions as a transactional and coordination layer that integrates booking systems, AI-enabled services, and real-time infrastructural data supported by IoT and Blockchain. The analysis further suggests that the state may assume an enabling role as an Infrastructure Architect through the development of a National Digital Highway and regulatory sandbox arrangements for SMEs. Sustainable adoption depends on hardware-agnostic, mobile-centric accessibility to mitigate digital exclusion. While grounded in Thailand, the framework offers analytical relevance for destinations facing comparable infrastructural and demographic conditions.

22 pages, 1913 KB  
Article
A Novel AI-Based Trading Framework for Futures Markets: Evidence from the MTX Case Study
by Yu-Heng Hsieh, Chiung-Han Lai and Shyan-Ming Yuan
Int. J. Financial Stud. 2026, 14(3), 67; https://doi.org/10.3390/ijfs14030067 - 4 Mar 2026
Abstract
This study develops a novel AI-based trading framework designed to consistently generate profits across cyclical bullish and bearish futures markets. Unlike conventional strategies that rely on static rules or a single predictive model, the proposed framework introduces a dual-agent deep reinforcement learning (DRL) architecture, where one agent specializes in bullish conditions and the other in bearish conditions, while a trading decision selector dynamically predicts market regimes and allocates execution accordingly. This design enables the system to adapt to regime shifts and mitigate risks arising from market volatility and extreme events. Using Mini Taiwan Stock Exchange Index Futures (MTX) as a case study, a four-year historical backtest is conducted covering multiple disruptive periods, including the tax adjustment and the Russia–Ukraine conflict. The empirical results show that, under a monthly capital reset and loss-compensation rule with a fixed investment of TWD 500,000 per month, the proposed framework achieves an average cumulative return of 2240%, an annualized return of 109%, and a Sharpe ratio of 0.31, with the cumulative ROI exceeding twice the MTX index growth over the same period. Although the Sharpe ratio remains moderate, this outcome reflects the framework’s emphasis on directional trading and absolute return maximization, where profitable trades outweigh intermittent losses despite higher short-term volatility. These findings suggest that adaptive, regime-aware DRL architectures are particularly effective for futures trading in markets characterized by frequent trend reversals, offering both methodological innovation and practical applicability under realistic market conditions, with strong returns achieved at a moderate risk-adjusted level.
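
The dual-agent architecture reduces to a simple dispatch pattern: a regime selector routes each decision to the bullish or bearish specialist. The sketch below uses placeholder policies and a trivial selector; the real system would substitute trained DRL agents and a learned regime classifier.

```python
# Hedged sketch of regime-gated dual-agent trading (placeholder policies).
import numpy as np

def bull_agent(features: np.ndarray) -> int:
    return +1                          # long bias in bullish regimes

def bear_agent(features: np.ndarray) -> int:
    return -1                          # short bias in bearish regimes

def regime_selector(features: np.ndarray) -> str:
    # Placeholder: a trained classifier would predict the regime here.
    return "bull" if features.mean() > 0 else "bear"

def trade(features: np.ndarray) -> int:
    agent = bull_agent if regime_selector(features) == "bull" else bear_agent
    return agent(features)             # +1 = long one contract, -1 = short

rng = np.random.default_rng(3)
daily_features = rng.normal(0, 1, size=(5, 8))   # hypothetical market features
print([trade(f) for f in daily_features])
```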

15 pages, 1486 KB  
Review
Challenges of Space Debris Detection, Tracking, and Monitoring in Near-Earth Orbit: Overview of Current Status and Mitigation Strategies
by Motti Haridim, Assaf Shaked, Niv Cohen and Jacob Gavan
Information 2026, 17(3), 253; https://doi.org/10.3390/info17030253 - 3 Mar 2026
Abstract
The accumulation of space debris in near-Earth orbit, particularly in Low Earth Orbit (LEO), poses an increasing threat to satellite operations, communication infrastructures, and long-term space sustainability. As modern constellations expand and incorporate advanced satellite technologies, including sensing and wireless communications, artificial intelligence-of-things (AIoT)-enabled payloads, and edge computing for on-orbit data processing, the risk profile grows. This paper reviews the current debris environment and existing sensing and monitoring techniques, highlights major collision events and deliberate debris-generating activities, and analyzes the role of both governmental and commercial satellite constellations in exacerbating and mitigating the challenges. Emerging space surveillance and tracking (SST) techniques, leveraging radar, optical sensors, and interferometric SAR for enhanced intelligence, surveillance, and reconnaissance (ISR), are highlighted alongside software-defined networking (SDN) approaches and cloud communication technology that enable coordinated debris-avoidance maneuvers. Key international regulatory frameworks, tracking architectures, and mitigation measures, including alignment with ISO 24113 standards, advanced TT&C capabilities, and evolving active debris removal technologies, are examined. The study emphasizes the necessity of a global, interoperable ecosystem that integrates AI/ML (artificial intelligence and machine learning)-driven situational awareness, secure SATCOM links with AJ/LPI/LPD (anti-jamming/low probability of interception/low probability of detection) characteristics, and collaborative protocols among space agencies, commercial operators, and regulatory bodies to ensure the sustainable use of orbital space for future generations.
(This article belongs to the Special Issue Sensing and Wireless Communications)

42 pages, 2328 KB  
Review
Artificial Neural Network Applications in Supply Chain Management: A Literature Review and Classification
by Iman Ghalehkhondabi
Appl. Syst. Innov. 2026, 9(3), 55; https://doi.org/10.3390/asi9030055 - 28 Feb 2026
Abstract
Supply Chain Management (SCM) has received considerable attention from the industrial community in recent decades. SCM continues to be an interesting and relevant research topic in many business areas such as revealing supply chain integration benefits, uncertainty and risk mitigation methods, decision-making and optimization methodologies, etc. In current supply chain management, huge volumes of data are being developed each second, and emerging technologies such as Radio Frequency Identification (RFID) have amplified the availability of online data. Using Artificial Intelligence (AI) methods that go beyond simply using the huge volume of online data enables Supply Chain (SC) managers to monitor everything in a timely fashion. There are several aspects of an SC to which AI, and specifically Artificial Neural Networks (ANNs), can be applied to help managers monitor and optimize operations. This study aims to review state-of-the-art ANNs and Deep Neural Networks (DNNs) in the field of supply chain management. One hundred high-quality research studies that applied ANNs in supply chain management are reviewed and categorized into four classes: performance optimization, supplier selection, forecasting, and inventory management studies. Our study shows that there is a significant possibility that we could use ANNs and DNNs to better manage supply chains. Across the reviewed studies, neural networks are frequently reported to improve predictive performance and support monitoring/control in complex, nonlinear supply chain settings, often complementing traditional operations research approaches. Finally, the limitations of ANN models and the possibilities for future studies are presented at the end of this study.
(This article belongs to the Section Industrial and Manufacturing Engineering)
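
Of the four application classes, forecasting is the easiest to illustrate. A hedged sketch under invented assumptions (a synthetic weekly demand series, eight lags as inputs):

```python
# Hedged sketch: neural-network demand forecasting from lagged observations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(300)
demand = 100 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, t.size)

lags = 8                               # forecast from the previous 8 weeks
X = np.stack([demand[i:i + lags] for i in range(t.size - lags)])
y = demand[lags:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-20], y[:-20])            # hold out the last 20 weeks
print("held-out MAE:", np.abs(model.predict(X[-20:]) - y[-20:]).mean())
```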

20 pages, 23952 KB  
Article
Deepfake Speech Detection Using Perceptual Pathological Features Related to Timbral Attributes and Deep Learning
by Anuwat Chaiwongyen, Khalid Zaman, Kai Li, Suradej Duangpummet, Jessada Karnjana, Waree Kongprawechnon and Masashi Unoki
Appl. Sci. 2026, 16(4), 2077; https://doi.org/10.3390/app16042077 - 20 Feb 2026
Abstract
The detection of deepfake speech has become a significant research area due to rapid advancements in generative AI for speech synthesis. These technologies pose significant security risks in applications such as biometric authentication, voice-controlled systems, and automatic speaker verification (ASV) systems. Therefore, enhancing the detection capabilities of such applications is essential to mitigate potential threats. This study investigates perceptual speech-pathological features, which are commonly used to evaluate the unnaturalness of voice disorders in clinical settings, as potential indicators for detecting deepfake speech. Specifically, the timbral attributes of hardness, depth, brightness, roughness, sharpness, warmth, boominess, and reverberation are examined. The analysis reveals that these attributes provide meaningful distinctions between genuine and synthetic speech. Furthermore, the detection performance is enhanced by extending the dimensional representation of timbral attributes, enabling a more comprehensive characterization of the speech signal. This paper proposes a method that combines two models: one utilizing the different dimensions of speech-pathological features with a deep neural network (DNN), and another employing a gammatone filterbank model that simulates the auditory processing mechanism of the human cochlea with ResNet-18 architecture, improving deepfake speech detection. The proposed method is evaluated on the Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof) 2019 dataset. Experimental results demonstrate that the proposed approach outperforms baseline models in terms of Equal Error Rate (EER), achieving an EER of 5.93%.
(This article belongs to the Special Issue AI in Audio Analysis: Spectrogram-Based Recognition)
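
The Equal Error Rate used as the headline metric is the operating point where the false-acceptance and false-rejection rates coincide, and combining the two detectors is a score-level fusion. A hedged sketch with synthetic scores and an assumed equal-weight fusion rule (the paper's actual combination is not reproduced here):

```python
# Hedged sketch: equal-weight score fusion and EER computation.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
labels = np.r_[np.ones(500), np.zeros(500)]      # 1 = genuine, 0 = deepfake
dnn_scores = np.r_[rng.normal(1.0, 1, 500), rng.normal(-1.0, 1, 500)]
resnet_scores = np.r_[rng.normal(1.2, 1, 500), rng.normal(-0.8, 1, 500)]

fused = 0.5 * dnn_scores + 0.5 * resnet_scores   # assumed fusion rule

fpr, tpr, _ = roc_curve(labels, fused)
fnr = 1 - tpr
eer = fpr[np.argmin(np.abs(fpr - fnr))]          # FPR == FNR crossing
print(f"EER = {eer:.2%}")
```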
