Leveraging Simulation and Deep Learning for Enhanced Health and Safety

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: 28 February 2026

Special Issue Editors

Dr. Pingfan Hu
Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, TX 77843, USA
Interests: deep learning; simulation; computer vision; statistical analysis; property prediction; safety

Dr. He Wen
Bailey College of Engineering & Technology, Indiana State University, Terre Haute, IN 47809, USA
Interests: AI safety; safety with explainable AI; risk simulation; causal inference

Special Issue Information

Dear Colleagues,

The Special Issue "Leveraging Simulation and Deep Learning for Enhanced Health and Safety" aims to gather cutting-edge research that combines advanced artificial intelligence techniques with high-fidelity simulation to solve safety-critical problems across domains such as property prediction, public health, agriculture, cyber-security, and autonomous transportation.

Focus and Scope
This Special Issue centers on AI methods—including deep learning, computer vision, causal inference, statistical modeling, and risk-aware reinforcement learning—that are explicitly engineered for health-and-safety-related tasks. We welcome theoretical contributions, empirical studies, and system demonstrations that achieve the following:

  1. Fuse simulation with real-world data to accelerate model training, stress-test edge cases, or generate rare-event scenarios (e.g., pathogen spread models, cyber-attack emulation, corner-case driving incidents); a minimal illustrative sketch follows this list.
  2. Advance AI safety and interpretability, including explainable AI (XAI) frameworks that expose causal pathways or quantify uncertainty in safety-critical recommendations.
  3. Translate insights into practice, for example, crop-health monitoring that reduces pesticide load, AI-driven anomaly detection that hardens critical infrastructure, or risk-aware autopilot controllers that meet regulatory standards.
  4. Assess the societal, ethical, and regulatory implications of deploying simulation-augmented AI in sensitive environments.
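
As a minimal illustration of topic 1 above, the sketch below fuses hypothetical simulator output with a scarce real-world dataset so that rare safety events are adequately represented during training. All arrays, event rates, and dimensions here are invented placeholders for exposition, not a prescribed pipeline.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical real-world data: mostly nominal operation, rare hazards are scarce.
real_features = rng.normal(size=(1000, 8))
real_labels = rng.binomial(1, 0.01, size=1000)   # ~1% rare-event rate

# Hypothetical simulator output: synthetic rare-event scenarios only
# (e.g., simulated corner-case driving incidents or cyber-attack traces).
sim_features = rng.normal(loc=2.0, size=(200, 8))
sim_labels = np.ones(200, dtype=int)

# Fuse: augment the real training set with simulated rare events so a
# downstream model sees enough positives to learn a usable decision boundary.
train_X = np.vstack([real_features, sim_features])
train_y = np.concatenate([real_labels, sim_labels])

print(f"Rare-event rate before fusion: {real_labels.mean():.3f}")
print(f"Rare-event rate after fusion:  {train_y.mean():.3f}")
```

In practice, the mixing ratio of simulated to real samples is itself a safety-relevant design choice, since over-weighting synthetic scenarios can bias the model away from real-world conditions.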

Papers may span novel architectures, benchmark datasets, evaluation protocols, or interdisciplinary case studies and should articulate clear pathways from algorithmic innovation to measurable safety or health gains.

Purpose and Contribution to the Literature
While prior work has advanced either simulation techniques (digital twins, agent-based models) or AI algorithms (deep neural networks, causal discovery) in isolation, few studies examine their synergy for safety augmentation across sectors. The existing literature tends to be siloed—papers on autonomous driving seldom cross-reference medical AI safety frameworks, and agricultural decision support rarely leverages cyber-security risk analytics. This Special Issue therefore aims to achieve the following:

  • Bridge disciplines by showcasing transferable methodologies—e.g., causality-guided computer vision equally applicable to crop disease detection and industrial hazard monitoring.
  • Extend AI safety discourse beyond theoretical alignment by grounding it in domain-specific risk simulations and real-world validations.
  • Fill empirical gaps through curated studies that quantify safety benefits under realistic constraints, supplementing predominantly benchmark-driven AI literature.
  • Set a research agenda for integrated simulation–AI pipelines that withstand regulatory scrutiny, improving trust in AI and accelerating its adoption in mission-critical contexts.

By situating diverse yet convergent work in a single forum, this Special Issue aims to catalyze the establishment of a new cross-sector community dedicated to AI systems that are not just intelligent, but also verifiably safe, robust, and societally beneficial.

Dr. Pingfan Hu
Dr. He Wen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • simulation-based modeling
  • AI safety
  • explainable AI (XAI)
  • risk simulation
  • causal inference
  • computer vision
  • digital twins
  • safety-critical systems
  • property prediction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (1 paper)


Research

20 pages
Article
Architectural Gaps in Generative AI: Quantifying Cognitive Risks for Safety Applications
by He Wen and Pingfan Hu
AI 2025, 6(7), 138; https://doi.org/10.3390/ai6070138 - 25 Jun 2025
Abstract
Background: The rapid integration of generative AIs, such as ChatGPT, into industrial, process, and construction management introduces both operational advantages and emerging cognitive risks. While these models support task automation and safety analysis, their internal architecture differs fundamentally from human cognition, posing interpretability and trust challenges in high-risk contexts. Methods: This study investigates whether architectural design elements in Transformer-based generative models contribute to a measurable divergence from human reasoning. A methodological framework is developed to examine core AI mechanisms—vectorization, positional encoding, attention scoring, and optimization functions—focusing on how these introduce quantifiable “distances” from human semantic understanding. Results: Through theoretical analysis and a case study involving fall prevention advice in construction, six types of architectural distances are identified and evaluated using cosine similarity and attention mapping. The results reveal misalignments in focus, semantics, and response stability, which may hinder effective human–AI collaboration in safety-critical decisions. Conclusions: These findings suggest that such distances represent not only algorithmic abstraction but also potential safety risks when generative AI is deployed in practice. The study advocates for the development of AI architectures that better reflect human cognitive structures to reduce these risks and improve reliability in safety applications.
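
To make the abstract's notion of a quantifiable "distance" concrete, the sketch below computes one of the measures it names: a cosine-based distance between an AI response embedding and a human reference embedding. The vectors, dimensions, and function name are illustrative assumptions only; the paper's actual framework also involves attention mapping and several further distance types not shown here.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """1 - cosine similarity: 0 when the vectors align, larger as they diverge."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings: in practice these would come from a model's
# vectorization layer for an AI-generated response and a human reference answer.
ai_response_vec = np.array([0.8, 0.1, 0.3, 0.5])
human_reference_vec = np.array([0.7, 0.4, 0.2, 0.6])

print(f"Semantic distance: {cosine_distance(ai_response_vec, human_reference_vec):.3f}")
```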