Unraveling the Black Box: Unleashing the Power of Explainable Deep Learning in Advanced Engineering Sciences

A special issue of Symmetry (ISSN 2073-8994).

Deadline for manuscript submissions: 31 December 2025

Special Issue Editor


Prof. Dr. Kelvin K. L. Wong
Guest Editor

Special Issue Information

Dear Colleagues,

Explainable Deep Learning (XDL) is reshaping the engineering sciences by opening up models that have long operated as black boxes. This Special Issue aims to demystify these AI models, offering a platform for researchers, experts, and practitioners to share pioneering work that harnesses the power of XDL to advance the engineering sciences. Our goal is to unravel the mysteries, illuminate the complexities, and unleash the true potential of XDL in shaping future technology.

This Special Issue embraces a diverse range of topics at the forefront of XDL's applications in the engineering sciences. We invite researchers to explore the versatility of explainable AI across engineering domains and to present original research, visionary insights, and innovative approaches that redefine how we perceive and use AI in science. Through this Special Issue, we aim to demystify the black box and pave the way for a new era of transparent, interpretable, and trustworthy AI-driven engineering research. Topics of interest include, but are not limited to, the following:

  • unraveling complex engineering and technology datasets using interpretable deep learning models;
  • XDL-powered insights into genomic, proteomic, and transcriptomic data for novel discoveries;
  • explainable AI for medical image analysis, diagnosis, and treatment planning;
  • discovering hidden patterns in medical images for precision medicine;
  • building trust and transparency in AI models for critical care applications;
  • ethical considerations and fairness in XDL deployments for equitable healthcare;
  • big data analytics in smart road and railway infrastructure systems, including data mining, machine learning, and AI;
  • connected and autonomous vehicles and infrastructure systems;
  • non-destructive testing for resilience, such as infrastructure sensing and imaging;
  • smart infrastructure systems operation and control, including network monitoring, LiDAR, and radar;
  • emerging transport technologies to support synchromodal transport network planning;
  • applications of new methodologies (digital twins, virtual simulation, etc.) in traffic safety research;
  • human action recognition from camera, video, and other relevant sensor data;
  • non-touch and touch interfaces based on human action;
  • deep learning approaches for human action recognition;
  • integrating multi-modal data to gain holistic insights into complex diseases;
  • advancing network-based XDL models to decipher intricate biological interactions.

The Special Issue titled "Unraveling the Black Box: Unleashing the Power of Explainable Deep Learning in Advanced Engineering Sciences" directly aligns with the journal's focus on symmetry/asymmetry phenomena across disciplines. Deep learning, and explainable deep learning (XDL) in particular, illuminates symmetries and asymmetries in modern technology. This Special Issue examines XDL's role in understanding complex AI models, shedding light on symmetrical patterns while addressing asymmetrical challenges in AI deployment. Across diverse engineering domains, it reveals distinct manifestations of symmetry and asymmetry, offering insights into data interpretation, model transparency, and ethical considerations. By bridging the gap between model complexity and interpretability, this Special Issue promotes transparent, interpretable, and trustworthy AI-driven research in line with the journal's goals.

Prof. Dr. Kelvin K. L. Wong
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Explainable Deep Learning (XDL)
  • medical image analysis
  • non-destructive testing
  • multi-modal data integration
  • interpretable deep learning models
  • genomic data analysis
  • big data analytics
  • autonomous vehicles
  • smart infrastructure systems
  • engineering sciences
  • AI transparency
  • digital twin
  • human action recognition

Published Papers

This special issue is now open for submission.