Women's Special Issue Series: Artificial Intelligence

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 May 2026 | Viewed by 3419

Special Issue Editors

Guest Editor
Department of Automatics and Applied Software, Faculty of Engineering, Aurel Vlaicu University of Arad, 310130 Arad, Romania
Interests: intelligent systems; soft computing; fuzzy control; modeling and simulation; biometrics

Guest Editor
1. Data Science and AI Division, Chalmers University of Technology, Chalmersgatan 4, 412 96 Göteborg, Sweden
2. Data Science and AI Division, University of Gothenburg, Universitetsplatsen 1, 405 30 Göteborg, Sweden
Interests: artificial intelligence; data science; machine learning; deep learning

Special Issue Information

Dear Colleagues,

This Special Issue, titled "Women's Special Issue Series: Artificial Intelligence", focuses on showcasing recent advancements and groundbreaking research led by women in artificial intelligence (AI). As a rapidly evolving field, AI encompasses a wide range of disciplines, including machine learning, deep learning, natural language processing, computer vision, and robotics. The contributions featured in this collection will cover both theoretical developments and practical applications that drive innovation in AI.

This Special Issue aims to explore novel methodologies, emerging trends, and transformative applications in AI. Topics of interest include, but are not limited to:

  • Machine learning and deep learning advancements;
  • Natural language processing and speech recognition;
  • Computer vision and image processing;
  • AI applications in healthcare, education, and industry;
  • Ethical AI and responsible machine learning;
  • Human–AI interaction and explainable AI;
  • Robotics and intelligent automation;
  • AI-driven decision support systems;
  • Emerging trends and interdisciplinary applications.

By bringing together diverse perspectives and expertise, this Special Issue seeks to encourage collaboration, inspire future research, and promote inclusivity in AI.

We welcome submissions from all authors, irrespective of gender identity.

Prof. Dr. Valentina E. Balas
Dr. Oana Geman
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • machine learning
  • AI applications
  • intelligent systems
  • robotics
  • decision support systems
  • NLP
  • image processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

31 pages, 2638 KB  
Article
Explainable AI for Predicting and Justifying Firm-Level Financial Resilience in Healthcare Services
by Lucia Morosan-Danila, Claudia-Elena Grigoras-Ichim, Otilia-Maria Bordeianu, Daniela-Mihaela Neamtu, Daniela-Tatiana Agheorghiesei, Dumitru Filipeanu and Alexandru Tugui
Electronics 2026, 15(5), 1022; https://doi.org/10.3390/electronics15051022 - 28 Feb 2026
Viewed by 577
Abstract

Healthcare service providers face recurrent systemic disruptions (e.g., pandemics, reimbursement delays, supply shortages, and regulatory shocks), yet firm-level resilience monitoring remains underdeveloped due to limited explainability and weak out-of-time validation in prior work. We develop an explainable machine learning pipeline to predict firm-level financial resilience (a financial health/robustness proxy) for outpatient healthcare providers. Using annual data for 2600 Romanian firms (Nomenclature of Economic Activities, NACE 8622) over 2014–2023, resilience is operationalised as an ordered three-class label derived from a Principal Component Analysis (PCA)-based composite score built from eight capital structure and asset composition ratios, with train-only frozen thresholds and a strict anti-leakage protocol. We evaluate multinomial logistic regression (baseline), Random Forest (RF), and HistGradientBoosting (HGB, primary) on a prospective 2023 hold-out using Accuracy, Balanced Accuracy, and Macro-F1, with bootstrap uncertainty for key contrasts. The primary model achieves Balanced Accuracy = 0.943 and Macro-F1 = 0.944 in 2023, outperforming the linear baseline and RF; errors were concentrated between adjacent classes. Model-faithful permutation importance on HGB highlights working-capital disciplines (receivables, cash, inventory, asset structure), while RF-based SHapley Additive exPlanations (SHAP) are used only for auxiliary pattern exploration and stability checks, with Individual Conditional Expectation (ICE) and Partial Dependence Plot (PDP) analyses confirming key nonlinear regimes on HGB. Overall, the results support governance-ready, interpretable resilience monitoring while maintaining a clear separation between predictive explanations and causal claims.
(This article belongs to the Special Issue Women's Special Issue Series: Artificial Intelligence)
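The labeling step described in the abstract above — an ordered three-class label from a PCA composite score with thresholds frozen on training data only — can be sketched roughly as follows. All names, the tertile split, and the random eight-column input are illustrative assumptions; the paper's actual financial ratios, score construction, and thresholds are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_resilience_labeler(train_ratios, n_classes=3):
    """Fit a one-component PCA composite on training-year ratios and
    freeze class thresholds on the train distribution only (anti-leakage)."""
    scaler = StandardScaler().fit(train_ratios)
    pca = PCA(n_components=1).fit(scaler.transform(train_ratios))
    scores = pca.transform(scaler.transform(train_ratios)).ravel()
    # Train-only cut points (tertiles here), frozen for all later years.
    cuts = np.quantile(scores, [k / n_classes for k in range(1, n_classes)])
    def label(ratios):
        s = pca.transform(scaler.transform(ratios)).ravel()
        return np.digitize(s, cuts)  # 0 = low, 1 = mid, 2 = high resilience
    return label

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 8))  # stand-in for 8 financial ratios
test = rng.normal(size=(50, 8))    # stand-in for a later, held-out year
labeler = fit_resilience_labeler(train)
labels = labeler(test)
```

Because the cut points are computed once from the training scores and then reused, a later hold-out year is scored against the frozen thresholds rather than its own distribution, which is the anti-leakage property the abstract emphasises.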

46 pages, 8253 KB  
Article
Quantifying AI Model Trust as a Model Sureness Measure by Bidirectional Active Processing and Visual Knowledge Discovery
by Alice Williams and Boris Kovalerchuk
Electronics 2026, 15(3), 580; https://doi.org/10.3390/electronics15030580 - 29 Jan 2026
Viewed by 455
Abstract

Trust in machine-learning models is critical for deployment by users, especially for high-risk tasks such as healthcare. Model trust involves much more than performance metrics such as accuracy, precision, or recall: it includes user readiness to allow a model to make decisions. Model trust is a multifaceted concept commonly associated with the stability of model predictions under variations in training data, noise, algorithmic parameters, and model explanations. This paper extends existing model trust concepts by introducing a novel Model Sureness measure; alternatively purposed Model Sureness measures have been proposed elsewhere. Here, Model Sureness quantitatively measures the stability of model accuracy under training data variations. For any model, this is carried out by combining the proposed Bidirectional Active Processing and Visual Knowledge Discovery. The proposed Bidirectional Active Processing method iteratively retrains a model on varied training data until a user-defined stopping criterion is met; in this work, this criterion is set to 95% accuracy when the model is evaluated on the test data. This process further finds a minimal sufficient training dataset required for a model to satisfy this criterion. Accordingly, the proposed Model Sureness measure is defined as the ratio of the number of unnecessary cases to all cases in the training data, along with variations of these ratios. Higher ratios indicate greater Model Sureness under this measure, while trust in a model is ultimately a human decision based on multiple measures. Case studies conducted on three benchmark datasets from biology, medicine, and handwritten digit recognition demonstrate well-preserved model accuracy with Model Sureness scores that reflect the capabilities of the evaluated models. Specifically, unnecessary case removal ranged from 20% to 80%, with an average reduction of approximately 50% of the training data.
(This article belongs to the Special Issue Women's Special Issue Series: Artificial Intelligence)
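The ratio at the heart of the Model Sureness measure described above can be sketched as follows. This is a simplified one-pass greedy stand-in for the paper's Bidirectional Active Processing (and it omits Visual Knowledge Discovery entirely); the dataset, classifier, and function name are illustrative assumptions, with only the 95% test-accuracy stopping criterion taken from the abstract.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def model_sureness(X_tr, y_tr, X_te, y_te, threshold=0.95, seed=0):
    """Greedily drop training cases while test accuracy stays at or above
    `threshold`; sureness = removed (unnecessary) cases / original cases."""
    rng = np.random.default_rng(seed)
    keep = np.ones(len(X_tr), dtype=bool)
    for i in rng.permutation(len(X_tr)):
        keep[i] = False  # tentatively treat case i as unnecessary
        clf = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])
        if clf.score(X_te, y_te) < threshold:
            keep[i] = True  # accuracy dropped, so the case was necessary
    return (len(X_tr) - keep.sum()) / len(X_tr)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
s = model_sureness(X_tr, y_tr, X_te, y_te)
```

The surviving `keep` mask is one minimal sufficient training set for the chosen criterion, and `s` is the fraction of cases found unnecessary — a higher value indicating greater sureness under this measure.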

14 pages, 440 KB  
Article
Mapping User Perceptions of AI in Accounting and Auditing with the TAM—A Network Analysis Approach
by Lavinia Denisia Cuc, Dana Rad, Florin Lucian Isac, Florentina Simona Barbu, Cosmin Silviu Raul Joldeș, Bogdan Cosmin Gomoi, Robert Cristian Almași, Daniel Manațe and Ovidiu Megan
Electronics 2025, 14(23), 4598; https://doi.org/10.3390/electronics14234598 - 24 Nov 2025
Viewed by 1238
Abstract

This study examines the acceptance of artificial intelligence (AI) among professionals in accounting and auditing by applying the Technology Acceptance Model (TAM). The research addresses the broader question of how AI technologies are integrated into professional decision-making in financial services, with a focus on the determinants that shape user acceptance. A standardized survey, consisting of 21 items designed to measure TAM components, was distributed to accounting and auditing professionals. Data were analyzed through confirmatory factor analysis to validate the construct structure, and Pearson’s correlation and network analyses were used to identify relationships among variables. The results confirmed a robust seven-factor model with significant interrelations, highlighting AI knowledge as a central variable influencing perceived usefulness (PU), self-efficacy, and intention to use AI. These findings demonstrate that professional familiarity and confidence in AI tools are key drivers of technology acceptance. The study concludes that enhancing AI literacy can strengthen organizational readiness for digital transformation. Additionally, the research underscores the contribution of women-led scholarship in promoting responsible and inclusive approaches to AI adoption within the accounting and auditing sectors.
(This article belongs to the Special Issue Women's Special Issue Series: Artificial Intelligence)
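The correlation-network step described in the abstract above can be sketched on simulated data. The five construct names, the effect sizes, and the 0.3 edge cut-off are all illustrative assumptions (the paper validates a seven-factor model on real survey responses); the sketch only mirrors the reported centrality of AI knowledge.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical respondents
# Simulated construct scores in which "AI knowledge" drives perceived
# usefulness, self-efficacy, and intention to use, as the abstract reports.
knowledge = rng.normal(size=n)
constructs = {
    "ai_knowledge": knowledge,
    "perceived_usefulness": 0.6 * knowledge + rng.normal(scale=0.8, size=n),
    "self_efficacy": 0.5 * knowledge + rng.normal(scale=0.9, size=n),
    "intention_to_use": 0.6 * knowledge + rng.normal(scale=0.8, size=n),
    "perceived_ease_of_use": rng.normal(size=n),
}
names = list(constructs)
X = np.column_stack([constructs[k] for k in names])
corr = np.corrcoef(X, rowvar=False)  # Pearson correlation matrix

# Draw an edge where |r| exceeds a cut-off; a node's strength is the sum
# of its edge weights, and the strongest node is the most central construct.
adj = (np.abs(corr) > 0.3) & ~np.eye(len(names), dtype=bool)
strength = (np.abs(corr) * adj).sum(axis=1)
central = names[int(np.argmax(strength))]
```

Here node strength plays the role of a centrality index: the construct correlated with the most (and strongest) neighbours dominates the network, which is how a variable such as AI knowledge emerges as central.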
