Innovations in Artificial Neural Network Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 June 2025 | Viewed by 1922

Special Issue Editors


Dr. Iztok Peruš
Guest Editor
Faculty of Civil Engineering, Transportation Engineering and Architecture, University of Maribor, SI-2000 Maribor, Slovenia
Interests: artificial intelligence; artificial neural networks; blockchain; structural mechanics; earthquake engineering

Prof. Dr. Milan Terčelj
Guest Editor
Department of Materials and Metallurgy, Faculty of Natural Sciences and Engineering, University of Ljubljana, SI-1000 Ljubljana, Slovenia
Interests: processing of metallic materials; metallic alloys; artificial neural networks

Special Issue Information

Dear Colleagues,

Rapid advances in artificial intelligence (AI) are transforming society, promising faster and more sustainable technological progress. However, current AI models, including large language models (LLMs), face limitations such as hallucination, unreliability, and limited interpretability. For this Special Issue of Applied Sciences, we invite research contributions aimed at improving AI systems, with a focus on enhancing artificial neural networks (ANNs) for diverse applications across scientific disciplines. A key emphasis is on integrating symbolic logic approaches and first principles into AI methodologies.

Traditional AI systems often operate as (statistical) black boxes, offering predictions with limited insight into their reliability or underlying mechanisms. These challenges can be addressed by integrating first principles (and other modern scientific frameworks) and/or symbolic logic, enhancing methods for quantifying predictive accuracy and relating results to key statistical parameters. This approach holds the potential to transform AI tools into more transparent, interpretable, and robust systems, fostering greater trust and broader applicability. We welcome submissions addressing these intersections, paving the way for AI systems and ANNs that align more closely with scientific rigor and, especially, practical utility.

Relevant topics include, but are not limited to, the following:

  • Enhancements to artificial neural networks;
  • Integration of symbolic logic and first principles into AI methodologies;
  • Methods that make AI tools more transparent and interpretable;
  • Approaches that foster greater trust and broader applicability.
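To make the second topic above concrete, a first-principles constraint can enter an ANN-style fit as an extra penalty term alongside the data loss. The spring model, synthetic data, and weighting below are invented for illustration only (not taken from any submission): we fit F = w·x + b to noisy Hooke's-law data while a physics penalty drives the intercept b toward zero, since zero displacement must give zero force.

```python
import random

random.seed(0)
xs = [i / 10 for i in range(1, 21)]                    # displacements
true_k = 3.0                                           # "true" spring constant
ys = [true_k * x + random.gauss(0, 0.05) for x in xs]  # noisy force readings

w, b = 0.0, 1.0        # start with a deliberately bad intercept
lam, lr = 10.0, 0.01   # physics-penalty weight, learning rate
for _ in range(5000):
    # gradients of the mean squared data error ...
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # ... plus the gradient of the first-principles penalty lam * b^2
    gb += lam * 2 * b
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 3))  # w near 3.0, b driven toward 0
```

The same pattern scales to full networks: any differentiable physics residual can be added to the training loss, biasing the model toward solutions consistent with known laws.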

Dr. Iztok Peruš
Prof. Dr. Milan Terčelj
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial neural networks
  • first principles
  • symbolic logic approaches
  • metals
  • engineering
  • physics
  • mathematics
  • applied sciences

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (4 papers)


Research

16 pages, 426 KiB  
Article
AI-Driven Consensus: Modeling Multi-Agent Networks with Long-Range Interactions Through Path-Laplacian Matrices
by Yusef Ahsini, Belén Reverte and J. Alberto Conejero
Appl. Sci. 2025, 15(9), 5064; https://doi.org/10.3390/app15095064 - 2 May 2025
Viewed by 104
Abstract
Extended connectivity in graphs can be analyzed through k-path Laplacian matrices, which capture long-range interactions in real-world networked systems such as social, transportation, and multi-agent networks. In this work, we present several alternative machine learning methods (LSTM, xLSTM, Transformer, XGBoost, and ConvLSTM) to predict the final consensus value of directed networks (Erdős–Rényi, Watts–Strogatz, and Barabási–Albert) from the initial state. We highlight how different k-hop interactions affect the performance of the tested methods. This framework opens new avenues for analyzing multi-scale diffusion processes in large-scale, complex networks.
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
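For readers unfamiliar with the operators involved, the following minimal sketch builds k-path Laplacians on a small undirected path graph and runs plain Laplacian consensus dynamics. The paper itself treats directed random graphs and learned predictors; the graph, coupling weight, and step size here are illustrative assumptions only.

```python
# k-path Laplacian: L_k = D_k - A_k, where A_k links nodes whose
# shortest-path distance is exactly k.
n = 6
dist = [[abs(i - j) for j in range(n)] for i in range(n)]  # path-graph distances

def k_path_laplacian(k):
    A = [[1 if dist[i][j] == k else 0 for j in range(n)] for i in range(n)]
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
            for i in range(n)]

L1, L2 = k_path_laplacian(1), k_path_laplacian(2)
x = [1.0, 5.0, 2.0, 8.0, 0.0, 2.0]   # initial agent states
eps, c = 0.1, 0.5                     # step size, 2-hop coupling weight
for _ in range(2000):
    # consensus update x <- x - eps * (L1 + c * L2) x
    Lx = [sum((L1[i][j] + c * L2[i][j]) * x[j] for j in range(n)) for i in range(n)]
    x = [x[i] - eps * Lx[i] for i in range(n)]

print([round(v, 3) for v in x])  # all states approach the average, 3.0
```

Because the combined Laplacian here is symmetric with zero row sums, the state sum is conserved and all agents converge to the mean of the initial state; the 2-hop term simply accelerates mixing.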

17 pages, 2295 KiB  
Article
Quantum Neural Networks Approach for Water Discharge Forecast
by Liu Zhen and Alina Bărbulescu
Appl. Sci. 2025, 15(8), 4119; https://doi.org/10.3390/app15084119 - 9 Apr 2025
Viewed by 297
Abstract
Predicting river discharge is essential for preparing effective measures against flood hazards and for managing hydrological droughts. Despite advances in mathematical modeling, most algorithms fail to capture extreme values (especially the highest ones). In this article, we propose a quantum neural network (QNN) approach for forecasting river discharge in three scenarios. The algorithm was applied to the raw data series and to the series without aberrant values. Comparisons with results obtained on the same series by other neural networks (LSTM, BPNN, ELM, CNN-LSTM, SSA-BP, and PSO-ELM) confirm the superior performance of the present approach. The lower error between recorded and predicted values in the evaluation of maxima, compared with the competitors mentioned, shows that the algorithm best fits the extremes. The largest mean squared error (MSE) and mean absolute error (MAE) were 26.9424 and 4.8914, respectively, and the lowest R² was 84.36%, indicating the algorithm's good performance.
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
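As background on how such models work, a single-qubit variational circuit can be simulated classically: RY rotations about one axis compose additively, so the Pauli-Z expectation after the circuit is a cosine of the accumulated angle. The toy target curve and derivative-free training loop below are illustrative assumptions, not the paper's QNN architecture or discharge data.

```python
import math

def qnn(x, w, b):
    # Single-qubit circuit RY(w*x) followed by RY(b) on |0>:
    # the Pauli-Z expectation is cos(w*x + b).
    return math.cos(w * x + b)

xs = [i / 20 for i in range(21)]
ys = [math.cos(2.5 * x + 0.3) for x in xs]   # synthetic target curve

def mse(w, b):
    return sum((qnn(x, w, b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Crude coordinate search with shrinking steps, standing in for
# parameter-shift gradients, to keep the sketch dependency-free.
w = b = 0.0
for step in (1.0, 0.1, 0.01, 0.001):
    for _ in range(200):
        moved = False
        for dw, db in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if mse(w + dw, b + db) < mse(w, b):
                w, b = w + dw, b + db
                moved = True
        if not moved:
            break

print(round(mse(w, b), 6))  # near zero: the circuit recovers the target
```

Real QNNs stack many parameterized gates over several qubits and are trained with quantum-aware gradients, but the cosine feature map above already shows why such circuits suit smooth, oscillatory signals.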

30 pages, 4869 KiB  
Article
Neural Network Method of Controllers’ Parametric Optimization with Variable Structure and Semi-Permanent Integration Based on the Computation of Second-Order Sensitivity Functions
by Serhii Vladov, Lukasz Scislo, Nina Szczepanik-Ścisło, Anatoliy Sachenko and Victoria Vysotska
Appl. Sci. 2025, 15(5), 2586; https://doi.org/10.3390/app15052586 - 27 Feb 2025
Viewed by 537
Abstract
This article presents a method for researching processes in automatic control systems based on an operator approach to modelling the control object and the controller. Within this framework, a system of equations has been developed that describes the relations between the control error, the reference and control actions, the output coordinate, and the controller and control-object operators. A modification of the traditional PI controller, including a switching function for adaptation to operating conditions, allows effective real-time control of the system. The controller optimization algorithm is based on a functional with weighting coefficients that account for the control error and the control action. To train the neural network implementing the proposed method, a multilayer architecture was used, including nonlinear activation functions and a dynamic training rate, which ensure high accuracy and accelerated convergence. The TV3-117 turboshaft engine was chosen as the research object, demonstrating the method in practical aviation applications. The experimental results showed a significant improvement in control characteristics: the normalized gas-generator rotor speed reaches ≈1 in a transient roughly two times faster than with the traditional method, whose transient only reaches ≈0.5 over the same interval. The model achieved a maximum accuracy of 0.993 after 160 training epochs, minimizing the error function to 0.005. Compared with similar approaches, the proposed method demonstrated better accuracy and training speed, confirmed by a 1.36-fold reduction in the number of iterations and a 1.86–6.02-fold improvement in the mean square error.
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
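The idea of a PI controller with a switching function can be sketched on a generic first-order plant. The plant, gains, and error threshold below are illustrative assumptions, not the TV3-117 engine model or the paper's sensitivity-function optimization: far from the setpoint the controller uses an aggressive proportional gain, and near it the gains switch to a softer proportional term with active integration to remove steady-state offset.

```python
dt, setpoint = 0.01, 1.0
y, integral = 0.0, 0.0
history = []
for _ in range(2000):                 # 20 s of simulated time
    e = setpoint - y
    # switching function: aggressive P far away, soft PI close in
    kp, ki = (2.0, 0.0) if abs(e) > 0.4 else (1.0, 0.5)
    integral += e * dt
    u = kp * e + ki * integral        # PI control law
    y += dt * (-y + u)                # first-order plant: y' = -y + u
    history.append(y)

print(round(history[-1], 3))  # settles close to the setpoint 1.0
```

The switching function here is a simple error-magnitude threshold; in the paper the switching structure is itself optimized via second-order sensitivity functions and a neural network, which this sketch does not attempt.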

12 pages, 2340 KiB  
Article
Tensor Decomposition Through Neural Architectures
by Chady Ghnatios and Francisco Chinesta
Appl. Sci. 2025, 15(4), 1949; https://doi.org/10.3390/app15041949 - 13 Feb 2025
Viewed by 615
Abstract
Machine learning (ML) technologies are widely used across science and technology to discover models that transform input data into output data. The main advantages of such a procedure are the generality and simplicity of the learning process, while its weaknesses remain the amount of data required for training and the recurring difficulty of explaining the underlying rationale. A panoply of ML techniques exists, and the selection of one method or another generally depends on the type and amount of data considered. This paper proposes a procedure that outputs not a field or an image but its singular value decomposition (SVD), or an SVD-like decomposition, while taking as input scalars or the SVD of an input field. The result is a tensor-to-tensor decomposition, without the need for the full fields, or an input-to-output SVD-like decomposition. The proposed method works for non-hyper-parallelepipedic domains and for any space dimensionality. The results show the ability of the proposed architecture to link the input field and output field without requiring access to full-space reconstruction.
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
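As context for the SVD-like output such an architecture predicts, the leading triple (u, s, v) of an SVD can be computed by power iteration on AᵀA: this compact factorization, rather than the full field, is the kind of object the network produces. The matrix below is an arbitrary example, and this is not the paper's neural decomposition.

```python
import math

A = [[4.0, 2.0, 0.0],
     [2.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# Power iteration on A^T A converges to the leading right singular vector.
v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = matvec(transpose(A), matvec(A, v))
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

Av = matvec(A, v)
s = math.sqrt(sum(x * x for x in Av))   # leading singular value
u = [x / s for x in Av]                 # leading left singular vector

print(round(s, 3))  # largest singular value of A
```

Learning to emit (u, s, v) directly sidesteps reconstructing the full field, which is exactly the saving the tensor-to-tensor decomposition exploits.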
