Search Results (127)

Search Parameters:
Keywords = multi-tasking artificial neural networks

37 pages, 1515 KB  
Review
Designing Neural Dynamics: From Digital Twin Modeling to Regeneration
by Calin Petru Tataru, Adrian Vasile Dumitru, Nicolaie Dobrin, Mugurel Petrinel Rădoi, Alexandru Vlad Ciurea, Octavian Munteanu and Luciana Valentina Munteanu
Int. J. Mol. Sci. 2026, 27(1), 122; https://doi.org/10.3390/ijms27010122 - 22 Dec 2025
Viewed by 453
Abstract
Cognitive deterioration and the transition to neurodegenerative disease do not develop through a simple, linear progression; they emerge as rapid, global transitions from one state to another within the neural network. Understanding and controlling these events is among the greatest challenges facing contemporary neuroscience. This paper discusses a conceptual reframing of cognitive decline as a transitional phase of the functional state of complex neural networks, resulting from the intertwining of molecular degradation, vascular dysfunction, and systemic disarray. The paper integrates recent findings demonstrating how disruptive changes in glymphatic clearance mechanisms, aquaporin-4 polarity, venous outflow, and neuroimmune signaling increasingly correlate with the neurophysiologic homeostasis landscape, ultimately destabilizing the network attractors of memory, consciousness, and cognitive resilience. These destabilizing processes are exacerbated by epigenetic silencing, neurovascular decoupling, remodeling of the extracellular matrix, and metabolic collapse, which accelerate the trajectory of neural circuits towards the pathological tipping points of neurodegenerative diseases including Alzheimer’s disease, Parkinson’s disease, traumatic brain injury, and intracranial hypertension. New paradigms in systems neuroscience (connectomics, network neuroscience, and critical transition theory) provide an intellectual toolkit for describing and predicting these state changes at the systems level. With artificial intelligence and machine learning combined with single-cell multi-omics, radiogenomic profiling, and digital twin modeling, predictive biomarkers and early warnings of impending system collapse are beginning to emerge.
In terms of therapeutic intervention, the possibility of reprogramming brain circuitry into stable attractor states using precision neurointervention (CRISPR-based neural circuit reprogramming, RNA-guided modulation of transcription, lineage switching of glia to neurons, and adaptive neuromodulation) represents an opportunity to prevent further progression of neurodegenerative disease. The paper addresses the ethical and regulatory implications of this technology, e.g., algorithmic transparency, genomic and structural safety, and equitable access to advanced neurointervention. We do not intend to present a catalogue of the many routes through which the mechanisms listed above instigate, exacerbate, or maintain the neurodegenerative disease state. Instead, we aim to present a unified model in which molecular pathology, circuit behavior, and computational intelligence converge to describe cognitive decline as a translatable change of state rather than an irreversible succumbing to degeneration. We thus provide a framework for precision neurointervention, regenerative brain medicine, and adaptive intervention to modulate the trajectory of neurodegeneration. Full article
(This article belongs to the Special Issue From Molecular Insights to Novel Therapies: Neurological Diseases)

25 pages, 783 KB  
Article
Visual Food Ingredient Prediction Using Deep Learning with Direct F-Score Optimization
by Nawanol Theera-Ampornpunt and Panisa Treepong
Foods 2025, 14(24), 4269; https://doi.org/10.3390/foods14244269 - 11 Dec 2025
Viewed by 375
Abstract
Food ingredient prediction from images is a challenging multi-label classification task with significant applications in dietary assessment and automated recipe recommendation systems. This task is particularly difficult due to highly imbalanced classes in real-world datasets, where most ingredients appear infrequently while several common ingredients dominate. In such imbalanced scenarios, the F-score metric is often used to provide a balanced evaluation measure. However, existing methods for training artificial neural networks to directly optimize for the F-score typically rely on computationally expensive hyperparameter optimization. This paper presents a novel approach for direct F-score optimization by reformulating the problem as cost-sensitive classifier optimization. We propose a computationally efficient algorithm for estimating the optimal relative cost parameters. When evaluated on the Recipe1M dataset, our approach achieved a micro F1 score of 0.5616. This represents a substantial improvement from the state-of-the-art method’s score of 0.4927. Our F-score optimization framework offers a principled and generalizable solution to class imbalance problems. It can be extended to other imbalanced binary and multi-label classification tasks beyond food analysis. Full article
(This article belongs to the Section Food Nutrition)
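The core idea, that direct F-score optimization can be recast as tuning a relative misclassification cost, can be illustrated with a toy sketch. This is not the authors' algorithm: it simply sweeps a shared decision threshold (one simple stand-in for a relative false-positive/false-negative cost) on a validation set and keeps the micro-F1 maximizer. All names and data below are hypothetical.

```python
def micro_f1(y_true, y_pred):
    # y_true, y_pred: one set of label indices per sample
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        tp += len(t & p)
        fp += len(p - t)
        fn += len(t - p)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def best_threshold(y_true, scores, grid=None):
    # Sweep a shared decision threshold (equivalent to varying the relative
    # cost of false positives vs. false negatives) and keep the maximizer.
    grid = grid or [i / 100 for i in range(1, 100)]
    best = (0.0, 0.5)
    for thr in grid:
        preds = [{k for k, s in enumerate(sample) if s >= thr}
                 for sample in scores]
        f1 = micro_f1(y_true, preds)
        if f1 > best[0]:
            best = (f1, thr)
    return best  # (micro-F1, threshold)

y_true = [{0, 2}, {1}, {0}]
scores = [[0.9, 0.2, 0.6], [0.1, 0.7, 0.3], [0.8, 0.4, 0.2]]
f1, thr = best_threshold(y_true, scores)
```

On this toy data, thresholds in (0.4, 0.6] separate the labels perfectly, so the sweep recovers a micro-F1 of 1.0.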

26 pages, 3504 KB  
Review
The Evolution of Artificial Intelligence in Ocular Toxoplasmosis Detection: A Scoping Review on Diagnostic Models, Data Challenges, and Future Directions
by Dodit Suprianto, Loeki Enggar Fitri, Ovi Sofia, Akhmad Sabarudin, Wayan Firdaus Mahmudy, Muhammad Hatta Prabowo and Werasak Surareungchai
Infect. Dis. Rep. 2025, 17(6), 148; https://doi.org/10.3390/idr17060148 - 8 Dec 2025
Viewed by 380
Abstract
Ocular Toxoplasmosis (OT), a leading cause of infectious posterior uveitis, presents significant diagnostic challenges in atypical cases due to phenotypic overlap with other retinochoroiditides and a reliance on expert interpretation of multimodal imaging. This scoping review systematically maps the burgeoning application of artificial intelligence (AI), particularly deep learning, in automating OT diagnosis. We synthesized 22 studies to characterize the current evidence, data landscape, and clinical translation readiness. Findings reveal a field in its nascent yet rapidly accelerating phase, dominated by convolutional neural networks (CNNs) applied to fundus photography for binary classification tasks, often reporting high accuracy (87–99.2%). However, development is critically constrained by small, imbalanced, single-center datasets, a near-universal lack of external validation, and insufficient explainable AI (XAI), creating a significant gap between technical promise and clinical utility. While AI demonstrates strong potential to standardize diagnosis and reduce subjectivity, its path to integration is hampered by over-reliance on internal validation, the “black box” nature of models, and an absence of implementation strategies. Future progress hinges on collaborative multi-center data curation, mandatory external and prospective validation, the integration of XAI for transparency, and a focused shift towards developing AI tools that assist in the complex differential diagnosis of posterior uveitis, ultimately bridging the translational chasm to clinical practice. Full article

23 pages, 1305 KB  
Article
Constructing Artificial Features with Grammatical Evolution for the Motor Symptoms of Parkinson’s Disease
by Aimilios Psathas, Ioannis G. Tsoulos, Nikolaos Giannakeas, Alexandros Tzallas and Vasileios Charilogis
Bioengineering 2025, 12(12), 1318; https://doi.org/10.3390/bioengineering12121318 - 2 Dec 2025
Viewed by 494
Abstract
People with Parkinson’s disease often show changes in their movement abilities during the day, especially around the time they take medication. Recording these variations objectively can help doctors adapt treatment and follow disease changes more closely. A methodology for quantitative motor assessment is proposed in this work. It employs data from a custom SmartGlove equipped with inertial sensors. A multi-method feature selection scheme is developed, integrating statistical significance, model-based importance, and variance contribution. The most significant features were retained, and higher-level artificial features were generated using Grammatical Evolution (GE). The framework combines multi-criteria feature selection with evolutionary feature construction, providing a compact and interpretable representation of motor behavior. Additionally, the framework highlights nonlinear and composite features as potential digital biomarkers for Parkinson’s monitoring. The method was validated on recordings collected from Parkinson’s patients before and after medication intake during four standardized hand motor tasks targeting tremor, bradykinesia, rigidity, and general movement anomalies. The proposed method was compared with five existing machine learning models based on artificial neural networks. GE-based features reduced classification errors to 10–19%, outperforming the baseline models, and the proposed methodology achieves precision and recall of 80–88%. Full article
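To make the Grammatical Evolution step concrete, here is a minimal, hypothetical sketch of genotype-to-phenotype mapping: an integer genome indexes production rules of a toy grammar to build an arithmetic feature over base sensor features x0, x1, x2. The grammar, genome, and variable names are illustrative only, not the paper's.

```python
# Toy grammar: <expr> ::= <expr> <op> <expr> | <var>
GRAMMAR = {
    "expr": [["expr", "op", "expr"], ["var"]],
    "op":   [["+"], ["*"]],
    "var":  [["x0"], ["x1"], ["x2"]],
}

def ge_map(genome, symbol="expr", depth=0, max_depth=4):
    """Map an integer genome to an expression string via the grammar."""
    rules = GRAMMAR.get(symbol)
    if rules is None:                      # terminal symbol: emit as-is
        return symbol, genome
    if depth >= max_depth or not genome:   # force the last (shortest) rule
        rule = rules[-1]
    else:                                  # codon selects a production rule
        codon, genome = genome[0], genome[1:]
        rule = rules[codon % len(rules)]
    parts = []
    for sym in rule:
        text, genome = ge_map(genome, sym, depth + 1, max_depth)
        parts.append(text)
    return " ".join(parts), genome

expr, _ = ge_map([0, 1, 0, 1, 1, 2, 0])
# Evaluate the constructed feature on one (hypothetical) sensor sample.
feature_value = eval(expr, {"x0": 2.0, "x1": 3.0, "x2": 5.0})
```

In full GE a population of such genomes is evolved, with fitness measured by how well the constructed features separate the classes.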

32 pages, 9121 KB  
Review
Generative Design of Concentrated Solar Thermal Tower Receivers—State of the Art and Trends
by Jorge Moreno García-Moreno and Kypros Milidonis
Energies 2025, 18(22), 5890; https://doi.org/10.3390/en18225890 - 8 Nov 2025
Viewed by 596
Abstract
The rapid advances in artificial intelligence (AI) and high-performance computing (HPC) are transforming the landscape of engineering design, and the concentrated solar power (CSP) tower sector is no exception. As these technologies increasingly penetrate the energy domain, they bring new capabilities for addressing the complex, multi-variable nature of receiver design and optimisation. This review explores the application of AI-driven generative design techniques in the context of CSP tower receivers, with a particular focus on the use of metaheuristic algorithms and machine learning models. A structured classification is presented, highlighting the most commonly employed methods, such as Genetic Algorithms (GAs), Particle Swarm Optimisation (PSO), and Artificial Neural Networks (ANNs), and mapping them to specific receiver types: cavity, external, and volumetric. GAs are found to dominate multi-objective optimisation tasks, especially those involving trade-offs between thermal efficiency and heat flux uniformity, while ANNs offer strong potential as surrogate models for accelerating design iterations. The review also identifies existing gaps in the literature and outlines future opportunities, including the integration of high-fidelity simulations and experimental validation into AI design workflows. These insights demonstrate the growing relevance and impact of AI in advancing the next generation of high-performance CSP receiver systems. Full article

14 pages, 6970 KB  
Article
Rehearsal-Free Continual Learning for Emerging Unsafe Behavior Recognition in Construction Industry
by Tao Wang, Saisai Ye, Zimeng Zhai, Weigang Lu and Cunling Bian
Sensors 2025, 25(21), 6525; https://doi.org/10.3390/s25216525 - 23 Oct 2025
Viewed by 601
Abstract
In the realm of Industry 5.0, the incorporation of Artificial Intelligence (AI) in overseeing workers, machinery, and industrial systems is essential for fostering a human-centric, sustainable, and resilient industry. Despite technological advancements, the construction industry remains largely labor-intensive, with site management and interventions predominantly reliant on manual judgments, leading to inefficiencies and various challenges. This research emphasizes identifying unsafe behaviors and risks within construction environments by employing AI. Given the continuous emergence of unsafe behaviors that require caution, it is imperative to adapt to these novel categories while retaining the knowledge of existing ones. Although deep convolutional neural networks have shown excellent performance in behavior recognition, they traditionally function as predefined multi-way classifiers, which exhibit limited flexibility in accommodating emerging unsafe behavior classes. To address this issue, this study proposes a versatile and efficient recognition model capable of expanding the range of unsafe behaviors while maintaining the recognition of both new and existing categories. Adhering to the continual learning paradigm, this method integrates two types of complementary prompts into the pre-trained model: task-invariant prompts that encode knowledge shared across tasks, and task-specific prompts that adapt the model to individual tasks. These prompts are injected into specific layers of the frozen backbone to guide learning without requiring a rehearsal buffer, enabling effective recognition of both new and previously learned unsafe behaviors. Additionally, this paper introduces a benchmark dataset, Split-UBR, specifically constructed for continual unsafe behavior recognition on construction sites.
To rigorously evaluate the proposed model, we conducted comparative experiments using average accuracy and forgetting as metrics, and benchmarked against state-of-the-art continual learning baselines. Results on the Split-UBR dataset demonstrate that our method achieves superior performance in terms of both accuracy and reduced forgetting across all tasks, highlighting its effectiveness in dynamic industrial environments. Full article
(This article belongs to the Section Intelligent Sensors)

21 pages, 3233 KB  
Article
Computational Homogenisation and Identification of Auxetic Structures with Interval Parameters
by Witold Beluch, Marcin Hatłas, Jacek Ptaszny and Anna Kloc-Ptaszna
Materials 2025, 18(19), 4554; https://doi.org/10.3390/ma18194554 - 30 Sep 2025
Viewed by 557
Abstract
The subject of this paper is the computational homogenisation and identification of heterogeneous materials in the form of auxetic structures made of materials with nonlinear characteristics. It is assumed that some of the material and topological parameters of the auxetic structures are uncertain and are modelled as interval numbers. Directed interval arithmetic is used to minimise the width of the resulting intervals. The finite element method is employed to solve the boundary value problem, and artificial neural network response surfaces are utilised to reduce the computational effort. To solve the identification task, the Pareto approach is adopted, and a multi-objective evolutionary algorithm is used as the global optimisation method. The results obtained from computational homogenisation under uncertainty demonstrate the efficacy of the proposed methodology in capturing material behaviour, underscoring the significance of incorporating uncertainty into material properties. The identification results confirm that material parameters at the microscopic scale can be successfully identified from macroscopic data involving an interval description of the deformation of auxetic structures in a nonlinear regime. Full article
(This article belongs to the Section Materials Simulation and Design)
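A minimal sketch of why controlling interval width matters. The class below implements classical interval arithmetic only; it exhibits the well-known dependency problem (x - x widens to [-1, 1] instead of [0, 0]), which the directed interval arithmetic used in the paper is designed to mitigate. The numeric values are illustrative.

```python
class Interval:
    """Closed interval [lo, hi] with classical interval arithmetic."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Classical rule: subtract the opposite endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Take the extrema over all endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    @property
    def width(self):
        return self.hi - self.lo

x = Interval(1.0, 2.0)
# Dependency problem: classically, x - x is [-1, 1], not [0, 0].
overestimate = (x - x).width
square = x * x  # [1, 4]
```

Directed (Kaucher-style) arithmetic extends this with improper intervals so that expressions with repeated variables yield tighter, sometimes exact, bounds.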

28 pages, 8109 KB  
Article
A Face Image Encryption Scheme Based on Nonlinear Dynamics and RNA Cryptography
by Xiyuan Cheng, Tiancong Cheng, Xinyu Yang, Wenbin Cheng and Yiting Lin
Cryptography 2025, 9(3), 57; https://doi.org/10.3390/cryptography9030057 - 4 Sep 2025
Cited by 1 | Viewed by 973
Abstract
With the rapid development of big data and artificial intelligence, the problem of image privacy leakage has become increasingly prominent, especially for images containing sensitive information such as faces, which poses a higher security risk. In order to improve the security and efficiency of image privacy protection, this paper proposes an image encryption scheme that integrates face detection and multi-level encryption technology. Specifically, a multi-task convolutional neural network (MTCNN) is used to accurately extract the face area to ensure accurate positioning and high processing efficiency. For the extracted face area, a hierarchical encryption framework is constructed using chaotic systems, lightweight block permutations, RNA cryptographic systems, and bit diffusion, which increases data complexity and unpredictability. In addition, a key update mechanism based on dynamic feedback is introduced to enable the key to change in real time during the encryption process, effectively resisting known plaintext and chosen plaintext attacks. Experimental results show that the scheme performs well in terms of encryption security, robustness, computational efficiency, and image reconstruction quality. This study provides a practical and effective solution for the secure storage and transmission of sensitive face images, and provides valuable support for image privacy protection in intelligent systems. Full article
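As a loose illustration of the chaotic-system component only (not the paper's full MTCNN + RNA + bit-diffusion pipeline), a logistic-map keystream XORed with the face-region bytes shows the basic confusion step; the seed, control parameter, and burn-in below are arbitrary choices.

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):      # discard the transient so the orbit is chaotic
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantise state (0,1) to a byte
    return bytes(out)

def xor_diffuse(data, key):
    # XOR is an involution: applying it twice with the same key decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

plain = b"face-roi"   # stand-in for extracted face-region pixel bytes
ks = logistic_keystream(x0=0.3579, r=3.99, n=len(plain))
cipher = xor_diffuse(plain, ks)
restored = xor_diffuse(cipher, ks)
```

In the scheme reviewed above, the key state is additionally updated by feedback from the data, so identical plaintexts do not reuse a keystream.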

17 pages, 1103 KB  
Article
Optimizing Carbon Footprint and Strength in High-Performance Concrete Through Data-Driven Modeling
by Saloua Helali, Shadiah Albalawi, Maer Alanazi, Bashayr Alanazi and Nizar Bel Hadj Ali
Sustainability 2025, 17(17), 7808; https://doi.org/10.3390/su17177808 - 29 Aug 2025
Viewed by 1024
Abstract
High-performance concrete (HPC) is an essential construction material used for modern buildings and infrastructure assets, recognized for its exceptional strength, durability, and performance under harsh conditions. Nonetheless, HPC production frequently correlates with elevated carbon emissions, principally attributable to the high quantity of cement utilized, which significantly influences its carbon footprint. In this study, data-driven modeling and optimization strategies are employed to minimize the carbon footprint of high-performance concretes while preserving their performance properties. Starting from an experimental dataset, artificial neural networks (ANNs), ensemble techniques (ETs), and Gaussian process regression (GPR) are employed to build predictive models for the compressive strength of HPC mixes. The models' input variables are the various components of HPC: cement, water, superplasticizer, fly ash, blast furnace slag, and coarse and fine aggregates. Models are trained using a dataset of 356 records. Results show that the GPR-based model exhibits excellent accuracy, with a determination coefficient of 0.90. The prediction model is then used in a two-objective optimization task formulated to identify mix configurations that combine high mechanical performance with reduced carbon emissions. The multi-objective optimization task is undertaken using genetic algorithms (GAs). Promising results are obtained when the machine learning prediction model is coupled with GA optimization to identify strong yet sustainable mix configurations. Full article
(This article belongs to the Special Issue Advancements in Concrete Materials for Sustainable Construction)
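The coupling of a prediction model with GA search can be sketched structurally as follows. The strength and carbon models here are made-up linear stand-ins (not the paper's trained GPR), and the two objectives are collapsed into a weighted sum rather than a true Pareto front, so this is only an outline of the optimization loop.

```python
import random

random.seed(42)

# Hypothetical stand-in surrogate models (NOT the paper's trained GPR):
# both strength and embodied carbon rise with the cement fraction.
def strength(cement):
    return 40 + 60 * cement            # MPa-like proxy

def carbon(cement):
    return 100 + 400 * cement          # kgCO2-like proxy

def fitness(cement, weight=0.1):
    # Weighted-sum scalarisation of the two objectives; the paper instead
    # uses a true multi-objective GA that produces a Pareto front.
    return strength(cement) - weight * carbon(cement)

def evolve(pop_size=20, generations=50, mut=0.05):
    pop = [random.random() for _ in range(pop_size)]   # cement fractions
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, mut) # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With these toy models the scalarised fitness is monotone in the cement fraction, so the GA should drive the population towards the upper bound of the search range.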

22 pages, 1989 KB  
Review
Machine Learning Models for Predicting Gynecological Cancers: Advances, Challenges, and Future Directions
by Pankaj Garg, Madhu Krishna, Prakash Kulkarni, David Horne, Ravi Salgia and Sharad S. Singhal
Cancers 2025, 17(17), 2799; https://doi.org/10.3390/cancers17172799 - 27 Aug 2025
Cited by 1 | Viewed by 1874
Abstract
Gynecological cancers, especially breast, cervical, and ovarian cancers, are significant health issues affecting women worldwide. They are mostly detected at later stages because of non-specific signs and symptoms as well as the unavailability of reliable screening methods. Improved methods for early oncologic prediction are therefore needed to raise survival rates, guide individualized treatment, and relieve healthcare pressures. Outcome forecasting and clinical detection are rapidly changing with the use of machine learning (ML), one of the promising technologies used to analyze complex biomedical data. Artificial intelligence (AI)-based ML models are capable of detecting low-level trends and making accurate predictions of disease risk and outcomes, because they can combine different datasets (clinical records, genomics, proteomics, medical imaging) and learn to identify subtle patterns. Standard algorithms, including support vector machines and random forests, and deep learning (DL) models, such as convolutional neural networks, have demonstrated high potential in identifying cancer type, monitoring disease progression, and designing treatment patterns. This manuscript reviews recent developments in the use of ML models to advance prediction tasks in gynecologic oncology. It covers critical domains such as screening, risk classification, and survival modeling, and comments on difficulties such as data inconsistency, limited model interpretability, and issues of clinical translation. New developments, such as explainable AI, federated learning (FL), and multi-omics fusion, are discussed as ways to strengthen these models and make them reliable enough for practical use. Conclusively, this article emphasizes the transformative role of ML in precision oncology in delivering improved, patient-centered outcomes to women affected by gynecological cancers. Full article
(This article belongs to the Special Issue Advancements in Preclinical Models for Solid Cancers)

33 pages, 8494 KB  
Article
Enhanced Multi-Class Brain Tumor Classification in MRI Using Pre-Trained CNNs and Transformer Architectures
by Marco Antonio Gómez-Guzmán, Laura Jiménez-Beristain, Enrique Efren García-Guerrero, Oscar Adrian Aguirre-Castro, José Jaime Esqueda-Elizondo, Edgar Rene Ramos-Acosta, Gilberto Manuel Galindo-Aldana, Cynthia Torres-Gonzalez and Everardo Inzunza-Gonzalez
Technologies 2025, 13(9), 379; https://doi.org/10.3390/technologies13090379 - 22 Aug 2025
Cited by 1 | Viewed by 3315
Abstract
Early and accurate identification of brain tumors is essential for determining effective treatment strategies and improving patient outcomes. Artificial intelligence (AI) and deep learning (DL) techniques have shown promise in automating diagnostic tasks based on magnetic resonance imaging (MRI). This study evaluates the performance of four pre-trained deep convolutional neural network (CNN) architectures for the automatic multi-class classification of brain tumors into four categories: Glioma, Meningioma, Pituitary, and No Tumor. The proposed approach utilizes the publicly accessible Brain Tumor MRI Msoud dataset, consisting of 7023 images, with 5712 provided for training and 1311 for testing. To assess the impact of data availability, subsets containing 25%, 50%, 75%, and 100% of the training data were used. A stratified five-fold cross-validation technique was applied. The CNN architectures evaluated include DeiT3_base_patch16_224, Xception41, Inception_v4, and Swin_Tiny_Patch4_Window7_224, all fine-tuned using transfer learning. The training pipeline incorporated advanced preprocessing and image data augmentation techniques to enhance robustness and mitigate overfitting. Among the models tested, Swin_Tiny_Patch4_Window7_224 achieved the highest classification Accuracy of 99.24% on the test set using 75% of the training data. This model demonstrated superior generalization across all tumor classes and effectively addressed class imbalance issues. Furthermore, we deployed and benchmarked the best-performing DL model on embedded AI platforms (Jetson AGX Xavier and Orin Nano), demonstrating their capability for real-time inference and highlighting their feasibility for edge-based clinical deployment. The results highlight the strong potential of pre-trained deep CNN and transformer-based architectures in medical image analysis. 
The proposed approach provides a scalable and energy-efficient solution for automated brain tumor diagnosis, facilitating the integration of AI into clinical workflows. Full article
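The stratified five-fold protocol mentioned above can be sketched in a few lines: indices are grouped by class and dealt round-robin so each fold preserves the class proportions. This is a simplified stand-in for library implementations such as scikit-learn's StratifiedKFold; the label names and counts below are illustrative, not the Msoud dataset's.

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Yield k folds of sample indices, preserving per-class proportions."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):   # deal each class round-robin
            folds[pos % k].append(idx)
    return folds

# Toy class distribution (illustrative counts only).
labels = ["glioma"] * 10 + ["meningioma"] * 5 + ["no_tumor"] * 5
folds = stratified_kfold(labels, k=5)
```

Each fold then serves once as the validation split while the remaining four train the model, and the reported metric is averaged over the five runs.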

20 pages, 16838 KB  
Article
Multi-Criteria Visual Quality Control Algorithm for Selected Technological Processes Designed for Budget IIoT Edge Devices
by Piotr Lech
Electronics 2025, 14(16), 3204; https://doi.org/10.3390/electronics14163204 - 12 Aug 2025
Viewed by 788
Abstract
This paper presents an innovative multi-criteria visual quality control algorithm designed for deployment on cost-effective Edge devices within the Industrial Internet of Things environment. Traditional industrial vision systems are typically associated with high acquisition, implementation, and maintenance costs. The proposed solution addresses the need to reduce these costs while maintaining high defect detection efficiency. The developed algorithm largely eliminates the need for time- and energy-intensive neural network training or retraining, though these capabilities remain optional. Consequently, the reliance on human labor, particularly for tasks such as manual data labeling, has been significantly reduced. The algorithm is optimized to run on low-power computing units typical of budget industrial computers, making it a viable alternative to server- or cloud-based solutions. The system supports flexible integration with existing industrial automation infrastructure, but it can also be deployed at manual workstations. The algorithm’s primary application is to assess the spread quality of thick liquid mold filling; however, its effectiveness has also been demonstrated for 3D printing processes. The proposed hybrid algorithm combines three approaches: (1) the classical SSIM image quality metric, (2) depth image measurement using Intel MiDaS technology combined with analysis of depth map visualizations and histogram analysis, and (3) feature extraction using selected artificial intelligence models based on the OpenCLIP framework and publicly available pretrained models. This combination allows the individual methods to compensate for each other’s limitations, resulting in improved defect detection performance. The use of hybrid metrics in defective sample selection has been shown to yield superior algorithmic performance compared to the application of individual methods independently. 
Experimental tests confirmed the high effectiveness and practical applicability of the proposed solution while preserving low hardware requirements. Full article
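As a reference point for the SSIM component of the hybrid metric, here is a single-window (global) SSIM sketch using the standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2, where L is the data range. Production SSIM averages the index over local sliding windows; the flat-list "images" below are toy data.

```python
def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM over two equal-length grayscale images (flat lists)."""
    n = len(x)
    c1 = (0.01 * data_range) ** 2          # stabilises the luminance term
    c2 = (0.03 * data_range) ** 2          # stabilises contrast/structure
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

reference = [10.0, 50.0, 90.0, 130.0]
identical = ssim_global(reference, reference)
# A uniform brightness shift lowers the luminance term, so SSIM drops below 1.
shifted = ssim_global(reference, [v + 40.0 for v in reference])
```

In the hybrid scheme above, a low SSIM against a known-good reference frame is one of the three signals that can flag a defective sample.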

21 pages, 875 KB  
Article
Comprehensive Analysis of Neural Network Inference on Embedded Systems: Response Time, Calibration, and Model Optimisation
by Patrick Huber, Ulrich Göhner, Mario Trapp, Jonathan Zender and Rabea Lichtenberg
Sensors 2025, 25(15), 4769; https://doi.org/10.3390/s25154769 - 2 Aug 2025
Cited by 1 | Viewed by 1066
Abstract
The response time of Artificial Neural Network (ANN) inference is critical in embedded systems processing sensor data close to the source. This is particularly important in applications such as predictive maintenance, which rely on timely state change predictions. This study enables estimation of model response times based on the underlying platform, highlighting the importance of benchmarking generic ANN applications on edge devices. We analyze the impact of network parameters, activation functions, and single- versus multi-threading on response times. Additionally, potential hardware-related influences, such as clock rate variances, are discussed. The results underline the complexity of task partitioning and scheduling strategies, stressing the need for precise parameter coordination to optimise performance across platforms. This study shows that cutting-edge frameworks do not necessarily perform the required operations automatically for all configurations, which may negatively impact performance. This paper further investigates the influence of network structure on model calibration, quantified using the Expected Calibration Error (ECE), and the limits of potential optimisation opportunities. It also examines the effects of model conversion to Tensorflow Lite (TFLite), highlighting the necessity of considering both performance and calibration when deploying models on embedded systems. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
25 pages, 2727 KB  
Review
AI-Powered Next-Generation Technology for Semiconductor Optical Metrology: A Review
by Weiwang Xu, Houdao Zhang, Lingjing Ji and Zhongyu Li
Micromachines 2025, 16(8), 838; https://doi.org/10.3390/mi16080838 - 22 Jul 2025
Cited by 2 | Viewed by 3402
Abstract
As semiconductor manufacturing advances into the angstrom-scale era characterized by three-dimensional integration, conventional metrology technologies face fundamental limitations regarding accuracy, speed, and non-destructiveness. Although optical spectroscopy has emerged as a prominent research focus, its application in complex manufacturing scenarios still confronts significant technical barriers. This review establishes three concrete objectives: to categorize AI and optical spectroscopy integration paradigms spanning forward surrogate modeling, inverse prediction, physics-informed neural networks (PINNs), and multi-level network architectures; to benchmark their efficacy against critical industrial metrology challenges, including tool-to-tool (T2T) matching and high-aspect-ratio (HAR) structure characterization; and to identify unresolved bottlenecks to guide next-generation intelligent semiconductor metrology. For each of these technical pathways, the work methodically assesses implementation efficacy and limitations. Through application case studies involving J-profiler software 5.0 and associated algorithms, this review validates the efficacy of AI technologies in addressing critical industrial challenges such as T2T matching. The research demonstrates that the fusion of AI and optical spectroscopy delivers technological breakthroughs for semiconductor metrology; however, persistent challenges remain concerning data veracity, insufficient datasets, and cross-scale compatibility. Future research should prioritize enhancing model generalization capability, optimizing data acquisition and utilization strategies, and balancing algorithm real-time performance with accuracy, thereby catalyzing the transformation of semiconductor manufacturing towards an intelligence-driven advanced metrology paradigm. Full article
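The forward-surrogate and inverse-prediction paradigms named in the abstract can be illustrated on a toy problem. The "physics" below is a made-up single-parameter interference formula, not an actual metrology model; the forward surrogate is a precomputed spectral library, and inversion is nearest-neighbour matching against it:

```python
import numpy as np

wavelengths = np.linspace(400.0, 800.0, 50)  # nm

def simulate(thickness):
    """Toy reflectance spectrum for a film of given thickness (nm).
    Hypothetical interference model used only for illustration."""
    return 0.5 + 0.4 * np.cos(4 * np.pi * thickness / wavelengths)

# Forward surrogate: a library of simulated spectra over a thickness grid,
# standing in for an expensive rigorous electromagnetic solver.
t_library = np.linspace(100.0, 200.0, 1001)
library = np.stack([simulate(t) for t in t_library])

def invert(spectrum):
    """Inverse prediction: return the library thickness whose simulated
    spectrum best matches the measured one (least-squares residual)."""
    residuals = np.sum((library - spectrum) ** 2, axis=1)
    return t_library[int(np.argmin(residuals))]

measured = simulate(157.0)  # noise-free "measurement" for the sketch
print(invert(measured))
```

A neural-network forward surrogate replaces the library with a learned mapping from structure parameters to spectra, and inverse modeling replaces the grid search with a network trained in the opposite direction; the matching logic, however, is the same.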
(This article belongs to the Special Issue Recent Advances in Lithography)

28 pages, 6030 KB  
Article
Balancing Solar Energy, Thermal Comfort, and Emissions: A Data-Driven Urban Morphology Optimization Approach
by Chenhang Bian, Panpan Hu, Chun Yin Li, Chi Chung Lee and Xi Chen
Energies 2025, 18(13), 3421; https://doi.org/10.3390/en18133421 - 29 Jun 2025
Cited by 3 | Viewed by 1182
Abstract
Urban morphology critically shapes environmental performance, yet few studies integrate multiple sustainability targets within a unified modeling framework for its design optimization. This study proposes a data-driven, multi-scale approach that combines parametric simulation, artificial neural network-based multi-task learning (MTL), SHAP interpretability, and NSGA-II optimization to assess and optimize urban form across 18 districts in Hong Kong. Four key sustainability targets—photovoltaic generation (PVG), accumulated urban heat island intensity (AUHII), indoor overheating degree (IOD), and carbon emission intensity (CEI)—were jointly predicted using an artificial neural network-based MTL model. The prediction results outperform single-task models, achieving R² values of 0.710 (PVG), 0.559 (AUHII), 0.819 (IOD), and 0.405 (CEI), respectively. SHAP analysis identifies building height, density, and orientation as the most important design factors, revealing trade-offs between solar access, thermal stress, and emissions. Urban form design strategies are informed by the multi-objective optimization, with the optimal solution featuring a building height of 72.11 m, building centroid distance of 109.92 m, and east-facing orientation (183°). The optimal configuration yields the highest PVG (55.26 kWh/m²), lowest CEI (359.76 kg/m²/y), and relatively acceptable AUHII (294.13 °C·y) and IOD (92.74 °C·h). This study offers a balanced path toward carbon reduction, thermal resilience, and renewable energy utilization in compact cities for either new town planning or existing district renovation. Full article
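The NSGA-II optimization mentioned in the abstract rests on non-dominated sorting. The sketch below extracts the first Pareto front from a set of candidate designs; the design points are hypothetical, loosely echoing the abstract's objectives, with PVG maximization recast as minimizing its negative:

```python
def pareto_front(points):
    """Indices of non-dominated points under minimisation of all
    objectives -- the first step of NSGA-II's non-dominated sorting."""
    front = []
    for i, p in enumerate(points):
        dominated = False
        for j, q in enumerate(points):
            # q dominates p if it is no worse everywhere and
            # strictly better in at least one objective.
            if j != i \
                    and all(qk <= pk for qk, pk in zip(q, p)) \
                    and any(qk < pk for qk, pk in zip(q, p)):
                dominated = True
                break
        if not dominated:
            front.append(i)
    return front

# Hypothetical candidate designs as (-PVG [kWh/m^2], CEI [kg/m^2/y]):
designs = [(-55.26, 359.76),  # high PVG, low CEI
           (-50.00, 400.00),  # dominated by the first design
           (-54.00, 358.00),  # trades a little PVG for lower CEI
           (-40.00, 500.00)]  # dominated by the first design
print(pareto_front(designs))  # -> [0, 2]
```

NSGA-II then ranks the remaining points into successive fronts and applies crowding-distance selection within each front; the full algorithm in the study additionally spans the AUHII and IOD objectives.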
(This article belongs to the Section B: Energy and Environment)