Search Results (1,479)

Search Parameters:
Keywords = black-box

20 pages, 2779 KiB  
Article
Complex Network Analytics for Structural–Functional Decoding of Neural Networks
by Jiarui Zhang, Dongxiao Zhang, Hu Lou, Yueer Li, Taijiao Du and Yinjun Gao
Appl. Sci. 2025, 15(15), 8576; https://doi.org/10.3390/app15158576 (registering DOI) - 1 Aug 2025
Viewed by 134
Abstract
Neural networks (NNs) achieve breakthroughs in computer vision and natural language processing, yet their “black box” nature persists. Traditional methods prioritise parameter optimisation and loss design, overlooking NNs’ fundamental structure as topologically organised nonlinear computational systems. This work proposes a complex network theory framework decoding structure–function coupling by mapping convolutional layers, fully connected layers, and Dropout modules into graph representations. To overcome limitations of heuristic compression techniques, we develop a topology-sensitive adaptive pruning algorithm that evaluates critical paths via node strength centrality, preserving structural–functional integrity. On CIFAR-10, our method achieves 55.5% parameter reduction with only 7.8% accuracy degradation—significantly outperforming traditional approaches. Crucially, retrained pruned networks exceed original model accuracy by up to 2.63%, demonstrating that topology optimisation unlocks latent model potential. This research establishes a paradigm shift from empirical to topologically rationalised neural architecture design, providing theoretical foundations for deep learning optimisation dynamics. Full article
(This article belongs to the Special Issue Artificial Intelligence in Complex Networks (2nd Edition))
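The node-strength pruning idea described in the abstract can be illustrated with a minimal toy sketch (hypothetical layer sizes; not the authors' implementation): treat a layer's weight matrix as a bipartite graph, score each unit by its node strength (sum of absolute edge weights), and drop the weakest units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: 16 inputs -> 8 outputs, weights viewed as a bipartite graph.
W = rng.normal(size=(8, 16))

# Node strength centrality of each input unit: sum of absolute incident edge weights.
strength = np.abs(W).sum(axis=0)

# Structured pruning: keep only the strongest half of the input units.
keep = strength >= np.median(strength)
W_pruned = W[:, keep]

print(W.shape, "->", W_pruned.shape)  # (8, 16) -> (8, 8)
```

In a real network the retained units' indices would also be used to slice the preceding layer's outputs, and the pruned model would then be retrained, as the paper does.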

22 pages, 2120 KiB  
Article
Machine Learning Algorithms and Explainable Artificial Intelligence for Property Valuation
by Gabriella Maselli and Antonio Nesticò
Real Estate 2025, 2(3), 12; https://doi.org/10.3390/realestate2030012 - 1 Aug 2025
Viewed by 146
Abstract
The accurate estimation of urban property values is a key challenge for appraisers, market participants, financial institutions, and urban planners. In recent years, machine learning (ML) techniques have emerged as promising tools for price forecasting due to their ability to model complex relationships among variables. However, their application raises two main critical issues: (i) the risk of overfitting, especially with small datasets or with noisy data; (ii) the interpretive issues associated with the “black box” nature of many models. Within this framework, this paper proposes a methodological approach that addresses both these issues, comparing the predictive performance of three ML algorithms—k-Nearest Neighbors (kNN), Random Forest (RF), and the Artificial Neural Network (ANN)—applied to the housing market in the city of Salerno, Italy. For each model, overfitting is preliminarily assessed to ensure predictive robustness. Subsequently, the results are interpreted using explainability techniques, such as SHapley Additive exPlanations (SHAPs) and Permutation Feature Importance (PFI). This analysis reveals that the Random Forest offers the best balance between predictive accuracy and transparency, with features such as area and proximity to the train station identified as the main drivers of property prices. kNN and the ANN are viable alternatives that are particularly robust in terms of generalization. The results demonstrate how the defined methodological framework successfully balances predictive effectiveness and interpretability, supporting the informed and transparent use of ML in real estate valuation. Full article
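Permutation Feature Importance, one of the two explainability techniques named above, is simple to sketch with synthetic data and a stand-in linear model (the hypothetical feature names and coefficients below are illustrative, not the paper's Salerno dataset): shuffle one feature at a time and measure the drop in the model's score.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "housing" data: area dominates price, floor matters less, noise not at all.
n = 500
X = rng.normal(size=(n, 3))                       # columns: area, floor, noise
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# Stand-in model: ordinary least squares (any trained regressor works the same way).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def r2(Xm):
    resid = y - Xm @ beta
    return 1 - resid.var() / y.var()

baseline = r2(X)

# Permutation Feature Importance: score drop when one column is shuffled.
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(baseline - r2(Xp))
```

The resulting ranking (area first, floor second, noise near zero) mirrors how the paper identifies area and proximity to the train station as the main price drivers.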

13 pages, 3685 KiB  
Article
A Controlled Variation Approach for Example-Based Explainable AI in Colorectal Polyp Classification
by Miguel Filipe Fontes, Alexandre Henrique Neto, João Dallyson Almeida and António Trigueiros Cunha
Appl. Sci. 2025, 15(15), 8467; https://doi.org/10.3390/app15158467 (registering DOI) - 30 Jul 2025
Viewed by 174
Abstract
Medical imaging is vital for diagnosing and treating colorectal cancer (CRC), a leading cause of mortality. Classifying colorectal polyps and CRC precursors remains challenging due to operator variability and expertise dependence. Deep learning (DL) models show promise in polyp classification but face adoption barriers due to their ‘black box’ nature, limiting interpretability. This study presents an example-based explainable artificial intelligence (XAI) approach using Pix2Pix to generate synthetic polyp images with controlled size variations and LIME to explain classifier predictions visually. EfficientNet and Vision Transformer (ViT) were trained on datasets of real and synthetic images, achieving strong baseline accuracies of 94% and 96%, respectively. Image quality was assessed using PSNR (18.04), SSIM (0.64), and FID (123.32), while classifier robustness was evaluated across polyp sizes. Results show that Pix2Pix effectively controls image attributes like polyp size despite limitations in visual fidelity. LIME integration revealed classifier vulnerabilities, underscoring the value of complementary XAI techniques. This enhances DL model interpretability and deepens understanding of model behaviour. The findings contribute to developing explainable AI tools for polyp classification and CRC diagnosis. Future work will improve synthetic image quality and refine XAI methodologies for broader clinical use. Full article

25 pages, 26404 KiB  
Review
Review of Deep Learning Applications for Detecting Special Components in Agricultural Products
by Yifeng Zhao and Qingqing Xie
Computers 2025, 14(8), 309; https://doi.org/10.3390/computers14080309 - 30 Jul 2025
Viewed by 312
Abstract
The rapid evolution of deep learning (DL) has fundamentally transformed the paradigm for detecting special components in agricultural products, addressing critical challenges in food safety, quality control, and precision agriculture. This comprehensive review systematically analyzes many seminal studies to evaluate cutting-edge DL applications across three core domains: contaminant surveillance (heavy metals, pesticides, and mycotoxins), nutritional component quantification (soluble solids, polyphenols, and pigments), and structural/biomarker assessment (disease symptoms, gel properties, and physiological traits). Emerging hybrid architectures—including attention-enhanced convolutional neural networks (CNNs) for lesion localization, wavelet-coupled autoencoders for spectral denoising, and multi-task learning frameworks for joint parameter prediction—demonstrate unprecedented accuracy in decoding complex agricultural matrices. Particularly noteworthy are sensor fusion strategies integrating hyperspectral imaging (HSI), Raman spectroscopy, and microwave detection with deep feature extraction, achieving industrial-grade performance (RPD > 3.0) while reducing detection time by 30–100× versus conventional methods. Nevertheless, persistent barriers in the “black-box” nature of complex models, severe lack of standardized data and protocols, computational inefficiency, and poor field robustness hinder the reliable deployment and adoption of DL for detecting special components in agricultural products. This review provides an essential foundation and roadmap for future research to bridge the gap between laboratory DL models and their effective, trusted application in real-world agricultural settings. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

22 pages, 4093 KiB  
Article
A Deep Learning-Driven Black-Box Benchmark Generation Method via Exploratory Landscape Analysis
by Haoming Liang, Fuqing Zhao, Tianpeng Xu and Jianlin Zhang
Appl. Sci. 2025, 15(15), 8454; https://doi.org/10.3390/app15158454 - 30 Jul 2025
Viewed by 214
Abstract
In the context of algorithm selection, the careful design of benchmark functions and problem instances plays a pivotal role in evaluating the performance of optimization methods. Traditional benchmark functions have been criticized for their limited resemblance to real-world problems and insufficient coverage of the problem space. Exploratory landscape analysis (ELA) offers a systematic framework for characterizing objective functions, based on quantitative landscape features. This study proposes a method for generating benchmark functions tailored to single-objective continuous optimization problems with boundary constraints using predefined ELA feature vectors to guide their construction. The process begins with the creation of random decision variables and corresponding objective values, which are iteratively adjusted using the covariance matrix adaptation evolution strategy (CMA-ES) to ensure alignment with a target ELA feature vector within a specified tolerance. Once the feature criteria are met, the resulting topological map point is used to train a neural network to produce a surrogate function that retains the desired landscape characteristics. To validate the proposed approach, functions from the well-known Black Box Optimization Benchmark (BBOB) suite are replicated, and novel functions are generated with unique ELA feature combinations not found in the original suite. The experiment results demonstrate that the synthesized landscapes closely resemble their BBOB counterparts and preserve the consistency of the algorithm rankings, thereby supporting the effectiveness of the proposed approach. Full article
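The feature-targeted generation loop described above can be caricatured with a much simpler stand-in (a (1+1) evolution strategy instead of CMA-ES, and plain summary statistics instead of real ELA features — both are simplifications for illustration): perturb the objective values and accept candidates whose feature vector moves closer to the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def landscape_features(y):
    # Stand-in for ELA features: mean, spread, and skewness of objective values.
    m, s = y.mean(), y.std()
    return np.array([m, s, ((y - m) ** 3).mean() / s ** 3])

# Hypothetical target feature vector and an initial random "landscape".
target = np.array([0.0, 1.0, 0.5])
y = rng.normal(size=200)
init_err = np.linalg.norm(landscape_features(y) - target)

# (1+1)-ES standing in for CMA-ES: keep a perturbation whenever the
# candidate's features move closer to the target vector.
best_err = init_err
for _ in range(5000):
    cand = y + 0.05 * rng.normal(size=y.size)
    err = np.linalg.norm(landscape_features(cand) - target)
    if err < best_err:
        y, best_err = cand, err
```

In the paper, the matched point set would then train a neural network surrogate so the synthesized function retains the target landscape characteristics.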

20 pages, 951 KiB  
Article
Causally-Informed Instance-Wise Feature Selection for Explaining Visual Classifiers
by Li Tan
Entropy 2025, 27(8), 814; https://doi.org/10.3390/e27080814 - 29 Jul 2025
Viewed by 208
Abstract
We propose a novel interpretability framework that integrates instance-wise feature selection with causal reasoning to explain decisions made by black-box image classifiers. Instead of relying on feature importance or mutual information, our method identifies input regions that exert the greatest causal influence on model predictions. Causal influence is formalized using a structural causal model and quantified via a conditional mutual information term. To optimize this objective efficiently, we employ continuous subset sampling and the matrix-based Rényi’s α-order entropy functional. The resulting explanations are compact, semantically meaningful, and causally grounded. Experiments across multiple vision datasets demonstrate that our method outperforms existing baselines in terms of predictive fidelity. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

37 pages, 1037 KiB  
Review
Machine Learning for Flood Resiliency—Current Status and Unexplored Directions
by Venkatesh Uddameri and E. Annette Hernandez
Environments 2025, 12(8), 259; https://doi.org/10.3390/environments12080259 - 28 Jul 2025
Viewed by 675
Abstract
A systems-oriented review of machine learning (ML) over the entire flood management spectrum, encompassing fluvial flood control, pluvial flood management, and resiliency-risk characterization was undertaken. Deep learners like long short-term memory (LSTM) networks perform well in predicting reservoir inflows and outflows. Convolution neural networks (CNNs) and other object identification algorithms are being explored in assessing levee and flood wall failures. The use of ML methods in pump station operations is limited due to lack of public-domain datasets. Reinforcement learning (RL) has shown promise in controlling low-impact development (LID) systems for pluvial flood management. Resiliency is defined in terms of the vulnerability of a community to floods. Multi-criteria decision making (MCDM) and unsupervised ML methods are used to capture vulnerability. Supervised learning is used to model flooding hazards. Conventional approaches perform better than deep learners and ensemble methods for modeling flood hazards due to paucity of data and large inter-model predictive variability. Advances in satellite-based, drone-facilitated data collection and Internet of Things (IoT)-based low-cost sensors offer new research avenues to explore. Transfer learning at ungauged basins holds promise but is largely unexplored. Explainable artificial intelligence (XAI) is seeing increased use and helps the transition of ML models from black-box forecasters to knowledge-enhancing predictors. Full article
(This article belongs to the Special Issue Hydrological Modeling and Sustainable Water Resources Management)

13 pages, 559 KiB  
Article
Dynamic Modeling and Online Updating of Full-Power Converter Wind Turbines Based on Physics-Informed Neural Networks and Bayesian Neural Networks
by Yunyang Xu, Bo Zhou, Xinwei Sun, Yuting Tian and Xiaofeng Jiang
Electronics 2025, 14(15), 2985; https://doi.org/10.3390/electronics14152985 - 26 Jul 2025
Viewed by 179
Abstract
This paper presents a dynamic model for full-power converter permanent magnet synchronous wind turbines based on Physics-Informed Neural Networks (PINNs). The model integrates the physical dynamics of the wind turbine directly into the loss function, enabling high-accuracy equivalent modeling with limited data and overcoming the typical “black-box” constraints and large data requirements of traditional data-driven approaches. To enhance the model’s real-time adaptability, we introduce an online update mechanism leveraging Bayesian Neural Networks (BNNs) combined with a clustering-guided strategy. This mechanism estimates uncertainty in the neural network weights in real-time, accurately identifies error sources, and performs local fine-tuning on clustered data. This improves the model’s ability to track real-time errors and addresses the challenge of parameter-specific adjustments. Finally, the data-driven model is integrated into the CloudPSS platform, and its multi-scenario modeling accuracy is validated across various typical cases, demonstrating the robustness of the proposed approach. Full article

21 pages, 2789 KiB  
Article
BIM-Based Adversarial Attacks Against Speech Deepfake Detectors
by Wendy Edda Wang, Davide Salvi, Viola Negroni, Daniele Ugo Leonzio, Paolo Bestagini and Stefano Tubaro
Electronics 2025, 14(15), 2967; https://doi.org/10.3390/electronics14152967 - 24 Jul 2025
Viewed by 239
Abstract
Automatic Speaker Verification (ASV) systems are increasingly employed to secure access to services and facilities. However, recent advances in speech deepfake generation pose serious threats to their reliability. Modern speech synthesis models can convincingly imitate a target speaker’s voice and generate realistic synthetic audio, potentially enabling unauthorized access through ASV systems. To counter these threats, forensic detectors have been developed to distinguish between real and fake speech. Although these models achieve strong performance, their deep learning nature makes them susceptible to adversarial attacks, i.e., carefully crafted, imperceptible perturbations in the audio signal that make the model unable to classify correctly. In this paper, we explore adversarial attacks targeting speech deepfake detectors. Specifically, we analyze the effectiveness of Basic Iterative Method (BIM) attacks applied in both time and frequency domains under white- and black-box conditions. Additionally, we propose an ensemble-based attack strategy designed to simultaneously target multiple detection models. This approach generates adversarial examples with balanced effectiveness across the ensemble, enhancing transferability to unseen models. Our experimental results show that, although crafting universally transferable attacks remains challenging, it is possible to fool state-of-the-art detectors using minimal, imperceptible perturbations, highlighting the need for more robust defenses in speech deepfake detection. Full article
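The Basic Iterative Method at the core of the attack is easy to sketch in the white-box setting (the "detector" below is a toy logistic model on a feature vector, not a real speech deepfake detector): take repeated signed-gradient steps against the model's score, clipping each step so the perturbed input stays within an eps-ball of the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy white-box "detector": logistic score on a 64-dim feature vector,
# standing in for a trained speech deepfake detector.
w = rng.normal(size=64)
score = lambda x: 1 / (1 + np.exp(-(x @ w)))     # > 0.5 means "classified fake"

x0 = w / (w @ w)                                 # a sample the detector flags as fake

# Basic Iterative Method: repeated signed-gradient steps, clipped to an eps-ball.
eps, alpha, steps = 0.1, 0.02, 10
x_adv = x0.copy()
for _ in range(steps):
    z = x_adv @ w
    grad = (np.exp(-z) / (1 + np.exp(-z)) ** 2) * w   # d(score)/dx
    x_adv = x_adv - alpha * np.sign(grad)             # push the score toward "real"
    x_adv = np.clip(x_adv, x0 - eps, x0 + eps)        # stay imperceptibly close
```

The paper's ensemble attack extends this idea by averaging the objective over several detectors so the same perturbation transfers across models.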

23 pages, 7106 KiB  
Article
A Simulation-Based Comparative Study of Advanced Control Strategies for Residential Air Conditioning Systems
by Jonadri Bundo, Donald Selmanaj, Genci Sharko, Stefan Svensson and Orion Zavalani
Eng 2025, 6(8), 170; https://doi.org/10.3390/eng6080170 - 24 Jul 2025
Viewed by 287
Abstract
This study presents a simulation-based evaluation of advanced control strategies for residential air conditioning systems, including On–Off, PI, and Model Predictive Control (MPC) approaches. A black-box system model was identified using an ARX(2,2,0) structure, achieving over 90% prediction accuracy (FIT) for indoor temperature and power consumption. Six controllers were implemented and benchmarked in a high-fidelity Simscape environment under a realistic 48-h summer temperature profile. The proposed MPC scheme, particularly when incorporating outdoor temperature gradient logic, reduced energy consumption by up to 30% compared to conventional PI control while maintaining indoor thermal comfort within the acceptable range. This virtual design workflow shortens the development cycle by deferring climatic chamber testing to the final validation phase. Full article
(This article belongs to the Section Chemical, Civil and Environmental Engineering)
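Identifying a black-box ARX(2,2,0) model like the one above reduces to linear least squares on lagged inputs and outputs. The sketch below uses a simulated system with made-up coefficients (not the paper's identified air-conditioning model) and reports the FIT metric mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy system with assumed ARX(2,2,0) dynamics:
# y[t] = 1.5*y[t-1] - 0.6*y[t-2] + 0.3*u[t] + 0.1*u[t-1] + noise
a1, a2, b0, b1 = 1.5, -0.6, 0.3, 0.1
N = 1000
u = rng.normal(size=N)                  # input signal (e.g. compressor power)
y = np.zeros(N)
for t in range(2, N):
    y[t] = a1 * y[t-1] + a2 * y[t-2] + b0 * u[t] + b1 * u[t-1] + 0.01 * rng.normal()

# ARX identification: least squares on the regressor matrix of lagged y and u.
Phi = np.column_stack([y[1:N-1], y[0:N-2], u[2:N], u[1:N-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)

# One-step-ahead FIT (%) as used to report prediction accuracy.
yhat = Phi @ theta
fit = 100 * (1 - np.linalg.norm(y[2:] - yhat) / np.linalg.norm(y[2:] - y[2:].mean()))
```

With clean data the recovered theta matches the true coefficients closely; the identified model can then serve as the internal prediction model for the MPC controller.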

21 pages, 3048 KiB  
Article
Transfersome-Based Delivery of Optimized Black Tea Extract for the Prevention of UVB-Induced Skin Damage
by Nadia Benedetto, Maria Ponticelli, Ludovica Lela, Emanuele Rosa, Flavia Carriero, Immacolata Faraone, Carla Caddeo, Luigi Milella and Antonio Vassallo
Pharmaceutics 2025, 17(8), 952; https://doi.org/10.3390/pharmaceutics17080952 - 23 Jul 2025
Viewed by 296
Abstract
Background/Objectives: Ultraviolet B (UVB) radiation contributes significantly to skin aging and skin disorders by promoting oxidative stress, inflammation, and collagen degradation. Natural antioxidants such as theaflavins and thearubigins from Camellia sinensis L. (black tea) have shown photoprotective effects. This study aimed to optimize the extraction of theaflavins and thearubigins from black tea leaves and evaluate the efficacy of the extract against UVB-induced damage using a transfersome-based topical formulation. Methods: Extraction of theaflavins and thearubigins was optimized via response surface methodology (Box-Behnken Design), yielding an extract rich in active polyphenols. This extract was incorporated into transfersomes that were characterized for size, polydispersity, zeta potential, storage stability, and entrapment efficiency. Human dermal fibroblasts (NHDF) were used to assess cytotoxicity, protection against UVB-induced viability loss, collagen degradation, and expression of inflammatory (IL6, COX2, iNOS) and matrix-degrading (MMP1) markers. Cellular uptake of the extract’s bioactive marker compounds was measured via LC-MS/MS. Results: The transfersomes (~60 nm) showed a good stability and a high entrapment efficiency (>85%). The transfersomes significantly protected NHDF cells from UVB-induced cytotoxicity, restored collagen production, and reduced gene expression of MMP1, IL6, COX2, and iNOS. Cellular uptake of key extract’s polyphenols was markedly enhanced by the nanoformulation compared to the free extract. Conclusions: Black tea extract transfersomes effectively prevented UVB-induced oxidative and inflammatory damage in skin fibroblasts. This delivery system enhanced bioavailability of the extract and cellular protection, supporting the use of the optimized extract in cosmeceutical formulations targeting photoaging and UV-induced skin disorders. Full article
(This article belongs to the Section Drug Delivery and Controlled Release)

19 pages, 313 KiB  
Article
Survey on the Role of Mechanistic Interpretability in Generative AI
by Leonardo Ranaldi
Big Data Cogn. Comput. 2025, 9(8), 193; https://doi.org/10.3390/bdcc9080193 - 23 Jul 2025
Viewed by 755
Abstract
The rapid advancement of artificial intelligence (AI) and machine learning has revolutionised how systems process information, make decisions, and adapt to dynamic environments. AI-driven approaches have significantly enhanced efficiency and problem-solving capabilities across various domains, from automated decision-making to knowledge representation and predictive modelling. These developments have led to the emergence of increasingly sophisticated models capable of learning patterns, reasoning over complex data structures, and generalising across tasks. As AI systems become more deeply integrated into networked infrastructures and the Internet of Things (IoT), their ability to process and interpret data in real-time is essential for optimising intelligent communication networks, distributed decision making, and autonomous IoT systems. However, despite these achievements, the internal mechanisms that drive LLMs’ reasoning and generalisation capabilities remain largely unexplored. This lack of transparency, compounded by challenges such as hallucinations, adversarial perturbations, and misaligned human expectations, raises concerns about their safe and beneficial deployment. Understanding the underlying principles governing AI models is crucial for their integration into intelligent network systems, automated decision-making processes, and secure digital infrastructures. This paper provides a comprehensive analysis of explainability approaches aimed at uncovering the fundamental mechanisms of LLMs. We investigate the strategic components contributing to their generalisation abilities, focusing on methods to quantify acquired knowledge and assess its representation within model parameters. Specifically, we examine mechanistic interpretability, probing techniques, and representation engineering as tools to decipher how knowledge is structured, encoded, and retrieved in AI systems. 
Furthermore, by adopting a mechanistic perspective, we analyse emergent phenomena within training dynamics, particularly memorisation and generalisation, which also play a crucial role in broader AI-driven systems, including adaptive network intelligence, edge computing, and real-time decision-making architectures. Understanding these principles is crucial for bridging the gap between black-box AI models and practical, explainable AI applications, thereby ensuring trust, robustness, and efficiency in language-based and general AI systems. Full article

20 pages, 1461 KiB  
Article
Vulnerability-Based Economic Loss Rate Assessment of a Frame Structure Under Stochastic Sequence Ground Motions
by Zheng Zhang, Yunmu Jiang and Zixin Liu
Buildings 2025, 15(15), 2584; https://doi.org/10.3390/buildings15152584 - 22 Jul 2025
Viewed by 232
Abstract
Modeling mainshock–aftershock ground motions is essential for seismic risk assessment, especially in regions experiencing frequent earthquakes. Recent studies have often employed Copula-based joint distributions or machine learning techniques to simulate the statistical dependency between mainshock and aftershock parameters. While effective at capturing nonlinear correlations, these methods are typically black box in nature, data-dependent, and difficult to generalize across tectonic settings. More importantly, they tend to focus solely on marginal or joint parameter correlations, which implicitly treat mainshocks and aftershocks as independent stochastic processes, thereby overlooking their inherent spectral interaction. To address these limitations, this study proposes an explicit and parameterized modeling framework based on the evolutionary power spectral density (EPSD) of random ground motions. Using the magnitude difference between a mainshock and an aftershock as the control variable, we derive attenuation relationships for the amplitude, frequency content, and duration. A coherence function model is further developed from real seismic records, treating the mainshock–aftershock pair as a vector-valued stochastic process and thus enabling a more accurate representation of their spectral dependence. Coherence analysis shows that the function remains relatively stable between 0.3 and 0.6 across the 0–30 rad/s frequency range. Validation results indicate that the simulated response spectra align closely with recorded spectra, achieving R² values exceeding 0.90 and 0.91. To demonstrate the model’s applicability, a case study is conducted on a representative frame structure to evaluate seismic vulnerability and economic loss. As the mainshock PGA increases from 0.2 g to 1.2 g, the structure progresses from slight damage to complete collapse, with loss rates saturating near 1.0 g. These findings underscore the engineering importance of incorporating mainshock–aftershock spectral interaction in seismic damage and risk modeling, offering a transparent and transferable tool for future seismic resilience assessments. Full article
(This article belongs to the Special Issue Structural Vibration Analysis and Control in Civil Engineering)

10 pages, 1321 KiB  
Article
Black Box Warning by the United States Food and Drug Administration: The Impact on the Dispensing Rate of Benzodiazepines
by Neta Shanwetter Levit, Keren Filosof, Jacob Glazer and Daniel A. Goldstein
Pharmacoepidemiology 2025, 4(3), 16; https://doi.org/10.3390/pharma4030016 - 21 Jul 2025
Viewed by 318
Abstract
Background/objectives: In 9/2020, the United States Food and Drug Administration (FDA) posted a black box warning for all benzodiazepines, addressing their association with serious risks of abuse, addiction, physical dependence, and withdrawal reactions. We evaluated changes in benzodiazepine dispensing rate trends after this warning. Methods: The dataset of Clalit Health Services (Israel’s largest insurer, with 5 million members) was used to identify and collect benzodiazepine dispensing data for all patients who were dispensed these drugs at least once during the study period (1/2017–12/2021). The dispensing rate (number of patients who were dispensed benzodiazepines per month divided by the number of patients alive during that month) was calculated for each month in the study period. Linear regression and change point regression were used to review the change in trend before and after the black box warning. New users of benzodiazepines after the black box warning were analyzed by age. Results: A total of 639,515 patients using benzodiazepines were reviewed. The mean benzodiazepine dispensing rate per month was 0.21 and ranged from 0.17 (in 2/2017) to 0.24 (in 3/2020). No significant change in trend was observed before vs. after the black box warning (slopes of 0.00675 percentage points per month and 0.00001 percentage points per month, respectively; p = 0.38). The change point regression analysis identified a change point in 4/2019, which is prior to the black box warning. New users were younger after the black box warning compared to before this warning. Conclusions: The FDA black box warning did not affect the dispensing rate of benzodiazepines. Full article
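The change point regression used above can be sketched on synthetic monthly rates (made-up numbers shaped loosely like the reported series, not Clalit data): fit two separate lines for every candidate break month and keep the break that minimizes the total squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly dispensing rates: an upward trend that flattens at month 28.
months = np.arange(60)
rate = (np.where(months < 28, 0.17 + 0.002 * months, 0.17 + 0.002 * 28)
        + 0.001 * rng.normal(size=60))

def two_segment_sse(cp):
    # Total squared error of separate linear fits before and after candidate break cp.
    total = 0.0
    for seg in (months < cp, months >= cp):
        A = np.column_stack([months[seg], np.ones(seg.sum())])
        coef, *_ = np.linalg.lstsq(A, rate[seg], rcond=None)
        total += ((rate[seg] - A @ coef) ** 2).sum()
    return total

# Scan candidate change points and pick the one with minimal total SSE.
change_point = min(range(5, 55), key=two_segment_sse)
```

Comparing the estimated break month to the warning date (as the paper does with its 4/2019 change point versus the 9/2020 warning) shows whether the trend shift can plausibly be attributed to the intervention.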

29 pages, 3930 KiB  
Article
KAN-Based Tool Wear Modeling with Adaptive Complexity and Symbolic Interpretability in CNC Turning Processes
by Zhongyuan Che, Chong Peng, Jikun Wang, Rui Zhang, Chi Wang and Xinyu Sun
Appl. Sci. 2025, 15(14), 8035; https://doi.org/10.3390/app15148035 - 18 Jul 2025
Viewed by 308
Abstract
Tool wear modeling in CNC turning processes is critical for proactive maintenance and process optimization in intelligent manufacturing. However, traditional physics-based models lack adaptability, while machine learning approaches are often limited by poor interpretability. This study develops Kolmogorov–Arnold Networks (KANs) to address the trade-off between accuracy and interpretability in lathe tool wear modeling. Three KAN variants (KAN-A, KAN-B, and KAN-C) with varying complexities are proposed, using feed rate, depth of cut, and cutting speed as input variables to model flank wear. The proposed KAN-based framework generates interpretable mathematical expressions for tool wear, enabling transparent decision-making. To evaluate the performance of KANs, this research systematically compares prediction errors, topological evolutions, and mathematical interpretations of derived symbolic formulas. For benchmarking purposes, MLP-A, MLP-B, and MLP-C models are developed based on the architectures of their KAN counterparts. A comparative analysis between KAN and MLP frameworks is conducted to assess differences in modeling performance, with particular focus on the impact of network depth, width, and parameter configurations. Theoretical analyses, grounded in the Kolmogorov–Arnold representation theorem and Cybenko’s theorem, explain KANs’ ability to approximate complex functions with fewer nodes. The experimental results demonstrate that KANs exhibit two key advantages: (1) superior accuracy with fewer parameters compared to traditional MLPs, and (2) the ability to generate white-box mathematical expressions. Thus, this work bridges the gap between empirical models and black-box machine learning in manufacturing applications. KANs uniquely combine the adaptability of data-driven methods with the interpretability of physics-based models, offering actionable insights for researchers and practitioners. Full article
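The Kolmogorov–Arnold idea behind KANs, learnable univariate functions on edges whose outputs are summed, can be caricatured in a few lines (a linear-least-squares toy with piecewise-linear hat bases instead of trained B-splines, and made-up wear data rather than the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: flank wear as an assumed nonlinear function of three cutting parameters.
X = rng.uniform(0, 1, size=(300, 3))      # feed rate, depth of cut, speed (normalized)
y = 0.3 * X[:, 0] ** 2 + 0.5 * np.sin(3 * X[:, 1]) + 0.2 * X[:, 2]

def basis(x, knots):
    # Piecewise-linear "hat" basis: a crude stand-in for the B-splines KANs learn.
    return np.maximum(0, 1 - np.abs(x[:, None] - knots[None, :]) * (len(knots) - 1))

knots = np.linspace(0, 1, 8)
# One learnable univariate function per input; edge outputs are summed (KA-style).
Phi = np.hstack([basis(X[:, j], knots) for j in range(3)])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rmse = np.sqrt(np.mean((Phi @ coef - y) ** 2))
```

Because each fitted univariate function can be read off directly from its coefficients, this kind of model yields the white-box symbolic expressions the paper contrasts with opaque MLPs.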
