Search Results (313)

Search Parameters:
Keywords = continuous fine-tune

16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 (registering DOI) - 25 Aug 2025
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving a mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
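The aggressive augmentation strategy the abstract describes (flips, rotations, zooms, and noise injections) can be sketched in a few lines of numpy. This is an illustrative sketch with assumed parameters (noise scale, zoom factor), not the study's actual pipeline:

```python
# Illustrative augmentation sketch: flips, rotation, centre zoom, noise.
# Parameters (noise sigma, 80% zoom crop) are assumptions, not the paper's.
import numpy as np

def augment(img, rng):
    """Return a list of augmented variants of an HxWxC image."""
    out = []
    out.append(np.flip(img, axis=1))             # horizontal flip
    out.append(np.flip(img, axis=0))             # vertical flip
    out.append(np.rot90(img, k=1, axes=(0, 1)))  # 90-degree rotation
    # centre zoom: crop the middle ~80% (would be resized back to HxW)
    h, w = img.shape[:2]
    dh, dw = int(0.1 * h), int(0.1 * w)
    out.append(img[dh:h - dh, dw:w - dw])
    # Gaussian noise injection, clipped back to valid pixel range
    noisy = img.astype(np.float64) + rng.normal(0, 10, img.shape)
    out.append(np.clip(noisy, 0, 255).astype(img.dtype))
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
variants = augment(img, rng)
print(len(variants))  # 5 augmented variants per source image
```

Each source image yields several variants, which is how a dataset of 8973 images can be expanded before YOLO training.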

28 pages, 17913 KB  
Article
Towards Robust Industrial Control Interpretation Through Comparative Analysis of Vision–Language Models
by Juan Izquierdo-Domenech, Jordi Linares-Pellicer, Carlos Aliaga-Torro and Isabel Ferri-Molla
Machines 2025, 13(9), 759; https://doi.org/10.3390/machines13090759 - 25 Aug 2025
Abstract
Industrial environments frequently rely on analog control instruments due to their reliability and robustness; however, automating the interpretation of these controls remains challenging due to variability in design, lighting conditions, and scale precision requirements. This research investigates the effectiveness of Vision–Language Models (VLMs) for automated interpretation of industrial controls through analysis of three distinct approaches: general-purpose VLMs, fine-tuned specialized models, and lightweight models optimized for edge computing. Each approach was evaluated using two prompting strategies, Holistic-Thought Protocol (HTP) and sequential Chain-of-Thought (CoT), across a representative dataset of continuous and discrete industrial controls. The results demonstrate that the fine-tuned Generative Pre-trained Transformer 4 omni (GPT-4o) significantly outperformed other approaches, achieving low Mean Absolute Error (MAE) for continuous controls and the highest accuracy and Matthews Correlation Coefficient (MCC) for discrete controls. Fine-tuned models demonstrated less sensitivity to prompt variations, enhancing their reliability. In contrast, although general-purpose VLMs showed acceptable zero-shot performance, edge-optimized models exhibited severe limitations. This work highlights the capability of fine-tuned VLMs for practical deployment in industrial scenarios, balancing precision, computational efficiency, and data annotation requirements. Full article
(This article belongs to the Section Automation and Control Systems)

36 pages, 2219 KB  
Article
Automated Malware Source Code Generation via Uncensored LLMs and Adversarial Evasion of Censored Model
by Raúl Acosta-Bermejo, José Alexis Terrazas-Chavez and Eleazar Aguirre-Anaya
Appl. Sci. 2025, 15(17), 9252; https://doi.org/10.3390/app15179252 - 22 Aug 2025
Viewed by 237
Abstract
Malicious programs, commonly called malware, have had a pervasive presence in the world for nearly forty years and have continued to evolve and multiply exponentially. On the other hand, there are multiple research works focused on malware detection with different strategies that seem to work only temporarily, as new attack tactics and techniques quickly emerge. There are increasing proposals to analyze the problem from the attacker’s perspective, as suggested by MITRE ATT&CK. This article presents a proposal that utilizes Large Language Models (LLMs) to generate malware and understand its generation from the perspective of a red team. It demonstrates how to create malware using current models that incorporate censorship, and a specialized model is trained (fine-tuned) to generate code, enabling it to learn how to create malware. Both scenarios are evaluated using the pass@k metric and a controlled execution environment (malware lab) to prevent its spread. Full article
(This article belongs to the Special Issue Information Security: Threats and Attacks)
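The pass@k metric used for evaluation above has a standard unbiased estimator (from the HumanEval line of work): given n generated samples of which c pass, it estimates the probability that at least one of k drawn samples passes. A minimal stdlib sketch:

```python
# Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
# where n = samples generated, c = samples that passed, k = draw size.
from math import comb

def pass_at_k(n, c, k):
    """Probability that at least one of k samples (out of n, c passing) passes."""
    if n - c < k:
        return 1.0  # fewer failures than draws: a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 0, 5))   # 0.0: no sample passed
print(pass_at_k(10, 10, 1))  # 1.0: every sample passed
```

The same formula applies whether the "pass" criterion is compiling code, evading a filter, or producing functional output in the malware lab.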

21 pages, 4332 KB  
Article
A Comparative Study of Time–Frequency Representations for Bearing and Rotating Fault Diagnosis Using Vision Transformer
by Ahmet Orhan, Nikolay Yordanov, Merve Ertarğın, Marin Zhilevski and Mikho Mikhov
Machines 2025, 13(8), 737; https://doi.org/10.3390/machines13080737 - 19 Aug 2025
Viewed by 381
Abstract
This paper presents a comparative analysis of bearing and rotating component fault classification based on different time–frequency representations using vision transformer (ViT). Four different time–frequency transformation techniques—short-time Fourier transform (STFT), continuous wavelet transform (CWT), Hilbert–Huang transform (HHT), and Wigner–Ville distribution (WVD)—were applied to convert the signals into 2D images. A pretrained ViT-Base architecture was fine-tuned on the resulting images for classification tasks. The model was evaluated on two separate scenarios: (i) eight-class rotating component fault classification and (ii) four-class bearing fault classification. Importantly, in each task, the samples were collected under varying conditions of the other component (i.e., different rotating conditions in bearing classification and vice versa). This design allowed for an independent assessment of the model’s ability to generalize across fault domains. The experimental results demonstrate that the ViT-based approach achieves high classification performance across various time–frequency representations, highlighting its potential for mechanical fault diagnosis in rotating machinery. Notably, the model achieved higher accuracy in bearing fault classification compared to rotating component faults, suggesting higher sensitivity to bearing-related anomalies. Full article
(This article belongs to the Section Machines Testing and Maintenance)
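The first step of the pipeline above, turning a 1D vibration signal into a 2D time-frequency image via the STFT, can be sketched in plain numpy. Window and hop sizes here are assumed for illustration, not taken from the paper:

```python
# Numpy sketch: 1D signal -> STFT magnitude spectrogram (a 2D image for a ViT).
# win/hop values are illustrative assumptions.
import numpy as np

def stft_image(signal, win=128, hop=64):
    """Magnitude spectrogram with frequency bins on rows, time frames on columns."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # keep positive frequencies
    return spec.T

t = np.linspace(0, 1, 2048, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t)  # 50 Hz tone as a stand-in fault signature
img = stft_image(sig)
print(img.shape)  # (65, 31): 65 frequency bins x 31 time frames
```

After resizing and normalisation, such images are what the pretrained ViT-Base is fine-tuned on; CWT, HHT, and WVD yield alternative 2D representations of the same signal.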

25 pages, 4349 KB  
Article
The Economic Optimization of a Grid-Connected Hybrid Renewable System with an Electromagnetic Frequency Regulator Using a Genetic Algorithm
by Aziz Oloroun-Shola Bissiriou, Joale de Carvalho Pereira, Ednardo Pereira da Rocha, Ricardo Ferreira Pinheiro, Elmer Rolando Llanos Villarreal and Andrés Ortiz Salazar
Energies 2025, 18(16), 4404; https://doi.org/10.3390/en18164404 - 19 Aug 2025
Viewed by 190
Abstract
This paper presents a comprehensive economic optimization of a grid-connected hybrid renewable energy system (HRES) enhanced with an electromagnetic frequency regulator (EFR) to improve frequency stability and provide clean and continuous electricity to the Macau City Campus while reducing dependence on fossil sources. The system includes photovoltaic (PV) arrays, wind turbines, battery storage, EFR, and a backup diesel generator. A genetic algorithm (GA) is employed to optimally size these components with the objective of maximizing the net present value (NPV) over the system’s lifetime. The GA implementation was validated on standard benchmark functions to ensure correctness and was finely tuned for robust convergence. Comprehensive sensitivity analyses of key parameters (discount rate, component costs, resource availability, etc.) were performed to assess solution robustness. The optimized design (PV = 35 kWp, WT = 30 kW, ESS = 200 kWh, and EFR = 30 kW) achieves a highly positive net present value of BRL 1.86 M in 2015 values (BRL 3.11 M in 2025) and a discounted payback in approximately 9 years. A comparative assessment with the 2015 baseline project revealed up to a 10.1% enhancement in the net present value, underscoring the economic advantages of the optimized design. These results confirm the system’s strong economic viability and environmental benefits, providing a valuable guideline for future grid-connected hybrid energy systems. Full article
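The GA's objective, maximizing NPV over the system lifetime, reduces to a fitness function over a candidate sizing vector. The sketch below uses entirely invented cost and savings coefficients to show the shape of such a fitness function; the study's actual cost model and cash-flow data are not reproduced here:

```python
# Toy GA fitness: NPV of discounted yearly cash flows minus capital cost.
# All coefficients (unit costs, savings, tariff factor, discount rate) are
# illustrative assumptions, not the study's data.
def npv(capex, yearly_cashflow, rate, years):
    """Discounted cash flows over the lifetime minus upfront investment."""
    return sum(yearly_cashflow / (1 + rate) ** t
               for t in range(1, years + 1)) - capex

def fitness(design):
    pv_kwp, wt_kw, ess_kwh, efr_kw = design
    capex = 4000 * pv_kwp + 5000 * wt_kw + 1500 * ess_kwh + 3000 * efr_kw
    energy_value = 9000 * pv_kwp + 11000 * wt_kw + 900 * ess_kwh + 2000 * efr_kw
    return npv(capex, energy_value * 0.12, rate=0.08, years=25)

# the optimized sizing from the abstract, evaluated under the toy cost model
print(fitness((35, 30, 200, 30)) > 0)  # a positive-NPV design survives selection
```

A GA then evolves a population of such design tuples, selecting on this fitness and perturbing component sizes via crossover and mutation.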

20 pages, 1548 KB  
Article
A Credibility-Based Self-Evolution Algorithm for Equipment Digital Twins Based on Multi-Layer Deep Koopman Operator
by Hongbo Cheng, Lin Zhang, Kunyu Wang, Han Lu and Yihan Guo
Appl. Sci. 2025, 15(16), 9082; https://doi.org/10.3390/app15169082 - 18 Aug 2025
Viewed by 193
Abstract
In the context of Industry 4.0 and intelligent manufacturing, the scale and complexity of complex equipment systems are continuously increasing, making effective high-precision modeling, simulation, and prediction in the engineering field significant challenges. Digital twin technology, by establishing real-time connections between virtual models and physical systems, provides strong support for the real-time monitoring, optimization, and prediction of complex systems. However, traditional digital twin models face significant limitations when synchronizing with high-dimensional nonlinear and non-stationary dynamical systems due to the latter’s dynamic characteristics. To address this issue, we propose a multi-layer deep Koopman operator-based (MDK) credibility-based self-evolution algorithm for equipment digital twins. By constructing multiple time-scale embedding layers and combining deep neural networks for observability function learning, the algorithm effectively captures the dynamic features of complex nonlinear systems at different time scales, enabling their global dynamic modeling and precise analysis. Furthermore, to enhance the model’s adaptability, a trustworthiness-based evolution-triggering mechanism and an adaptive model fine-tuning algorithm are designed. When the digital twin model’s trustworthiness assessment indicates a decline in prediction accuracy, the evolution mechanism is automatically triggered to optimize and update the model with the fine-tuning algorithm to maintain its stability and robustness during dynamic evolution. The experimental results demonstrate that the proposed method achieves significant improvements in prediction accuracy within unmanned aerial vehicle (UAV) systems, showcasing its broad application potential in intelligent manufacturing and complex equipment systems. Full article
(This article belongs to the Special Issue Integration of Digital Simulation Models in Smart Manufacturing)
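The Koopman idea underlying MDK is that a nonlinear system becomes linear in a lifted space of observable functions, where a single operator K advances the lifted state one step. The numpy illustration below replaces the paper's multi-layer learned observables with a fixed hand-picked lift and fits K by least squares (DMD-style); it is a sketch of the principle, not the proposed algorithm:

```python
# Koopman sketch: lift states with observables, fit a linear one-step operator.
# The hand-picked lift [x, x^2, 1] stands in for MDK's learned multi-scale
# observables.
import numpy as np

def lift(x):
    """Fixed observables: [x, x^2, 1]."""
    return np.stack([x, x ** 2, np.ones_like(x)])

# trajectory of x_{t+1} = 0.9 * x_t (linear here, so the fit is exact; a truly
# nonlinear system would need richer observables)
x = 0.9 ** np.arange(20) * 1.5
Y0, Y1 = lift(x[:-1]), lift(x[1:])
K = Y1 @ np.linalg.pinv(Y0)  # least-squares Koopman approximation

pred = K @ lift(np.array([1.0]))
print(float(pred[0, 0]))  # next-step prediction for x = 1.0, close to 0.9
```

The paper's contribution layers such operators over multiple time scales and triggers re-fitting (evolution) when the twin's prediction credibility degrades.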

21 pages, 806 KB  
Tutorial
Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications: A Tutorial
by Sachin Hiriyanna and Wenbing Zhao
Computers 2025, 14(8), 332; https://doi.org/10.3390/computers14080332 - 16 Aug 2025
Viewed by 684
Abstract
Large language models (LLMs) now match or exceed human performance on many open-ended language tasks, yet they continue to produce fluent but incorrect statements, a failure mode widely referred to as hallucination. In low-stakes settings this may be tolerable; in regulated or safety-critical domains such as financial services, compliance review, and client decision support, it is not. Motivated by these realities, we develop an integrated mitigation framework that layers complementary controls rather than relying on any single technique. The framework combines structured prompt design, retrieval-augmented generation (RAG) with verifiable evidence sources, and targeted fine-tuning aligned with domain truth constraints. Our interest in this problem is practical. Individual mitigation techniques have matured quickly, yet teams deploying LLMs in production routinely report difficulty stitching them together in a coherent, maintainable pipeline. Decisions about when to ground a response in retrieved data, when to escalate uncertainty, how to capture provenance, and how to evaluate fidelity are often made ad hoc. Drawing on experience from financial technology implementations, where even rare hallucinations can carry material cost, regulatory exposure, or loss of customer trust, we aim to provide clearer guidance in the form of an easy-to-follow tutorial. This paper makes four contributions. First, we introduce a three-layer reference architecture that organizes mitigation activities across input governance, evidence-grounded generation, and post-response verification. Second, we describe a lightweight supervisory agent that manages uncertainty signals and triggers escalation (to humans, alternate models, or constrained workflows) when confidence falls below policy thresholds. Third, we analyze common but under-addressed security surfaces relevant to hallucination mitigation, including prompt injection, retrieval poisoning, and policy evasion attacks. Finally, we outline an implementation playbook for production deployment, including evaluation metrics, operational trade-offs, and lessons learned from early financial-services pilots. Full article
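The supervisory agent's escalation policy can be sketched as a simple routing function over a confidence signal and an evidence check. Function name, routes, and the 0.80 threshold below are illustrative assumptions, not the tutorial's API:

```python
# Minimal sketch of a supervisory routing policy: confident, evidence-backed
# answers pass through; confident but unsupported answers get grounded via RAG;
# low-confidence answers escalate. Names and threshold are assumptions.
def route(confidence, has_citations, policy_threshold=0.80):
    """Decide how to handle a draft LLM answer before it reaches the user."""
    if confidence >= policy_threshold and has_citations:
        return "respond"            # evidence-grounded and confident
    if confidence >= policy_threshold:
        return "ground_with_rag"    # confident but unsupported: retrieve evidence
    return "escalate_to_human"      # below policy threshold: do not answer alone

print(route(0.95, True))   # respond
print(route(0.91, False))  # ground_with_rag
print(route(0.40, True))   # escalate_to_human
```

In a real pipeline the confidence signal would combine model log-probabilities, self-consistency checks, and verifier scores, and each routing decision would be logged for audit.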

24 pages, 2794 KB  
Article
Algorithmic Modeling of Generation Z’s Therapeutic Toys Consumption Behavior in an Emotional Economy Context
by Xinyi Ma, Xu Qin and Li Lv
Algorithms 2025, 18(8), 506; https://doi.org/10.3390/a18080506 - 13 Aug 2025
Viewed by 314
Abstract
The quantification of emotional value and accurate prediction of purchase intention has emerged as a critical interdisciplinary challenge in the evolving emotional economy. Focusing on Generation Z (born 1995–2009), this study proposes a hybrid algorithmic framework integrating text-based sentiment computation, feature selection, and random forest modeling to forecast purchase intention for therapeutic toys and interpret its underlying drivers. First, 856 customer reviews were scraped from Jellycat’s official website and subjected to polarity classification using a fine-tuned RoBERTa-wwm-ext model (F1 = 0.92), with generated sentiment scores and high-frequency keywords mapped as interpretable features. Next, Boruta–SHAP feature selection was applied to 35 structured variables from 336 survey records, retaining 17 significant predictors. The core module employed a RF (random forest) model to estimate continuous “purchase intention” scores, achieving R2 = 0.83 and MSE = 0.14 under 10-fold cross-validation. To enhance interpretability, RF model was also utilized to evaluate feature importance, quantifying each feature’s contribution to the model outputs, revealing Social Ostracism (β = 0.307) and Task Overload (β = 0.207) as dominant predictors. Finally, k-means clustering with gap statistics segmented consumers based on emotional relevance, value rationality, and interest level, with model performance compared across clusters. Experimental results demonstrate that our integrated predictive model achieves a balance between forecasting accuracy and decision interpretability in emotional value computation, offering actionable insights for targeted product development and precision marketing in the therapeutic goods sector. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

19 pages, 4719 KB  
Article
Laser Stripe Segmentation Network Based on Evidential Uncertainty Theory Modeling Fine-Tuning Optimization Symmetric Algorithm
by Chenbo Shi, Delin Wang, Xiangyu Zhang, Chun Zhang, Jia Yan, Changsheng Zhu and Xiaobing Feng
Symmetry 2025, 17(8), 1280; https://doi.org/10.3390/sym17081280 - 9 Aug 2025
Viewed by 372
Abstract
In welding applications, line-structured-light vision is widely used for seam tracking, but intense noise from arc glow, spatter, smoke, and reflections makes reliable laser-stripe segmentation difficult. To address these challenges, we propose EUFNet, an uncertainty-driven symmetrical two-stage segmentation network for precise stripe extraction under real-world welding conditions. In the first stage, a lightweight backbone generates a coarse stripe mask and a pixel-wise uncertainty map; in the second stage, a functionally mirrored refinement network uses this uncertainty map to symmetrically guide fine-tuning of the same image regions, thereby preserving stripe continuity. We further employ an uncertainty-weighted loss that treats ambiguous pixels and their corresponding evidence in a one-to-one, symmetric manner. Evaluated on a large-scale dataset of 3100 annotated welding images, EUFNet achieves a mean IoU of 89.3% and a mean accuracy of 95.9% at 236.7 FPS (compared to U-Net’s 82.5% mean IoU and 90.2% mean accuracy), significantly outperforming existing approaches in both accuracy and real-time performance. Moreover, EUFNet generalizes effectively to the public WLSD benchmark, surpassing state-of-the-art baselines in both accuracy and speed. These results confirm that a structurally and functionally symmetric, uncertainty-driven two-stage refinement strategy—combined with targeted loss design and efficient feature integration—yields high-precision, real-time performance for automated welding vision. Full article
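The uncertainty-weighted loss idea, letting the stage-one uncertainty map raise the contribution of ambiguous pixels during refinement, can be sketched as a re-weighted per-pixel binary cross-entropy. EUFNet's exact weighting scheme is not specified here; the `(1 + uncertainty)` weight is an assumption for illustration:

```python
# Sketch of an uncertainty-weighted pixel loss: ambiguous pixels (high
# uncertainty from stage one) contribute more to the refinement objective.
# The (1 + u) weighting is an illustrative choice, not EUFNet's exact form.
import numpy as np

def uncertainty_weighted_bce(pred, target, uncertainty, eps=1e-7):
    """Per-pixel BCE, re-weighted by (1 + uncertainty), normalised average."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weights = 1.0 + uncertainty  # uncertain pixels weigh up to 2x
    return float(np.sum(weights * bce) / np.sum(weights))

pred = np.array([[0.9, 0.2], [0.6, 0.1]])
target = np.array([[1.0, 0.0], [1.0, 0.0]])
flat = uncertainty_weighted_bce(pred, target, np.zeros((2, 2)))
focused = uncertainty_weighted_bce(pred, target, np.array([[0.0, 0.0],
                                                           [1.0, 0.0]]))
print(focused > flat)  # upweighting the ambiguous pixel raises its influence
```

In the two-stage network, the same map also tells the refinement stage which image regions to revisit, which is what preserves stripe continuity under spatter and arc glow.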

5 pages, 185 KB  
Opinion
Frameworks for Ethical Conduct in Clinical Trials and Health Research in Africa
by Lembit Rägo and Jacqueline Sawyer
J. Pharm. BioTech Ind. 2025, 2(3), 13; https://doi.org/10.3390/jpbi2030013 - 8 Aug 2025
Viewed by 294
Abstract
Current estimates suggest that Africa contains about 14% of the world’s population and accounts for 20% of the global burden of disease. Yet, it accounts for a mere 3% of clinical trials globally. The time is ripe—even overdue—for determining how best to direct future health research efforts. In response, a call has been heard for a continent-wide Africa-centric research ethics framework to redirect health research in Africa, as well as address the health research ethics malpractices that have violated the rights, dignity and well-being of participating African communities. Nevertheless, we should remain aware of what already exists and what continues to be of value. Creating parallel frameworks risks fragmentation of research, increased costs in having to meet differing requirements and delayed access of patients to new treatments. Existing international consensus documents which have evolved and been fine-tuned over time, offer guidance for ensuring ethical instigation and management of health research. The Declaration of Helsinki enunciates clear principles for ensuring the ethical conduct of clinical research, while CIOMS’ 2016 International Ethical Guidelines for Health-related Research involving Humans offer guidance for implementing these principles. It is failure to apply existing ethical principles and guidance—and not any perceived inadequacy of those principles—that has resulted in sub-optimal protection of African research participants. Full article
25 pages, 3472 KB  
Article
Physical Information-Based Mach Number Prediction and Model Migration in Continuous Wind Tunnels
by Luping Zhao and Chong Wang
Aerospace 2025, 12(8), 701; https://doi.org/10.3390/aerospace12080701 - 7 Aug 2025
Viewed by 283
Abstract
In wind tunnel tests for aerospace and bridge engineering, the accurate prediction of Mach number remains a core challenge to ensure the reliability of airflow dynamics characterization. Pure data-driven models often fail to meet high-precision prediction requirements due to the lack of physical mechanism constraints and insufficient generalization capability. This paper proposes a physical information-based long short-term memory network (P-LSTM), which constructs a physical loss function by embedding isentropic flow equations from gas dynamics, thereby constraining the Mach number prediction solution space within the physically feasible domain. This approach effectively balances the neural network’s ability to capture temporal features with the interpretability of physical mechanisms. Aiming at the scarcity of data in new wind tunnel scenarios, an adaptive weight transfer learning method (AWTL) is further proposed, realizing efficient knowledge transfer across different-scale wind tunnels via cross-domain data calibration, adaptive source-domain weight reweighting, and target-domain fine-tuning. Experimental results show that the P-LSTM method achieves a 50.65–62.54% reduction in RMSE, 48.00–54.05% in MAE, and 47.88–73.68% in MD compared with traditional LSTM for Mach number prediction in the 0.6 m continuous wind tunnel flow field. The AWTL model also outperforms the direct training model significantly in the 2.4 m continuous wind tunnel, with RMSE, MAE, and MD reduced by 85.26%, 95.12%, and 71.14%, respectively. These results validate that the proposed models achieve high-precision Mach number prediction with strong generalization capability. Full article
(This article belongs to the Special Issue New Results in Wind Tunnel Testing)
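The physics loss idea in P-LSTM, constraining predictions with isentropic flow relations, can be illustrated with the standard relation between total/static pressure ratio and Mach number, p0/p = (1 + (γ−1)/2 · M²)^(γ/(γ−1)). The exact constraint set and weighting used in the paper are the authors'; γ and λ below are illustrative:

```python
# Hedged sketch of a physics-informed loss: ordinary MSE plus a penalty on
# predictions that violate the isentropic pressure-ratio/Mach relation.
# gamma = 1.4 (air) and lam are illustrative; P-LSTM's constraints may differ.
import numpy as np

GAMMA = 1.4

def mach_from_pressure_ratio(p0_over_p):
    """Invert p0/p = (1 + (g-1)/2 * M^2)^(g/(g-1)) for M."""
    return np.sqrt(2.0 / (GAMMA - 1.0)
                   * (p0_over_p ** ((GAMMA - 1.0) / GAMMA) - 1.0))

def physics_informed_loss(m_pred, m_true, p0_over_p, lam=0.5):
    data = np.mean((m_pred - m_true) ** 2)                      # data-fit term
    physics = np.mean((m_pred - mach_from_pressure_ratio(p0_over_p)) ** 2)
    return data + lam * physics

p_ratio = np.array([1.276, 1.524])  # roughly Mach 0.6 and 0.8 conditions
m_true = mach_from_pressure_ratio(p_ratio)
print(physics_informed_loss(m_true, m_true, p_ratio))  # 0.0 when consistent
```

The physics term keeps the network's solution space inside the physically feasible domain even where training data are sparse, which is also what makes the cross-tunnel transfer (AWTL) viable.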

22 pages, 20118 KB  
Article
Streamflow Forecasting: A Comparative Analysis of ARIMAX, Rolling Forecasting LSTM Neural Network and Physically Based Models in a Pristine Catchment
by Diego Perazzolo, Gianluca Lazzaro, Alvise Fiume, Pietro Fanton and Enrico Grisan
Water 2025, 17(15), 2341; https://doi.org/10.3390/w17152341 - 6 Aug 2025
Viewed by 382
Abstract
Accurate streamflow forecasting at fine temporal and spatial scales is essential to manage the diverse hydrological behaviors of individual catchments, particularly in rapidly responding mountainous regions. This study compares three forecasting models ARIMAX, LSTM, and HEC-HMS applied to the Posina River basin in northern Italy, using 13 years of hourly hydrological data. While recent literature promotes multi-basin LSTM training for generalization, we show that a well-configured single-basin LSTM, combined with a rolling forecast strategy, can achieve comparable accuracy under high-frequency, data-constrained conditions. The physically based HEC-HMS model, calibrated for continuous simulation, provides robust peak flow prediction but requires extensive parameter tuning. ARIMAX captures baseflows but underestimates sharp hydrological events. Evaluation through NSE, KGE, and MAE shows that both LSTM and HEC-HMS outperform ARIMAX, with LSTM offering a compelling balance between accuracy and ease of implementation. This study enhances our understanding of streamflow model behavior in small basins and demonstrates that LSTM networks, despite their simplified configuration, can be reliable tools for flood forecasting in localized Alpine catchments, where physical modeling is resource-intensive and regional data for multi-basin training are often unavailable. Full article
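The evaluation metrics named above, NSE and KGE, are standard hydrological skill scores; both equal 1 for a perfect forecast. A compact numpy sketch of their usual definitions:

```python
# Standard hydrological skill scores used to compare the three models.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency from correlation r, variability alpha, bias beta."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])  # toy hourly discharge series
print(nse(obs, obs), kge(obs, obs))  # 1.0 1.0 for a perfect simulation
```

NSE = 0 corresponds to forecasting the observed mean, so the LSTM and HEC-HMS scores exceeding ARIMAX's indicates genuine skill beyond climatology on these hourly series.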

14 pages, 881 KB  
Article
Fine-Tuning BiomedBERT with LoRA and Pseudo-Labeling for Accurate Drug–Drug Interactions Classification
by Ioan-Flaviu Gheorghita, Vlad-Ioan Bocanet and Laszlo Barna Iantovics
Appl. Sci. 2025, 15(15), 8653; https://doi.org/10.3390/app15158653 - 5 Aug 2025
Viewed by 419
Abstract
In clinical decision support systems (CDSSs), where accurate classification of drug–drug interactions (DDIs) can directly affect treatment safety and outcomes, identifying drug interactions is a major challenge. This work introduces a scalable approach for classifying DDIs using a fine-tuned biomedical language model. The method uses BiomedBERT, a domain-specific version of bidirectional encoder representations from transformers (BERT) pre-trained on biomedical literature, to reduce the resources needed during fine-tuning. Low-rank adaptation (LoRA) was used to fine-tune the model on the DrugBank dataset. The objective was to classify DDIs into two clinically distinct categories: synergistic and antagonistic interactions. A pseudo-labeling strategy was devised to address the shortage of labeled data. A curated ground-truth dataset was constructed using polarity-labeled interaction entries from DrugComb and verified DrugBank antagonism pairs. The fine-tuned model is then used to infer the interaction types present in the remaining unlabeled data. A checkpointing system saves predictions and confidence scores in small batches, so the process can be resumed and is resilient to system crashes. The framework logs every prediction and confidence score, allowing results to be refined later, either manually by experts or through automated updates, without discarding low-confidence cases as traditional threshold-based methods often do. It was built with efficiency in mind, so it can handle large amounts of biomedical text without heavy computational demands. Rather than focusing on model novelty, this research demonstrates how existing biomedical transformers can be adapted to polarity-aware DDI classification with minimal computational overhead, emphasizing deployment feasibility and clinical relevance. Full article
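The resource saving from LoRA comes from training a low-rank update instead of the full weight matrix: W' = W + (α/r)·B·A, with B zero-initialised so the adapted model starts identical to the pretrained one. A numpy sketch of the arithmetic (shapes follow the LoRA paper; the BiomedBERT specifics are not shown):

```python
# Numpy sketch of the LoRA update: a frozen weight W plus a trained rank-r
# correction (alpha / r) * B @ A. Dimensions are BERT-base-like assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 768, 768, 8, 16

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

W_adapted = W + (alpha / r) * B @ A    # zero-init B => no change at step 0

full = W.size
lora = A.size + B.size
print(lora / full)  # fraction of this matrix's parameters actually trained
```

Here only about 2% of the matrix's parameters are trainable, which is why LoRA fine-tuning of BiomedBERT fits modest hardware while the base weights stay frozen.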

24 pages, 30837 KB  
Article
A Transfer Learning Approach for Diverse Motion Augmentation Under Data Scarcity
by Junwon Yoon, Jeon-Seong Kang, Ha-Yoon Song, Beom-Joon Park, Kwang-Woo Jeon, Hyun-Joon Chung and Jang-Sik Park
Mathematics 2025, 13(15), 2506; https://doi.org/10.3390/math13152506 - 4 Aug 2025
Viewed by 401
Abstract
Motion-capture data provide high accuracy but are difficult to obtain, necessitating dataset augmentation. To our knowledge, no prior study has investigated few-shot generative models for motion-capture data that address both quality and diversity. We tackle the diversity loss that arises with extremely small datasets (n ≤ 10) by applying transfer learning and continual learning to retain the rich variability of a larger pretraining corpus. To assess quality, we introduce MFMMD (Motion Feature-Based Maximum Mean Discrepancy)—a metric well-suited for small samples—and evaluate diversity with the multimodality metric. Our method embeds an Elastic Weight Consolidation (EWC)-based regularization term in the generator’s loss and then fine-tunes the limited motion-capture set. We analyze how the strength of this term influences diversity and uncovers motion-specific characteristics, revealing behavior that differs from that observed in image-generation tasks. The experiments indicate that the transfer learning pipeline improves generative performance in low-data scenarios. Increasing the weight of the regularization term yields higher diversity in the synthesized motions, demonstrating a marked uplift in motion diversity. These findings suggest that the proposed approach can effectively augment small motion-capture datasets with greater variety, a capability expected to benefit applications that rely on diverse human-motion data across modern robotics, animation, and virtual reality. Full article
(This article belongs to the Special Issue Deep Neural Networks: Theory, Algorithms and Applications)
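The EWC-based regularization term embedded in the generator's loss is a quadratic penalty, scaled by per-parameter Fisher information, that keeps fine-tuned parameters near the pretraining solution. The sketch below shows the penalty itself with generic symbols; λ is the strength whose effect on motion diversity the paper studies:

```python
# Sketch of the EWC penalty: lam/2 * sum_i F_i * (theta_i - theta_pre_i)^2.
# Parameters important to the pretraining task (high Fisher F_i) are anchored
# hardest; lam trades diversity retention against fitting the small dataset.
import numpy as np

def ewc_penalty(theta, theta_pre, fisher, lam):
    """Quadratic anchor to the pretrained parameters, Fisher-weighted."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_pre) ** 2)

theta_pre = np.array([0.5, -1.0, 2.0])   # pretrained parameters
fisher = np.array([10.0, 0.1, 1.0])      # parameter 0 matters most to old task
theta = np.array([0.6, -0.5, 2.0])       # parameters after some fine-tuning

print(ewc_penalty(theta, theta_pre, fisher, lam=1.0))  # ~0.0625
```

During fine-tuning on the n ≤ 10 motion set, this term is added to the generator loss, so raising λ preserves more of the pretraining corpus's variability, which matches the reported rise in multimodality.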

13 pages, 1809 KB  
Perspective
Specific Low/Endogenous Replication Stress Response Protects Genomic Stability via Controlled ROS Production in an Adaptive Way and Is Dysregulated in Transformed Cells
by Bernard S. Lopez
Cells 2025, 14(15), 1183; https://doi.org/10.3390/cells14151183 - 31 Jul 2025
Viewed by 332
Abstract
Cells are assaulted daily by stresses that jeopardize genome integrity. Primary human cells adapt their response to the intensity of replication stress (RS) in a diphasic manner: below a stress threshold, the canonical DNA damage response (cDDR) is not activated, but a noncanonical cellular response, low-level stress-DDR (LoL-DDR), has recently been described. LoL-DDR prevents the accumulation of premutagenic oxidized bases (8-oxoguanine) through the production of ROS in an adaptive way. The production of RS-induced ROS (RIR) is tightly controlled: RIR are excluded from the nucleus and are produced by the NADPH oxidases DUOX1/DUOX2, which are controlled by NF-κB and PARP1; then, RIR activate the FOXO1-detoxifying pathway. Increasing the intensity of RS suppresses RIR via p53 and ATM. Notably, LoL-DDR is dysregulated in cancer cell lines, in which RIR are not produced by NADPH oxidases, are not detoxified under high-level stress, and favor the accumulation of 8-oxoguanine. LoL-DDR dysregulation occurred at an early stage of cancer progression in an in vitro model. Since, conversely, ROS trigger RS, this establishes a vicious cycle that continuously jeopardizes genome integrity, fueling tumorigenesis. These data reveal a novel type of ROS-controlled DNA damage response and demonstrate the fine-tuning of the cellular response to stress. The effects on genomic stability and carcinogenesis are discussed here. Full article
