Search Results (7,540)

Search Parameters:
Keywords = computer architectures

31 pages, 3840 KB  
Review
Efficient and Secure GANs: A Survey on Privacy-Preserving and Resource-Aware Models
by Niovi Efthymia Apostolou, Elpida Vasiliki Balourdou, Maria Mouratidou, Eleni Tsalera, Ioannis Voyiatzis, Andreas Papadakis and Maria Samarakou
Appl. Sci. 2025, 15(20), 11207; https://doi.org/10.3390/app152011207 - 19 Oct 2025
Abstract
Generative Adversarial Networks (GANs) generate synthetic content to support applications such as data augmentation, image-to-image translation, and training models where data availability is limited. Nevertheless, their broader deployment is constrained by limitations in data availability, high computational and energy demands, as well as privacy and security concerns. These factors restrict their scalability and integration in real-world applications. This survey provides a systematic review of research aimed at addressing these challenges. Techniques such as few-shot learning, consistency regularization, and advanced data augmentation are examined to address data scarcity. Approaches designed to reduce computational and energy costs, including hardware-based acceleration and model optimization, are also considered. In addition, strategies to improve privacy and security, such as privacy-preserving GAN architectures and defense mechanisms against adversarial attacks, are analyzed. By organizing the literature into these thematic categories, the review highlights available solutions, their trade-offs, and remaining open issues. Our findings underline the growing role of GANs in artificial intelligence, while also emphasizing the importance of efficient, sustainable, and secure designs. This work not only consolidates current knowledge but also lays the groundwork for future research.
(This article belongs to the Special Issue Big Data Analytics and Deep Learning for Predictive Maintenance)
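Of the data-scarcity remedies this survey covers, consistency regularization is compact enough to sketch. The snippet below is a generic, hypothetical PyTorch illustration rather than code from any surveyed model: the discriminator is penalized when its output changes under a label-preserving augmentation. The penalty weight and the flip augmentation are assumptions.

```python
import torch

def consistency_penalty(discriminator, images, augment, weight=10.0):
    # Penalize the discriminator when its output changes under a
    # label-preserving augmentation of the same images.
    d_real = discriminator(images)
    d_aug = discriminator(augment(images))
    return weight * ((d_real - d_aug) ** 2).mean()

# Toy usage with a stand-in discriminator and a horizontal flip:
penalty = consistency_penalty(
    lambda x: x.mean(dim=(1, 2, 3)),       # stand-in discriminator
    torch.randn(4, 3, 32, 32),
    lambda x: torch.flip(x, dims=[3]),     # horizontal flip
)
print(float(penalty))
```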
17 pages, 10634 KB  
Article
Hybrid Convolutional Transformer with Dynamic Prompting for Adaptive Image Restoration
by Jinmei Zhang, Guorong Chen, Junliang Yang, Qingru Zhang, Shaofeng Liu and Weijie Zhang
Mathematics 2025, 13(20), 3329; https://doi.org/10.3390/math13203329 - 19 Oct 2025
Abstract
High-quality image restoration (IR) is a fundamental task in computer vision, aiming to recover a clear image from its degraded version. Prevailing methods typically employ a static inference pipeline, neglecting the spatial variability of image content and degradation, which makes it difficult for them to adaptively handle complex and diverse restoration scenarios. To address this issue, we propose a novel adaptive image restoration framework named Hybrid Convolutional Transformer with Dynamic Prompting (HCTDP). Our approach introduces two key architectural innovations: a Spatially Aware Dynamic Prompt Head Attention (SADPHA) module, which performs fine-grained local restoration by generating spatially variant prompts through real-time analysis of image content, and a Gated Skip-Connection (GSC) module that refines multi-scale feature flow using efficient channel attention. To guide the network in generating more visually plausible results, the framework is optimized with a hybrid objective function that combines a pixel-wise L1 loss and a feature-level perceptual loss. Extensive experiments on multiple public benchmarks, including image deraining, dehazing, and denoising, demonstrate that our proposed HCTDP exhibits superior performance in both quantitative and qualitative evaluations, validating the effectiveness of the adaptive restoration framework while utilizing fewer parameters than key competitors.
(This article belongs to the Special Issue Intelligent Mathematics and Applications)
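The hybrid objective the abstract describes (pixel-wise L1 plus a feature-level perceptual term) can be sketched as follows. This is a minimal, hypothetical PyTorch version: the choice of VGG16 features, the weight value, and the omission of input normalization are simplifying assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class HybridRestorationLoss(nn.Module):
    def __init__(self, perceptual_weight: float = 0.1):
        super().__init__()
        # Frozen early VGG16 features serve as the perceptual extractor.
        vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.w = perceptual_weight

    def forward(self, restored: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        pixel_loss = self.l1(restored, target)                    # pixel-wise L1
        perceptual_loss = self.l1(self.vgg(restored), self.vgg(target))
        return pixel_loss + self.w * perceptual_loss

loss_fn = HybridRestorationLoss()
loss = loss_fn(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```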
19 pages, 818 KB  
Article
NAMI: A Neuro-Adaptive Multimodal Architecture for Wearable Human–Computer Interaction
by Christos Papakostas, Christos Troussas, Akrivi Krouska and Cleo Sgouropoulou
Multimodal Technol. Interact. 2025, 9(10), 108; https://doi.org/10.3390/mti9100108 - 18 Oct 2025
Abstract
The increasing ubiquity of wearable computing and multimodal interaction technologies has created unprecedented opportunities for natural and seamless human–computer interaction. However, most existing systems adapt only to external user actions such as speech, gesture, or gaze, without considering internal cognitive or affective states. This limits their ability to provide intelligent and empathetic adaptations. This paper addresses this critical gap by proposing the Neuro-Adaptive Multimodal Architecture (NAMI), a principled, modular, and reproducible framework designed to integrate behavioral and neurophysiological signals in real time. NAMI combines multimodal behavioral inputs with lightweight EEG and peripheral physiological measurements to infer cognitive load and engagement and adapt the interface dynamically to optimize user experience. The architecture is formally specified as a three-layer pipeline encompassing sensing and acquisition, cognitive–affective state estimation, and adaptive interaction control, with clear data flows, mathematical formalization, and real-time performance on wearable platforms. A prototype implementation of NAMI was deployed in an augmented reality Java programming tutor for postgraduate informatics students, where it dynamically adjusted task difficulty, feedback modality, and assistance frequency based on inferred user state. Empirical evaluation with 100 participants demonstrated significant improvements in task performance, reduced subjective workload, and increased engagement and satisfaction, confirming the effectiveness of the neuro-adaptive approach.
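A rough, hypothetical sketch of the three-layer pipeline the abstract names (sensing and acquisition, cognitive–affective state estimation, adaptive interaction control) might look like the following. Every threshold, signal name, and adaptation rule here is an illustrative assumption; the paper's actual estimators are trained models.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    cognitive_load: float  # 0..1, e.g. derived from an EEG theta/alpha ratio
    engagement: float      # 0..1, e.g. derived from peripheral signals

def estimate_state(theta_alpha_ratio: float, eda_level: float) -> UserState:
    # Layer 2 (state estimation): placeholder heuristics stand in for
    # the trained cognitive-affective models described in the paper.
    load = min(max(theta_alpha_ratio / 2.0, 0.0), 1.0)
    engagement = min(max(eda_level, 0.0), 1.0)
    return UserState(load, engagement)

def adapt_interface(state: UserState) -> dict:
    # Layer 3 (adaptive control): ease the task under high load,
    # raise assistance when engagement drops. Thresholds are invented.
    return {
        "difficulty": "easy" if state.cognitive_load > 0.7 else "normal",
        "assist_frequency": "high" if state.engagement < 0.3 else "default",
    }

# Layer 1 (sensing) would feed real measurements in here:
print(adapt_interface(estimate_state(theta_alpha_ratio=1.8, eda_level=0.2)))
```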
27 pages, 2710 KB  
Review
Hepatoprotective Effect of Silymarin Herb in Prevention of Liver Dysfunction Using Pig as Animal Model
by Prarthana Sharma, Varun Asediya, Garima Kalra, Sharmin Sultana, Nihal Purohit, Kamila Kibitlewska, Wojciech Kozera, Urszula Czarnik, Krzysztof Karpiesiuk, Marek Lecewicz, Paweł Wysocki, Adam Lepczyński, Małgorzata Ożgo, Marta Marynowska, Agnieszka Herosimczyk, Elżbieta Redlarska, Brygida Ślaska, Krzysztof Kowal, Angelika Tkaczyk-Wlizło, Paweł Grychnik, Athul P. Kurian, Kaja Ziółkowska-Twarowska, Katarzyna Chałaśkiewicz, Katarzyna Kępka-Borkowska, Ewa Poławska, Magdalena Ogłuszka, Rafał R. Starzyński, Hiroaki Taniguchi, Chandra Shekhar Pareek and Mariusz Pierzchała
Nutrients 2025, 17(20), 3278; https://doi.org/10.3390/nu17203278 - 18 Oct 2025
Abstract
Silymarin, a flavonolignan-rich extract of Silybum marianum, is widely recognized for its hepatoprotective potential. While rodent studies predominate, pigs (Sus scrofa) offer a more translationally relevant model due to their hepatic architecture, bile acid composition, and transporter expression, which closely resemble those of humans. This narrative review synthesises current evidence on the chemistry, pharmacokinetics, biodistribution, and hepatoprotective activity of silymarin in porcine models. Available studies demonstrate that when adequate intrahepatic exposure is achieved, particularly through optimised formulations, silymarin can attenuate oxidative stress, suppress inflammatory signalling, stabilise mitochondria, and modulate fibrogenic pathways. Protective effects have been reported across diverse porcine injury paradigms, including toxin-induced necrosis, ethanol- and diet-associated steatosis, metabolic dysfunction, ischemia–reperfusion injury, and partial hepatectomy. However, the evidence base remains limited, with few long-term studies addressing fibrosis or regeneration, and methodological heterogeneity complicates the comparison of data across studies. Current knowledge gaps in silymarin research include inconsistent chemotype characterization among plant sources, limited reporting of unbound pharmacokinetic parameters, and variability in histological scoring criteria across studies, which collectively hinder cross-study comparability and mechanistic interpretation. Advances in analytical chemistry, transporter biology, and formulation design are beginning to refine the interpretation of exposure–response relationships. In parallel, emerging computational approaches, including machine-learning-assisted chemotype fingerprinting, automated histology scoring, and Bayesian exposure modeling, are being explored as supportive tools to enhance reproducibility and translational relevance; however, these frameworks remain exploratory and require empirical validation, particularly in modeling enterohepatic recirculation. Collectively, current porcine evidence supports silymarin as a context-dependent yet credible hepatoprotective agent, highlighting priorities for future research to better define its therapeutic potential in clinical nutrition and veterinary practice.

16 pages, 2759 KB  
Article
Machine Learning-Based Position Detection Using Hall-Effect Sensor Arrays on Resource-Constrained Microcontroller
by Zalán Németh, Chan Hwang See, Keng Goh, Arfan Ghani, Simeon Keates and Raed A. Abd-Alhameed
Sensors 2025, 25(20), 6444; https://doi.org/10.3390/s25206444 - 18 Oct 2025
Abstract
This paper presents an electromagnetic levitation system that stabilizes a magnetic body using an array of electromagnets controlled by a Hall-effect sensor array and TinyML-based position detection. Departing from conventional optical tracking methods, the proposed design combines finite-element-optimized electromagnets with a microcontroller-optimized neural network that processes sensor data to predict the levitated object’s position with 0.0263–0.0381 mm mean absolute error. The system employs both quantized and full-precision implementations of a supervised multi-output regression model trained on spatially sampled data (40 × 40 × 15 mm volume at 5 mm intervals). Comprehensive benchmarking demonstrates stable operation at 850–1000 Hz control frequencies, matching optical systems’ performance while eliminating their cost and complexity. The integrated solution performs real-time position detection and current calculation entirely on-board, requiring no external tracking devices or high-performance computing. By achieving sub-30 μm accuracy with standard microcontrollers and minimal hardware, this work validates machine learning as a viable alternative to optical position detection in magnetic levitation systems, reducing implementation barriers for research and industrial applications. The complete system design, including electromagnetic array characterization, neural network architecture selection, and real-time implementation challenges, is presented alongside performance comparisons with conventional approaches.
(This article belongs to the Special Issue Magnetic Field Sensing and Measurement Techniques)
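The supervised multi-output regression idea (sensor array in, 3-D position out) can be sketched with a small PyTorch MLP. Layer widths, the sensor count, and the use of an L1 training loss (chosen to match the reported mean-absolute-error metric) are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PositionRegressor(nn.Module):
    """MLP mapping Hall-effect sensor readings to a 3-D position (toy sizes)."""
    def __init__(self, n_sensors: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 3),          # predicted (x, y, z) in mm
        )

    def forward(self, readings: torch.Tensor) -> torch.Tensor:
        return self.net(readings)

model = PositionRegressor()
loss_fn = nn.L1Loss()                  # trains toward a mean-absolute-error target
pos = model(torch.randn(1, 9))         # one reading per Hall sensor
```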

20 pages, 719 KB  
Article
Quantum-Driven Chaos-Informed Deep Learning Framework for Efficient Feature Selection and Intrusion Detection in IoT Networks
by Padmasri Turaka and Saroj Kumar Panigrahy
Technologies 2025, 13(10), 470; https://doi.org/10.3390/technologies13100470 - 17 Oct 2025
Abstract
The rapid development of the Internet of Things (IoT) poses significant problems in securing heterogeneous, massive, and high-volume network traffic against cyber threats. Traditional intrusion detection systems (IDSs) often scale poorly or are computationally inefficient because of redundant or irrelevant features, and they suffer from high false-positive rates. Addressing these limitations, this study proposes a hybrid intelligent model that combines quantum computing, chaos theory, and deep learning to achieve efficient feature selection and effective intrusion classification. The proposed system offers four novel modules for feature optimization: chaotic swarm intelligence, quantum diffusion modeling, transformer-guided ranking, and multi-agent reinforcement learning, all of which work with a graph-based classifier enhanced with quantum attention mechanisms. This architecture allows as much as 75% feature reduction while achieving 4% better classification accuracy and reducing computational overhead by 40% compared to the best-performing models. When evaluated on benchmark datasets (NSL-KDD, CICIDS2017, and UNSW-NB15), it shows superior performance in intrusion detection tasks, marking it as a viable candidate for scalable and real-time IoT security analytics.
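As a flavor of the chaos-informed ingredient, the sketch below uses a logistic map in place of a uniform RNG to pick a feature subset, consistent with the roughly 75% feature reduction the abstract reports. It is a deliberately minimal, hypothetical illustration; the paper's chaotic-swarm and quantum modules are far more elaborate.

```python
import numpy as np

def logistic_map(x: float, r: float = 3.99) -> float:
    # Classic chaotic map; r near 4 gives fully chaotic behavior.
    return r * x * (1.0 - x)

def chaotic_feature_mask(n_features: int, keep_ratio: float = 0.25,
                         seed: float = 0.7) -> np.ndarray:
    # Iterate the map once per feature; keep the features with the
    # smallest chaotic values so roughly keep_ratio of them survive.
    x, values = seed, []
    for _ in range(n_features):
        x = logistic_map(x)
        values.append(x)
    threshold = np.quantile(values, keep_ratio)
    return np.array(values) <= threshold

mask = chaotic_feature_mask(100)   # ~75% feature reduction, as in the abstract
print(mask.sum(), "features kept")
```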

29 pages, 1748 KB  
Article
Optimizing Informer with Whale Optimization Algorithm for Enhanced Ship Trajectory Prediction
by Haibo Xie, Jinliang Wang, Zhiqiang Shi and Shiyuan Xue
J. Mar. Sci. Eng. 2025, 13(10), 1999; https://doi.org/10.3390/jmse13101999 - 17 Oct 2025
Abstract
The rapid expansion of global shipping has led to continuously increasing vessel traffic density, making high-accuracy ship trajectory prediction particularly critical for navigational safety and traffic management optimization in complex waters such as ports and narrow channels. However, existing methods still face challenges in medium-to-long-term prediction and nonlinear trajectory modeling, including insufficient accuracy and low computational efficiency. To address these issues, this paper proposes an enhanced Informer model (WOA-Informer) based on the Whale Optimization Algorithm (WOA). The model leverages Informer to capture long-term temporal dependencies and incorporates WOA for automated hyperparameter tuning, thereby improving prediction accuracy and robustness. Experimental results demonstrate that the WOA-Informer model achieves outstanding performance across three distinct trajectory patterns, with an average reduction of 23.1% in Root Mean Square Error (RMSE) and 27.8% in Haversine distance (HAV) compared to baseline models. The model also exhibits stronger robustness and stability in multi-step predictions while maintaining a favorable balance in computational efficiency. These results substantiate the effectiveness of metaheuristic optimization for strengthening deep learning architectures and present a computationally efficient, high-accuracy framework for vessel trajectory prediction.
(This article belongs to the Special Issue Ship Manoeuvring and Control)
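WOA-style hyperparameter tuning can be sketched in a few lines. The loop below implements only the "encircling prey" update of the Whale Optimization Algorithm against a stand-in objective (a placeholder for "train Informer, return validation RMSE"); the spiral and random-exploration phases, the bounds, and the population size are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(params: np.ndarray) -> float:
    # Stand-in for "train Informer with these settings, return val. RMSE".
    lr, dropout = params
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.1) ** 2

bounds = np.array([[1e-5, 1e-2], [0.0, 0.5]])   # (learning rate, dropout)
whales = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))

for t in range(30):
    a = 2.0 * (1 - t / 30)                      # a decays from 2 to 0
    best = whales[np.argmin([objective(w) for w in whales])].copy()
    for i in range(len(whales)):
        r = rng.random(2)
        A, C = 2 * a * r - a, 2 * rng.random(2)
        # "Encircling prey": move each whale relative to the current best.
        whales[i] = best - A * np.abs(C * best - whales[i])
        whales[i] = np.clip(whales[i], bounds[:, 0], bounds[:, 1])

print("best hyperparameters found:", best)
```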
25 pages, 3235 KB  
Article
An Energy-Aware Generative AI Edge Inference Framework for Low-Power IoT Devices
by Yafei Xie and Quanrong Fang
Electronics 2025, 14(20), 4086; https://doi.org/10.3390/electronics14204086 - 17 Oct 2025
Abstract
The rapid proliferation of the Internet of Things (IoT) has created an urgent need for on-device intelligence that balances high computational demands with stringent energy constraints. Existing edge inference frameworks struggle to deploy generative artificial intelligence (AI) models efficiently on low-power devices, often sacrificing fidelity for efficiency or lacking adaptability to dynamic conditions. To address this gap, we propose a generative AI edge inference framework integrating lightweight architecture compression, adaptive quantization, and energy-aware scheduling. Extensive experiments on CIFAR-10, Tiny-ImageNet, and IoT-SensorStream show that our method reduces energy consumption by up to 31% and inference latency by 27% compared with state-of-the-art baselines, while consistently improving generative quality. Robustness tests further confirm resilience under noise, cross-task, and cross-dataset conditions, and ablation studies validate the necessity of each module. Finally, deployment in a hospital IoT laboratory demonstrates real-world feasibility. These results highlight both the theoretical contribution of unifying compression, quantization, and scheduling, and the practical potential for sustainable, scalable, and reliable deployment of generative AI in diverse IoT ecosystems.
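One of the three ingredients, adaptive quantization under an energy budget, can be illustrated with PyTorch's built-in dynamic quantization. The battery-level policy below is a hypothetical stand-in for the paper's energy-aware scheduler, and the toy model stands in for a compressed generative network.

```python
import torch
import torch.nn as nn

# Toy float32 model standing in for a compressed generative network.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

def select_model(battery_level: float) -> nn.Module:
    # Energy-aware policy (invented): fall back to an int8 dynamically
    # quantized copy of the model when the energy budget is tight.
    if battery_level < 0.3:
        return torch.ao.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8)
    return model

x = torch.randn(1, 64)
y = select_model(battery_level=0.2)(x)   # runs the int8 variant
```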
25 pages, 2128 KB  
Article
A Low-Cost UAV System and Dataset for Real-Time Weed Detection in Salad Crops
by Alina L. Machidon, Andraž Krašovec, Veljko Pejović, Daniele Latini, Sarathchandrakumar T. Sasidharan, Fabio Del Frate and Octavian M. Machidon
Electronics 2025, 14(20), 4082; https://doi.org/10.3390/electronics14204082 - 17 Oct 2025
Abstract
The global food crisis and a growing population necessitate efficient agricultural land use. Weeds cause up to 40% yield loss in major crops, resulting in over USD 100 billion in annual economic losses. Camera-equipped UAVs offer a solution for automatic weed detection, but the high computational and energy demands of deep learning models limit their use to expensive, high-end UAVs. In this paper, we present a low-cost UAV system built from off-the-shelf components, featuring a custom-designed on-board computing system based on the NVIDIA Jetson Nano. This system efficiently manages real-time image acquisition and inference using the energy-efficient Squeeze U-Net neural network for weed detection. Our approach ensures the pipeline operates in real time without affecting the drone’s flight autonomy. We also introduce the AgriAdapt dataset, a novel collection of 643 high-resolution aerial images of salad crops with weeds, which fills a key gap by providing realistic UAV data for benchmarking segmentation models under field conditions. Several deep learning models are trained and validated on the newly introduced AgriAdapt dataset, demonstrating its suitability for effective weed segmentation in UAV imagery. Quantitative results show that the dataset supports a range of architectures, from larger models such as DeepLabV3 to smaller, lightweight networks like Squeeze U-Net (with only 2.5 M parameters), achieving high accuracy (around 90%) across the board. These contributions distinguish our work from earlier UAV-based weed detection systems by combining a novel dataset with a comprehensive evaluation of accuracy, latency, and energy efficiency, thus directly targeting deep learning applications for real-time UAV deployment. Our results demonstrate the feasibility of deploying a low-cost, energy-efficient UAV system for real-time weed detection, making advanced agricultural technology more accessible and practical for widespread use.
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation, 2nd Edition)
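Squeeze U-Net keeps its parameter count low with SqueezeNet-style "fire" blocks: a 1x1 squeeze convolution followed by parallel 1x1 and 3x3 expands. A hypothetical PyTorch rendering is below; the channel sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class FireBlock(nn.Module):
    def __init__(self, in_ch: int, squeeze_ch: int, expand_ch: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, 1)          # 1x1 squeeze
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, 1)      # 1x1 expand
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        # Concatenating both expand paths restores the channel count cheaply.
        return torch.cat([self.act(self.expand1(s)),
                          self.act(self.expand3(s))], dim=1)

block = FireBlock(64, 16, 32)            # output has 64 channels again
y = block(torch.randn(1, 64, 128, 128))
```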

19 pages, 5686 KB  
Article
RipenessGAN: Growth Day Embedding-Enhanced GAN for Stage-Wise Jujube Ripeness Data Generation
by Jeon-Seong Kang, Junwon Yoon, Beom-Joon Park, Junyoung Kim, Sung Chul Jee, Ha-Yoon Song and Hyun-Joon Chung
Agronomy 2025, 15(10), 2409; https://doi.org/10.3390/agronomy15102409 - 17 Oct 2025
Abstract
RipenessGAN is a novel Generative Adversarial Network (GAN) designed to generate synthetic images across different ripeness stages of jujubes (green fruit, white ripe fruit, semi-red fruit, and fully red fruit), aiming to provide balanced training data for diverse applications beyond classification accuracy. This study addresses the problem of data imbalance by augmenting each ripeness stage using our proposed Growth Day Embedding mechanism, thereby enhancing the performance of downstream classification models. The core innovation of RipenessGAN lies in its ability to capture continuous temporal transitions among discrete ripeness classes by incorporating fine-grained growth day information (0–56 days) in addition to traditional class labels. The experimental results show that RipenessGAN produces synthetic data with higher visual quality and greater diversity compared to CycleGAN. Furthermore, the classification models trained on the enriched dataset exhibit more consistent and accurate performance. We also conducted comprehensive comparisons of RipenessGAN against CycleGAN and class-conditional diffusion models (DDPM) under strictly controlled and fair experimental settings, carefully matching model architectures, computational resources, training conditions, and evaluation metrics. The results indicate that although diffusion models yield highly realistic images and CycleGAN ensures stable cycle-consistent generation, RipenessGAN provides superior practical benefits in training efficiency, temporal controllability, and adaptability for agricultural applications. This research demonstrates the potential of RipenessGAN to mitigate data imbalance in agriculture and highlights its scalability to other crops.
(This article belongs to the Section Precision and Digital Agriculture)
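The Growth Day Embedding idea, conditioning generation on a fine-grained day index (0-56) in addition to a discrete ripeness class, can be sketched as a conditional generator that concatenates both embeddings with the noise vector. All sizes below are toy assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DayConditionedGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=4, n_days=57, emb_dim=16):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, emb_dim)   # ripeness class
        self.day_emb = nn.Embedding(n_days, emb_dim)        # growth day 0..56
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2 * emb_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),          # toy 32x32 RGB
        )

    def forward(self, z, ripeness_class, growth_day):
        cond = torch.cat([self.class_emb(ripeness_class),
                          self.day_emb(growth_day)], dim=1)
        img = self.net(torch.cat([z, cond], dim=1))
        return img.view(-1, 3, 32, 32)

g = DayConditionedGenerator()
fake = g(torch.randn(8, 100),
         torch.randint(0, 4, (8,)),     # ripeness classes
         torch.randint(0, 57, (8,)))    # growth days
```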

30 pages, 6302 KB  
Article
Pixel-Attention W-Shaped Network for Joint Lesion Segmentation and Diabetic Retinopathy Severity Staging
by Archana Singh, Sushma Jain and Vinay Arora
Diagnostics 2025, 15(20), 2619; https://doi.org/10.3390/diagnostics15202619 - 17 Oct 2025
Abstract
Background: Visual impairment remains a critical public health challenge, and diabetic retinopathy (DR) is a leading cause of preventable blindness worldwide. Early stages of the disease are particularly difficult to identify, as lesions are subtle, expert review is time-consuming, and conventional diagnostic workflows remain subjective. Methods: To address these challenges, we propose a novel Pixel-Attention W-shaped (PAW-Net) deep learning framework that integrates a Lesion-Prior Cross Attention (LPCA) module with a W-shaped encoder–decoder architecture. The LPCA module enhances pixel-level representation of microaneurysms, hemorrhages, and exudates, while the dual-branch W-shaped design jointly performs lesion segmentation and disease severity grading in a single, clinically interpretable pass. The framework has been trained and validated using DDR and a preprocessed Messidor + EyePACS dataset, with APTOS-2019 reserved for external, out-of-distribution evaluation. Results: The proposed PAW-Net framework achieved robust performance across severity levels, with an accuracy of 98.65%, precision of 98.42%, recall (sensitivity) of 98.83%, specificity of 99.12%, F1-score of 98.61%, and a Dice coefficient of 98.61%. Comparative analyses demonstrate consistent improvements over contemporary architectures, particularly in accuracy and F1-score. Conclusions: The PAW-Net framework generates interpretable lesion overlays that facilitate rapid triage and follow-up, exhibits resilience under domain shift, and maintains an efficient computational footprint suitable for telemedicine and mobile deployment.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
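The Dice coefficient reported in the results is straightforward to compute for binary lesion masks; a small sketch follows. The smoothing constant and tensor shapes are illustrative.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    # pred and target are {0,1} masks of identical shape.
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = torch.tensor([[1, 1, 0], [0, 1, 0]], dtype=torch.float32)
mask = torch.tensor([[1, 0, 0], [0, 1, 1]], dtype=torch.float32)
print(dice_coefficient(pred, mask))   # 2*2 / (3+3) = 0.6667
```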

17 pages, 2475 KB  
Article
YOLO-LMTB: A Lightweight Detection Model for Multi-Scale Tea Buds in Agriculture
by Guofeng Xia, Yanchuan Guo, Qihang Wei, Yiwen Cen, Loujing Feng and Yang Yu
Sensors 2025, 25(20), 6400; https://doi.org/10.3390/s25206400 - 16 Oct 2025
Abstract
Tea bud targets are typically located in complex environments characterized by multi-scale variations, high density, and strong color resemblance to the background, which pose significant challenges for rapid and accurate detection. To address these issues, this study presents YOLO-LMTB, a lightweight multi-scale detection model based on the YOLOv11n architecture. First, a Multi-scale Edge-Refinement Context Aggregator (MERCA) module is proposed to replace the original C3k2 block in the backbone. MERCA captures multi-scale contextual features through hierarchical receptive field collaboration and refines edge details, thereby significantly improving the perception of fine structures in tea buds. Furthermore, a Dynamic Hyperbolic Token Statistics Transformer (DHTST) module is developed to replace the original PSA block. This module dynamically adjusts feature responses and statistical measures through attention weighting using learnable threshold parameters, effectively enhancing discriminative features while suppressing background interference. Additionally, a Bidirectional Feature Pyramid Network (BiFPN) is introduced to replace the original network structure, enabling the adaptive fusion of semantically rich and spatially precise features via bidirectional cross-scale connections while reducing computational complexity. In experiments on the self-built tea bud dataset, the YOLO-LMTB model achieves a 2.9% improvement in precision (P) over the original model, along with increases of 1.6% and 2.0% in mAP50 and mAP50-95, respectively. Simultaneously, the number of parameters decreased by 28.3%, and the model size was reduced by 22.6%. To further validate the effectiveness of the improvement scheme, experiments were also conducted on public datasets. The results demonstrate that each enhancement module boosts the model’s detection performance and exhibits strong generalization capability. The model not only excels in multi-scale tea bud detection but also offers a valuable reference for reducing computational complexity, thereby providing a technical foundation for the practical application of intelligent tea-picking systems.
(This article belongs to the Section Smart Agriculture)
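BiFPN's bidirectional cross-scale connections rest on fast normalized fusion: learnable non-negative weights blend same-shaped feature maps. A minimal, hypothetical PyTorch sketch, with illustrative shapes:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)               # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)         # fast normalized fusion
        return sum(wi * f for wi, f in zip(w, feats))

fuse = WeightedFusion(2)
out = fuse([torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40)])
```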

32 pages, 25136 KB  
Article
Efficiency Evaluation of Sampling Density for Indoor Building LiDAR Point-Cloud Segmentation
by Yiquan Zou, Wenxuan Chen, Tianxiang Liang and Biao Xiong
Sensors 2025, 25(20), 6398; https://doi.org/10.3390/s25206398 - 16 Oct 2025
Abstract
Prior studies on indoor LiDAR point-cloud semantic segmentation consistently report that sampling density strongly affects segmentation accuracy as well as runtime and memory, establishing an accuracy–efficiency trade-off. Nevertheless, in practice, the density is often chosen heuristically and reported under heterogeneous protocols, which limits quantitative guidance. We present a unified evaluation framework that treats density as the sole independent variable. To control architectural variability, three representative backbones—PointNet, PointNet++, and DGCNN—are each augmented with an identical Point Transformer module, yielding PointNet-Trans, PointNet++-Trans, and DGCNN-Trans trained and tested under one standardized protocol. The framework couples isotropic voxel-guided uniform down-sampling with a decision rule integrating three signals: (i) accuracy sufficiency, (ii) the onset of diminishing efficiency, and (iii) the knee of the accuracy–density curve. Experiments on scan-derived indoor point clouds (with BIM-derived counterparts for contrast) quantify the accuracy–runtime trade-off and identify an engineering-feasible operating band of 1600–2900 points/m2, with a robust setting near 2400 points/m2. Planar components saturate at moderate densities, whereas beams are more sensitive to down-sampling. By isolating density effects and enforcing one protocol, the study provides reproducible, model-agnostic guidance for scan planning and compute budgeting in indoor mapping and Scan-to-BIM workflows.
(This article belongs to the Special Issue Application of LiDAR Remote Sensing and Mapping)
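The isotropic voxel-guided uniform down-sampling that controls the density variable can be sketched with NumPy: one representative point is kept per occupied voxel, and the voxel size sets the resulting density. The voxel size and the "keep the first point" policy below are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    # points: (N, 3) array; keep one (first) point per occupied voxel.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points[np.sort(keep)]

pts = np.random.rand(100_000, 3) * 10.0           # synthetic 10 m cube
sparse = voxel_downsample(pts, voxel_size=0.02)   # ~2 cm voxels
print(len(sparse), "points kept")
```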

27 pages, 2137 KB  
Review
Engineering Bispecific Peptides for Precision Immunotherapy and Beyond
by Xumeng Ding and Yi Li
Int. J. Mol. Sci. 2025, 26(20), 10082; https://doi.org/10.3390/ijms262010082 - 16 Oct 2025
Abstract
Bispecific peptides represent an emerging therapeutic platform in immunotherapy, offering simultaneous engagement of two distinct molecular targets to enhance specificity, functional synergy, and immune modulation. Their compact structure and modular design enable precise interaction with protein–protein interfaces and shallow binding sites that are otherwise difficult to target. This review summarizes current design strategies of bispecific peptides, including fused, linked, and self-assembled architectures, and elucidates their mechanisms in bridging tumor cells with immune effector cells and blocking immune checkpoint pathways. Recent developments highlight their potential applications not only in oncology but also in autoimmune and infectious diseases. Key translational challenges, including proteolytic stability, immunogenicity, delivery barriers, and manufacturing scalability, are discussed, along with emerging peptide engineering and computational design strategies to address these limitations. Bispecific peptides offer a versatile and adaptable platform poised to advance precision immunotherapy and expand therapeutic options across immune-mediated diseases.
(This article belongs to the Section Molecular Immunology)

32 pages, 30808 KB  
Article
Deep Learning for Automated Sewer Defect Detection: Benchmarking YOLO and RT-DETR on the Istanbul Dataset
by Mustafa Oğurlu, Bülent Bayram, Bahadır Kulavuz and Tolga Bakırman
Appl. Sci. 2025, 15(20), 11096; https://doi.org/10.3390/app152011096 - 16 Oct 2025
Abstract
The inspection and maintenance of urban sewer infrastructure remain critical challenges for megacities, where conventional manual inspection approaches are labor-intensive, time-consuming, and prone to human error. Although deep learning has been increasingly applied to sewer inspection, the field lacks both a publicly available large-scale dataset and a systematic evaluation of CNN and transformer-based models on real sewer footage. The primary aim of this study is to systematically evaluate and compare state-of-the-art deep learning architectures for automated sewer defect detection using a newly introduced dataset. We present the Istanbul Sewer Defect Dataset (ISWDS), comprising 13,491 expert-annotated images collected from Istanbul’s wastewater network and covering eight defect categories that account for approximately 90% of reported failures. The scientific novelty of this work lies in both the introduction of the ISWDS and the first systematic benchmarking of YOLO (v8/11/12) and RT-DETR (v1/v2) architectures under identical protocols on real sewer inspection footage. Experimental results demonstrate that RT-DETR v2 achieves the best performance (F1: 79.03%, Recall: 81.10%), significantly outperforming the best YOLO variant. While transformer-based architectures excel in detecting partially occluded defects and complex operational conditions, YOLO models provide computational efficiency advantages for resource-constrained deployments. Furthermore, a QGIS-based inspection tool integrating the best-performing models was developed to enable real-time video analysis and automated reporting. Overall, this study highlights the trade-offs between accuracy and efficiency, demonstrating that RT-DETR v2 is most suitable for server-based processing. In contrast, compact YOLO variants are more appropriate for edge deployment.
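As a quick sanity check on the reported metrics: F1 is the harmonic mean of precision and recall, so the quoted F1 of 79.03% and recall of 81.10% imply a precision of roughly 77.06% (the implied value is my back-calculation, not a figure from the paper).

```python
def f1(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Back-calculated precision consistent with the reported F1 and recall:
print(round(f1(0.7706, 0.8110), 4))   # 0.7903
```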
