Search Results (2,570)

Search Parameters:
Keywords = art collective

14 pages, 1950 KiB  
Article
Ancient Ritual Behavior as Reflected in the Imagery at Picture Cave, Missouri, USA
by Carol Diaz-Granados and James R. Duncan
Arts 2025, 14(4), 88; https://doi.org/10.3390/arts14040088 - 6 Aug 2025
Abstract
Since 1992, we have promoted the use of descriptions from ethnographic data, including ancient, surviving oral traditions, to aid in explaining the iconography portrayed in pictographs and petroglyphs found in Missouri, particularly those at Picture Cave. The literature to which we refer is from American Indian groups related linguistically and connected to the pre-Columbian inhabitants of Missouri. In addition, we have had on-going conversations with many elder tribal members of the Dhegiha Sioux language group (including the Osage, Quapaw, and Kansa (the Ponca and Omaha are also part of this cognate linguistic group)). With the copious collections of southern Siouan ethnographic accounts, we have been able to explain salient features in the iconography of several of the detailed rock art motifs and vignettes, and propose interpretations. This Midwest region is part of the Cahokia interaction sphere, an area that displays western Mississippian symbolism associated with that found in Missouri rock art as well as on pottery, shell, and copper. Full article
(This article belongs to the Special Issue Advances in Rock Art Studies)

24 pages, 1684 KiB  
Article
Beyond Assistance: Embracing AI as a Collaborative Co-Agent in Education
by Rena Katsenou, Konstantinos Kotsidis, Agnes Papadopoulou, Panagiotis Anastasiadis and Ioannis Deliyannis
Educ. Sci. 2025, 15(8), 1006; https://doi.org/10.3390/educsci15081006 - 6 Aug 2025
Abstract
The integration of artificial intelligence (AI) in education offers novel opportunities to enhance critical thinking while also posing challenges to independent cognitive development. In particular, Human-Centered Artificial Intelligence (HCAI) in education aims to enhance human experience by providing a supportive and collaborative learning environment. Rather than replacing the educator, HCAI serves as a tool that empowers both students and teachers, fostering critical thinking and autonomy in learning. This study investigates the potential for AI to become a collaborative partner that assists learning and enriches academic engagement. The research was conducted during the 2024–2025 winter semester within the Pedagogical and Teaching Sufficiency Program offered by the Audio and Visual Arts Department, Ionian University, Corfu, Greece. The research employs a hybrid ethnographic methodology that blends digital interactions—where students use AI tools to create artistic representations—with physical classroom engagement. Data was collected through student projects, reflective journals, and questionnaires, revealing that structured dialog with AI not only facilitates deeper critical inquiry and analytical reasoning but also induces a state of flow, characterized by intense focus and heightened creativity. The findings highlight a dialectic between individual agency and collaborative co-agency, demonstrating that while automated AI responses may diminish active cognitive engagement, meaningful interactions can transform AI into an intellectual partner that enriches the learning experience. These insights suggest promising directions for future pedagogical strategies that balance digital innovation with traditional teaching methods, ultimately enhancing the overall quality of education. Furthermore, the study underscores the importance of integrating reflective practices and adaptive frameworks to support evolving student needs, ensuring a sustainable model. 
Full article
(This article belongs to the Special Issue Unleashing the Potential of E-learning in Higher Education)

11 pages, 443 KiB  
Article
Cognitive Screening with the Italian International HIV Dementia Scale in People Living with HIV: A Cross-Sectional Study in the cART Era
by Maristella Belfiori, Francesco Salis, Sergio Angioni, Claudia Bonalumi, Diva Cabeccia, Camilla Onnis, Nicola Pirisi, Francesco Ortu, Paola Piano, Stefano Del Giacco and Antonella Mandas
Infect. Dis. Rep. 2025, 17(4), 95; https://doi.org/10.3390/idr17040095 - 6 Aug 2025
Abstract
Background: HIV-associated neurocognitive disorders (HANDs) continue to be a significant concern, despite the advancements in prognosis achieved through Combination Antiretroviral Therapy (cART). Neuropsychological assessment, recommended by international guidelines for HANDs diagnosis, can be resource-intensive. Brief screening tools, like the International HIV Dementia Scale (IHDS) and the Montreal Cognitive Assessment (MoCA), are crucial in facilitating initial evaluations. This study aims to assess the Italian IHDS (IHDS-IT) and evaluate its sensitivity and specificity in detecting cognitive impairment in HIV patients. Methods: This cross-sectional study involved 294 patients aged ≥30 years, evaluated at the Immunology Unit of the University of Cagliari. Cognitive function was assessed using the MoCA and IHDS. Laboratory parameters, such as CD4 nadir, current CD4 count, and HIV-RNA levels, were also collected. Statistical analyses included Spearman’s correlation, Receiver Operating Characteristic (ROC) analysis, and the Youden J statistic to identify the optimal IHDS-IT cut-off for cognitive impairment detection. Results: The IHDS and MoCA scores showed a moderate positive correlation (Spearman’s rho = 0.411, p < 0.0001). ROC analysis identified an IHDS-IT cut-off of ≤9, yielding an Area Under the Curve (AUC) of 0.76, sensitivity of 71.7%, and specificity of 67.2%. At this threshold, 73.1% of patients with MoCA scores below 23 also presented abnormal IHDS scores, highlighting the complementary utility of both cognitive assessment instruments. Conclusions: The IHDS-IT exhibited fair diagnostic accuracy for detecting cognitive impairment, with a lower optimal cut-off than previously reported. The observed differences may reflect this study cohort’s demographic and clinical characteristics, including advanced age and long-standing HIV infection. Further longitudinal studies are necessary to validate these findings and to confirm the proposed IHDS cut-off over extended periods. Full article
(This article belongs to the Section HIV-AIDS)
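The cut-off selection described above maximizes Youden's J statistic, J = sensitivity + specificity − 1, across candidate thresholds. A minimal sketch of that selection step, using the abstract's reported sensitivity/specificity for the ≤9 cut-off and invented values for the neighboring cut-offs:

```python
def youden_j(sensitivity: float, specificity: float) -> float:
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# Candidate IHDS cut-offs -> (sensitivity, specificity). Only the <=9 entry
# uses values reported in the abstract; the neighbors are invented.
candidates = {
    8: (0.60, 0.75),
    9: (0.717, 0.672),   # reported: sensitivity 71.7%, specificity 67.2%
    10: (0.80, 0.50),
}
best = max(candidates, key=lambda c: youden_j(*candidates[c]))
print(best)  # -> 9, the cut-off maximizing J (0.389 here)
```

In practice J is evaluated at every threshold on the ROC curve, not just three; the dictionary stands in for that sweep.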

22 pages, 2053 KiB  
Article
Enhanced Real-Time Method Traffic Light Signal Color Recognition Using Advanced Convolutional Neural Network Techniques
by Fakhri Yagob and Jurek Z. Sasiadek
World Electr. Veh. J. 2025, 16(8), 441; https://doi.org/10.3390/wevj16080441 - 5 Aug 2025
Abstract
Real-time traffic light detection is essential for the safe navigation of autonomous vehicles, where timely and accurate recognition of signal states is critical. YOLOv8, a state-of-the-art object detection model, offers enhanced speed and precision, making it well-suited for real-time applications in complex driving environments. This study presents a modified YOLOv8 architecture optimized for traffic light detection by integrating Depth-Wise Separable Convolutions (DWSCs) throughout the backbone and head. The model was first pretrained on a public traffic light dataset to establish a strong baseline and then fine-tuned on a custom real-time dataset consisting of 480 images collected from video recordings under diverse road conditions. Experimental results demonstrate high detection performance, with precision scores of 0.992 for red, 0.995 for yellow, and 0.853 for green lights. The model achieved an average mAP@0.5 of 0.947, with stable F1 scores and low validation losses over 80 epochs, confirming effective learning and generalization. Compared to existing YOLO variants, the modified architecture showed superior performance, especially for red and yellow lights. Full article
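The efficiency gain from Depth-Wise Separable Convolutions comes from factorizing a standard convolution into a per-channel spatial pass plus a 1×1 channel-mixing pass. A back-of-the-envelope parameter count for a hypothetical layer (the layer sizes are illustrative, not taken from the paper):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dwsc_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise separable convolution: one k x k filter per input
    channel, then a 1 x 1 point-wise convolution to mix channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 128 -> 256 channels with a 3 x 3 kernel
std = conv_params(128, 256, 3)   # 294,912 weights
sep = dwsc_params(128, 256, 3)   # 33,920 weights (~8.7x fewer)
print(std, sep)
```

The same factorization also cuts multiply-accumulate operations by roughly the same ratio, which is what makes the modified backbone attractive for real-time inference.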

22 pages, 4426 KiB  
Article
A Digital Twin Platform for Real-Time Intersection Traffic Monitoring, Performance Evaluation, and Calibration
by Abolfazl Afshari, Joyoung Lee and Dejan Besenski
Infrastructures 2025, 10(8), 204; https://doi.org/10.3390/infrastructures10080204 - 4 Aug 2025
Abstract
Emerging transportation challenges necessitate cutting-edge technologies for real-time infrastructure and traffic monitoring. To create a dynamic digital twin for intersection monitoring, data gathering, performance assessment, and calibration of microsimulation software, this study presents a state-of-the-art platform that combines high-resolution LiDAR sensor data with VISSIM simulation software. Intending to track traffic flow and evaluate important factors, including congestion, delays, and lane configurations, the platform gathers and analyzes real-time data. The technology allows proactive actions to improve safety and reduce interruptions by utilizing the comprehensive information that LiDAR provides, such as vehicle trajectories, speed profiles, and lane changes. The digital twin technique offers unparalleled precision in traffic and infrastructure state monitoring by fusing real data streams with simulation-based performance analysis. The results show how the platform can transform real-time monitoring and open the door to data-driven decision-making, safer intersections, and more intelligent traffic data collection methods. Using the proposed platform, this study calibrated a VISSIM simulation network to optimize the driving behavior parameters in the software. This study addresses current issues in urban traffic management with real-time solutions, demonstrating the revolutionary impact of emerging technology in intelligent infrastructure monitoring. Full article

28 pages, 21813 KiB  
Article
Adaptive RGB-D Semantic Segmentation with Skip-Connection Fusion for Indoor Staircase and Elevator Localization
by Zihan Zhu, Henghong Lin, Anastasia Ioannou and Tao Wang
J. Imaging 2025, 11(8), 258; https://doi.org/10.3390/jimaging11080258 - 4 Aug 2025
Abstract
Accurate semantic segmentation of indoor architectural elements, such as staircases and elevators, is critical for safe and efficient robotic navigation, particularly in complex multi-floor environments. Traditional fusion methods struggle with occlusions, reflections, and low-contrast regions. In this paper, we propose a novel feature fusion module, Skip-Connection Fusion (SCF), that dynamically integrates RGB (Red, Green, Blue) and depth features through an adaptive weighting mechanism and skip-connection integration. This approach enables the model to selectively emphasize informative regions while suppressing noise, effectively addressing challenging conditions such as partially blocked staircases, glossy elevator doors, and dimly lit stair edges, which improves obstacle detection and supports reliable human–robot interaction in complex environments. Extensive experiments on a newly collected dataset demonstrate that SCF consistently outperforms state-of-the-art methods, including PSPNet and DeepLabv3, in both overall mIoU (mean Intersection over Union) and challenging-case performance. Specifically, our SCF module improves segmentation accuracy by 5.23% in the top 10% of challenging samples, highlighting its robustness in real-world conditions. Furthermore, we conduct a sensitivity analysis on the learnable weights, demonstrating their impact on segmentation quality across varying scene complexities. Our work provides a strong foundation for real-world applications in autonomous navigation, assistive robotics, and smart surveillance. Full article
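The abstract does not give the SCF equations, but the adaptive weighting with skip-connection integration it describes can be sketched generically: a gate blends RGB and depth features element-wise, and a skip term re-injects an unfused feature. All names and values below are illustrative assumptions, not the paper's actual module:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def fuse(rgb, depth, alpha, skip):
    """Generic adaptive RGB-D fusion sketch: a gate (here a single
    learnable scalar alpha, for simplicity) sets how much each modality
    contributes, and the skip term re-injects an unfused feature."""
    w = sigmoid(alpha)
    return [w * r + (1.0 - w) * d + s for r, d, s in zip(rgb, depth, skip)]

rgb = [0.8, 0.1, 0.5]
depth = [0.2, 0.9, 0.5]
out = fuse(rgb, depth, alpha=0.0, skip=rgb)  # alpha=0 -> even 0.5/0.5 blend
print(out)  # approximately [1.3, 0.6, 1.0]
```

In a real network the gate would be a learned, spatially varying map rather than one scalar, which is what lets the model down-weight unreliable depth in reflective or low-contrast regions.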

20 pages, 10013 KiB  
Article
Addressing Challenges in Rds,on Measurement for Cloud-Connected Condition Monitoring in WBG Power Converter Applications
by Farzad Hosseinabadi, Sachin Kumar Bhoi, Hakan Polat, Sajib Chakraborty and Omar Hegazy
Electronics 2025, 14(15), 3093; https://doi.org/10.3390/electronics14153093 - 2 Aug 2025
Abstract
This paper presents the design, implementation, and experimental validation of a Condition Monitoring (CM) circuit for SiC-based Power Electronics Converters (PECs). The paper leverages in situ drain–source resistance (Rds,on) measurements, interfaced with cloud connectivity for data processing and lifetime assessment, addressing key limitations in current state-of-the-art (SOTA) methods. Traditional approaches rely on expensive data acquisition systems under controlled laboratory conditions, making them unsuitable for real-world applications due to component variability, time delay, and noise sensitivity. Furthermore, these methods lack cloud interfacing for real-time data analysis and fail to provide comprehensive reliability metrics such as Remaining Useful Life (RUL). Additionally, the proposed CM method benefits from noise mitigation during switching transitions by utilizing delay circuits to ensure stable and accurate data capture. Moreover, collected data are transmitted to the cloud for long-term health assessment and damage evaluation. In this paper, experimental validation follows a structured design involving signal acquisition, filtering, cloud transmission, and temperature and thermal degradation tracking. Experimental testing has been conducted at different temperatures and operating conditions, considering coolant temperature variations (40 °C to 80 °C), and an output power of 7 kW. Results have demonstrated a clear correlation between temperature rise and Rds,on variations, validating the ability of the proposed method to predict device degradation. Finally, by leveraging cloud computing, this work provides a practical solution for real-world Wide Band Gap (WBG)-based PEC reliability and lifetime assessment. Full article
(This article belongs to the Section Industrial Electronics)
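As a rough illustration of the quantity being monitored: Rds,on is the on-state drain-source voltage divided by drain current, and degradation is tracked as its relative rise over a healthy-device baseline. The sample values and alarm level below are invented for illustration, not measurements from the paper:

```python
def rds_on(v_ds: float, i_d: float) -> float:
    """On-state resistance from one drain-source voltage/current sample."""
    return v_ds / i_d

def degradation_pct(measured: float, baseline: float) -> float:
    """Relative Rds,on rise versus the healthy-device baseline, in percent."""
    return 100.0 * (measured - baseline) / baseline

baseline = rds_on(v_ds=0.80, i_d=40.0)   # 20 mOhm healthy reference
aged = rds_on(v_ds=0.92, i_d=40.0)       # 23 mOhm after thermal stress
print(degradation_pct(aged, baseline))   # ~15% rise, above a 10% alarm level
```

The hard part the paper addresses is not this arithmetic but capturing clean V/I samples during switching (hence the delay circuits) and separating genuine aging from temperature-driven Rds,on variation.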
11 pages, 1941 KiB  
Article
Nomenclature and Typification of the Goat Grass Aegilops tauschii Coss. (Poaceae: Triticeae): A Key Species for the Secondary Gene Pool of Common Wheat Triticum aestivum
by P. Pablo Ferrer-Gallego, Raúl Ferrer-Gallego, Diego Rivera, Concepción Obón, Emilio Laguna and Nikolay P. Goncharov
Plants 2025, 14(15), 2375; https://doi.org/10.3390/plants14152375 - 1 Aug 2025
Abstract
Background: The typification of the name Aegilops tauschii Coss. (Poaceae: Triticeae) is revisited. Several authors cited a gathering from Iberia as the locality and Buxbaum as the collector of the type, but no actual specimens from this collection have been located, nor is there evidence that such a gathering existed. In 1994, van Slageren designated as lectotype an illustration from Buxbaum’s Plantarum minus cognitarum centuria I (1728), which, although original material, is not the only element cited in the protologue. The protologue mentions several gatherings, some of which are represented by identifiable herbarium specimens qualifying as syntypes. Methods: This work is based on the analysis of the protologue of Aegilops tauschii and the study of specimens conserved in several herbaria. According to the International Code of Nomenclature for algae, fungi, and plants (ICN, Shenzhen Code 2018), an illustration does not hold the same nomenclatural weight as preserved specimens cited in the protologue. Therefore, van Slageren’s lectotypification does not comply with Art. 9.12 of the ICN and must be superseded. Results: The original material includes multiple elements, and a new lectotype is designated from a specimen at PRC from Azerbaijan. Full article
(This article belongs to the Special Issue Taxonomy and Nomenclature of Euro + Mediterranean Vascular Plants)

33 pages, 1512 KiB  
Review
Advances and Challenges in Deep Learning for Acoustic Pathology Detection: A Review
by Florin Bogdan and Mihaela-Ruxandra Lascu
Technologies 2025, 13(8), 329; https://doi.org/10.3390/technologies13080329 - 1 Aug 2025
Abstract
Recent advancements in data collection technologies, data science, and speech processing have fueled significant interest in the computational analysis of biological sounds. This enhanced analytical capability shows promise for improved understanding and detection of various pathological conditions, extending beyond traditional speech analysis to encompass other forms of acoustic data. A particularly promising and rapidly evolving area is the application of deep learning techniques for the detection and analysis of diverse pathologies, including respiratory, cardiac, and neurological disorders, through sound processing. This paper provides a comprehensive review of the current state-of-the-art in using deep learning for pathology detection via analysis of biological sounds. It highlights key successes achieved in the field, identifies existing challenges and limitations, and discusses potential future research directions. This review aims to serve as a valuable resource for researchers and clinicians working in this interdisciplinary domain. Full article

26 pages, 1790 KiB  
Article
A Hybrid Deep Learning Model for Aromatic and Medicinal Plant Species Classification Using a Curated Leaf Image Dataset
by Shareena E. M., D. Abraham Chandy, Shemi P. M. and Alwin Poulose
AgriEngineering 2025, 7(8), 243; https://doi.org/10.3390/agriengineering7080243 - 1 Aug 2025
Abstract
In the era of smart agriculture, accurate identification of plant species is critical for effective crop management, biodiversity monitoring, and the sustainable use of medicinal resources. However, existing deep learning approaches often underperform when applied to fine-grained plant classification tasks due to the lack of domain-specific, high-quality datasets and the limited representational capacity of traditional architectures. This study addresses these challenges by introducing a novel, well-curated leaf image dataset consisting of 39 classes of medicinal and aromatic plants collected from the Aromatic and Medicinal Plant Research Station in Odakkali, Kerala, India. To overcome performance bottlenecks observed with a baseline Convolutional Neural Network (CNN) that achieved only 44.94% accuracy, we progressively enhanced model performance through a series of architectural innovations. These included the use of a pre-trained VGG16 network, data augmentation techniques, and fine-tuning of deeper convolutional layers, followed by the integration of Squeeze-and-Excitation (SE) attention blocks. Ultimately, we propose a hybrid deep learning architecture that combines VGG16 with Batch Normalization, Gated Recurrent Units (GRUs), Transformer modules, and Dilated Convolutions. This final model achieved a peak validation accuracy of 95.24%, significantly outperforming several baseline models, such as custom CNN (44.94%), VGG-19 (59.49%), VGG-16 before augmentation (71.52%), Xception (85.44%), Inception v3 (87.97%), VGG-16 after data augmentation (89.24%), VGG-16 after fine-tuning (90.51%), MobileNetV2 (93.67%), and VGG16 with SE block (94.94%). These results demonstrate superior capability in capturing both local textures and global morphological features. The proposed solution not only advances the state of the art in plant classification but also contributes a valuable dataset to the research community. Its real-world applicability spans field-based plant identification, biodiversity conservation, and precision agriculture, offering a scalable tool for automated plant recognition in complex ecological and agricultural environments. Full article
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)

21 pages, 12997 KiB  
Article
Aerial-Ground Cross-View Vehicle Re-Identification: A Benchmark Dataset and Baseline
by Linzhi Shang, Chen Min, Juan Wang, Liang Xiao, Dawei Zhao and Yiming Nie
Remote Sens. 2025, 17(15), 2653; https://doi.org/10.3390/rs17152653 - 31 Jul 2025
Abstract
Vehicle re-identification (Re-ID) is a critical computer vision task that aims to match the same vehicle across spatially distributed cameras, especially in the context of remote sensing imagery. While prior research has primarily focused on Re-ID using remote sensing images captured from similar, typically elevated viewpoints, these settings do not fully reflect complex aerial-ground collaborative remote sensing scenarios. In this work, we introduce a novel and challenging task: aerial-ground cross-view vehicle Re-ID, which involves retrieving vehicles in ground-view image galleries using query images captured from aerial (top-down) perspectives. This task is increasingly relevant due to the integration of drone-based surveillance and ground-level monitoring in multi-source remote sensing systems, yet it poses substantial challenges due to significant appearance variations between aerial and ground views. To support this task, we present AGID (Aerial-Ground Vehicle Re-Identification), the first benchmark dataset specifically designed for aerial-ground cross-view vehicle Re-ID. AGID comprises 20,785 remote sensing images of 834 vehicle identities, collected using drones and fixed ground cameras. We further propose a novel method, Enhanced Self-Correlation Feature Computation (ESFC), which enhances spatial relationships between semantically similar regions and incorporates shape information to improve feature discrimination. Extensive experiments on the AGID dataset and three widely used vehicle Re-ID benchmarks validate the effectiveness of our method, which achieves a Rank-1 accuracy of 69.0% on AGID, surpassing state-of-the-art approaches by 2.1%. Full article
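Rank-1 accuracy, the metric reported above, counts a query as correct when its single nearest gallery image shares its identity. A toy sketch with an invented distance matrix (smaller distance = more similar):

```python
def rank1_accuracy(dist, query_ids, gallery_ids):
    """Fraction of queries whose nearest gallery image (smallest
    distance) carries the same identity as the query."""
    hits = 0
    for q, row in enumerate(dist):
        nearest = min(range(len(row)), key=row.__getitem__)
        hits += gallery_ids[nearest] == query_ids[q]
    return hits / len(dist)

# Toy distances: 3 aerial queries against 4 ground-view gallery images
dist = [
    [0.9, 0.2, 0.7, 0.8],  # query "a": nearest is gallery 1 ("a") -- hit
    [0.1, 0.6, 0.5, 0.9],  # query "b": nearest is gallery 0 ("b") -- hit
    [0.2, 0.7, 0.9, 0.8],  # query "c": nearest is gallery 0 ("b") -- miss
]
query_ids = ["a", "b", "c"]
gallery_ids = ["b", "a", "c", "c"]
print(rank1_accuracy(dist, query_ids, gallery_ids))  # 2 of 3 -> 0.666...
```

In a real Re-ID evaluation the distances come from comparing learned feature embeddings (here, the ESFC features) between each aerial query and every ground-view gallery image.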

28 pages, 5699 KiB  
Article
Multi-Modal Excavator Activity Recognition Using Two-Stream CNN-LSTM with RGB and Point Cloud Inputs
by Hyuk Soo Cho, Kamran Latif, Abubakar Sharafat and Jongwon Seo
Appl. Sci. 2025, 15(15), 8505; https://doi.org/10.3390/app15158505 - 31 Jul 2025
Abstract
Recently, deep learning algorithms have been increasingly applied in construction for activity recognition, particularly for excavators, to automate processes and enhance safety and productivity through continuous monitoring of earthmoving activities. These deep learning algorithms analyze construction videos to classify excavator activities for earthmoving purposes. However, previous studies have solely focused on single-source external videos, which limits the activity recognition capabilities of the deep learning algorithm. This paper introduces a novel multi-modal deep learning-based methodology for recognizing excavator activities, utilizing multi-stream input data. It processes point clouds and RGB images using the two-stream long short-term memory convolutional neural network (CNN-LSTM) method to extract spatiotemporal features, enabling the recognition of excavator activities. A comprehensive dataset comprising 495,000 video frames of synchronized RGB and point cloud data was collected across multiple construction sites under varying conditions. The dataset encompasses five key excavator activities: Approach, Digging, Dumping, Idle, and Leveling. To assess the effectiveness of the proposed method, the performance of the two-stream CNN-LSTM architecture is compared with that of single-stream CNN-LSTM models on the same RGB and point cloud datasets, separately. The results demonstrate that the proposed multi-stream approach achieved an accuracy of 94.67%, outperforming existing state-of-the-art single-stream models, which achieved 90.67% accuracy for the RGB-based model and 92.00% for the point cloud-based model. These findings underscore the potential of the proposed activity recognition method, making it highly effective for automatic real-time monitoring of excavator activities, thereby laying the groundwork for future integration into digital twin systems for proactive maintenance and intelligent equipment management. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)

14 pages, 290 KiB  
Article
Patterns of Reverse Transcriptase Inhibitor Resistance Mutations in People Living with Human Immunodeficiency Virus in Libreville, Gabon
by Guy Francis Nzengui-Nzengui, Gaël Mourembou, Euloge Ibinga, Ayawa Claudine Kombila-Koumavor, Hervé M’boyis-Kamdem, Edmery Muriel Mpouho-Ntsougha, Alain Mombo-Mombo and Angélique Ndjoyi-Mbiguino
Trop. Med. Infect. Dis. 2025, 10(8), 216; https://doi.org/10.3390/tropicalmed10080216 - 30 Jul 2025
Abstract
Objective: To characterize the profiles of resistance mutations to HIV reverse transcriptase inhibitors in Gabon. Design: Cross-sectional study conducted over 37 months, from October 2019 to October 2022, at the IST/HIV/AIDS Reference Laboratory, a reference center for the biological monitoring of people living with the human immunodeficiency virus (PWHIV) in Gabon. Methods: Plasma from 666 PWHIV receiving antiretroviral treatment was collected, followed by RNA extraction, amplification, and reverse transcriptase gene sequencing. Statistical analyses were performed using Stata® 14.0 software (USA). Results: Six hundred and sixty-six (666) PWHIV plasma samples, collected from 252 male and 414 female patients, were analyzed, and 1654 mutations were detected in 388 patients, including 849 (51.3%) associated with nucleoside reverse transcriptase inhibitors (NRTIs) and 805 (48.7%) with non-nucleoside reverse transcriptase inhibitors (NNRTIs). Three of the most prescribed treatment regimens were associated with the appearance of both NRTI and NNRTI resistance mutations: TDF + 3TC + EFV (24.02%; 160/666), TDF + FTC + EFV (17.2%; 114/666), and AZT + 3TC + EFV (14.6%; 97/666). Additionally, stage 3 CD4 T-lymphocyte deficiency, higher viral load, and longer treatment duration are risk factors influencing the appearance of viral mutations, while treatment containing TDF + 3TC + DTG is more protective against mutations. Conclusions: Drug resistance mutations are common in Gabon and compromise the efficacy of ART. Further studies should investigate other causes of therapeutic failure in PWHIV in Gabon. Full article
(This article belongs to the Special Issue HIV Testing, Prevention and Care Interventions, 2nd Edition)
13 pages, 11739 KiB  
Article
DeepVinci: Organ and Tool Segmentation with Edge Supervision and a Densely Multi-Scale Pyramid Module for Robot-Assisted Surgery
by Li-An Tseng, Yuan-Chih Tsai, Meng-Yi Bai, Mei-Fang Li, Yi-Liang Lee, Kai-Jo Chiang, Yu-Chi Wang and Jing-Ming Guo
Diagnostics 2025, 15(15), 1917; https://doi.org/10.3390/diagnostics15151917 - 30 Jul 2025
Viewed by 238
Abstract
Background: Automated surgical navigation can be separated into three stages: (1) organ identification and localization, (2) identification of the organs requiring further surgery, and (3) automated planning of the operation path and steps. With its advanced visualization and control systems, the da Vinci surgical system provides a promising platform for automated surgical navigation. This study focuses on the first step in automated surgical navigation by identifying organs in gynecological surgery. Methods: Due to the difficulty of collecting da Vinci gynecological endoscopy data, we propose DeepVinci, a novel end-to-end high-performance encoder–decoder network based on convolutional neural networks (CNNs) for pixel-level organ semantic segmentation. Specifically, to overcome the drawback of a limited field of view, we incorporate a densely multi-scale pyramid module and a feature fusion module, which also enhance global context information. In addition, the system integrates an edge supervision network to refine the segmented results on the decoding side. Results: Experimental results show that DeepVinci can achieve state-of-the-art accuracy, obtaining Dice similarity coefficient and mean pixel accuracy values of 0.684 and 0.700, respectively. Conclusions: The proposed DeepVinci network presents a practical and competitive semantic segmentation solution for da Vinci gynecological surgery. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
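The Dice similarity coefficient and mean pixel accuracy cited in the Results are standard segmentation metrics. A minimal NumPy sketch of both (a generic illustration, not the authors' implementation) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice similarity coefficient for binary masks:
    # DSC = 2 * |pred AND target| / (|pred| + |target|)
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

def mean_pixel_accuracy(pred, target, num_classes):
    # Mean of per-class pixel accuracies, skipping classes
    # that do not appear in the ground truth.
    accs = []
    for c in range(num_classes):
        mask = target == c
        if mask.sum() == 0:
            continue
        accs.append((pred[mask] == c).mean())
    return float(np.mean(accs))
```

For example, a predicted mask covering two pixels that overlaps a one-pixel ground-truth mask in one pixel yields a Dice score of 2·1/(2+1) ≈ 0.667.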
30 pages, 37977 KiB  
Article
Text-Guided Visual Representation Optimization for Sensor-Acquired Video Temporal Grounding
by Yun Tian, Xiaobo Guo, Jinsong Wang and Xinyue Liang
Sensors 2025, 25(15), 4704; https://doi.org/10.3390/s25154704 - 30 Jul 2025
Viewed by 253
Abstract
Video temporal grounding (VTG) aims to localize a semantically relevant temporal segment within an untrimmed video based on a natural language query. The task continues to face challenges arising from cross-modal semantic misalignment, which is largely attributed to redundant visual content in sensor-acquired video streams, linguistic ambiguity, and discrepancies in modality-specific representations. Most existing approaches rely on intra-modal feature modeling, processing video and text independently throughout the representation learning stage. However, this isolation undermines semantic alignment by neglecting the potential of cross-modal interactions. In practice, a natural language query typically corresponds to spatiotemporal content in video signals collected through camera-based sensing systems, encompassing a particular sequence of frames and its associated salient subregions. We propose a text-guided visual representation optimization framework tailored to enhance semantic interpretation over video signals captured by visual sensors. This framework leverages textual information to focus on spatiotemporal video content, thereby narrowing the cross-modal gap. Built upon the unified cross-modal embedding space provided by CLIP, our model leverages video data from sensing devices to structure representations and introduces two dedicated modules to semantically refine visual representations across spatial and temporal dimensions. First, we design a Spatial Visual Representation Optimization (SVRO) module to learn spatial information within intra-frames. It selects salient patches related to the text, capturing more fine-grained visual details. Second, we introduce a Temporal Visual Representation Optimization (TVRO) module to learn temporal relations from inter-frames. Temporal triplet loss is employed in TVRO to enhance attention on text-relevant frames and capture clip semantics. Additionally, a self-supervised contrastive loss is introduced at the clip–text level to improve inter-clip discrimination by maximizing semantic variance during training. Experiments on Charades-STA, ActivityNet Captions, and TACoS, widely used benchmark datasets, demonstrate that our method outperforms state-of-the-art methods across multiple metrics. Full article
(This article belongs to the Section Sensing and Imaging)
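The temporal triplet loss mentioned in the TVRO module presumably follows the standard margin-based form, pulling an anchor embedding toward a text-relevant (positive) frame and pushing it away from an irrelevant (negative) one. A minimal sketch under that assumption (the abstract does not specify the exact distance measure or margin) is:

```python
import numpy as np

def temporal_triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard margin-based triplet loss:
    # L = max(0, d(anchor, positive) - d(anchor, negative) + margin)
    # using Euclidean distance between embedding vectors.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

When the positive frame embedding already lies closer to the anchor than the negative by at least the margin, the loss is zero; otherwise the gap is penalized, which is what pushes attention toward text-relevant frames during training.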