Search Results (1,113)

Search Parameters:
Keywords = common vision

20 pages, 4576 KiB  
Article
Enhanced HoVerNet Optimization for Precise Nuclei Segmentation in Diffuse Large B-Cell Lymphoma
by Gei Ki Tang, Chee Chin Lim, Faezahtul Arbaeyah Hussain, Qi Wei Oung, Aidy Irman Yajid, Sumayyah Mohammad Azmi and Yen Fook Chong
Diagnostics 2025, 15(15), 1958; https://doi.org/10.3390/diagnostics15151958 - 4 Aug 2025
Abstract
Background/Objectives: Diffuse Large B-Cell Lymphoma (DLBCL) is the most common subtype of non-Hodgkin lymphoma and demands precise segmentation and classification of nuclei for effective diagnosis and disease severity assessment. This study aims to evaluate the performance of HoVerNet, a deep learning model, for nuclei segmentation and classification in CMYC-stained whole slide images and to assess its integration into a user-friendly diagnostic tool. Methods: A dataset of 122 CMYC-stained whole slide images (WSIs) was used. Pre-processing steps, including stain normalization and patch extraction, were applied to improve input consistency. HoVerNet, a multi-branch neural network, was used for both nuclei segmentation and classification, particularly focusing on its ability to manage overlapping nuclei and complex morphological variations. Model performance was validated using metrics such as accuracy, precision, recall, and F1 score. Additionally, a graphical user interface (GUI) was developed to incorporate automated segmentation, cell counting, and severity assessment functionalities. Results: HoVerNet achieved a validation accuracy of 82.5%, with a precision of 85.3%, recall of 82.6%, and an F1 score of 83.9%. The model showed strong performance in differentiating overlapping and morphologically complex nuclei. The developed GUI enabled real-time visualization and diagnostic support, enhancing the efficiency and usability of DLBCL histopathological analysis. Conclusions: HoVerNet, combined with an integrated GUI, presents a promising approach for streamlining DLBCL diagnostics through accurate segmentation and real-time visualization. Future work will focus on incorporating Vision Transformers and additional staining protocols to improve generalizability and clinical utility.
(This article belongs to the Special Issue Artificial Intelligence-Driven Radiomics in Medical Diagnosis)
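As a quick consistency check on the reported metrics, the sketch below (plain Python, not the authors' code) recomputes F1 from the stated precision and recall; the count-based helper and its inputs are illustrative.

```python
def f1_from_pr(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from detection counts (true positives,
    false positives, false negatives); counts here are illustrative."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, f1_from_pr(precision, recall)

# The reported precision (85.3%) and recall (82.6%) are consistent with
# the reported F1 score:
print(round(f1_from_pr(0.853, 0.826), 3))  # 0.839
```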

24 pages, 2584 KiB  
Article
Precise and Continuous Biomass Measurement for Plant Growth Using a Low-Cost Sensor Setup
by Lukas Munser, Kiran Kumar Sathyanarayanan, Jonathan Raecke, Mohamed Mokhtar Mansour, Morgan Emily Uland and Stefan Streif
Sensors 2025, 25(15), 4770; https://doi.org/10.3390/s25154770 - 2 Aug 2025
Viewed by 205
Abstract
Continuous and accurate biomass measurement is a critical enabler for control, decision making, and optimization in modern plant production systems. It supports the development of plant growth models for advanced control strategies like model predictive control, and enables responsive, data-driven, and plant state-dependent cultivation. Traditional biomass measurement methods, such as destructive sampling, are time-consuming and unsuitable for high-frequency monitoring. In contrast, image-based estimation using computer vision and deep learning requires frequent retraining and is sensitive to changes in lighting or plant morphology. This work introduces a low-cost, load-cell-based biomass monitoring system tailored for vertical farming applications. The system operates at the level of individual growing trays, offering a valuable middle ground between impractical plant-level sensing and overly coarse rack-level measurements. Tray-level data allow localized control actions, such as adjusting light spectrum and intensity per tray, thereby enhancing the utility of controllable LED systems. This granularity supports layer-specific optimization and anomaly detection, which are not feasible with rack-level feedback. The biomass sensor is easily scalable and can be retrofitted, addressing common challenges such as mechanical noise and thermal drift. It offers a practical and robust solution for biomass monitoring in dynamic growing environments, enabling finer control and smarter decision making in both commercial and research-oriented vertical farming systems. The developed sensor was tested and validated against manual harvest data, demonstrating high agreement with actual plant biomass and confirming its suitability for integration into vertical farming systems.
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2025)
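The abstract names mechanical noise and thermal drift as the challenges the sensor addresses; a minimal sketch of how tray-level load-cell readings might be conditioned is given below. The filter structure, window size, drift coefficient, and spike tolerance are assumptions for illustration, not the paper's design.

```python
import statistics
from collections import deque

class TrayBiomassFilter:
    """Median-anchored smoothing of tray load-cell readings with a linear
    thermal-drift correction. Window size, drift coefficient, and the
    spike-rejection tolerance are hypothetical placeholders."""

    def __init__(self, window: int = 50, drift_g_per_degC: float = 0.8,
                 ref_temp_degC: float = 22.0, spike_tol_g: float = 5.0):
        self.readings: deque[float] = deque(maxlen=window)
        self.drift = drift_g_per_degC
        self.ref_temp = ref_temp_degC
        self.tol = spike_tol_g

    def update(self, raw_g: float, temp_degC: float) -> float:
        # 1) Remove linear thermal drift of the load cell.
        compensated = raw_g - self.drift * (temp_degC - self.ref_temp)
        self.readings.append(compensated)
        # 2) Reject mechanical-noise spikes (ventilation, irrigation bumps)
        #    by averaging only samples near the running median.
        med = statistics.median(self.readings)
        inliers = [x for x in self.readings if abs(x - med) <= self.tol]
        return sum(inliers) / len(inliers) if inliers else med
```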

20 pages, 4569 KiB  
Article
Lightweight Vision Transformer for Frame-Level Ergonomic Posture Classification in Industrial Workflows
by Luca Cruciata, Salvatore Contino, Marianna Ciccarelli, Roberto Pirrone, Leonardo Mostarda, Alessandra Papetti and Marco Piangerelli
Sensors 2025, 25(15), 4750; https://doi.org/10.3390/s25154750 - 1 Aug 2025
Viewed by 205
Abstract
Work-related musculoskeletal disorders (WMSDs) are a leading concern in industrial ergonomics, often stemming from sustained non-neutral postures and repetitive tasks. This paper presents a vision-based framework for real-time, frame-level ergonomic risk classification using a lightweight Vision Transformer (ViT). The proposed system operates directly on raw RGB images without requiring skeleton reconstruction, joint angle estimation, or image segmentation. A single ViT model simultaneously classifies eight anatomical regions, enabling efficient multi-label posture assessment. Training is supervised using a multimodal dataset acquired from synchronized RGB video and full-body inertial motion capture, with ergonomic risk labels derived from RULA scores computed on joint kinematics. The system is validated on realistic, simulated industrial tasks that include common challenges such as occlusion and posture variability. Experimental results show that the ViT model achieves state-of-the-art performance, with F1-scores exceeding 0.99 and AUC values above 0.996 across all regions. Compared to a previous CNN-based system, the proposed model improves classification accuracy and generalizability while reducing complexity and enabling real-time inference on edge devices. These findings demonstrate the model’s potential for unobtrusive, scalable ergonomic risk monitoring in real-world manufacturing environments.
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)
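A minimal PyTorch sketch of the multi-label arrangement the abstract describes (one ViT backbone feeding eight per-region classification heads) follows. The torchvision `vit_b_16` backbone and the three risk classes are stand-ins; the paper's lightweight ViT and its exact RULA-derived label binning are not specified in the abstract.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16  # stand-in; the paper's lightweight
                                         # ViT architecture is not given here

NUM_REGIONS = 8        # anatomical regions, per the abstract
NUM_RISK_CLASSES = 3   # assumed RULA-derived risk bins (hypothetical)

class MultiRegionPostureViT(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vit_b_16(weights=None)
        backbone.heads = nn.Identity()   # expose the 768-d CLS embedding
        self.backbone = backbone
        # One linear head per anatomical region, sharing the same features.
        self.heads = nn.ModuleList(
            nn.Linear(768, NUM_RISK_CLASSES) for _ in range(NUM_REGIONS)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                                    # (B, 768)
        return torch.stack([h(feats) for h in self.heads], dim=1)   # (B, 8, C)

model = MultiRegionPostureViT()
logits = model(torch.randn(2, 3, 224, 224))  # frame-level RGB input
print(logits.shape)                          # torch.Size([2, 8, 3])
```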

22 pages, 1470 KiB  
Article
An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in Vision-Based Human–Robot Collaboration
by Dianhao Zhang, Mien Van, Pantelis Sopasakis and Seán McLoone
Machines 2025, 13(8), 672; https://doi.org/10.3390/machines13080672 - 1 Aug 2025
Viewed by 257
Abstract
To enable safe and effective human–robot collaboration (HRC) in smart manufacturing, it is critical to seamlessly integrate sensing, cognition, and prediction into the robot controller for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to execute safe path planning based on feedback from a vision system. To satisfy the requirements of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are approximate; therefore, the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm that uses an exponential control barrier function (ECBF) as a safety filter. Several common human–robot assembly subtasks have been integrated into a real-life HRC assembly task to validate the performance of the proposed controller and to investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interactive position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, with a 23.2% reduction in execution time achieved for the HRC task compared to an implementation without human motion prediction.
(This article belongs to the Special Issue Visual Measurement and Intelligent Robotic Manufacturing)
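For readers unfamiliar with ECBFs, the standard exponential-CBF condition from the control-barrier-function literature is sketched below for a safety set C = {x : h(x) >= 0} with relative degree r, followed by the usual minimally invasive safety-filter formulation; this is the generic textbook form, and the paper's exact constraint and gain vector K_alpha may differ.

```latex
% Safety set C = {x : h(x) >= 0}, relative degree r.
% Generic exponential-CBF condition; the paper's exact formulation may differ.
\[
  \eta_b(x) =
  \begin{bmatrix} h(x) & \dot h(x) & \cdots & h^{(r-1)}(x) \end{bmatrix}^{\!\top},
  \qquad
  \sup_{u \in U}\Big[ L_f^{r} h(x) + L_g L_f^{r-1} h(x)\,u \Big]
  \ \ge\ -K_\alpha\,\eta_b(x).
\]
% Used as a safety filter, the ECBF constraint minimally corrects the
% (approximate) NMPC input u_nmpc:
\[
  u^{\star} \;=\; \operatorname*{arg\,min}_{u \in U}\;
  \lVert u - u_{\mathrm{nmpc}} \rVert^{2}
  \quad \text{s.t.} \quad
  L_f^{r} h(x) + L_g L_f^{r-1} h(x)\,u + K_\alpha\,\eta_b(x) \;\ge\; 0 .
\]
```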

15 pages, 4667 KiB  
Article
Longitudinal High-Resolution Imaging of Retinal Sequelae of a Choroidal Nevus
by Kaitlyn A. Sapoznik, Stephen A. Burns, Todd D. Peabody, Lucie Sawides, Brittany R. Walker and Thomas J. Gast
Diagnostics 2025, 15(15), 1904; https://doi.org/10.3390/diagnostics15151904 - 29 Jul 2025
Viewed by 240
Abstract
Background: Choroidal nevi are common, benign tumors. These tumors rarely cause adverse retinal sequelae, but when they do, they can lead to disruption of the outer retina and vision loss. In this paper, we used high-resolution retinal imaging modalities, optical coherence tomography (OCT) and adaptive optics scanning laser ophthalmoscopy (AOSLO), to longitudinally monitor retinal sequelae of a submacular choroidal nevus. Methods: A 31-year-old female with a high-risk choroidal nevus resulting in subretinal fluid (SRF) and a 30-year-old control subject were longitudinally imaged with AOSLO and OCT in this study over 18 and 22 months, respectively. Regions of interest (ROIs), including the macular region (where SRF was present) and the site of laser photocoagulation, were imaged repeatedly over time. The depth of SRF in a discrete ROI was quantified with OCT, and AOSLO images were assessed for visualization of photoreceptors and retinal pigmented epithelium (RPE). Cell-like structures that infiltrated the site of laser photocoagulation were measured and their count was assessed over time. In the control subject, images were assessed for RPE visualization and the presence and stability of cell-like structures. Results: We demonstrate that AOSLO can be used to assess cellular-level changes at small ROIs in the retina over time. We show the response of the retina to SRF and laser photocoagulation. We demonstrate that the RPE can be visualized when SRF is present, which does not appear to depend on the height of retinal elevation. We also demonstrate that cell-like structures, presumably immune cells, are present within and adjacent to areas of SRF on both OCT and AOSLO, and that similar cell-like structures infiltrate areas of retinal laser photocoagulation. Conclusions: Our study demonstrates that dynamic, cellular-level retinal responses to SRF and laser photocoagulation can be monitored over time with AOSLO in living humans. Many retinal conditions exhibit similar retinal findings, and laser photocoagulation is also indicated in numerous retinal conditions. AOSLO imaging may provide future opportunities to better understand the clinical implications of such responses in vivo.
(This article belongs to the Special Issue High-Resolution Retinal Imaging: Hot Topics and Recent Developments)

21 pages, 1574 KiB  
Article
Reevaluating Wildlife–Vehicle Collision Risk During COVID-19: A Simulation-Based Perspective on the ‘Fewer Vehicles–Fewer Casualties’ Assumption
by Andreas Y. Troumbis and Yiannis G. Zevgolis
Diversity 2025, 17(8), 531; https://doi.org/10.3390/d17080531 - 29 Jul 2025
Viewed by 159
Abstract
Wildlife–vehicle collisions (WVCs) remain a significant cause of animal mortality worldwide, particularly in regions experiencing rapid road network expansion. During the COVID-19 pandemic, a number of studies reported decreased WVC rates, attributing this trend to reduced traffic volumes. However, the validity of the simplified assumption that “fewer vehicles means fewer collisions” remains underexplored from a mechanistic perspective. This study aims to reevaluate that assumption using two simulation-based models that incorporate both the physics of vehicle movement and behavioral parameters of road-crossing animals. Employing an inverse modeling approach with quasi-realistic traffic scenarios, we quantify how vehicle speed, spacing, and animal hesitation affect collision likelihood. The results indicate that approximately 10% of modeled cases contradict the prevailing assumption, with collision risk peaking at intermediate traffic densities. These findings challenge common interpretations of WVC dynamics and underscore the need for more refined, behaviorally informed mitigation strategies. We suggest that integrating such approaches into road planning and conservation policy—particularly under the European Union’s ‘Vision Zero’ framework—could help reduce wildlife mortality more effectively in future scenarios, including potential pandemics or mobility disruptions.
(This article belongs to the Section Biodiversity Conservation)
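To illustrate the mechanism behind a risk peak at intermediate densities, here is a minimal gap-acceptance Monte Carlo; the model structure and every parameter are hypothetical, not the paper's calibration. At low flow most gaps are safe, at very high flow the animal rarely accepts a gap at all, and in between it frequently accepts gaps shorter than its crossing time.

```python
import random

def collision(flow_veh_per_s: float, cross_time_s: float = 2.0,
              hesitation_s: float = 1.5, horizon_s: float = 60.0) -> bool:
    """One road-crossing attempt against Poisson traffic. The animal accepts
    the first perceived gap longer than its hesitation threshold and is
    struck if the accepted gap is shorter than its crossing time."""
    t = 0.0
    while t < horizon_s:
        gap = random.expovariate(flow_veh_per_s)   # headway to next vehicle
        if gap > hesitation_s:                     # gap accepted: start crossing
            return gap < cross_time_s              # struck if the gap runs out
        t += gap                                   # vehicle passes; keep waiting
    return False                                   # never crossed: no collision

random.seed(1)
for flow in (0.05, 0.5, 2.0, 5.0, 10.0):           # vehicles per second
    risk = sum(collision(flow) for _ in range(5_000)) / 5_000
    print(f"flow={flow:5.2f} veh/s  collision risk={risk:.3f}")
# Risk is low at both extremes and peaks at an intermediate flow, since the
# animal then frequently accepts gaps shorter than its crossing time.
```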

9 pages, 219 KiB  
Article
Politics, Theology, and Spiritual Autobiography: Dag Hammarskjöld and Thomas Merton—A Case Study
by Iuliu-Marius Morariu
Religions 2025, 16(8), 980; https://doi.org/10.3390/rel16080980 - 28 Jul 2025
Viewed by 423
Abstract
(1) Background: Among the most important authors of spiritual autobiography, Dag Hammarskjöld and Thomas Merton must surely be mentioned. The first, a Swedish Evangelical, and the second, an American Cistercian monk, provide valuable and interdisciplinary works. Among the topics found in them, their political theology is also present. Noting its relevance, we examine here how this topic is reflected in their work. (2) Results: Aspects such as communism, racism, diplomacy, and love constitute some of the topics that we bring to attention in this research, in an attempt to present the particularities, common points, and differences between the approaches of these two relevant authors, one from the Protestant space and the other from the Catholic one, both with an ecumenical vocation and openness to dialogue. (3) Methods: We use historical inquiry, document analysis, and the deductive and qualitative methods. (4) Conclusions: The work therefore investigates the aspects of political theology found in their writings and emphasizes their vision, their common points, and their use of Christian theology in understanding political and social realities, but also the differences between their approaches. At the same time, the role played by the context in which they lived, worked, and wrote is taken into account in order to provide a more complex perspective on the relationship between their lives and work.

15 pages, 1638 KiB  
Article
MFEAM: Multi-View Feature Enhanced Attention Model for Image Captioning
by Yang Cui and Juan Zhang
Appl. Sci. 2025, 15(15), 8368; https://doi.org/10.3390/app15158368 - 28 Jul 2025
Viewed by 246
Abstract
Image captioning plays a crucial role in aligning visual content with natural language, serving as a key step toward effective cross-modal understanding. The Transformer has become the dominant language model in image captioning, yet existing Transformer-based models seldom highlight important features from multiple views in their use of self-attention. In this paper, we propose MFEAM, an innovative network that leverages multi-view feature enhanced attention. To accurately represent the entangled features of vision and text, the teacher model employs multi-view feature enhanced attention to guide the student model training through knowledge distillation and model averaging from both visual and textual views. To mitigate the impact of excessive feature enhancement, the student model divides the decoding layer into two groups, which separately process instance features and the relationships between instances. Experimental results demonstrate that MFEAM attains competitive performance on the MSCOCO (Microsoft Common Objects in Context) dataset when trained without leveraging external data.
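A minimal PyTorch sketch of the teacher-guidance loop the abstract describes, combining knowledge distillation with model averaging, is given below. Treating the teacher as an exponential moving average (EMA) of the student, and the specific decay, temperature, and loss weighting, are assumptions for illustration rather than the paper's settings.

```python
import copy

import torch
import torch.nn.functional as F

def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    """Model averaging: keep the teacher as an EMA of the student's weights."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def distill_step(student, teacher, images, targets, optimizer,
                 alpha: float = 0.5, tau: float = 2.0) -> float:
    """One step: ground-truth cross-entropy plus a KL term pulling the
    student's token distribution toward the teacher's."""
    s_logits = student(images)                 # (batch, seq_len, vocab)
    with torch.no_grad():
        t_logits = teacher(images)             # teacher is frozen this step
    ce = F.cross_entropy(s_logits.flatten(0, 1), targets.flatten())
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                  F.softmax(t_logits / tau, dim=-1),
                  reduction="batchmean") * tau * tau
    loss = (1.0 - alpha) * ce + alpha * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)               # teacher follows the student
    return loss.item()

# Typical setup: the teacher starts as a frozen copy of the student.
# teacher = copy.deepcopy(student).eval()
```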

37 pages, 55522 KiB  
Article
EPCNet: Implementing an ‘Artificial Fovea’ for More Efficient Monitoring Using the Sensor Fusion of an Event-Based and a Frame-Based Camera
by Orla Sealy Phelan, Dara Molloy, Roshan George, Edward Jones, Martin Glavin and Brian Deegan
Sensors 2025, 25(15), 4540; https://doi.org/10.3390/s25154540 - 22 Jul 2025
Viewed by 234
Abstract
Efficient object detection is crucial to real-time monitoring applications such as autonomous driving or security systems. Modern RGB cameras can produce high-resolution images for accurate object detection. However, increased resolution results in increased network latency and power consumption. To minimise this latency, Convolutional Neural Networks (CNNs) often have a resolution limitation, requiring images to be down-sampled before inference, causing significant information loss. Event-based cameras are neuromorphic vision sensors with high temporal resolution, low power consumption, and high dynamic range, making them preferable to regular RGB cameras in many situations. This project proposes the fusion of an event-based camera with an RGB camera to mitigate the trade-off between temporal resolution and accuracy, while minimising power consumption. The cameras are calibrated to create a multi-modal stereo vision system where pixel coordinates can be projected between the event and RGB camera image planes. This calibration is used to project bounding boxes detected by clustering of events into the RGB image plane, thereby cropping each RGB frame instead of down-sampling to meet the requirements of the CNN. Using the Common Objects in Context (COCO) dataset evaluator, the average precision (AP) for the bicycle class in RGB scenes improved from 21.08 to 57.38. Additionally, AP increased across all classes from 37.93 to 46.89. To reduce system latency, a novel object detection approach is proposed where the event camera acts as a region proposal network, and a classification algorithm is run on the proposed regions. This achieved a 78% improvement over baseline.
(This article belongs to the Section Sensing and Imaging)
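A minimal sketch of the crop-instead-of-downsample step follows: an event-camera bounding box is projected into the RGB image plane and the full-resolution frame is cropped around it before CNN inference. Using a single planar homography `H` is a simplifying assumption; the paper's calibrated multi-modal stereo projection is more general.

```python
import numpy as np

def project_bbox(H: np.ndarray, bbox_evt: tuple[float, float, float, float]):
    """Map an event-camera bbox (x1, y1, x2, y2) into RGB pixel coordinates
    via a 3x3 homography H (planar-scene simplification)."""
    x1, y1, x2, y2 = bbox_evt
    corners = np.array([[x1, y1, 1], [x2, y1, 1], [x2, y2, 1], [x1, y2, 1]],
                       dtype=np.float64).T   # 3 x 4 homogeneous corners
    proj = H @ corners
    proj = proj[:2] / proj[2]                # perspective divide
    return proj[0].min(), proj[1].min(), proj[0].max(), proj[1].max()

def crop_for_cnn(rgb: np.ndarray, bbox, pad: int = 16) -> np.ndarray:
    """Crop the full-resolution frame around the projected region so the
    detector sees native-resolution pixels instead of a downsampled frame."""
    h, w = rgb.shape[:2]
    x1, y1, x2, y2 = bbox
    x1, y1 = max(0, int(x1) - pad), max(0, int(y1) - pad)
    x2, y2 = min(w, int(x2) + pad), min(h, int(y2) + pad)
    return rgb[y1:y2, x1:x2]
```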

37 pages, 1831 KiB  
Review
Deep Learning Techniques for Retinal Layer Segmentation to Aid Ocular Disease Diagnosis: A Review
by Oliver Jonathan Quintana-Quintana, Marco Antonio Aceves-Fernández, Jesús Carlos Pedraza-Ortega, Gendry Alfonso-Francia and Saul Tovar-Arriaga
Computers 2025, 14(8), 298; https://doi.org/10.3390/computers14080298 - 22 Jul 2025
Viewed by 390
Abstract
Age-related ocular conditions like macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma are leading causes of irreversible vision loss globally. Optical coherence tomography (OCT) provides essential non-invasive visualization of retinal structures for early diagnosis, but manual analysis of these images is labor-intensive and prone to variability. Deep learning (DL) techniques have emerged as powerful tools for automating retinal layer segmentation in OCT scans, potentially improving diagnostic efficiency and consistency. This review systematically evaluates the state of the art in DL-based retinal layer segmentation using the PRISMA methodology. We analyze various architectures (including CNNs, U-Net variants, GANs, and transformers), examine the characteristics and availability of datasets, discuss common preprocessing and data augmentation strategies, identify frequently targeted retinal layers, and compare performance evaluation metrics across studies. Our synthesis highlights significant progress, particularly with U-Net-based models, which often achieve Dice scores exceeding 0.90 for well-defined layers, such as the retinal pigment epithelium (RPE). However, it also identifies ongoing challenges, including dataset heterogeneity, inconsistent evaluation protocols, difficulties in segmenting specific layers (e.g., OPL, RNFL), and the need for improved clinical integration. This review provides a comprehensive overview of current strengths, limitations, and future directions to guide research towards more robust and clinically applicable automated segmentation tools for enhanced ocular disease diagnosis.
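Since the review compares models chiefly by Dice score, a minimal per-layer Dice computation for binary masks is sketched below; the epsilon smoothing is a common convention, and, as the review itself notes, the exact evaluation protocols vary between studies.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Per-layer evaluation: one binary mask per retinal layer (e.g., the RPE):
# scores = {layer: dice(pred_masks[layer], gt_masks[layer]) for layer in layers}
```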

13 pages, 987 KiB  
Article
Clinical and Genetic Characteristics of Senior-Loken Syndrome Patients in Korea
by Jae Ryong Song, Sangwon Jung, Kwangsic Joo, Hoon Il Choi, Yoon Jeon Kim and Se Joon Woo
Genes 2025, 16(7), 835; https://doi.org/10.3390/genes16070835 - 17 Jul 2025
Viewed by 337
Abstract
Background/Objectives: Senior-Loken syndrome (SLS) is a rare autosomal recessive renal–retinal disease caused by mutations in 10 genes. This study aimed to review the ophthalmic findings, renal function, and genotypes of Korean SLS cases. Methods: We retrospectively reviewed 17 genetically confirmed SLS patients in Korea, including 9 newly identified cases and 8 previously reported. Comprehensive ophthalmologic evaluations and renal assessments were conducted. Genetic testing was performed using whole-genome sequencing (WGS), whole-exome sequencing (WES), or Sanger sequencing. Results: Among the 17 patients, those with NPHP1 mutations were most common (35.3%), followed by those with NPHP4 (29.4%), IQCB1 (NPHP5, 29.4%), and SDCCAG8 (NPHP10, 5.9%) mutations. Patients with NPHP1 mutations showed retinitis pigmentosa (RP) sine pigmento and preserved central vision independent of renal deterioration. Patients with NPHP4 mutations showed early renal dysfunction. Two patients aged under 20 maintained relatively good visual function, but older individuals progressed to severe retinopathy. Patients with IQCB1 mutations were generally prone to early and severe retinal degeneration, typically manifesting as Leber congenital amaurosis (LCA) (three patients), while two patients exhibited milder RP sine pigmento with preserved central vision. Notably, two out of five (40.0%) maintained normal renal function at the time of diagnosis, and both had large deletions in IQCB1. The patient with an SDCCAG8 mutation exhibited both end-stage renal disease and congenital blindness due to LCA. Wide-field fundus autofluorescence (AF) revealed perifoveal and peripapillary hypoAF with perifoveal hyperAF in younger patients across genotypes. Patients under 20 years old showed relatively preserved central vision, regardless of the underlying genetic mutation. Conclusions: The clinical manifestation of renal and ocular impairment demonstrated heterogeneity among Korean SLS patients according to causative genes, and the severity of renal dysfunction and visual decline was not correlated. Therefore, simultaneous comprehensive evaluations of both renal and ocular function should be performed at the initial diagnosis to guide timely intervention and optimize long-term outcomes.
(This article belongs to the Special Issue Study of Inherited Retinal Diseases—Volume II)

29 pages, 4633 KiB  
Article
Failure Detection of Laser Welding Seam for Electric Automotive Brake Joints Based on Image Feature Extraction
by Diqing Fan, Chenjiang Yu, Ling Sha, Haifeng Zhang and Xintian Liu
Machines 2025, 13(7), 616; https://doi.org/10.3390/machines13070616 - 17 Jul 2025
Viewed by 254
Abstract
As a key component in the hydraulic brake system of automobiles, the brake joint directly affects the braking performance and driving safety of the vehicle. Therefore, improving the quality of brake joints is crucial. During processing, due to the complexity of the welding material and welding process, the weld seam is prone to various defects such as cracks, pores, undercutting, and incomplete fusion, which can weaken the joint and even lead to product failure. Traditional weld seam detection methods include destructive testing and non-destructive testing; however, destructive testing has high costs and long cycles, and non-destructive testing, such as radiographic and ultrasonic testing, also has problems such as high consumable costs, slow detection speed, or high requirements for operator experience. In response to these challenges, this article proposes a defect detection and classification method for laser welding seams of automotive brake joints based on machine vision inspection technology. Laser-welded automotive brake joints are subjected to weld defect detection and classification, and image processing algorithms are optimized to improve the accuracy of detection and failure analysis by utilizing the high efficiency, low cost, flexibility, and automation advantages of machine vision technology. This article first analyzes the common types of weld defects in laser welding of automotive brake joints, including craters, holes, and undercutting, and explores the causes and characteristics of these defects. Then, an image processing algorithm suitable for laser welding of automotive brake joints was studied, including pre-processing steps such as image smoothing, image enhancement, threshold segmentation, and morphological processing, to extract feature parameters of weld defects. On this basis, a weld seam defect detection and classification system based on a cascade classifier and the AdaBoost algorithm was designed, and efficient recognition and classification of weld seam defects was achieved by training the cascade classifier. The results show that the system can accurately identify and distinguish pit, hole, and undercutting defects in welds, with an average classification accuracy of over 90%. The detection and recognition rate of pit defects reaches 100%, and the detection accuracy of undercutting defects is 92.6%. The overall missed detection rate is less than 3%, with both the missed detection rate and false detection rate for pit defects being 0%. The average detection time for each image is 0.24 s, meeting the real-time requirements of industrial automation. Compared with infrared and ultrasonic detection methods, the proposed machine-vision-based detection system has significant advantages in detection speed, surface defect recognition accuracy, and industrial adaptability. This provides an efficient and accurate solution for laser welding defect detection of automotive brake joints.
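A minimal OpenCV sketch of the preprocessing chain listed above (smoothing, enhancement, threshold segmentation, morphological processing) is given below; kernel sizes and thresholds are illustrative placeholders, not the paper's tuned values, and the cascade/AdaBoost stage would consume feature parameters extracted from the resulting mask.

```python
import cv2
import numpy as np

def extract_defect_mask(weld_gray: np.ndarray) -> np.ndarray:
    """Smoothing -> enhancement -> threshold segmentation -> morphology,
    mirroring the pipeline described above. Parameters are illustrative."""
    blurred = cv2.GaussianBlur(weld_gray, (5, 5), 0)        # suppress sensor noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(blurred)                         # local contrast boost
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    return mask

# Feature parameters (area, aspect ratio, ...) per connected component can
# then feed the cascade classifier stage:
# n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
```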

16 pages, 769 KiB  
Article
[177Lu]Lu-PSMA-617 in Patients with Progressive PSMA+ mCRPC Treated With or Without Prior Taxane-Based Chemotherapy: A Phase 2, Open-Label, Single-Arm Trial in Japan
by Kouji Izumi, Ryuji Matsumoto, Yusuke Ito, Seiji Hoshi, Nobuaki Matsubara, Toshinari Yamasaki, Takashi Mizowaki, Atsushi Komaru, Satoshi Nomura, Toru Hattori, Hiroya Kambara, Shaheen Alanee, Makoto Hosono and Seigo Kinuya
Cancers 2025, 17(14), 2351; https://doi.org/10.3390/cancers17142351 - 15 Jul 2025
Viewed by 587
Abstract
Background: This Phase 2 trial evaluated the efficacy, tolerability, and safety of [177Lu]Lu-PSMA-617 (177Lu-PSMA-617) in patients with ≥1 measurable lesion and progressive prostate-specific membrane antigen-positive (PSMA+) metastatic castration-resistant prostate cancer (mCRPC) in Japan. Methods: This study comprises four parts; data from three parts are presented here. Part 1 evaluated safety and tolerability; Parts 2 (post-taxane) and 3 (pre-taxane/taxane-naive) assessed the overall response rate (ORR; primary endpoint), overall survival (OS), radiographic progression-free survival (rPFS), disease control rate (DCR), PFS, and safety; and Part 4 is the expansion part. Patients received 7.4 GBq (±10%) 177Lu-PSMA-617 Q6W for up to six cycles. Results: Of the 35 patients who underwent a [68Ga]Ga-PSMA-11 (68Ga-PSMA-11) PET/CT scan, 30 received 177Lu-PSMA-617 (post-taxane, n = 12; pre-taxane, n = 18). No dose-limiting toxicity was noted in Part 1 (n = 3). Post- and pre-taxane patients had a median of three and five cycles, respectively. The primary endpoint, ORR, met the pre-specified threshold, with the lower limit of the 90% confidence interval (CI) above the threshold of 5% for post-taxane and 12% for pre-taxane. Post- and pre-taxane patients had an ORR of 25.0% (90% CI: 7.2–52.7) and 33.3% (90% CI: 15.6–55.4), respectively. In post- and pre-taxane patients, the DCR was 91.7% and 83.3%, the median rPFS was 3.71 and 12.25 months, and the median PFS was 3.71 and 5.59 months, respectively. The median OS was 14.42 and 12.94 months in post- and pre-taxane patients, respectively. The most common adverse events were constipation, decreased appetite, decreased platelet count, anemia, and nausea. Conclusions: The primary endpoint (ORR) was met. The safety profile of 177Lu-PSMA-617 was consistent with the VISION and PSMAfore studies, with no new safety signals in the Japanese patients with mCRPC. Full article
(This article belongs to the Section Cancer Therapy)
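The endpoint logic (the lower limit of the 90% CI must exceed the pre-specified threshold) can be reproduced with an exact binomial interval. The abstract does not state which CI method the trial used; a Clopper-Pearson interval is an assumption here, though it matches the reported post-taxane bounds, as the SciPy sketch below shows.

```python
from scipy.stats import binomtest

def orr_meets_threshold(responders: int, n: int, threshold: float) -> bool:
    """True if the lower limit of a 90% exact (Clopper-Pearson) binomial CI
    for the ORR exceeds the pre-specified threshold. The exact-interval
    choice is an assumption, not a stated trial detail."""
    ci = binomtest(responders, n).proportion_ci(confidence_level=0.90,
                                                method="exact")
    print(f"ORR={responders / n:.1%}, 90% CI ({ci.low:.1%}, {ci.high:.1%})")
    return ci.low > threshold

orr_meets_threshold(3, 12, 0.05)   # post-taxane: 25.0%, CI (7.2%, 52.7%) -> True
orr_meets_threshold(6, 18, 0.12)   # pre-taxane: 33.3%, CI (15.6%, 55.4%) -> True
```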

23 pages, 16886 KiB  
Article
SAVL: Scene-Adaptive UAV Visual Localization Using Sparse Feature Extraction and Incremental Descriptor Mapping
by Ganchao Liu, Zhengxi Li, Qiang Gao and Yuan Yuan
Remote Sens. 2025, 17(14), 2408; https://doi.org/10.3390/rs17142408 - 12 Jul 2025
Viewed by 413
Abstract
In recent years, the use of UAVs has become widespread. Long-distance flight of UAVs requires obtaining precise geographic coordinates. Global Navigation Satellite Systems (GNSS) are the most common positioning solution, but their signals are susceptible to interference from obstacles and complex electromagnetic environments. In such cases, vision-based technology can serve as an alternative solution to ensure the self-positioning capability of UAVs. Therefore, a scene-adaptive UAV visual localization framework (SAVL) is proposed. In the proposed framework, UAV images are mapped to satellite images with geographic coordinates through pixel-level matching to locate UAVs. Firstly, to tackle the challenge of inaccurate localization resulting from sparse terrain features, this work proposes a novel feature extraction network grounded in a general visual model, leveraging the robust zero-shot generalization capability of the pre-trained model and extracting sparse features from UAV and satellite imagery. Secondly, in order to overcome the problem of weak generalization ability in unknown scenarios, a descriptor incremental mapping module was designed, which reduces multi-source image differences at the semantic level through UAV–satellite image descriptor mapping and constructs a confidence-based incremental strategy to dynamically adapt to the scene. Finally, due to the lack of annotated public datasets, a scene-rich UAV dataset (RealUAV) was constructed to study UAV visual localization in real-world environments. To evaluate the localization performance of the proposed framework, several related methods were compared and analyzed in detail. The results on the dataset indicate that the proposed method achieves excellent positioning accuracy, with an average error of only 8.71 m.
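The last step of pixel-level matching, turning a matched satellite pixel into geographic coordinates and scoring the mean localization error, might look like the sketch below; the GDAL-style geotransform and the haversine error metric are common conventions assumed here, not details taken from the paper.

```python
import math

def pixel_to_geo(gt: tuple, col: float, row: float) -> tuple:
    """Map a matched satellite-image pixel to map coordinates using a
    GDAL-style 6-element geotransform (a common GeoTIFF convention; the
    paper's georeferencing details are not given in the abstract)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y   # e.g., (lon, lat) for a geographic CRS

def mean_error_m(pred, truth) -> float:
    """Mean great-circle localization error in meters (haversine) between
    predicted and ground-truth (lat, lon) pairs."""
    R = 6_371_000.0
    errs = []
    for (la1, lo1), (la2, lo2) in zip(pred, truth):
        p1, p2 = math.radians(la1), math.radians(la2)
        dp, dl = p2 - p1, math.radians(lo2 - lo1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        errs.append(2 * R * math.asin(math.sqrt(a)))
    return sum(errs) / len(errs)   # the paper reports an 8.71 m average
```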

19 pages, 3266 KiB  
Article
The European Wine Tourism Charter and Its Link with Wine Museums in Spain
by Ángel Raúl Ruiz Pulpón and María del Carmen Cañizares Ruiz
Tour. Hosp. 2025, 6(3), 128; https://doi.org/10.3390/tourhosp6030128 - 4 Jul 2025
Viewed by 403
Abstract
The European Charter for Wine Tourism (2005) promotes the sustainable development of tourism activities associated with viticulture. The document identifies the active role that wine-growing territories must play in the conservation, management, and valorization of their resources. This study aims to understand the degree of linkage that this Charter establishes with initiatives for the heritage of wine culture, specifically focusing on wine museums in Spain. It examines how these museums contribute to defining a tourism development program, constructing a common strategic vision, and encouraging cooperation between the social and economic agents involved in the territory. As case studies, the Vivanco Museum of Wine Culture (La Rioja), considered by the World Tourism Organization (UNWTO) to be the best in the world, and the Valdepeñas Wine Museum (Castilla-La Mancha), an example of rehabilitation and musealization in the region with the highest concentration of vineyards in the world, have been chosen. The results show that both museums exemplify management, development, and cooperation in their respective territories, aligning with the theoretical assumptions established in the Charter.
