Search Results (59)

Search Parameters:
Keywords = distant labels

15 pages, 1416 KiB  
Article
Benefits from 18F-FDG PET-CT-Based Radiotherapy Planning in Stage III Non-Small-Cell Lung Cancer: A Prospective Single-Center Study
by Admir Mulita, Pipitsa Valsamaki, Eleni Bekou, Stavros Anevlavis, Christos Nanos, Athanasios Zisimopoulos, Alexandra Giatromanolaki and Michael I. Koukourakis
Cancers 2025, 17(12), 1969; https://doi.org/10.3390/cancers17121969 - 13 Jun 2025
Viewed by 569
Abstract
Background/Objectives: Lung cancer is the leading cause of cancer-related mortality worldwide. Accurate radiotherapy (RT) planning alongside chemotherapy and immunotherapy is critical for improving treatment outcomes for inoperable non-metastatic cases. Conventional computed tomography (CT)-based planning may be inadequate for accurately identifying tumor margins and the location of nodal disease. We investigated whether 18F-labeled fluorodeoxyglucose positron emission tomography (18F-FDG PET-CT) imaging can assist in target volume delineation for primary, nodal, and metastatic disease in the RT planning and overall therapeutic planning of patients. Methods: In this single-center, prospective study, we recruited 34 patients with histologically confirmed locally advanced non-small-cell lung carcinoma (NSCLC). All patients underwent 18F-FDG PET-CT-based RT simulation. Two sequential RT plans were created by the same radiation oncologist: one based on CT alone and the other on PET-CT. Planning target volumes (PTVs) and PET-CT-guided adjustments were analyzed to assess their impact. Standardized protocols for immobilization, imaging, target delineation, and dose prescription were applied. Results: A total of 34 patients (31 males and 3 females) were recruited into the study. 18F-FDG PET-CT detected distant metastases in 7/34 (20.6%) patients, altering the overall therapeutic plan in 4/34 (11.8%) and allowing radical RT in 3 of them who had oligometastatic disease (8.8%). It modified RT planning in 26/34 (76.5%) patients and clarified malignancy in atelectatic areas. Nodal involvement was identified in 3/34 patients (8.8%) and excluded in 3/34 cases, avoiding unnecessary nodal irradiation. Additional involved nodes were revealed in 12/34 (35.3%) patients, requiring dose escalation. Overall, changes to the tumor PTV were made in 23/30 (76.7%) and to the nodal PTV in 19/30 (63.3%) cases (p < 0.0001). Primary tumor and nodal PTVs increased in 20/30 (66.7%) and 13/30 (43.3%), respectively. Conclusions: 18F-FDG PET-CT significantly improves RT planning by more precisely defining tumor and nodal volumes, identifying undetected lesions, and guiding dose adaptation. Larger long-term studies are required to confirm potential locoregional control and survival improvements. Full article

15 pages, 2213 KiB  
Article
VirtualPainting: Addressing Sparsity with Virtual Points and Distance-Aware Data Augmentation for 3D Object Detection
by Sudip Dhakal, Deyuan Qu, Dominic Carrillo, Mohammad Dehghani Tezerjani and Qing Yang
Sensors 2025, 25(11), 3367; https://doi.org/10.3390/s25113367 - 27 May 2025
Viewed by 399
Abstract
In recent times, there has been a notable surge in multimodal approaches that decorate raw LiDAR point clouds with camera-derived features to improve object detection performance. However, we found that these methods still grapple with the inherent sparsity of LiDAR point cloud data, primarily because fewer points are enriched with camera-derived features for sparsely distributed objects. We present an innovative approach that involves the generation of virtual LiDAR points using camera images and enhancing these virtual points with semantic labels obtained from image-based segmentation networks to tackle this issue and facilitate the detection of sparsely distributed objects, particularly those that are occluded or distant. Furthermore, we integrate a distance-aware data augmentation (DADA) technique to enhance the model’s capability to recognize these sparsely distributed objects by generating specialized training samples. Our approach offers a versatile solution that can be seamlessly integrated into various 3D frameworks and 2D semantic segmentation methods, resulting in significantly improved overall detection accuracy. Evaluation on the KITTI and nuScenes datasets demonstrates substantial enhancements in both 3D and bird’s eye view (BEV) detection benchmarks. Full article
(This article belongs to the Section Remote Sensors)
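The point-decoration step this abstract builds on — projecting LiDAR (or virtual) points into the camera image and attaching a per-pixel semantic label to each point — can be sketched as below. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function name, the pinhole-camera assumption, and the label value of -1 for unprojectable points are choices made here.

```python
import numpy as np

def paint_points(points, seg_map, intrinsics):
    """Attach a semantic label from an image segmentation map to each
    3D point by projecting it through a pinhole camera model.

    points     : (N, 3) array of points in the camera frame (z forward).
    seg_map    : (H, W) integer array of per-pixel class labels.
    intrinsics : (3, 3) camera matrix K.
    Returns (N, 4): xyz plus the label (-1 for points outside the image).
    """
    h, w = seg_map.shape
    labels = np.full(len(points), -1, dtype=np.int64)

    # Keep only points in front of the camera.
    in_front = points[:, 2] > 0
    uvw = points[in_front] @ intrinsics.T                 # project: K @ p
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)   # perspective divide

    # Look up the segmentation label for points landing inside the image.
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = seg_map[uv[inside, 1], uv[inside, 0]]
    return np.hstack([points, labels[:, None]])
```

In the paper's setting the same lookup would be applied to virtual points generated from the camera image, which is what lets sparsely observed objects receive camera-derived features.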

21 pages, 1835 KiB  
Article
Radiological, Pathological, and Surgical Outcomes with Neoadjuvant Cemiplimab for Stage II–IV Cutaneous Squamous Cell Carcinoma in the Deep Sequencing in Cutaneous Squamous Cell Carcinomas (DISCERN) Trial
by Annette M. Lim, Benjamin Baker, Peter Lion, Christopher M. Angel, Jennifer Simmons, Bryce Jackson, Matthew Magarey, Angela Webb, Kevin Nguyen, Jo Hudson, Kwang Yang Chin, Anthony Cardin, Rajeev Ravi, Edwin Morrison, Tam Quinn, Ian Hunt and Danny Rischin
Cancers 2025, 17(10), 1727; https://doi.org/10.3390/cancers17101727 - 21 May 2025
Viewed by 677
Abstract
Background: A previously published Phase 2 trial using 2–4 doses of neoadjuvant cemiplimab in stage II–IV resectable cutaneous squamous cell carcinoma (CSCC) demonstrated that a complete pathological response (pCR) rate of 51% and major pathological response (mPR) rate of 13% could be achieved with durable disease control. Methods: In this open-label, single-institution phase II trial (NCT05878288), patients with stage II–IV resectable CSCC received up to four doses of neoadjuvant cemiplimab prior to surgery. The primary endpoint of the study was to perform comprehensive molecular profiling. The focus of this report is the secondary clinical endpoints of pCR rate, mPR (defined as <10% viable tumour) rate, overall response rate (ORR) using Response Evaluation Criteria in Solid Tumours (RECIST) 1.1, immune-modified RECIST (imRECIST) and Immune PET Response Criteria in Solid Tumours (iPERCIST), disease-free survival (DFS), overall survival (OS), safety, and changes in planned surgery. Results: Eleven patients were enrolled, with all proceeding with surgery. An ORR and pCR rate of 73% (8/11; 95% CI 0.39–0.93) was achieved, whilst 3/11 patients progressed on treatment. On pre-operative imaging, all 8/11 pCR patients demonstrated a partial response (RECIST 1.1), whilst 6/8 achieved a complete metabolic response and 2/8 a partial metabolic response (iPERCIST). Median follow-up was 10.2 (IQR 6.7–16.4) months. DFS was 91% (95% CI 0.57–1) and OS was 100% (95% CI 0.68–1), with one non-responder patient who developed recurrent locoregional and distant metastatic disease. There were no unexpected safety signals. The most common pathological features of response to neoadjuvant immunotherapy were granulomatous inflammation with keratin, fibrosis, and inflammation. No cases with a dense inflammatory infiltrate were observed. Neoadjuvant immunotherapy did not impact the intra-operative planning and execution of surgery, but in the eight pCR cases, it reduced the extent of required surgery, whilst in the three non-responder cases, surgery was more extensive than originally planned. Conclusions: The DISCERN trial confirms that an excellent complete response rate can be achieved with four doses of neoadjuvant immunotherapy in stage II–IV CSCC. Proposed refinements to the pathological assessment of response and metabolic response criteria in CSCC for the neoadjuvant context are provided. Full article
(This article belongs to the Section Cancer Immunology and Immunotherapy)

20 pages, 682 KiB  
Article
Sentence Interaction and Bag Feature Enhancement for Distant Supervised Relation Extraction
by Wei Song and Qingchun Liu
AI 2025, 6(3), 51; https://doi.org/10.3390/ai6030051 - 4 Mar 2025
Viewed by 918
Abstract
Background: Distant supervision employs external knowledge bases to automatically match with text, allowing for the automatic annotation of sentences. Although this method effectively tackles the challenge of manual labeling, it inevitably introduces noisy labels. Traditional approaches typically employ sentence-level attention mechanisms, assigning lower weights to noisy sentences to mitigate their impact. But this approach overlooks the critical importance of information flow between sentences. Additionally, previous approaches treated an entire bag as a single classification unit, giving equal importance to all features within the bag. However, they failed to recognize that different dimensions of features have varying levels of significance. Method: To overcome these challenges, this study introduces a novel network that incorporates sentence interaction and a bag-level feature enhancement (ESI-EBF) mechanism. We concatenate sentences within a bag into a continuous context, allowing information to flow freely between them during encoding. At the bag level, we partition the features into multiple groups based on dimensions, assigning an importance coefficient to each sub-feature within a group. This enhances critical features while diminishing the influence of less important ones. In the end, the enhanced features are utilized to construct high-quality bag representations, facilitating more accurate classification by the classification module. Result: The experimental findings from the New York Times (NYT) and Wiki-20m datasets confirm the efficacy of our suggested encoding approach and feature improvement module. Our method also outperforms state-of-the-art techniques on these datasets, achieving superior relation extraction accuracy. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
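The bag-level enhancement idea described above — partitioning a bag representation into dimension groups and scaling each sub-feature by an importance coefficient — can be sketched roughly as follows. This is a simplified NumPy illustration under assumptions of my own (equal-sized groups, a fixed scoring vector `w` standing in for a learned projection, softmax-normalised coefficients); the ESI-EBF paper's exact formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along one axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def enhance_bag_features(bag_feat, n_groups, w):
    """Scale each sub-feature by a per-group importance coefficient.

    bag_feat : (D,) pooled bag representation.
    n_groups : number of equal-sized dimension groups (D % n_groups == 0).
    w        : (D // n_groups,) scoring vector, a stand-in here for a
               learned projection that scores sub-features in a group.
    """
    groups = bag_feat.reshape(n_groups, -1)               # (G, D/G)
    scores = groups * w                                   # per-dimension scores
    # Softmax within each group; rescale so the mean coefficient is 1,
    # which amplifies important dimensions and damps the rest.
    coeffs = softmax(scores, axis=-1) * groups.shape[-1]
    return (groups * coeffs).reshape(-1)
```

When all sub-features in a group score equally, every coefficient is 1 and the features pass through unchanged; otherwise the higher-scoring dimensions are boosted relative to the others.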

17 pages, 1912 KiB  
Protocol
Tn5-Labeled DNA-FISH: An Optimized Probe Preparation Method for Probing Genome Architecture
by Yang Yang, Gengzhan Chen, Tong Gao, Duo Ning, Yuqing Deng, Zhongyuan (Simon) Tian and Meizhen Zheng
Int. J. Mol. Sci. 2025, 26(5), 2224; https://doi.org/10.3390/ijms26052224 - 28 Feb 2025
Viewed by 1155
Abstract
Three-dimensional genome organization reveals that gene regulatory elements, which are linearly distant on the genome, can spatially interact with target genes to regulate their expression. DNA fluorescence in situ hybridization (DNA-FISH) is an efficient method for studying the spatial proximity of genomic loci. In this study, we developed an optimized Tn5 transposome-based DNA-FISH method, termed Tn5-labeled DNA-FISH. This approach amplifies the target region and uses a self-assembled Tn5 transposome to simultaneously fragment the DNA into ~100 bp segments and label it with fluorescent oligonucleotides in a single step. This method enables the preparation of probes for regions as small as 4 kb and visualizes both endogenous and exogenous genomic loci at kb resolution. Tn5-labeled DNA-FISH provides a streamlined and cost-effective tool for probe generation, facilitating the investigation of chromatin spatial conformations, gene interactions, and genome architecture. Full article
(This article belongs to the Section Molecular Genetics and Genomics)

23 pages, 3856 KiB  
Article
Neurons Co-Expressing GLP-1, CCK, and PYY Receptors Particularly in Right Nodose Ganglion and Innervating Entire GI Tract in Mice
by Elizabeth Laura Lansbury, Vasiliki Vana, Mari Lilith Lund, Mette Q. Ludwig, Esmira Mamedova, Laurent Gautron, Myrtha Arnold, Kristoffer Lihme Egerod, Rune Ehrenreich Kuhre, Jens Juul Holst, Jens Rekling, Thue W. Schwartz, Stanislava Pankratova and Oksana Dmytriyeva
Int. J. Mol. Sci. 2025, 26(5), 2053; https://doi.org/10.3390/ijms26052053 - 26 Feb 2025
Cited by 3 | Viewed by 1586
Abstract
Afferent vagal neurons convey gut–brain signals related to the mechanical and chemical sensing of nutrients, with the latter also mediated by gut hormones secreted from enteroendocrine cells. Cell bodies of these neurons are located in the nodose ganglia (NG), with the right NG playing a key role in metabolic regulation. Notably, glucagon-like peptide-1 receptor (GLP1R) neurons primarily innervate the muscle layer of the stomach, distant from glucagon-like peptide-1 (GLP-1)-secreting gut cells. However, the co-expression of gut hormone receptors in these NG neurons remains unclear. Using RNAscope combined with immunohistochemistry, we confirmed GLP1R expression in a large population of NG neurons, with Glp1r, cholecystokinin A receptor (Cckar), and neuropeptide Y Y2 receptor (Npy2r) being more highly expressed in the right NG, while neurotensin receptor 1 (Ntsr1), G protein-coupled receptor 65 (Gpr65), and 5-hydroxytryptamine receptor 3A (5ht3a) showed equal expression in the left and right NG. Co-expression analysis demonstrated the following: (i) most Glp1r, Cckar, and Npy2r neurons co-expressed all three receptors; (ii) nearly all Ntsr1- and Gpr65-positive neurons co-expressed both receptors; and (iii) 5ht3a was expressed in subpopulations of all peptide-hormone-receptor-positive neurons. Retrograde labeling demonstrated that the anterior part of the stomach was preferentially innervated by the left NG, while the right NG innervated the posterior part. The entire gastrointestinal (GI) tract, including the distal colon, was strongly innervated by NG neurons. Most importantly, dual retrograde labeling with two distinct tracers identified a population of neurons co-expressing Glp1r, Cckar, and Npy2r that innervated both the stomach and the colon. Thus, neurons co-expressing GLP-1, cholecystokinin (CCK), and peptide YY (PYY) receptors, predominantly found in the right NG, sample chemical, nutrient-induced signals along the entire GI tract and likely integrate these with mechanical signals from the stomach. Full article
(This article belongs to the Section Molecular Endocrinology and Metabolism)

24 pages, 16681 KiB  
Article
A Deep Ensemble Learning Approach Based on a Vision Transformer and Neural Network for Multi-Label Image Classification
by Anas W. Abulfaraj and Faisal Binzagr
Big Data Cogn. Comput. 2025, 9(2), 39; https://doi.org/10.3390/bdcc9020039 - 11 Feb 2025
Cited by 2 | Viewed by 1791
Abstract
Convolutional Neural Networks (CNNs) have proven to be very effective in image classification due to their status as a powerful feature learning algorithm. Traditional approaches have considered the problem of multiclass classification, where the goal is to classify a set of objects at once. However, co-occurrence can make the discriminative features of the target less salient and may lead to overfitting of the model, resulting in lower performance. To address this, we propose a multi-label classification ensemble model including a Vision Transformer (ViT) and CNN for directly detecting one or multiple objects in an image. First, we improve the MobileNetV2 and DenseNet201 models using extra convolutional layers to strengthen image classification. In detail, three convolution layers are applied in parallel at the end of both models. ViT can learn dependencies among distant positions and local detail, making it an effective tool for multi-label classification. Finally, an ensemble learning algorithm is used to combine the classification predictions of the ViT, the modified MobileNetV2, and DenseNet201 branches for increased image classification accuracy using a voting system. The performance of the proposed model is examined on four benchmark datasets, achieving accuracies of 98.24%, 98.89%, 99.91%, and 96.69% on PASCAL VOC 2007, PASCAL VOC 2012, MS-COCO, and NUS-WIDE, respectively, showing that our framework can enhance current state-of-the-art methods. Full article
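The final voting step — combining per-label probabilities from the ViT and the two modified CNN branches — can be sketched as a weighted soft vote. The abstract does not specify the exact voting rule, so the probability averaging, the optional weights, and the 0.5 decision threshold below are illustrative assumptions.

```python
import numpy as np

def soft_vote(prob_list, threshold=0.5, weights=None):
    """Combine per-label probabilities from several models by weighted
    averaging, then threshold to obtain multi-label predictions.

    prob_list : list of (N, L) probability arrays, one per model
                (e.g. ViT, modified MobileNetV2, modified DenseNet201).
    Returns (avg_probs, binary_preds), both of shape (N, L).
    """
    probs = np.stack(prob_list)                 # (M, N, L)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    avg = np.tensordot(weights / weights.sum(), probs, axes=1)  # (N, L)
    return avg, (avg >= threshold).astype(int)
```

Because each label is thresholded independently, a single image can be assigned zero, one, or several labels, which is what distinguishes multi-label classification from the multiclass setting discussed above.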

16 pages, 467 KiB  
Article
A Zero-Shot Framework for Low-Resource Relation Extraction via Distant Supervision and Large Language Models
by Peisheng Han, Geng Liang and Yongfei Wang
Electronics 2025, 14(3), 593; https://doi.org/10.3390/electronics14030593 - 2 Feb 2025
Cited by 1 | Viewed by 1002
Abstract
While Large Language Models (LLMs) have significantly advanced various benchmarks in Natural Language Processing (NLP), the challenge of low-resource tasks persists, primarily due to the scarcity of data and difficulties in annotation. This study introduces LoRE, a framework designed for zero-shot relation extraction in low-resource settings, which blends distant supervision with the powerful capabilities of LLMs. LoRE addresses the challenges of data sparsity and noise inherent in traditional distant supervision methods, enabling high-quality relation extraction without requiring extensive labeled data. By leveraging LLMs for zero-shot open information extraction and incorporating heuristic entity and relation alignment with semantic disambiguation, LoRE enhances the accuracy and relevance of the extracted data. Low-resource tasks refer to scenarios where labeled data are extremely limited, making traditional supervised learning approaches impractical. This study aims to develop a robust framework that not only tackles these challenges but also demonstrates the theoretical and practical implications of zero-shot relation extraction. The Chinese Person Relationship Extraction (CPRE) dataset, developed under this framework, demonstrates LoRE’s proficiency in extracting person-related triples. The CPRE dataset consists of 1000 word pairs, capturing diverse semantic relationships. Extensive experiments on the CPRE, IPRE, and DuIE datasets show significant improvements in dataset quality and a reduction in manual annotation efforts. These findings highlight the potential of LoRE to advance both the theoretical understanding and practical applications of relation extraction in low-resource settings. Notably, the performance of LoRE on the manually annotated DuIE dataset attests to the quality of the CPRE dataset, rivaling that of manually curated datasets, and highlights LoRE’s potential for reducing the complexities and costs associated with dataset construction for zero-shot and low-resource tasks. Full article
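The distant-supervision matching that such frameworks start from can be sketched in a few lines: a sentence that mentions both entities of a knowledge-base triple is (noisily) labelled with that triple's relation. This is the classic heuristic only, not LoRE's full pipeline; the entity/relation alignment and semantic disambiguation described in the abstract are precisely what clean up the noise this shortcut introduces.

```python
def distant_label(sentences, kb_triples):
    """Naive distant supervision: label a sentence with a relation when
    it mentions both the head and tail entity of a knowledge-base triple.

    sentences  : iterable of sentence strings.
    kb_triples : iterable of (head, relation, tail) string triples.
    Returns a list of (sentence, head, relation, tail) labelled examples.
    """
    labelled = []
    for sent in sentences:
        for head, rel, tail in kb_triples:
            # Substring matching is the source of the noise: co-mention
            # does not guarantee the sentence expresses the relation.
            if head in sent and tail in sent:
                labelled.append((sent, head, rel, tail))
    return labelled
```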

16 pages, 302 KiB  
Review
Nuclear Medicine and Molecular Imaging in Urothelial Cancer: Current Status and Future Directions
by Sam McDonald, Kevin G. Keane, Richard Gauci and Dickon Hayne
Cancers 2025, 17(2), 232; https://doi.org/10.3390/cancers17020232 - 13 Jan 2025
Cited by 2 | Viewed by 1919
Abstract
Background: The role of molecular imaging in urothelial cancer is less defined than in other cancers, and its utility remains controversial due to limitations such as high urinary tracer excretion, complicating primary tumour assessment in the bladder and upper urinary tract. This review explores the current landscape of PET imaging in the clinical management of urothelial cancer, with a special emphasis on potential future advancements including emerging novel non-18F FDG PET agents, PET radiopharmaceuticals, and PET-MRI applications. Methods: We conducted a comprehensive literature search in the PubMed database, using keywords such as “PET”, “PET-CT”, “PET-MRI”, “FDG PET”, “Urothelial Cancer”, and “Theranostics”. Studies were screened for relevance, focusing on imaging modalities and advances in PET tracers for urothelial carcinoma. Non-English language, off-topic papers, and case reports were excluded, resulting in 80 articles being selected for discussion. Results: 18F FDG PET-CT has demonstrated superior sensitivity over conventional imaging, such as contrast-enhanced CT and MRI, for detecting lymph node metastasis and distant disease. Despite these advantages, FDG PET-CT is limited for T-staging of primary urothelial tumours due to high urinary excretion of the tracer. Emerging evidence supports the role of PET-CT in assessing response to neoadjuvant chemotherapy and in identifying recurrence, with a high diagnostic accuracy reported in several studies. Novel PET tracers, such as 68Ga-labelled FAPI, have shown promising results in targeting cancer-associated fibroblasts, providing higher tumour-to-background ratios and detecting lesions missed by traditional imaging. Antibody-based PET tracers, like those targeting Nectin-4, CAIX, and uPAR, are under investigation for their diagnostic and theranostic potential, and initial studies indicate that these agents may offer advantages over conventional imaging and FDG PET. Conclusions: Molecular imaging is a rapidly evolving field in urothelial cancer, offering improved diagnostic and prognostic capabilities. While 18F FDG PET-CT has shown utility in staging, further prospective research is needed to establish and refine standardised protocols and validate new tracers. Advances in theranostics and precision imaging may revolutionise urothelial cancer management, enhancing the ability to tailor treatments and improve patient outcomes. Full article
(This article belongs to the Special Issue Advances in Management of Urothelial Cancer)
23 pages, 6926 KiB  
Article
Characterising the Thematic Content of Image Pixels with Topologically Structured Clustering
by Giles M. Foody
Remote Sens. 2025, 17(1), 130; https://doi.org/10.3390/rs17010130 - 2 Jan 2025
Viewed by 1785
Abstract
The location of a pixel in feature space is a function of its thematic composition. The latter is central to an image classification analysis, notably as an input (e.g., training data for a supervised classifier) and/or an output (e.g., predicted class label). Whether as an input to or output from a classification, little if any information beyond a class label is typically available for a pixel. The Kohonen self-organising feature map (SOFM) neural network however offers a means to both cluster together spectrally similar pixels that can be allocated suitable class labels and indicate relative thematic similarity of the clusters generated. Here, the thematic composition of pixels allocated to clusters represented by individual SOFM output units was explored with two remotely sensed data sets. It is shown that much of the spectral information of the input image data is maintained in the production of the SOFM output. This output provides a topologically structured representation of the image data, allowing spectrally similar pixels to be grouped together and the similarity of different clusters to be assessed. In particular, it is shown that the thematic composition of both pure and mixed pixels can be characterised by a SOFM. The location of the output unit in the output layer of the SOFM associated with a pixel conveys information on its thematic composition. Pixels in spatially close output units are more similar spectrally and thematically than those in more distant units. This situation also enables specific sub-areas of interest in the SOFM output space and/or feature space to be identified. This may, for example, provide a means to target efforts in training data acquisition for supervised classification as the most useful training cases may have a tendency to lie within specific sub-areas of feature space. Full article
(This article belongs to the Section Environmental Remote Sensing)
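The SOFM behaviour described above — spectrally similar pixels mapping to spatially close output units — can be reproduced with a small NumPy self-organising map. This is a generic textbook SOM sketch; the grid size, decay schedules, and Gaussian neighbourhood below are illustrative choices, not the configuration used in the paper.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organising feature map (SOFM).

    data : (N, D) array of pixel feature vectors (e.g. band values).
    Returns weights of shape (grid_h, grid_w, D); similar inputs end up
    mapped to nearby output units.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates of every output unit, for the neighbourhood term.
    coords = np.stack(
        np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: the unit whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), d.shape)
            # Gaussian neighbourhood around the BMU on the output grid.
            g = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    """Grid position of the best-matching unit for input x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(d.argmin(), d.shape)
```

After training, `bmu_of` gives the output-layer location for a pixel, and the distance between two pixels' BMUs on the grid serves as the relative thematic-similarity measure the abstract describes.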

16 pages, 360 KiB  
Article
EduDCM: A Novel Framework for Automatic Educational Dialogue Classification Dataset Construction via Distant Supervision and Large Language Models
by Changyong Qi, Longwei Zheng, Yuang Wei, Haoxin Xu, Peiji Chen and Xiaoqing Gu
Appl. Sci. 2025, 15(1), 154; https://doi.org/10.3390/app15010154 - 27 Dec 2024
Viewed by 1021
Abstract
Educational dialogue classification is a critical task for analyzing classroom interactions and fostering effective teaching strategies. However, the scarcity of annotated data and the high cost of manual labeling pose significant challenges, especially in low-resource educational contexts. This article presents the EduDCM framework for the first time, offering an original approach to addressing these challenges. EduDCM innovatively integrates distant supervision with the capabilities of Large Language Models (LLMs) to automate the construction of high-quality educational dialogue classification datasets. EduDCM reduces the noise typically associated with distant supervision by leveraging LLMs for context-aware label generation and incorporating heuristic alignment techniques. To validate the framework, we constructed the EduTalk dataset, encompassing diverse classroom dialogues labeled with pedagogical categories. Extensive experiments on EduTalk and publicly available datasets, combined with expert evaluations, confirm the superior quality of EduDCM-generated datasets. Models trained on EduDCM data achieved a performance comparable to that of manually annotated datasets. Expert evaluations using a 5-point Likert scale show that EduDCM outperforms Template-Based Generation and Few-Shot GPT in terms of annotation accuracy, category coverage, and consistency. These findings emphasize EduDCM’s novelty and its effectiveness in generating high-quality, scalable datasets for low-resource educational NLP tasks, thus reducing manual annotation efforts. Full article
(This article belongs to the Special Issue Intelligent Systems and Tools for Education)

21 pages, 4210 KiB  
Article
Cross-Field Road Markings Detection Based on Inverse Perspective Mapping
by Eric Hsueh-Chan Lu and Yi-Chun Hsieh
Sensors 2024, 24(24), 8080; https://doi.org/10.3390/s24248080 - 18 Dec 2024
Viewed by 813
Abstract
With the rapid development of the autonomous vehicle industry, research on related problems has proliferated dramatically, and road markings detection is an important issue among them. When no public open data exist for a field, road markings data must be collected and labeled manually, which is labor-intensive and time-consuming. Moreover, object detection often encounters the problem of small object detection: detection accuracy decreases as the detection distance increases, primarily because distant objects on the road occupy few pixels in the image and object scales vary with distance and perspective. To address these issues, this paper utilizes a virtual dataset and an open dataset to train the object detection model and performs cross-field testing on Taiwan roads. To make the model more robust and stable, data augmentation is employed to generate more data, expanding the limited dataset through augmentation and homography transformation of the images. Additionally, Inverse Perspective Mapping is applied to the input images to transform them into a bird’s eye view, which addresses the “small objects at far distance” and “perspective distortion of objects” problems so that the model can clearly recognize objects on the road. Testing the model on front-view images and bird’s eye view images shows a remarkable accuracy improvement of 18.62%. Full article
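The core of Inverse Perspective Mapping is applying a homography to lift front-view pixel coordinates into the bird's-eye-view plane. A minimal sketch, assuming the 3×3 matrix `H` is already known (in practice it is derived from camera calibration or from four point correspondences, and whole images are warped with a routine such as OpenCV's `warpPerspective`):

```python
import numpy as np

def apply_homography(H, pts):
    """Map image points through a homography: lift to homogeneous
    coordinates, multiply by H, and divide by the last component.

    H   : (3, 3) homography from front-view pixels to bird's-eye view.
    pts : (N, 2) pixel coordinates.
    Returns (N, 2) mapped coordinates.
    """
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 3) homogeneous
    mapped = homo @ H.T                               # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide
```

The perspective divide is what undoes the "distant objects shrink" effect: rows of the image near the horizon are stretched out in the bird's-eye view, so road markings keep a roughly constant scale regardless of distance.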

10 pages, 1034 KiB  
Article
Ultrahypofractionated Versus Normofractionated Preoperative Radiotherapy for Soft Tissue Sarcoma: A Multicenter, Prospective Real-World-Time Phase 2 Clinical Trial
by Philip Heesen, Michele Di Lonardo, Olga Ciobanu-Caraus, Georg Schelling, Daniel Zwahlen, Beata Bode-Lesniewska, Christoph Glanzmann, Gabriela Studer and Bruno Fuchs
Cancers 2024, 16(23), 4063; https://doi.org/10.3390/cancers16234063 - 4 Dec 2024
Cited by 1 | Viewed by 1040
Abstract
Background/Objectives: The historically most commonly used preoperative radiotherapy regimen for soft tissue sarcomas (STSs) consists of 50 Gray (Gy) delivered in 25 fractions over 5 weeks, achieving excellent local control, but with significant challenges due to prolonged treatment duration and early side effects. Reducing therapy duration while maintaining optimal local and distant control would be highly beneficial for patients. We aimed to investigate the outcome of an ultrahypofractionated radiotherapy (uhRT) regimen which may represent a shorter and more patient-friendly alternative. Methods: This multi-center, open-label, phase 2 clinical trial with a clustered cohort design was conducted within the Swiss Sarcoma Network (SSN). Adult patients (aged ≥ 18 years) with STS of the extremities or superficial trunk and an Eastern Cooperative Oncology Group (ECOG) performance status of 0–3 were included. Participants were assigned to either normofractionated radiotherapy (nRT) at 50 Gy in 25 fractions or uhRT at 25 Gy in 5 fractions. Data were collected prospectively in real-world-time clinical settings. The primary outcome was local recurrence-free survival (LRFS), with overall survival (OS) and wound complications as secondary outcomes. Results: Between March 2020 and October 2023, 138 patients were included in the study; 74 received nRT and 64 received uhRT. The median follow-up times were 2.2 years for uhRT and 3.6 years for nRT. The LRFS rates at 1 year were 97.0% for nRT and 94.8% for uhRT (p = 0.57). The two-year LRFS rates were 91.9% and 94.8%, respectively (p = 0.57). The one- and two-year OS rates were 97.1%/86.3% and 98.2%/88.8%, respectively (p = 0.72). The wound complication rate was comparable between the nRT (12.0%) and uhRT (12.5%) groups (p = 0.99). Conclusions: UhRT for STSs offers an effective and safe alternative to traditional nRT, with comparable early LRFS, OS and wound complication rates. 
Given the roughly two-year median follow-up, which is critical for evaluating local recurrence, uhRT shows promise as a shorter, more patient-friendly, and equally safe and effective alternative to traditional nRT. Full article
(This article belongs to the Section Methods and Technologies Development)

22 pages, 5240 KiB  
Article
MMPW-Net: Detection of Tiny Objects in Aerial Imagery Using Mixed Minimum Point-Wasserstein Distance
by Nan Su, Zilong Zhao, Yiming Yan, Jinpeng Wang, Wanxuan Lu, Hongbo Cui, Yunfei Qu, Shou Feng and Chunhui Zhao
Remote Sens. 2024, 16(23), 4485; https://doi.org/10.3390/rs16234485 - 29 Nov 2024
Cited by 3 | Viewed by 1565
Abstract
The detection of distant tiny objects in aerial imagery plays a pivotal role in early warning, localization, and recognition tasks. However, due to the scarcity of appearance information, minimal pixel representation, susceptibility to blending with the background, and the incompatibility of conventional metrics, [...] Read more.
The detection of distant tiny objects in aerial imagery plays a pivotal role in early warning, localization, and recognition tasks. However, due to the scarcity of appearance information, minimal pixel representation, susceptibility to blending with the background, and the incompatibility of conventional metrics, the rapid and accurate detection of tiny objects poses significant challenges. To address these issues, a single-stage tiny object detector tailored for aerial imagery is proposed, comprising two primary components. First, we introduce a light-backbone, heavy-neck architecture, the Global Context Self-Attention and Dense Nested Connection Feature Extraction Network (GC-DN Network), which efficiently extracts and fuses multi-scale features of the target. Second, we propose a novel metric, the Mixed Minimum Point-Wasserstein (MMPW) distance, to replace the Intersection over Union (IoU) in label assignment, Non-Maximum Suppression (NMS), and the regression loss. Specifically, MMPW models bounding boxes as 2D Gaussian distributions and uses the distance between these distributions to quantify the similarity between boxes. Experiments on the latest aerial-image tiny object datasets, AI-TOD and VisDrone-19, show that our method improves AP50 by 9.4% and 5%, respectively, and AP by 4.3% and 3.6%, validating the efficacy of our approach for detecting tiny objects in aerial imagery. Full article
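The abstract does not spell out the MMPW formula, but the underlying idea of Gaussian-Wasserstein box similarity (familiar from the Normalized Wasserstein Distance line of work) can be sketched: a box is modeled as an axis-aligned 2D Gaussian N((cx, cy), diag(w²/4, h²/4)), for which the squared 2-Wasserstein distance has a simple closed form, and an exponential map turns it into an IoU-like similarity. The constant `c` below is a hypothetical tuning parameter, not the authors' value:

```python
import math

def wasserstein2_boxes(b1, b2):
    """Squared 2-Wasserstein distance between two boxes (cx, cy, w, h),
    each modeled as the 2D Gaussian N((cx, cy), diag(w^2/4, h^2/4))."""
    cx1, cy1, w1, h1 = b1
    cx2, cy2, w2, h2 = b2
    return ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
            + (w1 / 2 - w2 / 2) ** 2 + (h1 / 2 - h2 / 2) ** 2)

def wasserstein_similarity(b1, b2, c=12.8):
    """Map the distance to a (0, 1] similarity, IoU-style, so it can
    drop into label assignment, NMS, and regression losses."""
    return math.exp(-math.sqrt(wasserstein2_boxes(b1, b2)) / c)
```

Unlike IoU, this similarity stays smooth and non-zero for non-overlapping tiny boxes, which is exactly the regime where IoU-based assignment and loss signals collapse.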
(This article belongs to the Section Remote Sensing Image Processing)

22 pages, 5584 KiB  
Article
Enhanced Magnetic Resonance Imaging-Based Brain Tumor Classification with a Hybrid Swin Transformer and ResNet50V2 Model
by Abeer Fayez Al Bataineh, Khalid M. O. Nahar, Hayel Khafajeh, Ghassan Samara, Raed Alazaidah, Ahmad Nasayreh, Ayah Bashkami, Hasan Gharaibeh and Waed Dawaghreh
Appl. Sci. 2024, 14(22), 10154; https://doi.org/10.3390/app142210154 - 6 Nov 2024
Cited by 4 | Viewed by 2526
Abstract
Brain tumors can be serious; consequently, rapid and accurate detection is crucial. Nevertheless, a variety of obstacles, such as poor imaging resolution, doubts over the accuracy of data, a lack of diverse tumor classes and stages, and the possibility of misunderstanding, present challenges [...] Read more.
Brain tumors can be serious; consequently, rapid and accurate detection is crucial. Nevertheless, a variety of obstacles, such as poor imaging resolution, doubts over data accuracy, a lack of diverse tumor classes and stages, and the possibility of misinterpretation, make an accurate final diagnosis challenging. Effective brain cancer detection is crucial for patients’ safety and health, and deep learning systems can assist radiologists in reaching diagnoses quickly and accurately. This study presents an innovative deep learning approach based on the Swin Transformer. The suggested method integrates the Swin Transformer with the pretrained deep learning model ResNet50V2 (referred to as SwT+ResNet50V2), with the goal of decreasing memory utilization, enhancing classification accuracy, and reducing training complexity. The self-attention mechanism of the Swin Transformer identifies distant relationships and captures the overall context, while ResNet50V2 improves both accuracy and training speed by extracting adaptive features from the Swin Transformer’s dependencies. We evaluate the proposed framework on two publicly accessible brain magnetic resonance imaging (MRI) datasets, containing two and four classes, respectively. Data augmentation and transfer learning enhance model performance, leading to more dependable and cost-effective training. The suggested model achieves an impressive accuracy of 99.9% on the two-class dataset and 96.8% on the four-class dataset, outperforming the VGG16, MobileNetV2, ResNet50V2, EfficientNetV2B3, ConvNeXtTiny, and convolutional neural network (CNN) baselines used for comparison. This demonstrates that the Swin Transformer, when combined with ResNet50V2, can accurately diagnose brain tumors; the SwT+ResNet50V2 combination thus constitutes an innovative diagnostic tool.
Radiologists have the potential to accelerate and improve the detection of brain tumors, leading to improved patient outcomes and reduced risks. Full article
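The windowed self-attention that the abstract credits with capturing distant relationships can be sketched in toy form. This NumPy version uses identity Q/K/V projections for brevity; the real Swin block adds learned projections, multiple heads, shifted windows, and relative position bias:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(x, window=4):
    """Self-attention computed within non-overlapping windows of tokens,
    the mechanism Swin uses to keep cost linear in sequence length.
    x: (num_tokens, dim); Q = K = V = x within each window."""
    n, d = x.shape
    out = np.empty_like(x)
    for start in range(0, n, window):
        w = x[start:start + window]
        attn = softmax(w @ w.T / np.sqrt(d))  # (window, window) weights
        out[start:start + window] = attn @ w  # weighted mix of window tokens
    return out
```

Restricting attention to windows is what makes the Swin backbone affordable on high-resolution MRI slices, while the shifted-window scheme (omitted here) lets information flow between neighboring windows across layers.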
(This article belongs to the Special Issue Advances in Bioinformatics and Biomedical Engineering)
