Search Results (162)

Search Parameters:
Keywords = common-sense reasoning

20 pages, 526 KB  
Article
Chain Ladder Under Aggregation of Calendar Periods
by Greg Taylor
Risks 2025, 13(11), 215; https://doi.org/10.3390/risks13110215 - 3 Nov 2025
Viewed by 124
Abstract
The chain ladder model is defined by a set of assumptions about the claim array to which it is applied. It is, in practice, applied to claim arrays whose data relate to different frequencies, e.g., yearly, quarterly, monthly, weekly, etc. There is sometimes a tacit assumption that one can shift between these frequencies at will, and that the model will remain applicable. It is not obvious that this is the case. One needs to check whether a model whose assumptions hold for annual data will continue to hold for a quarterly (for example) representation of the same data. The present paper studies this question in the case of preservation of calendar periods, i.e., (in the example) annual calendar periods are dissected into quarters. The study covers the two most common forms of chain ladder model, namely the Tweedie chain ladder and Mack chain ladder. The conclusion is broadly, if not absolutely, negative. Certain parameter sets can indeed be found for which the chain ladder structure is maintained under a change in data frequency. However, while it may be technically possible to maintain the chain ladder model under such a change to the data, it is not possible in any reasonable, practical sense. Full article
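
For readers less familiar with the model the abstract refers to, the standard chain ladder development-factor estimator (a textbook form of the Mack chain ladder, not an equation reproduced from the paper) can be written as

\hat f_j = \frac{\sum_i C_{i,j+1}}{\sum_i C_{i,j}}, \qquad \hat C_{i,j+1} = \hat f_j \, C_{i,j},

where C_{i,j} denotes cumulative claims for origin period i at development period j. The question studied in the paper is whether assumptions of this form, stated for one calendar-period frequency (e.g., years), continue to hold when the same data are re-expressed at a finer frequency (e.g., quarters).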

40 pages, 33004 KB  
Article
Sampling-Based Path Planning and Semantic Navigation for Complex Large-Scale Environments
by Shakeeb Ahmad and James Sean Humbert
Robotics 2025, 14(11), 149; https://doi.org/10.3390/robotics14110149 - 24 Oct 2025
Viewed by 348
Abstract
This article proposes a multi-agent path planning and decision-making solution for high-tempo field robotic operations, such as search-and-rescue, in large-scale unstructured environments. As a representative example, subterranean environments can span many kilometers and present challenges such as limited or no communication, hazardous terrain, passages blocked by collapses, and vertical structures. The time-sensitive nature of these operations inherently requires solutions that are reliably deployable in practice. Moreover, a human-supervised multi-robot team is required to ensure that the mobility and cognitive capabilities of the various agents are leveraged for mission efficiency. Therefore, this article proposes a solution that is suited to both air and ground vehicles and is well adapted to information sharing between different agents. The article first details a sampling-based autonomous exploration solution that brings significant improvements over the current state of the art. These improvements include an occupancy grid-based sample-and-project approach to terrain assessment and the formulation of the solution search as a constraint-satisfaction problem, which further enhances the computational efficiency of the planner. In addition, the demonstration of the exploration planner by team MARBLE at the DARPA Subterranean Challenge finals is presented. The inevitable interaction of heterogeneous autonomous robots with human operators demands common semantics for reasoning across robot and human teams, whose members use geometric map representations suited to their mobility and computational resources. To this end, the path planner is further extended to incorporate semantic mapping and decision-making into the framework. First, the proposed solution generates a semantic map of the exploration environment by labeling a robot's position history with probability distributions over observations. The semantic reasoning solution then uses higher-level cues from the semantic map to bias exploration behavior toward a semantic class of interest. This is achieved by using a particle filter to localize the robot on a given semantic map, followed by a Partially Observable Markov Decision Process (POMDP)-based controller that guides the exploration direction of the sampling-based exploration planner. Hence, this article aims to bridge the understanding gap between humans and a heterogeneous robotic team, not only through common-sense semantic map transfer among the agents but also by enabling a robot to use such abstract information, when it is transferred, to guide its lower-level reasoning. Full article
(This article belongs to the Special Issue Autonomous Robotics for Exploration)
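
As a rough, illustrative sketch of the semantic localization step described in the abstract (not the authors' implementation), the following Python snippet runs one predict/update/resample cycle of a particle filter over a hypothetical discrete semantic grid map; the grid size, label set, motion model, and observation model are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic grid map: each cell holds a categorical distribution
# over semantic labels (e.g., 0 = corridor, 1 = intersection, 2 = shaft).
H, W, L = 20, 20, 3
semantic_map = rng.dirichlet(np.ones(L), size=(H, W))       # shape (H, W, L)

N = 500                                                      # number of particles
particles = np.column_stack([rng.integers(0, H, N), rng.integers(0, W, N)])
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, motion, observed_label):
    """One predict/update/resample cycle of the semantic localization filter."""
    # Predict: apply (noisy) odometry to every particle.
    noise = rng.integers(-1, 2, size=particles.shape)
    particles = np.clip(particles + motion + noise, 0, [H - 1, W - 1])
    # Update: weight each particle by the map's probability of the observed label.
    likelihood = semantic_map[particles[:, 0], particles[:, 1], observed_label]
    weights = weights * (likelihood + 1e-12)
    weights /= weights.sum()
    # Resample to keep the particle set focused on likely poses.
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

particles, weights = pf_step(particles, weights, motion=np.array([0, 1]), observed_label=1)
print(particles.mean(axis=0))    # rough pose estimate on the semantic map

In the paper's pipeline, an estimate of this kind would feed the POMDP-based controller that biases the sampling-based exploration planner toward the semantic class of interest.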

20 pages, 1818 KB  
Article
Image Captioning Model Based on Multi-Step Cross-Attention Cross-Modal Alignment and External Commonsense Knowledge Augmentation
by Liang Wang, Meiqing Jiao, Zhihai Li, Mengxue Zhang, Haiyan Wei, Yuru Ma, Honghui An, Jiaqi Lin and Jun Wang
Electronics 2025, 14(16), 3325; https://doi.org/10.3390/electronics14163325 - 21 Aug 2025
Viewed by 1366
Abstract
To address the semantic mismatch between the limited textual descriptions in image captioning training datasets and the multi-semantic nature of images, as well as the underutilization of external commonsense knowledge, this article proposes a novel image captioning model based on multi-step cross-attention cross-modal alignment and external commonsense knowledge enhancement. The model employs a backbone architecture comprising CLIP’s ViT visual encoder, Faster R-CNN, a BERT text encoder, and a GPT-2 text decoder. It incorporates two core mechanisms. The first is a multi-step cross-attention mechanism that iteratively aligns image and text features over multiple rounds, progressively enhancing inter-modal semantic consistency for more accurate cross-modal representation fusion. The second is an external commonsense knowledge pipeline: the model uses Faster R-CNN to extract region-based object features, which are mapped to corresponding entities within the dataset through entity probability calculation and entity linking. External commonsense knowledge associated with these entities is then retrieved from the ConceptNet knowledge graph, followed by knowledge embedding via TransE and multi-hop reasoning. Finally, the fused multimodal features are fed into the GPT-2 decoder to steer caption generation, enhancing the lexical richness, factual accuracy, and cognitive plausibility of the generated descriptions. In the experiments, the model achieves CIDEr scores of 142.6 on MSCOCO and 78.4 on Flickr30k, and ablations confirm that both modules enhance caption quality. Full article
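
To make the multi-step cross-attention idea concrete, here is a toy NumPy sketch (no learned projection matrices, so it only illustrates the alignment loop and is not the published architecture built on CLIP's ViT, Faster R-CNN, BERT, and GPT-2): each modality repeatedly attends to the other and folds the attended summary back into its own features.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    """Single scaled dot-product cross-attention pass (projections omitted)."""
    d = queries.shape[-1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d))
    return attn @ keys_values

def multi_step_align(img_feats, txt_feats, steps=3):
    """Iteratively refine each modality by attending to the other one."""
    for _ in range(steps):
        txt_feats = txt_feats + cross_attend(txt_feats, img_feats)   # text attends to image
        img_feats = img_feats + cross_attend(img_feats, txt_feats)   # image attends to text
    return img_feats, txt_feats

img = np.random.randn(49, 64)    # stand-in for ViT patch features
txt = np.random.randn(12, 64)    # stand-in for BERT token features
img_aligned, txt_aligned = multi_step_align(img, txt)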

27 pages, 399 KB  
Article
Becoming a Citizen in the Age of Trump: Citizenship as Social Rights for Latines in Texas
by Nancy Plankey-Videla and Mary E. Campbell
Soc. Sci. 2025, 14(7), 445; https://doi.org/10.3390/socsci14070445 - 21 Jul 2025
Viewed by 2229
Abstract
In the anti-immigrant national context of the first Trump administration, what motivated Latine immigrants in Texas to pursue naturalization? Based on 31 Spanish and English semi-structured interviews conducted during 2017–2019, we examine how lawful permanent residents’ (LPRs’) perceptions of contemporary immigration policy and their social rights affect their motivations to naturalize. Surprisingly, we find that although fear of deportation was an extremely common motivation, it was rarely the residents’ primary motivation. When asked why they wanted to naturalize, our respondents expressed four primary motivations grounded in their claims for social rights: proactive (gain the right to vote, benefit the group), pragmatic (expedite family reunification, access better jobs, benefit the individual), defensive (protect against deportation), and emotional (formalize a sense of belonging). Although 60 percent of interview subjects mentioned some defensive motivations, citing the current national and state political climate as hostile to immigrants, it was the least common primary motivation for naturalization; that is, they named another motivation first as their most important reason for naturalizing. The need to naturalize to protect their social rights in a shifting political context is a strong subtext to subjects’ narratives about why they choose to become citizens. Defensive motivations undergird all other motivations, but the national hostile climate is moderated by relatively positive local interactions with law enforcement and the larger community. Full article
(This article belongs to the Special Issue Migration, Citizenship and Social Rights)
15 pages, 7157 KB  
Article
RADAR: Reasoning AI-Generated Image Detection for Semantic Fakes
by Haochen Wang, Xuhui Liu, Ziqian Lu, Cilin Yan, Xiaolong Jiang, Runqi Wang and Efstratios Gavves
Technologies 2025, 13(7), 280; https://doi.org/10.3390/technologies13070280 - 2 Jul 2025
Viewed by 1572
Abstract
As modern generative models advance rapidly, AI-generated images exhibit higher resolution and lifelike details. However, the generated images may not adhere to world knowledge and common sense, as there is no such awareness and supervision in the generative models. For instance, the generated images could feature a penguin walking in the desert or a man with three arms, scenarios that are highly unlikely to occur in real life. Current AI-generated image detection methods mainly focus on low-level features, such as detailed texture patterns and frequency domain inconsistency, which are specific to certain generative models, making it challenging to identify the above-mentioned general semantic fakes. In this work, (1) we propose a new task, reasoning AI-generated image detection, which focuses on identifying semantic fakes in generative images that violate world knowledge and common sense. (2) To benchmark the new task, we collect a new dataset Spot the Semantic Fake (STSF). STSF contains 358 images with clear semantic fakes generated by three different modern diffusion models and provides bounding boxes as well as text annotations to locate the fakes. (3) We propose RADAR, a reasoning AI-generated image detection assistor, to locate semantic fakes in the generative images and output corresponding text explanations. Specifically, RADAR contains a specialized multimodal LLM to process given images and detect semantic fakes. To improve the generalization ability, we further incorporate ChatGPT as an assistor to detect unrealistic components in grounded text descriptions. The experiments on the STSF dataset show that RADAR effectively detects semantic fakes in modern generative images. Full article
(This article belongs to the Special Issue Image Analysis and Processing)

18 pages, 4292 KB  
Article
Plugging Small Models in Large Language Models for POI Recommendation in Smart Tourism
by Hong Zheng, Zhenhui Xu, Qihong Pan, Zhenzhen Zhao and Xiangjie Kong
Algorithms 2025, 18(7), 376; https://doi.org/10.3390/a18070376 - 20 Jun 2025
Viewed by 944
Abstract
Point-of-interest (POI) recommendation is a crucial task in location-based social networks, especially for enhancing personalized travel experiences in smart tourism. Recently, large language models (LLMs) have demonstrated significant potential in this domain. Unlike classical deep learning-based methods, which focus on capturing various user preferences, LLM-based approaches can further analyze candidate POIs using common sense and provide corresponding reasons. However, existing methods often fail to fully capture user preferences due to limited contextual inputs and insufficient incorporation of cooperative signals. Additionally, most methods inadequately address target temporal information, which is essential for planning travel itineraries. To address these limitations, we propose PSLM4ST, a novel framework that enables synergistic interaction between LLMs and a lightweight temporal knowledge graph reasoning model. This plugin model enhances the input to LLMs by making adjustments and additions, guiding them to focus on reasoning processes related to fine-grained preferences and temporal information. Extensive experiments on three real-world datasets demonstrate the efficacy of PSLM4ST. Full article

56 pages, 3118 KB  
Article
Semantic Reasoning Using Standard Attention-Based Models: An Application to Chronic Disease Literature
by Yalbi Itzel Balderas-Martínez, José Armando Sánchez-Rojas, Arturo Téllez-Velázquez, Flavio Juárez Martínez, Raúl Cruz-Barbosa, Enrique Guzmán-Ramírez, Iván García-Pacheco and Ignacio Arroyo-Fernández
Big Data Cogn. Comput. 2025, 9(6), 162; https://doi.org/10.3390/bdcc9060162 - 19 Jun 2025
Viewed by 1594
Abstract
Large-language-model (LLM) APIs demonstrate impressive reasoning capabilities, but their size, cost, and closed weights limit the deployment of knowledge-aware AI within biomedical research groups. At the other extreme, standard attention-based neural language models (SANLMs)—including encoder–decoder architectures such as Transformers, Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks—are computationally inexpensive. However, their capacity for semantic reasoning in noisy, open-vocabulary knowledge bases (KBs) remains unquantified. Therefore, we investigate whether compact SANLMs can (i) reason over hybrid OpenIE-derived KBs that integrate commonsense, general-purpose, and non-communicable-disease (NCD) literature; (ii) operate effectively on commodity GPUs; and (iii) exhibit semantic coherence as assessed through manual linguistic inspection. To this end, we constructed four training KBs by integrating ConceptNet (600k triples), a 39k-triple general-purpose OpenIE set, and an 18.6k-triple OpenNCDKB extracted from 1200 PubMed abstracts. Encoder–decoder GRU, LSTM, and Transformer models (1–2 blocks) were trained to predict the object phrase given the subject + predicate. Beyond token-level cross-entropy, we introduced the Meaning-based Selectional-Preference Test (MSPT): for each withheld triple, we masked the object, generated a candidate, and measured its surplus cosine similarity over a random baseline using word embeddings, with significance assessed via a one-sided t-test. Hyperparameter sensitivity (311 GRU/168 LSTM runs) was analyzed, and qualitative frame–role diagnostics completed the evaluation. Our results showed that all SANLMs learned effectively in terms of cross-entropy loss. In addition, our MSPT provided meaningful semantic insights: for the GRUs (256-dim, 2048-unit, 1-layer), the mean similarity (μ_STS) to the ground truth was 0.641 vs. 0.542 to the random baseline (gap 12.1%; p < 10^(−180)); for the 1-block Transformer, μ_STS = 0.551 vs. 0.511 (gap 4%; p < 10^(−25)). While Transformers minimized loss and accuracy variance, GRUs captured finer selectional preferences. Both architectures trained within <24 GB of GPU VRAM and produced linguistically acceptable, albeit over-generalized, biomedical assertions. Given their observed performance, the LSTM results were designated as the baseline for comparison. Therefore, properly tuned SANLMs can achieve statistically robust semantic reasoning over noisy, domain-specific KBs without reliance on massive LLMs. Their interpretability, minimal hardware footprint, and open weights promote equitable AI research, opening new avenues for automated NCD knowledge synthesis, surveillance, and decision support. Full article
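
A minimal sketch of the MSPT scoring described in the abstract, under the assumption of a paired one-sided t-test (the abstract specifies a one-sided test but not its exact form); the embedding vectors and array shapes below are placeholders.

import numpy as np
from scipy import stats

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mspt(pred_vecs, gold_vecs, rand_vecs):
    """Surplus cosine similarity of generated objects over a random baseline."""
    sim_gold = np.array([cosine(p, g) for p, g in zip(pred_vecs, gold_vecs)])
    sim_rand = np.array([cosine(p, r) for p, r in zip(pred_vecs, rand_vecs)])
    # One-sided test: H1 is that similarity to the ground truth exceeds the baseline.
    t_stat, p_value = stats.ttest_rel(sim_gold, sim_rand, alternative="greater")
    return sim_gold.mean(), sim_rand.mean(), t_stat, p_value

rng = np.random.default_rng(1)
preds = rng.normal(size=(200, 50))
gold = preds + 0.3 * rng.normal(size=(200, 50))   # correlated with predictions
rand = rng.normal(size=(200, 50))                 # random baseline embeddings
print(mspt(preds, gold, rand))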

15 pages, 4666 KB  
Article
Fusion of Medium- and High-Resolution Remote Images for the Detection of Stress Levels Associated with Citrus Sooty Mould
by Enrique Moltó, Marcela Pereira-Sandoval, Héctor Izquierdo-Sanz and Sergio Morell-Monzó
Agronomy 2025, 15(6), 1342; https://doi.org/10.3390/agronomy15061342 - 30 May 2025
Viewed by 634
Abstract
Citrus sooty mould caused by Capnodium spp. alters the quality of fruits on the tree and affects their productivity. Past laboratory and hand-held spectrometry tests have concluded that sooty mould exhibits a typical spectral response in the near-infrared spectrum region. For this reason, this study aims at developing an automatic method for remote sensing of this disease, combining 10 m spatial resolution Sentinel-2 satellite images and 0.25 m spatial resolution orthophotos to identify sooty mould infestation levels in small orchards, common in Mediterranean conditions. Citrus orchards of the Comunitat Valenciana region (Spain) underwent field inspection in 2022 during two months of minimum (August) and maximum (October) infestation. The inspectors categorised their observations according to three levels of infestation in three representative positions of each orchard. Two synthetic images condensing the monthly information were generated for both periods. A filtering algorithm was created, based on high-resolution images, to select informative pixels in the lower resolution images. The data were used to evaluate the performance of a Random Forest classifier in predicting intensity levels through cross-validation. Combining the information from medium- and high-resolution images improved the overall accuracy from 0.75 to 0.80, with mean producer’s accuracies of above 0.65 and mean user’s accuracies of above 0.78. Bowley–Yule skewness coefficients were +0.50 for the overall accuracy and +0.28 for the kappa index. Full article
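
The classification step can be illustrated with a short scikit-learn sketch; the feature layout (one row per filtered pixel, columns for the monthly composites) and every number below are placeholders rather than the study's data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))          # synthetic per-pixel composite features
y = rng.integers(0, 3, size=600)        # three field-inspected infestation levels

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")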

26 pages, 3148 KB  
Article
Transcriptional Regulatory Systems in Pseudomonas: A Comparative Analysis of Helix-Turn-Helix Domains and Two-Component Signal Transduction Networks
by Zulema Udaondo, Kelsey Aguirre Schilder, Ana Rosa Márquez Blesa, Mireia Tena-Garitaonaindia, José Canto Mangana and Abdelali Daddaoua
Int. J. Mol. Sci. 2025, 26(10), 4677; https://doi.org/10.3390/ijms26104677 - 14 May 2025
Cited by 1 | Viewed by 1164
Abstract
Bacterial communities in diverse environmental niches respond to various external stimuli for survival. A primary means of communication between bacterial cells involves one-component (OC) and two-component signal transduction systems (TCSs). These systems are key for sensing environmental changes and regulating bacterial physiology. TCSs, which are the more complex of the two, consist of a sensor histidine kinase for receiving an external input and a response regulator to convey changes in bacterial cell physiology. For numerous reasons, TCSs have emerged as significant targets for antibacterial drug design due to their role in regulating expression level, bacterial viability, growth, and virulence. Diverse studies have shown the molecular mechanisms by which TCSs regulate virulence and antibiotic resistance in pathogenic bacteria. In this study, we performed a thorough analysis of the data from multiple public databases to assemble a comprehensive catalog of the principal detection systems present in both the non-pathogenic Pseudomonas putida KT2440 and the pathogenic Pseudomonas aeruginosa PAO1 strains. Additionally, we conducted a sequence analysis of regulatory elements associated with transcriptional proteins. These were classified into regulatory families based on Helix-turn-Helix (HTH) protein domain information, a common structural motif for DNA-binding proteins. Moreover, we highlight the function of bacterial TCSs and their involvement in functions essential for bacterial survival and virulence. This comparison aims to identify novel targets that can be exploited for the development of advanced biotherapeutic strategies, potentially leading to new treatments for bacterial infections. Full article

22 pages, 5387 KB  
Article
Landslide Segmentation in High-Resolution Remote Sensing Images: The Van–UPerAttnSeg Framework with Multi-Scale Feature Enhancement
by Chang Li, Quan Zou, Guoqing Li and Wenyang Yu
Remote Sens. 2025, 17(7), 1265; https://doi.org/10.3390/rs17071265 - 2 Apr 2025
Viewed by 770
Abstract
Among geological disasters, landslides are a common and extremely destructive hazard, and their rapid identification is crucial for disaster analysis and response. However, traditional methods of landslide recognition rely mainly on visual interpretation and manual analysis of remote sensing images, which are time-consuming and susceptible to subjective factors, limiting the accuracy and efficiency of recognition. To overcome these limitations, the proposed method (the Van–UPerAttnSeg framework) first uses an online equalization sampling and enhancement strategy to sample high-resolution remote sensing images and ensure data balance and diversity. It then adopts an encoder–decoder structure, where the encoder is a visual attention network (Van) that focuses on extracting discriminative features at different scales from landslide images. The decoder consists of a pyramid pooling module (PPM) and a feature pyramid network (FPN), combined with a convolutional block attention module (CBAM). Through this structure, the model can effectively integrate features at different scales, achieving precise localization and recognition of landslide areas. In addition, this study introduces a sliding-window algorithm based on Gaussian fusion as a post-processing step, which refines the prediction of landslide edges in high-resolution remote sensing images while maintaining the model's contextual reasoning ability. On the validation set, the method achieved a Dice score of 84.75%, demonstrating high accuracy and efficiency. This result demonstrates the effectiveness of the proposed method in improving the accuracy and efficiency of landslide recognition, providing strong technical support for the analysis of and response to geological disasters. Full article
(This article belongs to the Topic Remote Sensing and Geological Disasters)
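
The Gaussian-fusion post-processing step can be sketched generically as follows; the tile size, stride, and exact weighting profile used by the authors are not given in the abstract, so the values here are assumptions.

import numpy as np

def gaussian_window(size, sigma_frac=0.25):
    """2-D Gaussian weight map that down-weights tile borders."""
    ax = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma_frac**2))

def fuse_tiles(scene_shape, tile_preds, tile_size, stride):
    """Blend overlapping per-tile probability maps into one full-scene prediction."""
    acc = np.zeros(scene_shape)
    wsum = np.zeros(scene_shape)
    w = gaussian_window(tile_size)
    k = 0
    for r in range(0, scene_shape[0] - tile_size + 1, stride):
        for c in range(0, scene_shape[1] - tile_size + 1, stride):
            acc[r:r + tile_size, c:c + tile_size] += w * tile_preds[k]
            wsum[r:r + tile_size, c:c + tile_size] += w
            k += 1
    return acc / np.maximum(wsum, 1e-12)   # weighted average where tiles overlap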

23 pages, 9651 KB  
Article
CDEA: Causality-Driven Dialogue Emotion Analysis via LLM
by Xue Zhang, Mingjiang Wang, Xuyi Zhuang, Xiao Zeng and Qiang Li
Symmetry 2025, 17(4), 489; https://doi.org/10.3390/sym17040489 - 25 Mar 2025
Cited by 2 | Viewed by 1977
Abstract
With the rapid advancement of human–machine dialogue technology, sentiment analysis has become increasingly crucial. However, deep learning-based methods struggle with interpretability and reliability due to the subjectivity of emotions and the challenge of capturing emotion–cause relationships. To address these issues, we propose a novel sentiment analysis framework that integrates structured commonsense knowledge to explicitly infer emotional causes, enabling causal reasoning between historical and target sentences. Additionally, we enhance sentiment classification by leveraging large language models (LLMs) with dynamic example retrieval, constructing an experience database to guide the model using contextually relevant instances. To further improve adaptability, we design a semantic interpretation task for refining emotion category representations and fine-tune the LLM accordingly. Experiments on three benchmark datasets show that our approach significantly improves accuracy and reliability, surpassing traditional deep-learning methods. These findings underscore the effectiveness of structured reasoning, knowledge retrieval, and LLM-driven sentiment adaptation in advancing emotion–cause-based sentiment analysis. Full article
(This article belongs to the Section Computer)
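
The dynamic example retrieval component can be illustrated with a small sketch that ranks stored examples by embedding cosine similarity and prepends the top matches to the prompt; the experience-database schema and prompt wording are invented for the illustration, not taken from the paper.

import numpy as np

def retrieve_examples(query_vec, bank_vecs, bank_texts, k=3):
    """Return the k stored (utterance, emotion, cause) examples most similar to the query."""
    sims = bank_vecs @ query_vec / (
        np.linalg.norm(bank_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    top = np.argsort(-sims)[:k]
    return [bank_texts[i] for i in top]

def build_prompt(target_utterance, examples):
    demos = "\n".join(f"Example: {e}" for e in examples)
    return f"{demos}\nNow identify the emotion and its cause for: {target_utterance}"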

23 pages, 1882 KB  
Article
Attention Mechanism-Based Cognition-Level Scene Understanding
by Xuejiao Tang and Wenbin Zhang
Information 2025, 16(3), 203; https://doi.org/10.3390/info16030203 - 5 Mar 2025
Viewed by 1229
Abstract
Given a question–image input, a visual commonsense reasoning (VCR) model predicts an answer with a corresponding rationale, which requires inference abilities based on real-world knowledge. The VCR task, which calls for exploiting multi-source information as well as learning different levels of understanding and extensive commonsense knowledge, is a cognition-level scene understanding challenge. The VCR task has aroused researchers’ interests due to its wide range of applications, including visual question answering, automated vehicle systems, and clinical decision support. Previous approaches to solving the VCR task have generally relied on pre-training or exploiting memory with long-term dependency relationship-encoded models. However, these approaches suffer from a lack of generalizability and a loss of information in long sequences. In this work, we propose a parallel attention-based cognitive VCR network, termed PAVCR, which fuses visual–textual information efficiently and encodes semantic information in parallel to enable the model to capture rich information for cognition-level inference. Extensive experiments show that the proposed model yields significant improvements over existing methods on the benchmark VCR dataset. Moreover, the proposed model provides an intuitive interpretation of visual commonsense reasoning. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)

27 pages, 470 KB  
Article
Enhancing Domain-Specific Knowledge Graph Reasoning via Metapath-Based Large Model Prompt Learning
by Ruidong Ding and Bin Zhou
Electronics 2025, 14(5), 1012; https://doi.org/10.3390/electronics14051012 - 3 Mar 2025
Cited by 2 | Viewed by 2989
Abstract
Representing domain knowledge extracted from unstructured texts using knowledge graphs supports knowledge reasoning, enabling the extraction of accurate factual information and the generation of interpretable results. However, reasoning with knowledge graphs is challenging due to their complex logical structures, which require deep semantic understanding and the ability to address uncertainties with common sense. The rapid development of large language models makes them a candidate for solving this problem, as their capabilities complement the determinacy of knowledge graph reasoning. However, using large language models for knowledge graph reasoning also poses challenges, including understanding graph structure and balancing semantic density against sparsity. This study proposes a domain knowledge graph reasoning method based on metapath-based large-model prompt learning (DKGM-path), in which a large model first induces candidate reasoning paths and reasoning over the knowledge graph is then completed through iterative queries. The method makes significant progress on several public question-answering reasoning benchmark datasets, demonstrating multi-hop reasoning capabilities grounded in knowledge graphs. It uses structured data interfaces to achieve accurate and effective data access and information processing, and it can present the reasoning process intuitively, offering good interpretability. Full article
(This article belongs to the Section Artificial Intelligence)
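
A minimal sketch of the metapath-guided, iterative-query idea with a toy in-memory graph (the paper's actual data interfaces, prompts, and metapath induction are not described in the abstract):

def follow_metapath(kg, start_entities, metapath):
    """Walk a knowledge graph along a relation sequence (metapath), one hop per relation.
    kg maps (entity, relation) -> set of neighbouring entities."""
    frontier = set(start_entities)
    for relation in metapath:
        frontier = {nbr for ent in frontier for nbr in kg.get((ent, relation), set())}
        if not frontier:              # dead end: no entity satisfies the next hop
            break
    return frontier

kg = {("aspirin", "treats"): {"headache"}, ("headache", "symptom_of"): {"migraine"}}
print(follow_metapath(kg, {"aspirin"}, ["treats", "symptom_of"]))   # {'migraine'}

In the DKGM-path setting, the large model would propose the relation sequence (the metapath), and each hop would be answered by a structured query against the knowledge graph rather than an in-memory dictionary lookup.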

23 pages, 520 KB  
Article
Investigation of Text-Independent Speaker Verification by Support Vector Machine-Based Machine Learning Approaches
by Odin Kohler and Masudul Imtiaz
Electronics 2025, 14(5), 963; https://doi.org/10.3390/electronics14050963 - 28 Feb 2025
Cited by 1 | Viewed by 1811
Abstract
Speaker verification is a common problem with numerous biomedical security applications. Speaker verification comes in two forms, text-independent and text-dependent, and each can be implemented via many different machine learning and deep learning techniques. From our research, we found that there is significantly less work implementing text-independent speaker verification using machine learning techniques than using deep learning techniques. Because of this gap, we were motivated to build our own SVM and CNN models for text-independent speaker verification and compare them to other systems using SVMs or deep learning techniques. We limited ourselves to SVMs because they are commonly used for speech recognition and have achieved very high accuracies. The main motivation behind this was twofold. The first reason is to demonstrate that SVMs can be, and have been, successfully used for text-independent speaker verification at a level comparable to deep learning techniques; the second is to make work using SVMs for text-independent speaker verification more accessible so that it can be expanded upon easily. The analysis and comparison conducted in this paper demonstrate that SVMs achieve results comparable to deep learning techniques and will help future researchers more easily find SVM-based approaches to text-independent speaker verification and gain a sense of what is being implemented in the field. Full article
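
A bare-bones scikit-learn sketch of SVM-based text-independent speaker verification in the spirit described above; the utterance-level features (e.g., averaged MFCCs or speaker embeddings) and labels here are synthetic placeholders rather than the authors' setup.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 40))        # synthetic utterance-level feature vectors
y = rng.integers(0, 2, size=400)      # 1 = enrolled speaker, 0 = impostor

verifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print(cross_val_score(verifier, X, y, cv=5, scoring="accuracy").mean())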

15 pages, 291 KB  
Review
Non-Invasive Detection of Tumors by Volatile Organic Compounds in Urine
by Tomoaki Hara, Sikun Meng, Yasuko Arao, Yoshiko Saito, Kana Inoue, Aya Hasan Alshammari, Hideyuki Hatakeyama, Eric di Luccio, Andrea Vecchione, Takaaki Hirotsu and Hideshi Ishii
Biomedicines 2025, 13(1), 109; https://doi.org/10.3390/biomedicines13010109 - 6 Jan 2025
Cited by 4 | Viewed by 3926
Abstract
Cancer is one of the major causes of death, and as it becomes more malignant, it becomes an intractable disease that is difficult to cure completely. Therefore, early detection is important to increase the survival rate. For this reason, testing with blood biomarkers is currently common. However, in order to accurately diagnose early-stage cancer, new biomarkers and diagnostic methods that enable highly accurate diagnosis are needed. This review summarizes recent studies on cancer biomarker detection. In particular, we focus on the analysis of volatile organic compounds (VOCs) in urine and the development of diagnostic methods using olfactory receptors in living organisms. Urinary samples from cancer patients contain a wide variety of VOCs, and the identification of cancer specific compounds is underway. It has also been found that the olfactory sense of organisms can distinguish cancer-specific odors, which may be applicable to cancer diagnosis. We explore the possibility of novel cancer biomarker candidates and novel diagnostic methods. Full article