Search Results (698)

Search Parameters:
Keywords = real word

22 pages, 579 KiB  
Article
Automated Classification of Crime Narratives Using Machine Learning and Language Models in Official Statistics
by Klaus Lehmann, Elio Villaseñor, Alejandro Pimentel, Javiera Preuss, Nicolás Berhó, Oswaldo Diaz and Ignacio Agloni
Stats 2025, 8(3), 68; https://doi.org/10.3390/stats8030068 - 30 Jul 2025
Viewed by 300
Abstract
This paper presents the implementation of a language model–based strategy for the automatic codification of crime narratives for the production of official statistics. To address the high workload and inconsistencies associated with manual coding, we developed and evaluated three models: an XGBoost classifier with bag-of-words and word-embedding features, an LSTM network using pretrained Spanish word embeddings as a language model, and a fine-tuned BERT language model (BETO). Deep learning models outperformed the traditional baseline, with BETO achieving the highest accuracy. The new ENUSC (Encuesta Nacional Urbana de Seguridad Ciudadana) workflow integrates the selected model into an API for automated classification, incorporating a certainty threshold to distinguish between cases suitable for automation and those requiring expert review. This hybrid strategy led to a 68.4% reduction in manual review workload while preserving high-quality standards. This study represents the first documented application of deep learning for the automated classification of victimization narratives in official statistics, demonstrating its feasibility and impact in a real-world production environment. Our results demonstrate that deep learning can significantly improve the efficiency and consistency of crime statistics coding, offering a scalable solution for other national statistical offices. Full article
(This article belongs to the Section Applied Statistics and Machine Learning Methods)
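The certainty-threshold routing the abstract describes can be sketched in a few lines. This is a hypothetical minimal version, not the ENUSC API: the function name, the 0.9 threshold, and the crime labels are all illustrative assumptions; only the routing logic (auto-code when the top-class probability clears the threshold, otherwise send to expert review) comes from the abstract.

```python
def route_prediction(class_probs, threshold=0.9):
    """Auto-code the narrative if the classifier's top-class probability
    clears the certainty threshold; otherwise route it to expert review.
    (Hypothetical sketch; threshold value and labels are illustrative.)"""
    label = max(class_probs, key=class_probs.get)
    if class_probs[label] >= threshold:
        return ("auto", label)
    return ("review", label)

# A confident prediction is auto-coded; an ambiguous one goes to an expert.
confident = {"robbery": 0.96, "theft": 0.03, "assault": 0.01}
ambiguous = {"robbery": 0.55, "theft": 0.40, "assault": 0.05}
```

Raising the threshold trades a smaller automated share for fewer misclassifications reaching production, which is how such a hybrid workflow can cut review workload while preserving quality.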

16 pages, 2283 KiB  
Article
Recognition of Japanese Finger-Spelled Characters Based on Finger Angle Features and Their Continuous Motion Analysis
by Tamon Kondo, Ryota Murai, Zixun He, Duk Shin and Yousun Kang
Electronics 2025, 14(15), 3052; https://doi.org/10.3390/electronics14153052 - 30 Jul 2025
Viewed by 102
Abstract
To improve the accuracy of Japanese finger-spelled character recognition using an RGB camera, we focused on feature design and refinement of the recognition method. By leveraging angular features extracted via MediaPipe, we proposed a method that effectively captures subtle motion differences while minimizing the influence of background and surrounding individuals. We constructed a large-scale dataset that includes not only the basic 50 Japanese syllables but also those with diacritical marks, such as voiced sounds (e.g., “ga”, “za”, “da”) and semi-voiced sounds (e.g., “pa”, “pi”, “pu”), to enhance the model’s ability to recognize a wide variety of characters. In addition, the application of a change-point detection algorithm enabled accurate segmentation of sign language motion boundaries, improving word-level recognition performance. These efforts laid the foundation for a highly practical recognition system. However, several challenges remain, including the limited size and diversity of the dataset and the need for further improvements in segmentation accuracy. Future work will focus on enhancing the model’s generalizability by collecting more diverse data from a broader range of participants and incorporating segmentation methods that consider contextual information. Ultimately, the outcomes of this research should contribute to the development of educational support tools and sign language interpretation systems aimed at real-world applications. Full article
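The segmentation step the abstract mentions, detecting motion boundaries with a change-point algorithm, can be illustrated with a generic sliding-window detector. This is a simplified sketch under stated assumptions: the window size, threshold, and toy angle trace are invented, and the paper's actual change-point method is not specified here.

```python
def change_points(signal, w=3, thresh=1.0):
    """Flag indices where the mean of the next w samples differs from the
    mean of the previous w samples by more than thresh. A generic sketch
    of motion-boundary segmentation; parameters are illustrative."""
    cps = []
    for i in range(w, len(signal) - w):
        left = sum(signal[i - w:i]) / w
        right = sum(signal[i:i + w]) / w
        if abs(right - left) > thresh:
            cps.append(i)
    return cps

# A toy finger-angle trace that jumps between two held poses around index 6.
pose_trace = [0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 2.0, 2.1, 1.9, 2.0, 2.1, 2.0]
```

Runs of adjacent detections would be merged into a single boundary in practice; incorporating context, as the authors propose, would replace the fixed threshold with a learned criterion.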

31 pages, 2944 KiB  
Systematic Review
Mapping the Landscape of Sustainability Reporting: A Bibliometric Analysis Across ESG, Circular Economy, and Integrated Reporting with Sectoral Perspectives
by Radosveta Krasteva-Hristova, Diana Papradanova and Ventsislav Vechev
J. Risk Financial Manag. 2025, 18(8), 416; https://doi.org/10.3390/jrfm18080416 - 28 Jul 2025
Viewed by 323
Abstract
Sustainability reporting has evolved into a multidimensional field encompassing Environmental, Social, and Governance (ESG) disclosure, integrated reporting (IR), and circular economy (CE) practices. This study aims to map the intellectual and thematic landscape of sustainability reporting research over the past decade, with a focus on sectoral differentiation. Drawing on bibliometric analysis of 1611 scientific articles indexed in Scopus, this research applies co-word analysis, thematic mapping, and bibliographic coupling to identify prevailing trends, conceptual clusters, and knowledge gaps. The results reveal a clear progression from fragmented debates toward a more integrated discourse combining ESG, IR, and CE frameworks. In the real economy, sustainability reporting demonstrates a mature operational focus, supported by standardized frameworks and extensive empirical evidence. In contrast, the banking sector exhibits emerging engagement with sustainability disclosure, while the public sector remains at an earlier stage of conceptual and practical development. Despite the increasing convergence of research streams, gaps persist in linking reporting practices to tangible sustainability outcomes, integrating digital innovations, and addressing social dimensions of circularity. This study concludes that further interdisciplinary and sector-specific research is essential to advance credible, comparable, and decision-useful reporting practices capable of supporting the transition toward sustainable and circular business models. Full article

31 pages, 1089 KiB  
Article
Adaptive Learned Belief Propagation for Decoding Error-Correcting Codes
by Alireza Tasdighi and Mansoor Yousefi
Entropy 2025, 27(8), 795; https://doi.org/10.3390/e27080795 - 25 Jul 2025
Viewed by 170
Abstract
Weighted belief propagation (WBP) for the decoding of linear block codes is considered. In WBP, the Tanner graph of the code is unrolled with respect to the iterations of the belief propagation decoder. Then, weights are assigned to the edges of the resulting recurrent network and optimized offline using a training dataset. The main contribution of this paper is an adaptive WBP where the weights of the decoder are determined for each received word. Two variants of this decoder are investigated. In the parallel WBP decoders, the weights take values in a discrete set. A number of WBP decoders are run in parallel to search for the best sequence of weights in real time. In the two-stage decoder, a small neural network is used to dynamically determine the weights of the WBP decoder for each received word. The proposed adaptive decoders demonstrate significant improvements over the static counterparts in two applications. In the first application, Bose–Chaudhuri–Hocquenghem, polar and quasi-cyclic low-density parity-check (QC-LDPC) codes are used over an additive white Gaussian noise channel. The results indicate that the adaptive WBP achieves bit error rates (BERs) up to an order of magnitude less than the BERs of the static WBP at about the same decoding complexity, depending on the code, its rate, and the signal-to-noise ratio. The second application is a concatenated code designed for a long-haul nonlinear optical fiber channel where the inner code is a QC-LDPC code and the outer code is a spatially coupled LDPC code. In this case, the inner code is decoded using an adaptive WBP, while the outer code is decoded using the sliding window decoder and static belief propagation. The results show that the adaptive WBP provides a coding gain of 0.8 dB compared to the neural normalized min-sum decoder, with about the same computational complexity and decoding latency. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
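The parallel variant, trying several weight values and keeping the first decoder whose output passes the parity checks, can be shown on a toy single parity-check code with one normalized min-sum iteration. This is a deliberately tiny analogue of the paper's scheme: the candidate weight set, the code, and the one-iteration decoder are illustrative assumptions, not the authors' setup.

```python
def min_sum_spc(llr, alpha):
    """One normalized min-sum iteration on a single parity-check code:
    each bit adds alpha * (product of the other bits' LLR signs) *
    (min of the other bits' LLR magnitudes) to its channel LLR."""
    n = len(llr)
    post = []
    for j in range(n):
        others = [llr[k] for k in range(n) if k != j]
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0
        post.append(llr[j] + alpha * sign * min(abs(v) for v in others))
    return [0 if p >= 0 else 1 for p in post]

def adaptive_decode(llr, candidates=(0.25, 0.5, 0.75, 1.0)):
    """Run decoders with different normalization weights and keep the
    first whose hard decision satisfies the parity check (a toy analogue
    of the paper's parallel weighted-BP search)."""
    for alpha in candidates:
        bits = min_sum_spc(llr, alpha)
        if sum(bits) % 2 == 0:  # parity check satisfied
            return alpha, bits
    return None, bits  # no candidate converged; return last attempt

# All-zero codeword sent; the third bit is received weakly flipped.
received_llr = [2.0, 1.5, -0.5]
```

With these LLRs, alpha = 0.25 leaves the flipped bit uncorrected, while alpha = 0.5 flips it back, so the adaptive search settles on 0.5; the real decoders do the same selection over full Tanner-graph weight assignments.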

20 pages, 990 KiB  
Article
The Temporal Spillover Effect of Green Attribute Changes on Eco-Hotel Location Scores: The Moderating Role of Consumer Environmental Involvement
by Zulei Qin, Shugang Li, Ziyi Li, Yanfang Wei, Ning Wang, Jiayi Zhang, Meitong Liu and He Zhu
Sustainability 2025, 17(14), 6593; https://doi.org/10.3390/su17146593 - 19 Jul 2025
Viewed by 246
Abstract
This study focuses on a profound paradox in eco-hotel evaluations: why do consumer ratings for location, a static asset, exhibit dynamic fluctuations? To solve this puzzle, we construct a two-stage signal-processing theoretical framework that integrates Signaling Theory and the Elaboration Likelihood Model (ELM). This framework posits that the dynamic trajectory of a hotel’s green attributes (encompassing eco-facilities, sustainable practices, and ecological experiences) constitutes a high-fidelity market signal about its underlying quality. We utilized natural language processing techniques (Word2Vec and LSA) to conduct a longitudinal analysis of over 60,000 real consumer reviews from Booking.com between 2020 and 2023. This study confirms that continuous improvements in green attributes result in significant positive spillovers to location scores, while any degradation triggers strong negative spillovers. More critically, consumer environmental involvement (CEI) acts as an amplifier in this process, with high-involvement consumers reacting more intensely to both types of signals. The research further uncovers complex non-linear threshold characteristics in the spillover effect, subverting traditional linear management thinking. These findings not only provide a dynamic and psychologically deep theoretical explanation for sustainable consumption behavior but also offer forward-thinking practical implications for hoteliers on how to strategically manage dynamic signals to maximize brand value. Full article

18 pages, 5039 KiB  
Article
Global Research Trends on Water Contamination by Microorganisms: A Bibliometric Analysis
by Zoila Isabel Cárdenas Tirado, Isaías Wilmer Duenas Sayaverde, Rosario del Socorro Avellaneda Yajahuanca, Sdenka Caballero Aparicio, Kelly Myriam Jiménez de Aliaga, Edo Gallegos Aparicio, Maria Antonieta Rubio Tyrrel, Maria do Livramento Fortes Figueiredo, José Wicto Pereira Borges, Rosilane de Lima Brito Magalhães, Denise Andrade, Daniela Reis Joaquim de Freitas, Ana Raquel Batista de Carvalho and Maria Eliete Batista Moura
Int. J. Environ. Res. Public Health 2025, 22(7), 1128; https://doi.org/10.3390/ijerph22071128 - 17 Jul 2025
Viewed by 345
Abstract
Water is an essential resource for life; however, the quality of available water on the planet has been compromised due to various factors, including microbiological contamination. Objective: To analyze the global scientific production on microbiological water contamination using bibliometric methods. Method: A search for scientific articles was conducted using the advanced query function in the Web of Science™ database, specifically in its core collection, on 26 February 2025. Data from 2000 articles were analyzed using the Bibliometrix package in R (version 4.2.1) and the Biblioshiny application (version 2.0). Results: The evaluated articles were published between 1952 and 2025, with a peak in publications in 2022. The journal Water Research stood out as the most relevant, publishing 128 articles. The Egyptian Knowledge Bank was identified as the most productive institution, while China had the highest number of contributing authors. The most cited article received 475 citations. Additionally, KeyWords Plus™ highlighted the focus of the studies on ecological and biotechnological methods for contaminant removal, as well as the presence of waterborne pathogens and their inactivation methods. Conclusions: The results show a growing interest in the development of ecological and biotechnological methods for contaminant removal and pathogen inactivation in water. The integration of artificial intelligence with real-time monitoring systems emerges as a promising strategy for improving water quality management. These findings highlight the relevance of the topic for public health and health education. Full article

13 pages, 1604 KiB  
Article
Assessing LLMs on IDSA Practice Guidelines for the Diagnosis and Treatment of Native Vertebral Osteomyelitis: A Comparison Study
by Filip Milicevic, Maher Ghandour, Moh’d Yazan Khasawneh, Amir R. Ghasemi, Ahmad Al Zuabi, Samir Smajic, Mohamad Agha Mahmoud, Koroush Kabir and Ümit Mert
J. Clin. Med. 2025, 14(14), 4996; https://doi.org/10.3390/jcm14144996 - 15 Jul 2025
Viewed by 391
Abstract
Background: Native vertebral osteomyelitis (NVO) presents diagnostic and therapeutic challenges requiring adherence to complex clinical guidelines. The emergence of large language models (LLMs) offers new avenues for real-time clinical decision support, yet their utility in managing NVO has not been formally assessed. Methods: This study evaluated four LLMs—Consensus, Gemini, ChatGPT-4o Mini, and ChatGPT-4o—using 13 standardized questions derived from the 2015 IDSA guidelines. Each model generated 13 responses (n = 52), which were independently assessed by three orthopedic surgeons for accuracy (4-point scale) and comprehensiveness (5-point scale). Results: ChatGPT-4o produced the longest responses (428.0 ± 45.4 words), followed by ChatGPT-4o Mini (392.2 ± 97.4), Gemini (358.2 ± 60.5), and Consensus (213.2 ± 68.8). Accuracy ratings showed that ChatGPT-4o and Gemini achieved the highest proportion of “Excellent” responses (54% and 51%, respectively), while Consensus received only 20%. Comprehensiveness scores mirrored this trend, with ChatGPT-4o (3.95 ± 0.79) and Gemini (3.82 ± 0.68) significantly outperforming Consensus (2.87 ± 0.66). Domain-specific analysis revealed that ChatGPT-4o achieved a 100% “Excellent” accuracy rating in therapy-related questions. Statistical analysis confirmed significant inter-model differences (p < 0.001). Conclusions: Advanced LLMs—especially ChatGPT-4o and Gemini—demonstrated high accuracy and depth in interpreting clinical guidelines for NVO. These findings highlight their potential as effective tools in augmenting evidence-based decision-making and improving consistency in clinical care. Full article
(This article belongs to the Special Issue Spine Surgery: Clinical Advances and Future Directions)

21 pages, 1118 KiB  
Review
Integrating Large Language Models into Robotic Autonomy: A Review of Motion, Voice, and Training Pipelines
by Yutong Liu, Qingquan Sun and Dhruvi Rajeshkumar Kapadia
AI 2025, 6(7), 158; https://doi.org/10.3390/ai6070158 - 15 Jul 2025
Viewed by 1319
Abstract
This survey provides a comprehensive review of the integration of large language models (LLMs) into autonomous robotic systems, organized around four key pillars: locomotion, navigation, manipulation, and voice-based interaction. We examine how LLMs enhance robotic autonomy by translating high-level natural language commands into low-level control signals, supporting semantic planning and enabling adaptive execution. Systems like SayTap improve gait stability through LLM-generated contact patterns, while TrustNavGPT achieves a 5.7% word error rate (WER) under noisy voice-guided conditions by modeling user uncertainty. Frameworks such as MapGPT, LLM-Planner, and 3D-LOTUS++ integrate multi-modal data—including vision, speech, and proprioception—for robust planning and real-time recovery. We also highlight the use of physics-informed neural networks (PINNs) to model object deformation and support precision in contact-rich manipulation tasks. To bridge the gap between simulation and real-world deployment, we synthesize best practices from benchmark datasets (e.g., RH20T, Open X-Embodiment) and training pipelines designed for one-shot imitation learning and cross-embodiment generalization. Additionally, we analyze deployment trade-offs across cloud, edge, and hybrid architectures, emphasizing latency, scalability, and privacy. The survey concludes with a multi-dimensional taxonomy and cross-domain synthesis, offering design insights and future directions for building intelligent, human-aligned robotic systems powered by LLMs. Full article

37 pages, 2921 KiB  
Article
A Machine-Learning-Based Data Science Framework for Effectively and Efficiently Processing, Managing, and Visualizing Big Sequential Data
by Alfredo Cuzzocrea, Islam Belmerabet, Abderraouf Hafsaoui and Carson K. Leung
Computers 2025, 14(7), 276; https://doi.org/10.3390/computers14070276 - 14 Jul 2025
Viewed by 600
Abstract
In recent years, the open data initiative has led to the willingness of many governments, researchers, and organizations to share their data and make it publicly available. Healthcare, disease, and epidemiological data, such as privacy statistics on patients who have suffered from epidemic diseases such as the Coronavirus disease 2019 (COVID-19), are examples of open big data. Therefore, huge volumes of valuable data have been generated and collected at high speed from a wide variety of rich data sources. Analyzing these open big data can be of social benefit. For example, people gain a better understanding of disease by analyzing and mining disease statistics, which can inspire them to participate in disease prevention, detection, control, and combat. Visual representation further improves data understanding and corresponding results for analysis and mining, as a picture is worth a thousand words. In this paper, we present a visual data science solution for the visualization and visual analysis of large sequence data. These ideas are illustrated by the visualization and visual analysis of sequences of real epidemiological data of COVID-19. Through our solution, we enable users to visualize the epidemiological data of COVID-19 over time. It also allows people to visually analyze data and discover relationships between popular features associated with COVID-19 cases. The effectiveness of our visual data science solution in improving the user experience of visualization and visual analysis of large sequence data is demonstrated by the real-life evaluation of these sequenced epidemiological data of COVID-19. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))

18 pages, 349 KiB  
Article
Reconsidering the Word–Sacrament and Scripture–Liturgy Debate: A Patristic Perspective
by Ciprian Ioan Streza
Religions 2025, 16(7), 895; https://doi.org/10.3390/rel16070895 - 12 Jul 2025
Viewed by 304
Abstract
The relationship between Scripture and the Liturgy remains one of the most extensively debated subjects in theological discourse. In the wake of the Protestant Reformation and the Catholic Counter-Reformation, a divided Christendom witnessed the rise of a dichotomy between Scripture and Liturgy, as well as between the Word and the Sacrament. This dichotomy, however, is absent from the patristic thought, which perceives the unity and complementarity between Scripture and Liturgy, owing to their shared belonging to the one life of the Church—broadly defined as Tradition—and to the way they are understood and experienced as interconnected modes through which the singular Mystery of Jesus Christ is communicated to the faithful. The present study aims to demonstrate this unity by drawing on a substantial body of patristic writings, highlighting the fact that the life of the Church is one and is lived both as the rule of faith and the rule of prayer, and that through it, one and the same Christ communicates Himself to the faithful both through the Word and through the Holy Sacraments. For the Church Fathers, the Christian faith is not an abstract doctrine about Christ, but a real and personal encounter and communion with Him in the life of the Church. This patristic approach may offer a starting point for contemporary Christianity in addressing the current liturgical crisis and in rethinking and renewing future ecumenical dialogue. Such renewal presupposes a movement beyond secular formalism and nominalism, which have fostered excessive conceptualization and an antithetical view of Scripture and Liturgy, Word and Sacrament. Full article
18 pages, 1871 KiB  
Article
Interpretable Reinforcement Learning for Sequential Strategy Prediction in Language-Based Games
by Jun Zhao, Jintian Ji, Robail Yasrab, Shuxin Wang, Liang Yu and Lingzhen Zhao
Algorithms 2025, 18(7), 427; https://doi.org/10.3390/a18070427 - 11 Jul 2025
Viewed by 374
Abstract
Accurate and interpretable prediction plays a vital role in natural language processing (NLP) tasks, particularly for enhancing user trust and model transparency. However, existing models often struggle with poor adaptability and limited interpretability when applied to dynamic language prediction tasks such as Wordle. To address these challenges, this study proposes an interpretable reinforcement learning framework based on an Enhanced Deep Deterministic Policy Gradient (Enhanced-DDPG) algorithm. By leveraging a custom simulation environment and integrating key linguistic features, namely word frequency, letter frequency, and repeated letter patterns (rep), the model dynamically predicts the number of attempts needed to solve Wordle puzzles. Experimental results demonstrate that Enhanced-DDPG outperforms traditional methods such as Random Forest Regression (RFR), XGBoost, LightGBM, METRA, and SQIRL in terms of both prediction accuracy (MSE = 0.0134, R2 = 0.8439) and robustness under noisy conditions. Furthermore, SHapley Additive exPlanations (SHAP) are employed to interpret the model’s decision process, revealing that repeated letter patterns significantly influence low-attempt predictions, while word and letter frequencies are more relevant for higher attempt scenarios. This research highlights the potential of combining interpretable artificial intelligence (I-AI) and reinforcement learning to develop robust, transparent, and high-performance NLP prediction systems for real-world applications. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
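The three feature families the study feeds its agent, word frequency, letter frequency, and a repeated-letter indicator, are straightforward to extract. A minimal sketch, assuming a hypothetical frequency table; the paper's actual corpus and normalization are not reproduced here.

```python
def wordle_features(word, corpus_freq):
    """Extract word frequency, mean letter frequency, and a repeated-letter
    flag ('rep') for a candidate word. corpus_freq maps words to relative
    frequencies (illustrative numbers below)."""
    letter_freq = {c: 0.0 for c in "abcdefghijklmnopqrstuvwxyz"}
    total = 0.0
    for w, f in corpus_freq.items():
        for c in w:
            letter_freq[c] += f
            total += f
    return {
        "word_freq": corpus_freq.get(word, 0.0),
        "mean_letter_freq": sum(letter_freq[c] for c in word) / (len(word) * total),
        "rep": int(len(set(word)) < len(word)),  # any letter used twice?
    }

# Hypothetical relative frequencies for three five-letter words.
freqs = {"eerie": 0.2, "crane": 0.5, "slate": 0.3}
```

In the paper these features form the agent's state; SHAP then attributes each prediction back to them, which is how the repeated-letter flag was found to drive low-attempt predictions.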

27 pages, 2599 KiB  
Article
AdaGram in Python: An AI Framework for Multi-Sense Embedding in Text and Scientific Formulas
by Arun Josephraj Arokiaraj, Samah Ibrahim, André Then, Bashar Ibrahim and Stephan Peter
Mathematics 2025, 13(14), 2241; https://doi.org/10.3390/math13142241 - 10 Jul 2025
Viewed by 333
Abstract
The Adaptive Skip-gram (AdaGram) algorithm extends traditional word embeddings by learning multiple vector representations per word, enabling the capture of contextual meanings and polysemy. Originally implemented in Julia, AdaGram has seen limited adoption due to ecosystem fragmentation and the comparative scarcity of Julia’s machine learning tooling compared to Python’s mature frameworks. In this work, we present a Python-based reimplementation of AdaGram that facilitates broader integration with modern machine learning tools. Our implementation expands the model’s applicability beyond natural language, enabling the analysis of scientific notation—particularly chemical and physical formulas encoded in LaTeX. We detail the algorithmic foundations, preprocessing pipeline, and hyperparameter configurations needed for interdisciplinary corpora. Evaluations on real-world texts and LaTeX-encoded formulas demonstrate AdaGram’s effectiveness in unsupervised word sense disambiguation. Comparative analyses highlight the importance of corpus design and parameter tuning. This implementation opens new applications in formula-aware literature search engines, ambiguity reduction in automated scientific summarization, and cross-disciplinary concept alignment. Full article
(This article belongs to the Section E: Applied Mathematics)
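The core idea of multi-sense embeddings, keeping several vectors per word and selecting the sense that best matches the context, can be sketched without the full AdaGram machinery. This is a simplified illustration: the 2-d vectors and sense labels are invented, and AdaGram itself learns the sense vectors and their priors from data rather than taking them as given.

```python
def disambiguate(senses, context_vecs):
    """Pick the sense vector most aligned with the mean context vector.
    A toy stand-in for AdaGram's sense selection step."""
    dim = len(context_vecs[0])
    ctx = [sum(v[d] for v in context_vecs) / len(context_vecs) for d in range(dim)]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(senses, key=lambda s: dot(senses[s], ctx))

# Two hypothetical senses of "bank" in a 2-d toy embedding space.
senses = {"bank/finance": [1.0, 0.0], "bank/river": [0.0, 1.0]}
money_context = [[0.9, 0.1], [0.8, 0.0]]   # vectors of nearby finance words
water_context = [[0.1, 0.9], [0.0, 0.7]]   # vectors of nearby river words
```

The same mechanism extends to LaTeX tokens, which is what lets the implementation separate, say, a symbol's chemical and physical readings by the formulas it co-occurs with.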

26 pages, 3079 KiB  
Article
Implementing CAD API Automated Processes in Engineering Design: A Case Study Approach
by Konstantinos Sofias, Zoe Kanetaki, Constantinos Stergiou, Antreas Kantaros, Sébastien Jacques and Theodore Ganetsos
Appl. Sci. 2025, 15(14), 7692; https://doi.org/10.3390/app15147692 - 9 Jul 2025
Viewed by 595
Abstract
Increasing mechanical design complexity and volume, particularly in component-based manufacturing, require scalable, traceable, and efficient design processes. In this research, a modular in-house automation platform using Autodesk Inventor’s Application Programming Interface (API) and Visual Basic for Applications (VBA) is developed to automate recurrent tasks such as CAD file generation, drawing production, structured archiving, and cost estimation. The proposed framework was implemented and tested on three real-world case studies in a turbocharger reconditioning unit with varying degrees of automation. Findings indicate remarkable time savings of up to 90% in certain documentation tasks with improved consistency, traceability, and reduced manual intervention. Moreover, the system also facilitated automatic generation of metadata-rich Excel and Word documents, allowing centralized documentation and access to data. In comparison with commercial automation software, the solution is flexible, cost-effective, and responsive to project changes and thus suitable for small and medium enterprises. Though automation reduced workload and rendered the system more reliable, some limitations remain, especially in fully removing the need for engineering judgment in complex design scenarios. Overall, this study investigates how API-based automation can significantly increase productivity and data integrity in CAD-intensive environments and explores future integration opportunities using AI and other CAD software. Full article
(This article belongs to the Section Mechanical Engineering)

19 pages, 528 KiB  
Article
Quantum-Inspired Attention-Based Semantic Dependency Fusion Model for Aspect-Based Sentiment Analysis
by Chenyang Xu, Xihan Wang, Jiacheng Tang, Yihang Wang, Lianhe Shao and Quanli Gao
Axioms 2025, 14(7), 525; https://doi.org/10.3390/axioms14070525 - 9 Jul 2025
Viewed by 303
Abstract
Aspect-Based Sentiment Analysis (ABSA) has gained significant popularity in recent years, which emphasizes the aspect-level sentiment representation of sentences. Current methods for ABSA often use pre-trained models and graph convolution to represent word dependencies. However, they struggle with long-range dependency issues in lengthy texts, resulting in averaging and loss of contextual semantic information. In this paper, we explore how richer semantic relationships can be encoded more efficiently. Inspired by quantum theory, we construct superposition states from text sequences and utilize them with quantum measurements to explicitly capture complex semantic relationships within word sequences. Specifically, we propose an attention-based semantic dependency fusion method for ABSA, which employs a quantum embedding module to create a superposition state of real-valued word sequence features in a complex-valued Hilbert space. This approach yields a word sequence density matrix representation that enhances the handling of long-range dependencies. Furthermore, we introduce a quantum cross-attention mechanism to integrate sequence features with dependency relationships between specific word pairs, aiming to capture the associations between particular aspects and comments more comprehensively. Our experiments on the SemEval-2014 and Twitter datasets demonstrate the effectiveness of the quantum-inspired attention-based semantic dependency fusion model for the ABSA task. Full article
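The density-matrix representation the abstract builds on has a compact form: rho = sum_i p_i |w_i><w_i|, a probability-weighted sum of outer products of word vectors. A minimal real-valued sketch follows; the paper works in a complex-valued Hilbert space with learned weights, and the 2-d unit vectors here are purely illustrative.

```python
def density_matrix(vectors, probs):
    """Build rho = sum_i p_i * outer(v_i, v_i) from unit word vectors and
    mixture weights. For unit vectors and weights summing to 1, the result
    has unit trace, like a quantum density matrix."""
    dim = len(vectors[0])
    rho = [[0.0] * dim for _ in range(dim)]
    for v, p in zip(vectors, probs):
        for i in range(dim):
            for j in range(dim):
                rho[i][j] += p * v[i] * v[j]
    return rho

# Two unit vectors standing in for a two-word sequence, mixed equally.
rho = density_matrix([[1.0, 0.0], [0.6, 0.8]], [0.5, 0.5])
trace = rho[0][0] + rho[1][1]
```

Because every word in the sequence contributes to the same matrix, downstream attention can read off pairwise structure without walking a chain of hidden states, which is the claimed advantage for long-range dependencies.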

34 pages, 5774 KiB  
Article
Approach to Semantic Visual SLAM for Bionic Robots Based on Loop Closure Detection with Combinatorial Graph Entropy in Complex Dynamic Scenes
by Dazheng Wang and Jingwen Luo
Biomimetics 2025, 10(7), 446; https://doi.org/10.3390/biomimetics10070446 - 6 Jul 2025
Viewed by 408
Abstract
In complex dynamic environments, the performance of SLAM systems on bionic robots is susceptible to interference from dynamic objects or structural changes in the environment. To address this problem, we propose a semantic visual SLAM (vSLAM) algorithm based on loop closure detection with combinatorial graph entropy. First, in terms of the dynamic feature detection results of YOLOv8-seg, the feature points at the edges of the dynamic object are finely judged by calculating the mean absolute deviation (MAD) of the depth of the pixel points. Then, a high-quality keyframe selection strategy is constructed by combining the semantic information, the average coordinates of the semantic objects, and the degree of variation in the dense region of feature points. Subsequently, the unweighted and weighted graphs of keyframes are constructed according to the distribution of feature points, characterization points, and semantic information, and then a high-performance loop closure detection method based on combinatorial graph entropy is developed. The experimental results show that our loop closure detection approach exhibits higher precision and recall in real scenes compared to the bag-of-words (BoW) model. Compared with ORB-SLAM2, the absolute trajectory accuracy in high-dynamic sequences improved by an average of 97.01%, while the number of extracted keyframes decreased by an average of 61.20%. Full article
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots: 3rd Edition)
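The per-point depth check the abstract describes, re-judging feature points on dynamic-object edges via the median absolute deviation (MAD) of pixel depths, reduces to a short routine. A minimal sketch: the threshold factor k and the sample depths are hypothetical, and the paper computes MAD over a local pixel neighborhood rather than this flat list.

```python
def mad_outliers(depths, k=3.0):
    """Flag depth samples far from the median, measured in units of the
    median absolute deviation (MAD). Points flagged here would be treated
    as belonging to a different surface (e.g. background behind a moving
    object's edge). k is an illustrative threshold."""
    s = sorted(depths)
    n = len(s)
    med = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    devs = sorted(abs(d - med) for d in depths)
    mad = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    return [abs(d - med) > k * mad for d in depths]

# Depths (in meters) along an object edge: one sample falls on the far background.
flags = mad_outliers([2.0, 2.1, 1.95, 2.05, 8.0])
```

MAD is preferred over the standard deviation here because a single background pixel would inflate a variance-based threshold but barely moves the median.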
