Search Results (6,787)

Search Parameters:
Keywords = automatic generation

38 pages, 1015 KB  
Review
User Activity Detection and Identification of Energy Habits in Home Energy-Management Systems Using AI and ML: A Comprehensive Review
by Filip Durlik, Jakub Grela, Dominik Latoń, Andrzej Ożadowicz and Lukasz Wisniewski
Energies 2026, 19(3), 641; https://doi.org/10.3390/en19030641 - 26 Jan 2026
Abstract
The residential energy sector contributes substantially to global energy-related emissions. Effective energy management requires an understanding of occupant behavior through activity detection and habit identification. Recent advances in artificial intelligence (AI) and machine learning (ML) enable the automatic detection of user activities and prediction of energy needs based on historical consumption data. Non-intrusive load monitoring (NILM) facilitates device-level disaggregation without additional sensors, supporting demand forecasting and behavior-aware control in Home Energy Management Systems (HEMSs). This review synthesizes various AI and ML approaches for detecting user activities and energy habits in HEMSs from 2020 to 2025. The analyses revealed that deep learning (DL) models, with their ability to capture complex temporal and nonlinear patterns in multisensor data, achieve superior accuracy in activity detection and load forecasting, with occupancy detection reaching 95–99% accuracy. Hybrid systems combining neural networks and optimization algorithms demonstrate enhanced robustness, but challenges remain in limited cross-building generalization, insufficient interpretability of deep models, and the absence of standardized datasets. Future work should prioritize lightweight, explainable, edge-ready models, federated learning, and integration with digital twins and control systems. It should also extend energy optimization toward occupant wellbeing and grid flexibility, using standardized protocols and open datasets to ensure trustworthiness and sustainability. Full article
(This article belongs to the Collection Energy Efficiency and Environmental Issues)
26 pages, 1707 KB  
Article
Axiom Generation for Automated Ontology Construction from Texts Through Schema Mapping
by Tsitsi Zengeya, Jean Vincent Fonou-Dombeu and Mandlenkosi Gwetu
Mach. Learn. Knowl. Extr. 2026, 8(2), 29; https://doi.org/10.3390/make8020029 - 26 Jan 2026
Abstract
Ontology learning from unstructured text has become a critical task for knowledge-driven applications in Big Data and Artificial Intelligence. While significant advances have been made in the automatic extraction of concepts and relations using neural and Transformer-based models, the generation of formal Description Logic axioms required for constructing logically consistent and computationally tractable ontologies remains largely underexplored. This paper puts forward a novel pipeline for automated axiom generation through schema mapping. Our paper introduces three key innovations: a deterministic mapping framework that guarantees logical consistency (unlike stochastic Large Language Models); guaranteed formal consistency verified by OWL reasoners (unaddressed by prior statistical methods); and a transparent, scalable bridge from neural extractions to symbolic logic, eliminating manual post-processing. Technically, the pipeline builds upon the outputs of a Transformer-based fusion model for joint concept and relation extraction. We then map lexical relational phrases to formal ontological properties through a lemmatization-based schema alignment step. Entity typing and hierarchical induction are then employed to infer class structures, as well as domain and range constraints. Using RDFLib and structured data processing, we transform the extracted triples into both assertional (ABox) and terminological (TBox) axioms expressed in Description Logic. Experimental evaluation on benchmark datasets (Conll04 and NYT) demonstrates the efficacy of the approach, with expert validation showing high acceptance rates (>95%) and reasoners confirming zero inconsistencies. The pipeline thus establishes a reliable, scalable foundation for automated ontology learning, advancing the field from extraction to formally verifiable knowledge base construction. Full article
(This article belongs to the Section Data)
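The lemmatization-based schema-alignment step described above can be sketched in plain Python. All names and the toy schema below are hypothetical; the paper's pipeline uses Transformer extractions, RDFLib, and OWL reasoners rather than this crude stemmer.

```python
# Hypothetical sketch: map lexical relation phrases from extracted triples
# to formal ontology properties, then emit ABox/TBox axiom strings.

def lemmatize(phrase: str) -> str:
    """Crude stand-in for a lemmatizer: take the head word, strip suffixes."""
    word = phrase.lower().split()[0]
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Toy schema: lemmatized phrase -> (property, domain class, range class).
# Keys match the crude stemmer's output ("located" -> "locat").
SCHEMA = {
    "work": ("worksFor", "Person", "Organization"),
    "locat": ("locatedIn", "Organization", "Place"),
}

def triples_to_axioms(triples):
    """Turn (subject, phrase, object) triples into ABox/TBox axioms."""
    abox, tbox = [], []
    for subj, phrase, obj in triples:
        key = lemmatize(phrase)
        if key not in SCHEMA:
            continue  # unmapped phrases are skipped, never guessed
        prop, dom, rng = SCHEMA[key]
        abox.append(f"{prop}({subj}, {obj})")          # assertional axiom
        tbox.append(f"∃{prop}.⊤ ⊑ {dom}")              # domain constraint
        tbox.append(f"⊤ ⊑ ∀{prop}.{rng}")              # range constraint
    return abox, sorted(set(tbox))

abox, tbox = triples_to_axioms([
    ("Alice", "works for", "AcmeCorp"),
    ("AcmeCorp", "located in", "Paris"),
])
```

The deterministic lookup is what gives the mapping its consistency guarantee: the same phrase always produces the same axiom shape, unlike a stochastic generator.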
23 pages, 3441 KB  
Article
Integrating Large Language Models with Deep Learning for Breast Cancer Treatment Decision Support
by Heeseung Park, Serin Ok, Taewoo Kang and Meeyoung Park
Diagnostics 2026, 16(3), 394; https://doi.org/10.3390/diagnostics16030394 - 26 Jan 2026
Abstract
Background/Objectives: Breast cancer is one of the most common malignancies, but its heterogeneous molecular subtypes make treatment decision-making complex and patient-specific. Both pathology reports and the electronic medical record (EMR) play a critical role in appropriate treatment decisions. This study aimed to develop an integrated clinical decision support system (CDSS) that combines large language model (LLM)-based pathology analysis with deep learning-based treatment prediction to support standardized and reliable decision-making. Methods: Real-world data (RWD) obtained from a cohort of 5015 patients diagnosed with breast cancer were analyzed. Meta-Llama-3-8B-Instruct automatically extracted the TNM stage and tumor size from the pathology reports, which were then integrated with EMR variables. A multi-label classification of 16 treatment combinations was performed using six models: Decision Tree, Random Forest, GBM, XGBoost, DNN, and Transformer. Performance was evaluated using accuracy, macro/micro-averaged precision, recall, F1 score, and AUC. Results: Using combined LLM-extracted pathology and EMR features, GBM and XGBoost achieved the highest and most stable predictive performance across all feature subset configurations (macro-F1 ≈ 0.88–0.89; AUC = 0.867–0.868). Both models demonstrated strong discrimination ability and consistent recall and precision, highlighting their robustness for multi-label classification in real-world settings. Decision Tree and Random Forest showed moderate but reliable performance (macro-F1 = 0.84–0.86; AUC = 0.849–0.821), indicating their applicability despite lower predictive capability. By contrast, the DNN and Transformer models produced comparatively lower scores (macro-F1 = 0.74–0.82; AUC = 0.780–0.757), especially when using the full feature set, suggesting limited suitability for structured clinical data without strong contextual dependencies. These findings indicate that gradient-boosting ensemble approaches are better suited to tabular medical data and generate more clinically reliable treatment recommendations. Conclusions: The proposed artificial intelligence-based CDSS improves accuracy and consistency in breast cancer treatment decision support by integrating automated pathology interpretation with deep learning, demonstrating its potential utility in real-world cancer care. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
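The macro- and micro-averaged F1 scores used to compare the six models above can be illustrated with a small stdlib-only sketch; the per-label counts here are made up, not taken from the study.

```python
# Macro vs. micro F1 for multi-label classification (illustrative counts).

def f1(tp, fp, fn):
    """F1 from true-positive, false-positive, false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_micro_f1(counts):
    """counts: one (tp, fp, fn) tuple per label."""
    # Macro: average the per-label F1 scores (each label weighted equally).
    macro = sum(f1(*c) for c in counts) / len(counts)
    # Micro: pool the counts across labels, then compute one F1.
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    micro = f1(tp, fp, fn)
    return macro, micro

# A common label and a rare, poorly-recalled label (hypothetical numbers):
macro, micro = macro_micro_f1([(8, 2, 0), (1, 0, 9)])
```

The gap between the two averages shows why abstracts often report both: macro is sensitive to performance on rare treatment combinations, micro to overall counts.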
23 pages, 12806 KB  
Article
Modality-Bridging for Automated Chain-of-Thought Construction in Meteorological Reasoning: A Study on WeatherQA
by Hang Cui, Jiqing Gu, Jing Peng, Tiejun Wang and Xi Wu
Information 2026, 17(2), 116; https://doi.org/10.3390/info17020116 - 26 Jan 2026
Abstract
This study applies a modality-bridging framework to automatically construct Chain-of-Thought (CoT) reasoning from meteorological images, reducing the need for expert annotation. The proposed pipeline integrates semantic extraction, Pseudo-CoT generation, and logical fusion to produce structured reasoning chains. Using the WeatherQA benchmark, we build datasets under single-image, 3-image, and 20-image settings—with automated and Expert-Guided variants—and evaluate performance on Areas Affected and Conditional Concern tasks. The results show near-expert spatial reasoning and more compact, well-aligned CoTs with reduced-image inputs. Multi-image settings reveal challenges in integrating dense visual cues, while semantic classification remains difficult due to label ambiguity. Overall, modality-bridging offers a scalable, interpretable, and low-cost approach for multimodal meteorological reasoning. Full article
18 pages, 1838 KB  
Article
A Deep Learning Model for Wave V Peak Detection in Auditory Brainstem Response Data
by Jun Ma, Nak-Jun Sung, Sungjun Choi, Min Hong and Sungyeup Kim
Electronics 2026, 15(3), 511; https://doi.org/10.3390/electronics15030511 - 25 Jan 2026
Abstract
In this study, we propose a YOLO-based object detection algorithm for the automated and accurate identification of the fifth wave (Wave V) in auditory brainstem response (ABR) graphs. The ABR test plays a critical role in the diagnosis of hearing disorders, with the fifth wave serving as a key marker for clinical assessment. However, conventional manual detection is time-consuming and subject to variability depending on the examiner’s expertise. To address these limitations, we developed a real-time detection method that utilizes a YOLO object detection model applied to ABR graph images. Prior to YOLO training, we employed a U-Net-based preprocessing algorithm to automatically remove existing annotated peaks from the ABR images, thereby generating training data suitable for peak detection. The proposed model was evaluated in terms of precision, recall, and mean average precision (mAP). The experimental results demonstrate that the YOLO-based approach achieves high detection performance across these metrics, indicating its potential as an effective tool for reliable Wave V peak localization in audiological applications. Full article
17 pages, 1683 KB  
Article
Dual-Flow GRU and Residual MLP Fusion PROP Based Coordinated Automatic Generation Control with Renewable Energies
by Wenzao Chen, Jianyong Zheng and Xiaoshun Zhang
Energies 2026, 19(3), 610; https://doi.org/10.3390/en19030610 - 24 Jan 2026
Abstract
With the growing penetration of renewable energy, automatic generation control (AGC) faces challenges such as frequent frequency fluctuations and tie-line power deviations. Traditional proportional (PROP) allocation algorithms, limited by fixed weights, struggle to adapt to dynamic system changes. To address this, this study proposes a coordinated AGC allocation framework that fuses a dual-flow Gated Recurrent Unit (GRU) with a residual Multilayer Perceptron (MLP) on top of PROP, preserving physical prior knowledge while learning adaptive correction terms. Validated on a provincial power grid, the proposed method reduces the cumulative absolute ACE (Sum) by about 0.3–0.9% compared with PROP under 10–100 MW step disturbances. Under random disturbances, it achieves larger reductions of about 3.2% (vs. PROP) and 4.8% (vs. MLP), improving the relevant performance indicators of AGC unit allocation while maintaining interpretability and deployment feasibility, and providing an effective solution for AGC under high renewable energy penetration. Full article
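The "physical prior plus learned residual" idea can be sketched as follows: a fixed proportional split of the area control error (ACE) across units, corrected by a learned term and renormalized so the total command is unchanged. The residual values below are placeholder numbers standing in for the GRU/MLP output, not the paper's model.

```python
# Hypothetical sketch of PROP allocation with a learned residual correction.

def prop_allocate(ace_mw, capacities):
    """Classic PROP: split the ACE in proportion to unit capacity."""
    total = sum(capacities)
    return [ace_mw * c / total for c in capacities]

def corrected_allocate(ace_mw, capacities, residuals):
    """Add a learned per-unit correction, then rescale so the
    allocations still sum to the total ACE (power balance)."""
    base = prop_allocate(ace_mw, capacities)
    raw = [b + r for b, r in zip(base, residuals)]
    scale = ace_mw / sum(raw)
    return [x * scale for x in raw]

# 90 MW ACE over two units; residuals shift load toward unit 1.
alloc = corrected_allocate(90.0, [300.0, 600.0], [5.0, -2.0])
```

Keeping PROP as the base term is what preserves interpretability: the learned part is only a bounded deviation from a physically meaningful allocation.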
26 pages, 2391 KB  
Article
Hybrid Zero-Shot Node-Count Estimation and Growth-Information Sharing for Lisianthus (Eustoma grandiflorum) Cultivation in Fukushima’s Floricultural Revitalization
by Hiroki Naito, Kota Kobayashi, Osamu Inaba, Fumiki Hosoi, Norihiro Hoshi and Yoshimichi Yamashita
Agriculture 2026, 16(3), 296; https://doi.org/10.3390/agriculture16030296 - 23 Jan 2026
Abstract
This paper presents a hybrid pipeline based on zero-shot vision models for automatic node count estimation in Lisianthus (Eustoma grandiflorum) cultivation and a system for real-time growth information sharing. The multistage image analysis pipeline integrates Grounding DINO for zero-shot leaf-region detection, MiDaS for monocular depth estimation, and a YOLO-based classifier, using daily time-lapse images from low-cost fixed cameras in commercial greenhouses. The model parameters are derived from field measurements of 2024 seasonal crops (Trial 1) and then applied to different cropping seasons, growers, and cultivars (Trials 2 and 3) without any additional retraining. Trial 1 indicates high accuracy (R² = 0.930, mean absolute error (MAE) = 0.73). Generalization performance is confirmed in Trials 2 (MAE = 0.45) and 3 (MAE = 1.14); reproducibility across multiple growers and four cultivars yields MAEs of approximately ±1 node. The model effectively captures the growth progression despite variations in lighting, plant architecture, and grower practices, although errors increase during early growth stages and under unstable leaf detection. Furthermore, an automated Discord-based notification system enables real-time sharing of node trends and analytical images, facilitating communication. The feasibility of combining zero-shot vision models with cloud-based communication tools for sustainable and collaborative floricultural production is thus demonstrated. Full article
25 pages, 904 KB  
Article
Reconfiguring Strategic Capabilities in the Digital Era: How AI-Enabled Dynamic Capability, Data-Driven Culture, and Organizational Learning Shape Firm Performance
by Hassan Samih Ayoub and Joshua Chibuike Sopuru
Sustainability 2026, 18(3), 1157; https://doi.org/10.3390/su18031157 - 23 Jan 2026
Abstract
In the era of digital transformation, organizations increasingly invest in Artificial Intelligence (AI) to enhance competitiveness, yet persistent evidence shows that AI investment does not automatically translate into superior firm performance. Drawing on the Resource-Based View (RBV) and Dynamic Capabilities Theory (DCT), this study aims to explain this paradox by examining how AI-enabled dynamic capability (AIDC) is converted into performance outcomes through organizational mechanisms. Specifically, the study investigates the mediating roles of organizational data-driven culture (DDC) and organizational learning (OL). Data were collected from 254 senior managers and executives in U.S. firms actively employing AI technologies and analyzed using partial least squares structural equation modeling (PLS-SEM). The results indicate that AIDC exerts a significant direct effect on firm performance as well as indirect effects through both DDC and OL. Serial mediation analysis reveals that AIDC enhances performance by first fostering a data-driven mindset and subsequently institutionalizing learning processes that translate AI-generated insights into actionable organizational routines. Moreover, DDC plays a contingent moderating role in the AIDC–performance relationship, revealing a nonlinear effect whereby excessive reliance on data weakens the marginal performance benefits of AIDC. Taken together, these findings demonstrate the dual role of data-driven culture: while DDC functions as an enabling mediator that facilitates AI value creation, beyond a threshold it constrains dynamic reconfiguration by limiting managerial discretion and strategic flexibility. This insight exposes the "dark side" of data-driven culture and extends the RBV and DCT by introducing a boundary condition to the performance effects of AI-enabled capabilities. From a managerial perspective, the study highlights the importance of balancing analytical discipline with adaptive learning to sustain digital efficiency and strategic agility. Full article
27 pages, 6074 KB  
Article
Automatic Generation of T-Splines with Extraordinary Points Based on Domain Decomposition of Quadrilateral Patches
by João Carlos L. Peixoto, Rafael L. Rangel and Luiz Fernando Martha
Mathematics 2026, 14(3), 392; https://doi.org/10.3390/math14030392 - 23 Jan 2026
Abstract
Isogeometric analysis (IGA) is a numerical methodology for solving differential equations by employing basis functions that preserve the exact geometry of the domain. This approach is based on a class of mathematical functions known as NURBS (Non-Uniform Rational B-Splines). Representing a domain with NURBS entities often requires multiple patches, especially for complex geometries. Bivariate NURBS, defined as tensor products, enforce global refinements within a patch and, in multi-patch models, these refinements are propagated to other model patches. The use of T-Splines with extraordinary points offers a solution to this issue by enabling local refinements through unstructured meshes. The analysis of T-Spline models is performed using a Bézier extraction technique that relies on extraction operators that map Bézier functions to T-Spline functions. When generating a T-Spline model, careful attention is required to ensure that all T-Spline functions are linearly independent—a necessary condition for IGA—in order to form T-Splines that are suitable for analysis. In this sense, this work proposes a methodology to automate the generation of bidimensional unstructured meshes for IGA through T-Splines with extraordinary points. An algorithm for generating unstructured finite element meshes, based on domain decomposition of quadrilateral patches, is adapted to construct T-Spline models. Validation models demonstrate the methodology’s flexibility in generating locally refined isogeometric models. Full article
(This article belongs to the Special Issue Numerical Modeling and Applications in Mechanical Engineering)
21 pages, 1300 KB  
Article
CAIC-Net: Robust Radio Modulation Classification via Unified Dynamic Cross-Attention and Cross-Signal-to-Noise Ratio Contrastive Learning
by Teng Wu, Quan Zhu, Runze Mao, Changzhen Hu and Shengjun Wei
Sensors 2026, 26(3), 756; https://doi.org/10.3390/s26030756 - 23 Jan 2026
Abstract
In complex wireless communication environments, automatic modulation classification (AMC) faces two critical challenges: the lack of robustness under low-signal-to-noise ratio (SNR) conditions and the inefficiency of integrating multi-scale feature representations. To address these issues, this paper proposes CAIC-Net, a robust modulation classification network that integrates a dynamic cross-attention mechanism with a cross-SNR contrastive learning strategy. CAIC-Net employs a dual-stream feature extractor composed of ConvLSTM2D and Transformer blocks to capture local temporal dependencies and global contextual relationships, respectively. To enhance fusion effectiveness, we design a Dynamic Cross-Attention Unit (CAU) that enables deep bidirectional interaction between the two branches while incorporating an SNR-aware mechanism to adaptively adjust the fusion strategy under varying channel conditions. In addition, a Cross-SNR Contrastive Learning (CSCL) module is introduced as an auxiliary task, where positive and negative sample pairs are constructed across different SNR levels and optimized using InfoNCE loss. This design significantly strengthens the intrinsic noise-invariant properties of the learned representations. Extensive experiments conducted on two standard datasets demonstrate that CAIC-Net achieves competitive classification performance at moderate-to-high SNRs and exhibits clear advantages in extremely low-SNR scenarios, validating the effectiveness and strong generalization capability of the proposed approach. Full article
(This article belongs to the Section Communications)
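The InfoNCE objective at the heart of the cross-SNR contrastive module can be sketched in plain Python: embeddings of the same signal at different SNR levels form the positive pair, and other signals in the batch serve as negatives. The vectors and temperature below are illustrative values, not the paper's learned features.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a,p)/t) / (exp(sim(a,p)/t) + sum_n exp(sim(a,n)/t)) )"""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# Anchor and positive: same signal at two SNRs (nearly aligned embeddings);
# negatives: different signals. Loss is near zero when the pair is aligned.
loss = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.0], [0.0, 1.0]])
```

Minimizing this loss pulls representations of the same modulation together across SNR levels, which is what gives the learned features their noise-invariant character.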
16 pages, 1974 KB  
Article
Edible Oil Adulteration Analysis via QPCA and PSO-LSSVR Based on 3D-FS
by Si-Yuan Wang, Qi-Yang Liu, Ai-Ling Tan and Linan Liu
Processes 2026, 14(2), 390; https://doi.org/10.3390/pr14020390 - 22 Jan 2026
Abstract
A method utilizing quaternion principal component analysis (QPCA) for three-dimensional fluorescence spectral (3D FS) feature extraction is employed to identify frying oil in edible oil. Particle swarm optimization partial least squares support vector machine (PSO-LSSVR) is utilized for detecting frying oil concentration. The study includes rapeseed oil, soybean oil, peanut oil, blending oil, and corn oil samples. Adulteration involves adding frying oil to these edible oils at concentrations of 0%, 5%, 10%, 30%, 50%, 70%, and 100%. Firstly, the F7000 fluorescence spectrometer is employed to measure the 3D FS of the adulterated edible oil samples, resulting in the generation of contour maps and 3D FS projections. The excitation wavelengths utilized in these measurements are 360 nm, 380 nm, and 400 nm, while the emission wavelengths span from 220 nm to 900 nm. Secondly, leveraging the automatic peak-finding function of the spectrometer, a quaternion parallel representation model of the 3D FS data for frying oil in edible oil is established using the emission spectra data corresponding to the aforementioned excitation wavelengths. Subsequently, in conjunction with the K-nearest neighbor classification (KNN), three feature extraction methods—summation, modulus, and multiplication quaternion feature extraction—are compared to identify the optimal approach. Thirdly, the extracted features are input into KNN, particle swarm optimization support vector machine (PSO-SVM), and genetic algorithm support vector machine (GA-SVM) classifiers to ascertain the most effective discriminant model for adulterated edible oil. Ultimately, a quantitative model for adulterated edible oil is developed based on partial least squares regression, PSO-SVR and PSO-LSSVR. The results indicate that the classification accuracy of QPCA features combined with PSO-SVM achieved 100%. Furthermore, the PSO-LSSVR quantitative model exhibited the best performance. Full article
24 pages, 6118 KB  
Article
Effective Approach for Classifying EMG Signals Through Reconstruction Using Autoencoders
by Natalia Rendón Caballero, Michelle Rojo González, Marcos Aviles, José Manuel Alvarez Alvarado, José Billerman Robles-Ocampo, Perla Yazmin Sevilla-Camacho and Juvenal Rodríguez-Reséndiz
AI 2026, 7(1), 36; https://doi.org/10.3390/ai7010036 - 22 Jan 2026
Abstract
The study of muscle signal classification has been widely explored for the control of myoelectric prostheses. Traditional approaches rely on manually designed features extracted from time- or frequency-domain representations, which may limit the generalization and adaptability of EMG-based systems. In this work, an autoencoder-based framework is proposed for automatic feature extraction, enabling the learning of compact latent representations directly from raw EMG signals and reducing dependence on handcrafted features. A custom instrumentation system with three surface EMG sensors was developed and placed on selected forearm muscles to acquire signals associated with five hand movements from 20 healthy participants aged 18 to 40 years. The signals were segmented into 200 ms windows with 75% overlap. The proposed method employs a recurrent autoencoder with a symmetric encoder–decoder architecture, trained independently for each sensor to achieve accurate signal reconstruction, with a minimum reconstruction loss of 3.3×10⁻⁴ V². The encoder's latent representations were then used to train a dense neural network for gesture classification. An overall efficiency of 93.84% was achieved, demonstrating that the proposed reconstruction-based approach provides high classification performance and represents a promising solution for future EMG-based assistive and control applications. Full article
(This article belongs to the Special Issue Transforming Biomedical Innovation with Artificial Intelligence)
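The windowing step (200 ms windows with 75% overlap) can be sketched directly; the sampling rate below is an assumed example value, not taken from the paper.

```python
# Segment a raw EMG stream into overlapping fixed-length windows.
# fs_hz = 1000 is an assumed sampling rate for illustration.

def segment(signal, fs_hz=1000, window_ms=200, overlap=0.75):
    size = int(fs_hz * window_ms / 1000)   # samples per window (200)
    step = int(size * (1 - overlap))       # hop between window starts (50)
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

windows = segment(list(range(1000)))       # 1 s of dummy samples
```

With 75% overlap each new window advances by only a quarter of its length, which multiplies the number of training examples and smooths the classifier's output stream at the cost of correlated windows.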
36 pages, 1519 KB  
Review
Thinking Machines: Mathematical Reasoning in the Age of LLMs
by Andrea Asperti, Alberto Naibo and Claudio Sacerdoti Coen
Big Data Cogn. Comput. 2026, 10(1), 38; https://doi.org/10.3390/bdcc10010038 - 22 Jan 2026
Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in structured reasoning and symbolic tasks, with coding emerging as a particularly successful application. This progress has naturally motivated efforts to extend these models to mathematics, both in its traditional form, expressed through natural-style mathematical language, and in its formalized counterpart, expressed in a symbolic syntax suitable for automatic verification. Yet, despite apparent parallels between programming and proof construction, advances in formalized mathematics have proven significantly more challenging. This gap raises fundamental questions about the nature of reasoning in current LLM architectures, the role of supervision and feedback, and the extent to which such models maintain an internal notion of computational or deductive state. In this article, we review the current state-of-the-art in mathematical reasoning with LLMs, focusing on recent models and benchmarks. We explore three central issues at the intersection of machine learning and mathematical cognition: (i) the trade-offs between traditional and formalized mathematics as training and evaluation domains; (ii) the structural and methodological reasons why proof synthesis remains more brittle than code generation; and (iii) whether LLMs genuinely represent or merely emulate a notion of evolving logical state. Our goal is not to draw rigid distinctions but to clarify the present boundaries of these systems and outline promising directions for their extension. Full article
32 pages, 5019 KB  
Article
Automatic Synthesis of Planar Multi-Loop Fractionated Kinematic Chains with Multiple Joints: Topological Graph Atlas and a Mine Scaler Manipulator Case Study
by Xiaoxiong Li, Jisong Ding and Huafeng Ding
Machines 2026, 14(1), 129; https://doi.org/10.3390/machines14010129 - 22 Jan 2026
Abstract
Planar multi-loop fractionated kinematic chains (FKCs)—kinematic chains that can be decomposed into two or more coupled subchains by separating joints or links—are widely used in heavy-duty manipulators, yet their large design space makes automatic synthesis and application-oriented screening challenging. The novelty of this paper is a general automated synthesis-and-screening framework for planar fractionated kinematic chains, regardless of whether multiple joints are present; multiple-joint chains are handled via an equivalent transformation to single-joint models, enabling the construction of a deduplicated topological graph atlas. In the mine scaler manipulator case study, an 18-link, 5-DOF (N18_M5) FKC with two multiple joints is taken as the target and converted into a single-joint equivalent N20_M7 model consisting of three subchains (KC1–KC3). Atlases of the required non-fractionated kinematic chains (NFKCs) for KC1 and KC3 are generated according to their link counts and DOFs. The subchains are then combined as building blocks under joint-fractionation (A-mode) and link-fractionation (B-mode) to enumerate fractionated candidates, and a WL-hash-based procedure is employed for isomorphism discrimination to obtain a non-isomorphic N20_M7 atlas. Finally, a connectivity-calculation-based screening is performed under task-driven structural and functional constraints, yielding 249 feasible configurations for the overall manipulator arm. The proposed pipeline provides standardized representations and reproducible outputs, offering a practical and transferable route from large-scale enumeration to engineering-feasible configuration sets for planar multi-loop FKCs, including those with multiple joints. Full article
(This article belongs to the Section Machine Design and Theory)
18 pages, 5745 KB  
Article
Graph-Based Design Languages for Engineering Automation: A Formula Student Race Car Case Study
by Julian Borowski and Stephan Rudolph
Vehicles 2026, 8(1), 24; https://doi.org/10.3390/vehicles8010024 - 22 Jan 2026
Abstract
The development of modern vehicles faces an increase in complexity, as well as a need for shorter development cycles and a seamless cross-domain integration. In order to meet these challenges, a graph-based design language which formalizes and automates engineering workflows is presented and applied in a design case study to a Formula Student race car suspension system. The proposed method uses an ontology-based vocabulary definition and executable model transformations to compile design knowledge into a central and consistent design graph. This graph enables the automatic generation of consistent 3D CAD models, domain-specific simulations and suspension kinematic analyses, replacing manual and error-prone tool and data handover processes. The design language captures both the structural and dynamic behavior of the suspension, supports variant exploration and allows for integrated validation, such as 3D collision detection. The study illustrates how graph-based design languages can serve as ‘digital DNA’ for knowledge-based product development, offering a scalable, reusable platform for engineering automation. This approach enhances the digital consistency of data, the digital continuity of processes and the digital interoperability of tools across all relevant engineering disciplines in order to support the validation of early-stage designs and the optimization of complex systems. Full article
(This article belongs to the Special Issue Vehicle Design Processes, 3rd Edition)