Search Results (12,272)

Search Parameters:
Keywords = automatically modeling

20 pages, 23953 KB  
Article
Deepfake Speech Detection Using Perceptual Pathological Features Related to Timbral Attributes and Deep Learning
by Anuwat Chaiwongyen, Khalid Zaman, Kai Li, Suradej Duangpummet, Jessada Karnjana, Waree Kongprawechnon and Masashi Unoki
Appl. Sci. 2026, 16(4), 2077; https://doi.org/10.3390/app16042077 - 20 Feb 2026
Abstract
The detection of deepfake speech has become a significant research area due to rapid advancements in generative AI for speech synthesis. These technologies pose significant security risks in applications such as biometric authentication, voice-controlled systems, and automatic speaker verification (ASV) systems. Therefore, enhancing the detection capabilities of such applications is essential to mitigate potential threats. This study investigates perceptual speech-pathological features, which are commonly used to evaluate the unnaturalness of voice disorders in clinical settings, as potential indicators for detecting deepfake speech. Specifically, the timbral attributes of hardness, depth, brightness, roughness, sharpness, warmth, boominess, and reverberation are examined. The analysis reveals that these attributes provide meaningful distinctions between genuine and synthetic speech. Furthermore, the detection performance is enhanced by extending the dimensional representation of timbral attributes, enabling a more comprehensive characterization of the speech signal. This paper proposes a method that combines two models: one utilizing the different dimensions of speech-pathological features with a deep neural network (DNN), and another employing a gammatone filterbank model that simulates the auditory processing mechanism of the human cochlea with ResNet-18 architecture, improving deepfake speech detection. The proposed method is evaluated on the Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof) 2019 dataset. Experimental results demonstrate that the proposed approach outperforms baseline models in terms of Equal Error Rate (EER), achieving an EER of 5.93%. Full article
(This article belongs to the Special Issue AI in Audio Analysis: Spectrogram-Based Recognition)
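
The abstract describes fusing a speech-pathological (timbral) feature DNN with a gammatone-filterbank/ResNet-18 branch and reporting performance as EER. As a rough illustration of that kind of score-level combination (not the authors' code), the sketch below fuses two hypothetical branch scores with an assumed weight `alpha` and estimates an EER on toy labels.

```python
# Minimal sketch (not the authors' code): weighted score-level fusion of two
# spoofing detectors and an Equal Error Rate (EER) estimate. The score arrays
# and the fusion weight `alpha` are hypothetical placeholders.
import numpy as np

def eer(scores, labels):
    """Estimate EER from detector scores (higher = more likely genuine)."""
    best = 1.0
    for t in np.sort(np.unique(scores)):
        pred = scores >= t
        far = np.mean(pred[labels == 0])        # spoof accepted as genuine
        frr = np.mean(~pred[labels == 1])       # genuine rejected
        best = min(best, max(far, frr))         # EER ~ point where FAR == FRR
    return best

def fuse(timbral_dnn_scores, gammatone_resnet_scores, alpha=0.5):
    """Convex combination of the two branch scores."""
    return alpha * timbral_dnn_scores + (1 - alpha) * gammatone_resnet_scores

# Toy usage with random scores standing in for real model outputs.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
s1 = rng.normal(labels, 1.0)                    # timbral-feature DNN branch
s2 = rng.normal(labels, 1.2)                    # gammatone + ResNet-18 branch
print("fused EER:", eer(fuse(s1, s2), labels))
```
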
27 pages, 5588 KB  
Article
Study on Heat Generation Mechanisms and Circumferential Temperature Evolution Characteristics of Journal Bearings Under Different Whirl Motion
by Yang Liu, Xujiang Liu, Tingting Yang and Qi Yuan
Appl. Sci. 2026, 16(4), 2069; https://doi.org/10.3390/app16042069 - 20 Feb 2026
Abstract
To investigate the heat-generation mechanisms of journal bearings under different whirl motion and to clarify the corresponding temperature distribution characteristics, a computational fluid dynamics-based method was developed. The model incorporates temperature-dependent lubricant viscosity and employs an unsteady dynamic-mesh updating approach based on structured grids, enabling the automatic iterative tracking of the journal center during whirl motion. A thermal-effect analysis model that accounts for journal whirl trajectories was thereby established. The whirl orbit shape is characterized using elliptical eccentricity, and the effects of whirl direction, elliptical eccentricity, and whirl frequency on the circumferential temperature and pressure distributions of the journal are examined. Results show that under forward whirl, increasing whirl frequency and elliptical eccentricity initially enhances and then weakens local hydrodynamic pressure and viscous shear dissipation in the oil-film convergent region, producing pronounced first-order circumferential temperature nonuniformity and a high risk of thermal bending at intermediate frequencies. Under backward whirl, hydrodynamic effects are reduced and heat generation shifts from localized concentration to global shear dissipation, forming a relatively uniform second-order circumferential temperature field. Increasing elliptical eccentricity causes the whirl orbit to become more linear, improving load-carrying capacity and heat-transfer performance and thereby mitigating thermally induced vibration and oil-film whirl instability. Full article
(This article belongs to the Section Energy Science and Technology)
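
The study characterizes whirl orbits by their elliptical eccentricity and contrasts forward and backward whirl. The snippet below is only a kinematic illustration, not the CFD model: it generates a journal-centre trajectory for a hypothetical elliptical orbit, with `a`, `e_ell`, `f_whirl`, and the forward/backward switch as assumed parameters.

```python
# Illustrative sketch only (not the authors' CFD model): journal-centre
# trajectory for an elliptical whirl orbit that the dynamic-mesh solver
# would track. All parameters below are hypothetical.
import numpy as np

def whirl_orbit(a=1e-4, e_ell=0.5, f_whirl=50.0, forward=True, n=1000, t_end=0.04):
    """Return (t, x, y) of the journal centre for an elliptical whirl orbit.

    a      : semi-major axis of the orbit [m]
    e_ell  : elliptical eccentricity, 0 = circular, ->1 = nearly a line
    forward: True for forward whirl, False for backward whirl
    """
    b = a * np.sqrt(1.0 - e_ell**2)           # semi-minor axis
    t = np.linspace(0.0, t_end, n)
    omega = 2.0 * np.pi * f_whirl
    sign = 1.0 if forward else -1.0           # backward whirl reverses the sense
    x = a * np.cos(omega * t)
    y = sign * b * np.sin(omega * t)
    return t, x, y

t, x, y = whirl_orbit(e_ell=0.8, forward=False)
print("orbit aspect ratio b/a:", (y.max() - y.min()) / (x.max() - x.min()))
```

As the abstract notes, increasing the elliptical eccentricity drives the semi-minor axis toward zero, so the orbit approaches a straight line.
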
16 pages, 5894 KB  
Article
An Overlapping-Signal Separation Algorithm Based on a Self-Attention Neural Network for Space-Based ADS-B
by Ziwei Liu, Shuyi Tang, Yehua Cao, Shanshan Zhao, Leiyao Liao and Gengxin Zhang
Sensors 2026, 26(4), 1351; https://doi.org/10.3390/s26041351 - 20 Feb 2026
Abstract
Space-based automatic dependent surveillance–broadcast (ADS-B) systems offer the potential for comprehensive global aircraft surveillance. However, they face substantial challenges due to severe signal collisions resulting from the simultaneous reception of asynchronous ADS-B transmissions from multiple aircraft within a satellite’s expansive coverage area. Traditional collision mitigation approaches, such as serial interference cancellation and multichannel blind source separation, often have high computational costs, impose strict signal structure constraints, or rely on multiple-antenna configurations, all of which limit their practicality in satellite scenarios. To address these limitations, this paper proposes two novel deep learning–based models, designated SplitNet-2 and SplitNet-3. SplitNet-2 leverages a Transformer-inspired self-attention architecture specifically designed to separate two overlapping ADS-B signals, while SplitNet-3 employs a convolutional residual U-shaped network optimized for disentangling three simultaneous, colliding signals. Extensive simulations under realistic satellite reception conditions demonstrate that the proposed models significantly outperform conventional methods, achieving lower bit error rates (BERs) and improved demodulation accuracy. These advancements offer a promising solution to the critical problem of underdetermined signal separation in space-based ADS-B reception and significantly enhance the reliability and coverage of satellite-based ADS-B surveillance systems. Full article
(This article belongs to the Section Sensor Networks)
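
SplitNet-2 is described as a Transformer-inspired self-attention network that separates two overlapping ADS-B signals. The following is a minimal PyTorch sketch of that general idea, not the published architecture; the layer sizes, per-sample embedding, and toy mixture are assumptions.

```python
# Minimal PyTorch sketch, not the published SplitNet-2: a self-attention
# encoder that maps a mixture of two overlapping baseband signals to two
# separated estimates.
import torch
import torch.nn as nn

class TinySplitter(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_sources=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)             # per-sample embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_sources)      # one output per source

    def forward(self, mix):                             # mix: (batch, T)
        h = self.embed(mix.unsqueeze(-1))               # (batch, T, d_model)
        h = self.encoder(h)                             # self-attention over time
        return self.head(h).transpose(1, 2)             # (batch, n_sources, T)

# Toy usage: two random signals overlapped with a relative delay.
torch.manual_seed(0)
s1, s2 = torch.randn(1, 256), torch.randn(1, 256)
mix = s1 + torch.roll(s2, shifts=40, dims=1)           # asynchronous collision
est = TinySplitter()(mix)
print(est.shape)                                        # torch.Size([1, 2, 256])
```
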
21 pages, 28780 KB  
Article
DNA Barcodes and Morphology Reveal Five New Species of Phanerotoma (Hymenoptera, Braconidae, Cheloninae) from China
by Yu Fang, Wenjuan Luo, Cornelis van Achterberg, Xuexin Chen and Pu Tang
Insects 2026, 17(2), 219; https://doi.org/10.3390/insects17020219 - 20 Feb 2026
Abstract
The genus Phanerotoma Wesmael, 1838 (Hymenoptera, Braconidae, Cheloninae, Phanerotomini) is distributed across all six major zoogeographical regions, with the highest species diversity recorded in the Palaearctic Region. DNA barcoding provides a robust method for species identification, yet its effectiveness for the genus Phanerotoma is limited by the scarcity of reliable, species-level data from specific regions in public databases. This gap makes it essential to contribute comprehensive genetic resources to advance taxonomic research. This study presents a comprehensive COI dataset of 92 sequences for the genus Phanerotoma, employing both the Automatic Barcode Gap Discovery (ABGD) method for species delimitation and the bPTP model for phylogenetic inference. The integrated analytical approach revealed 18 distinct species, including five new species; all species new to science are described and illustrated, and updates of the most recent key to the Chinese species are included. Full article
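
ABGD automates the search for a "barcode gap" in the distribution of pairwise genetic distances. Purely as an illustration of that underlying idea (not the ABGD software or the study's dataset), the sketch below computes uncorrected p-distances for a few toy aligned sequences and reports the largest jump in the sorted distances.

```python
# Illustrative sketch (not the ABGD software): pairwise p-distances for
# aligned COI fragments and the largest jump in the sorted distances -- the
# "barcode gap" idea that ABGD automates. Sequences are toy placeholders.
from itertools import combinations

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

seqs = {                       # hypothetical aligned COI fragments
    "sp1_a": "ATGCTTACGGTA",
    "sp1_b": "ATGCTTACGGTC",
    "sp2_a": "ATACTAACGTTA",
    "sp2_b": "ATACTAACGTTG",
}

dists = sorted(p_distance(seqs[i], seqs[j])
               for i, j in combinations(seqs, 2))
gaps = [(dists[k + 1] - dists[k], dists[k]) for k in range(len(dists) - 1)]
gap, below = max(gaps)
print(f"largest gap {gap:.3f} above distance {below:.3f}")
```
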
18 pages, 2153 KB  
Article
MusicDiffusionNet: Enhancing Text-to-Music Generation with Adaptive Style and Multi-Scale Temporal Mixup Strategies
by Leiheng Xu, Jiancong Chen, Chengcheng Li and Jinsong Liang
Appl. Sci. 2026, 16(4), 2066; https://doi.org/10.3390/app16042066 - 20 Feb 2026
Abstract
Text-to-music generation aims to automatically produce audio content with semantic consistency and coherent musical structure based on natural language descriptions. However, existing methods still face challenges in terms of style diversity, rhythmic consistency, and long-term structural modeling. To address these issues, we propose a novel text-to-music generation model, termed MusicDiffusionNet (MDN), which integrates diffusion models with the WaveNet architecture to jointly model musical semantics and temporal structure in a continuous latent space. By decoupling high-level semantic conditioning from low-level audio generation, MDN enhances its ability to model long-range musical structure while improving semantic alignment between text and generated music with stable generation behavior. Building upon this framework, we further design two complementary mixing strategies to improve generation quality and structural coherence. Adaptive Style Mixing (ASM) performs weighted interpolation among stylistically similar music samples in the style embedding space, incorporating key and harmonic compatibility constraints to expand the style distribution while avoiding dissonance. Multi-scale Temporal Mixing (MTM) adopts beat-aware temporal decomposition, mixing, and reorganization across multiple time scales, thereby enhancing the modeling of both local and global temporal variations while preserving rhythmic periodicity and musical groove. Both strategies are integrated into the diffusion process as conditional augmentation mechanisms, contributing to improved learning stability and representational capacity under limited data conditions. Experimental results on the Audiostock dataset demonstrate that MDN and its mixing strategies achieve consistent improvements across multiple objective metrics, including generation quality, style diversity, and rhythmic coherence, validating the effectiveness of the proposed approach for text-to-music generation. Full article
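
Adaptive Style Mixing is described as weighted interpolation among stylistically similar samples in a style-embedding space. The sketch below illustrates one plausible reading of that operation, not the paper's implementation: cosine similarity to an anchor stands in for the key/harmonic compatibility constraint, and the embedding size and threshold are assumptions.

```python
# Minimal sketch, not the paper's implementation: Adaptive Style Mixing as a
# convex combination of style embeddings from stylistically similar samples,
# weighted by cosine similarity to an anchor embedding.
import numpy as np

def adaptive_style_mix(anchor, candidates, sim_threshold=0.7):
    """Interpolate the anchor style embedding with sufficiently similar ones."""
    sims = np.array([
        np.dot(anchor, c) / (np.linalg.norm(anchor) * np.linalg.norm(c))
        for c in candidates
    ])
    keep = sims >= sim_threshold              # stand-in for key/harmonic compatibility
    if not keep.any():
        return anchor                         # nothing compatible: no mixing
    weights = sims[keep] / sims[keep].sum()   # similarity-weighted interpolation
    mixed = weights @ np.stack([c for c, k in zip(candidates, keep) if k])
    return 0.5 * anchor + 0.5 * mixed         # blend back toward the anchor

rng = np.random.default_rng(1)
anchor = rng.normal(size=128)
candidates = [rng.normal(size=128) for _ in range(8)]
print(adaptive_style_mix(anchor, candidates).shape)   # (128,)
```
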
33 pages, 5295 KB  
Article
Payment Rails in Smart Contract as a Service (SCaaS) Solutions from BPMN Models
by Christian Gang Liu, Peter Bodorik and Dawn Jutla
Future Internet 2026, 18(2), 110; https://doi.org/10.3390/fi18020110 - 19 Feb 2026
Abstract
The adoption of blockchain-based smart contracts for the trading of goods and services promises greater transparency, automation, and trustlessness, but also raises challenges related to payment integration and modularity. While business analysts (BAs) can express business logic and control flow using BPMN and decision rules using DMN, payment tasks that involve concrete transfers (on-chain, off-chain, cross-chain, or hybrid) require careful implementation by developers due to platform-specific constraints and semantic richness. To address this separation of concerns, we introduce a methodology within the context of the smart contract-as-a-service (SCaaS) approach that supports (1) identifying and mapping generic payment tasks in BPMN to pre-deployed payment smart contracts, (2) augmenting BPMN models with matching payment fragments from a pattern repository, and (3) automatically transforming the augmented models into smart contracts that invoke the appropriate payment services. Our approach builds on prior work in automated BPMN-to-smart contract transformation using Discrete Event–Hierarchical State Machine (DE-HSM) multi-modal modeling to capture process semantics and nested transactions, while enabling payment service reuse, extensibility, and the separation of concerns. We illustrate this methodology via representative use cases spanning conventional, DeFi, and cross-chain payments, and discuss the implications for modular contract deployment and maintainability. Full article
17 pages, 1287 KB  
Article
Time-Dependent DCE-MRI Radiomics to Predict Response to Neoadjuvant Therapy in Breast Cancer: A Multicenter Study with External Validation
by Giulia Vatteroni, Riccardo Levi, Paola Nardi, Giulia Pruneddu, Elisa Salpietro, Federica Fici, Cinzia Monti, Rubina Manuela Trimboli and Daniela Bernardi
Diagnostics 2026, 16(4), 611; https://doi.org/10.3390/diagnostics16040611 - 19 Feb 2026
Abstract
Background: The accurate prediction of response to neoadjuvant therapy (NAT) is crucial for optimizing breast cancer management. Conventional breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) radiomics typically relies on single post-contrast phases and may not fully capture temporal enhancement patterns related to tumor heterogeneity. This study evaluated a machine learning model based on time-dependent radiomic features extracted from pre-treatment DCE-MRI for predicting NAT response in breast cancer patients. Methods: Breast DCE-MRI examinations of women scheduled for NAT, acquired on 1.5 T scanners from three different vendors, were retrospectively collected from two centers. Tumors were automatically segmented on the third post-contrast DCE image using a 3D nnUNet model trained on 30 lesions. All DCE phases were registered to the reference image, and radiomic features were extracted from a consistent tumor region of interest across all phases. Time-dependent radiomic features were computed using linear regression modeling of feature evolution over time. A random forest classifier integrating static and time-dependent radiomic features was developed to predict pathological complete response (pCR), partial response (pPR), and non-response (pNR). Model performance was evaluated using internal validation (Center 1) and an independent external test cohort (Center 2). Results: A total of 212 patients were included (173 from Center 1 and 39 from Center 2), comprising 103 pCR, 103 pPR and 6 pNR cases. Among 759 extracted features, 30 showed significant differences across response groups. Several time-dependent texture features related to intratumoral heterogeneity were significantly associated with pNR. The model achieved AUC values of 0.80, 0.81, and 0.95 in the internal validation cohort and 0.75, 0.74, and 0.86 in the external test cohort for predicting pCR, pPR, and pNR, respectively. Conclusions: Time-dependent radiomic features derived from pre-treatment breast DCE-MRI enable the accurate prediction of response to NAT, with particularly strong performance in identifying non-responders. This approach may support imaging-based risk stratification and contribute to more personalized treatment. Full article
(This article belongs to the Special Issue Advances in Breast Diagnostics)
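
The time-dependent radiomic features are computed by linear regression of each feature's evolution over the DCE phases. A minimal sketch of that step, assuming a per-phase feature matrix and hypothetical phase times (not the study's pipeline), is given below.

```python
# Illustrative sketch, not the study's pipeline: turn a per-phase radiomic
# feature matrix into "time-dependent" features by fitting a straight line to
# each feature's evolution across DCE phases and keeping slope and intercept.
import numpy as np

def time_dependent_features(feature_matrix, phase_times):
    """feature_matrix: (n_phases, n_features); returns per-feature slopes/intercepts."""
    t = np.asarray(phase_times, dtype=float)
    coeffs = np.polyfit(t, feature_matrix, deg=1)       # degree-1 fit per column
    return coeffs[0], coeffs[1]                          # slopes, intercepts

# Toy example: 6 post-contrast phases, 4 radiomic features per phase.
rng = np.random.default_rng(2)
phases = np.array([60, 120, 180, 240, 300, 360])         # seconds, hypothetical
features = rng.normal(size=(6, 4)) + np.outer(phases, [0.01, 0, -0.005, 0.02])
slopes, intercepts = time_dependent_features(features, phases)
print("per-feature slopes:", np.round(slopes, 4))
```
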
21 pages, 600 KB  
Article
The Role of the Different Components of Attention on Observational Learning in Early Primary School Children: New Insights and Educational Implications
by Francesca Foti, Valentina Lucia La Rosa, Luca Pullano, Tiziana Iaquinta and Elena Commodari
Brain Sci. 2026, 16(2), 237; https://doi.org/10.3390/brainsci16020237 - 19 Feb 2026
Abstract
Background/Objectives: Observational learning enables children to acquire new skills by observing others’ actions. Attention is widely recognized as a key supporting process and consists of multiple components that develop substantially during the early school years. Empirical evidence on the association between specific components of attention and observational learning remains limited. Therefore, this study examined the relationship between the main components of attention and observational learning among early primary school children. Methods: Sixty-eight children, aged 6–8, completed a computerized battery assessing the main components of attention (reaction times, simple and related to a choice; focused attention; short-term span of attention; divided and alternating attention) and an observational learning task where children observed an actor detecting a hidden spatial sequence and then reproduced it across detection phase (DP), exercise phase (EP), and automatization phase (AP). Correlational and regression analyses were conducted, controlling for age and gender. Results: Visual and visual–spatial focused attention emerged as significant predictors of performance during DP and EP, with higher levels of focused attention associated with fewer errors and repetitions. Choice reaction time showed phase-specific associations with error rates during early learning phases, whereas age was primarily related to performance during the AP. Conclusions: Observational learning in early primary school relies on specific components of attention rather than on attention as a unitary construct. Visual and visual–spatial focused attention plays a central role during the acquisition and consolidation of observed sequences, with implications for understanding learning from models and for educational practices based on demonstration. Full article
(This article belongs to the Section Developmental Neuroscience)
26 pages, 4675 KB  
Article
Examining Container Terminal Efficiency with Diverse Data Sources: Vessel, Truck, and Container Turnaround Times in Japanese Terminals
by Daigo Shiraishi, Wenru Zhang, Ryuichi Shibasaki and Yesim Elhan-Kayalar
Logistics 2026, 10(2), 51; https://doi.org/10.3390/logistics10020051 - 18 Feb 2026
Viewed by 65
Abstract
Background: Improving container terminal efficiency requires a comprehensive understanding of the interactions between vessel, truck, and container operations, yet existing studies often analyzed these components separately. In Japanese container terminals, where digitalization initiatives are progressing, empirical evidence based on integrated operational data remains limited. Methods: This study empirically analyzes turnaround times for vessels, trucks, and containers at five major Japanese container terminals using a composite dataset that integrates terminal operating system data, automatic identification system data, and liner service information. Descriptive statistical analyses and regression models are applied to examine vessel berthing time, truck arrival patterns and turnaround time, container dwell time within terminals, and container round-trip time outside terminals. Results: The analysis reveals distinct temporal patterns in terminal operations, including systematic morning–afternoon asymmetries and differences across cargo flows. Truck turnaround times increase with vessel calls and vary by time of day, while container dwell times are strongly influenced by terminal policies such as free-time rules. Regression analyses indicate that turnaround times are primarily affected by terminal-controlled factors. Conclusions: These findings demonstrate the importance of synchronizing quayside and landside operations. The study contributes integrated empirical evidence to the port digitalization literature and provides actionable insights for enhancing container terminal efficiency. Full article
(This article belongs to the Section Maritime and Transport Logistics)
15 pages, 1217 KB  
Review
Applications of Artificial Intelligence in Corneal Nerve Images in Ophthalmology
by Raul Hernan Barcelo-Canton, Mingyi Yu, Chang Liu, Aya Takahashi, Isabelle Xin Yu Lee and Yu-Chi Liu
Diagnostics 2026, 16(4), 602; https://doi.org/10.3390/diagnostics16040602 - 18 Feb 2026
Viewed by 44
Abstract
Corneal nerves (CNs) are essential to maintain corneal epithelial integrity and ocular surface homeostasis. In vivo confocal microscopy (IVCM) enables high-resolution visualization of CNs at the microscopic level. Traditionally, CN images must be analyzed by manual examination, which is time consuming and labor intensive. Artificial intelligence (AI) has facilitated reliable analysis of CN parameters, allowing for automatic and semiautomatic analysis of CNs. These include the identification, segmentation, and quantitative analysis of various CN parameters. This review summarizes the applications of AI-driven, automatic, and semiautomatic models in the CN analysis of IVCM images while also focusing on their diagnostic relevance in dry eye disease (DED) and neuropathic corneal pain (NCP). Recent advancements in AI have transformed IVCM image analysis by improving reproducibility and reducing operator dependency and time. AI-based algorithms have demonstrated good performance and sensitivity in identifying and quantifying CN metrics. AI has also been utilized to improve the diagnostic accuracy of DED with IVCM scans, involving multiple portions of the CNs, such as the inferior whorl region. When employed with IVCM images of patients with NCP, AI-assisted identification of microneuromas and changes in CN metrics has provided an improvement in diagnostic accuracy. Despite promising advances and outcomes, the widespread implementation of these AI models in CN image analysis requires large-scale validation. Future integration of multimodal AI algorithms remains a promising endeavor to enhance diagnostic accuracy and disease stratification. Full article
22 pages, 1264 KB  
Article
A Large Language Model-Driven Strategic Evaluation Framework via Time-Series Directed Acyclic Graphs
by Mingyin Zou, Xiaomin Zhu, Yanqing Ye, Guangrong You and Li Ma
Appl. Sci. 2026, 16(4), 2007; https://doi.org/10.3390/app16042007 - 18 Feb 2026
Viewed by 49
Abstract
Strategic evaluation is essential for decision-making under uncertainty. Yet existing qualitative and quantitative methods—including chat-oriented large language model (LLM) evaluations—are difficult to deploy in complex, dynamic environments. They often fail to represent nonlinear causal dependencies among indicators, account for temporal lags, or support scalable reasoning. To address these limitations, we propose an LLM-driven strategic evaluation framework with three innovations. First, the framework integrates LLMs across the evaluation lifecycle and couples their qualitative reasoning with quantitative model computation, improving both efficiency and deployability. Second, we introduce a Time-Series Directed Acyclic Graph (TS-DAG) indicator system that explicitly encodes causal structure and time-lagged interdependencies. Third, we develop an LLM-driven procedure that automatically derives the TS-DAG architecture and instantiates its computational parameters, reducing reliance on expert-only construction. We validate the framework through an empirical study of the new energy vehicle market, complemented by baseline algorithm comparisons and sensitivity analyses. The results show that the proposed framework can uncover core indicators, capture competitive dynamics, and explain long-term strategic outcomes across varying environmental conditions. Overall, the framework provides a robust solution for strategic evaluation in complex settings, bridging qualitative strategic reasoning and quantitative, data-driven analysis. Full article
(This article belongs to the Special Issue Applied Machine Learning in Industry 4.0)
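
The TS-DAG indicator system encodes causal edges with explicit time lags. The toy sketch below, with a hypothetical graph, weights, and lags rather than anything taken from the paper, shows how indicator values can be propagated through such a lagged DAG step by step.

```python
# Minimal sketch of the TS-DAG idea, not the paper's framework: each indicator
# is a node whose value at step t is a weighted sum of its parents' values at
# t - lag. The graph, weights, and lags below are hypothetical.
import numpy as np

# edges: (parent, child) -> (weight, lag in time steps)
edges = {
    ("policy_support", "charging_infra"): (0.6, 1),
    ("charging_infra", "ev_adoption"):    (0.4, 2),
    ("battery_cost",   "ev_adoption"):    (-0.5, 1),
}
nodes = ["policy_support", "battery_cost", "charging_infra", "ev_adoption"]  # topological order
exogenous = ("policy_support", "battery_cost")
T = 10
values = {n: np.zeros(T) for n in nodes}
values["policy_support"][:] = 1.0                      # exogenous drivers
values["battery_cost"][:] = np.linspace(1.0, 0.5, T)

for t in range(T):
    for child in nodes:
        if child in exogenous:
            continue                                   # keep exogenous inputs fixed
        contrib = 0.0
        for (parent, c), (w, lag) in edges.items():
            if c == child and t - lag >= 0:
                contrib += w * values[parent][t - lag]
        values[child][t] = contrib

print("ev_adoption trajectory:", np.round(values["ev_adoption"], 3))
```
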
25 pages, 1477 KB  
Article
A Data-Driven Method for Identifying Similarity in Transmission Sections Considering Energy Storage Regulation Capabilities
by Leibao Wang, Wei Zhao, Junru Gong, Jifeng Liang, Yangzhi Wang and Yifan Su
Electronics 2026, 15(4), 851; https://doi.org/10.3390/electronics15040851 - 17 Feb 2026
Viewed by 78
Abstract
To address the challenges of real-time control in power systems with high renewable penetration, identifying historical transmission sections similar to future scenarios enables efficient reuse of mature control strategies. However, existing data-driven identification methods exhibit two primary limitations: they typically rely on static Total Transfer Capacity (TTC), ignoring the rapid regulation capability of Energy Storage Systems (ESS) in alleviating congestion; and they employ fixed weights for similarity measurement, failing to distinguish the varying importance of different features (e.g., critical line flows vs. ordinary voltages). To overcome these issues, this paper proposes a similarity identification method for transmission sections considering ESS regulation capabilities and adaptive feature weights. First, a hierarchical decision model is utilized to screen basic grid features. An optimization model incorporating ESS charge/discharge constraints and emergency power support potential is established to calculate the Dynamic TTC, constructing a multi-scale feature set that reflects the real-time safety margin of the grid. Second, a Dispersion-Weighted Fuzzy C-Means (DW-FCM) clustering algorithm is proposed. By introducing a dispersion-weighting mechanism, the algorithm utilizes data distribution characteristics to automatically learn and assign higher weights to key features with high distinguishability during the iteration process, overcoming the subjectivity of manual weighting. Furthermore, fuzzy validity indices (XB, PC, FS) are introduced to adaptively determine the optimal number of clusters. Finally, case studies on the IEEE 39-bus system verify that the proposed method significantly improves identification accuracy compared to traditional methods and provides more reliable references for dispatching decisions. Full article
(This article belongs to the Special Issue Security Defense Technologies for the New-Type Power System)
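
The DW-FCM algorithm assigns higher weights to features with greater distinguishing power during clustering. As a rough approximation of that idea (not the paper's algorithm), the sketch below runs standard fuzzy c-means with a weighted distance whose per-feature weights come from a simple normalised-variance dispersion measure.

```python
# Minimal sketch, not the paper's DW-FCM: fuzzy c-means with a weighted
# distance whose per-feature weights come from a simple dispersion measure
# (normalised variance). The measure, fuzzifier m, and toy data are assumptions.
import numpy as np

def dispersion_weights(X):
    """Higher-variance (more distinguishing) features get higher weight."""
    var = X.var(axis=0)
    return var / var.sum()

def weighted_fcm(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    w = dispersion_weights(X)
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # weighted squared distances to each centre
        d2 = np.array([((X - c) ** 2 * w).sum(axis=1) for c in centers]).T
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # standard FCM update
    return centers, U, w

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 3)),
               np.random.default_rng(2).normal(3, 1, (50, 3))])
centers, U, w = weighted_fcm(X)
print("feature weights:", np.round(w, 3))
```
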
30 pages, 4364 KB  
Article
Research on an Automatic Solution Method for Plane Frames Based on Computer Vision
by Dejiang Wang and Shuzhe Fan
Sensors 2026, 26(4), 1299; https://doi.org/10.3390/s26041299 - 17 Feb 2026
Viewed by 108
Abstract
In the internal force analysis of plane frames, traditional mechanics solutions require the cumbersome derivation of equations and complex numerical calculations, a process that is both time-consuming and error-prone. While general-purpose Finite Element Analysis (FEA) software offers rapid and precise calculations, it is limited by tedious modeling pre-processing and a steep learning curve, making it difficult to meet the demand for rapid and intelligent solutions. To address these challenges, this paper proposes a deep learning-based automatic solution method for plane frames, enabling the extraction of structural information from printed plane structural schematics and automatically completing the internal force analysis and visualization. First, images of printed plane frame schematics are captured using a smartphone, followed by image pre-processing steps such as rectification and enhancement. Second, the YOLOv8 algorithm is utilized to detect and recognize the plane frame, obtaining structural information including node coordinates, load parameters, and boundary constraints. Finally, the extracted data is input into a static analysis program based on the Matrix Displacement Method to calculate the internal forces of nodes and elements, and to generate the internal force diagrams of the frame. This workflow was validated using structural mechanics problem sets and the analysis of a double-span portal frame structure. Experimental results demonstrate that the detection accuracy of structural primitives reached 99.1%, and the overall solution accuracy of mechanical problems in the final test set exceeded 90%, providing a more convenient and efficient computational method for the analysis of plane frames. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
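
The static analysis stage relies on the Matrix Displacement Method. The sketch below shows one standard building block of that method, the 6x6 plane-frame element stiffness matrix and its rotation to global axes; it is textbook material rather than the authors' program, and E, A, I, and the element geometry are hypothetical values.

```python
# Illustrative sketch of one Matrix Displacement Method step, not the authors'
# solver: local stiffness matrix of a 2D frame element and its rotation to
# global axes.
import numpy as np

def frame_element_stiffness(E, A, I, x1, y1, x2, y2):
    L = np.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L
    a, b, d = E * A / L, 12 * E * I / L**3, 6 * E * I / L**2
    e, f = 4 * E * I / L, 2 * E * I / L
    k_local = np.array([
        [ a,  0,  0, -a,  0,  0],
        [ 0,  b,  d,  0, -b,  d],
        [ 0,  d,  e,  0, -d,  f],
        [-a,  0,  0,  a,  0,  0],
        [ 0, -b, -d,  0,  b, -d],
        [ 0,  d,  f,  0, -d,  e],
    ])
    T = np.array([                            # rotation local -> global
        [ c, s, 0,  0, 0, 0],
        [-s, c, 0,  0, 0, 0],
        [ 0, 0, 1,  0, 0, 0],
        [ 0, 0, 0,  c, s, 0],
        [ 0, 0, 0, -s, c, 0],
        [ 0, 0, 0,  0, 0, 1],
    ])
    return T.T @ k_local @ T                  # element stiffness in global axes

k = frame_element_stiffness(E=2.1e11, A=5e-3, I=8e-5, x1=0, y1=0, x2=0, y2=3.0)
print(k.shape, np.allclose(k, k.T))           # (6, 6) True
```
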
29 pages, 3365 KB  
Article
A Hybrid Automatic Model for Circle Detection in X-Ray Imagery: A Case Study on Hip Prosthesis Wear
by Mehmet Öztürk and Yahia Adwan
Bioengineering 2026, 13(2), 235; https://doi.org/10.3390/bioengineering13020235 - 17 Feb 2026
Viewed by 220
Abstract
This study presents a fully automatic hybrid framework for circle detection and geometric feature extraction from anteroposterior (AP) X-ray images. Detecting circular structures in X-ray imagery is challenging due to low contrast, noise, and metal-induced artifacts, which often limit the robustness of purely learning-based or purely geometric approaches. To address these challenges, a hybrid deep learning and computer vision pipeline is proposed that combines data-driven region localization with robust geometric fitting. A YOLOv5-based detector is first employed to identify a compact region of interest (ROI) containing circular components. Within this ROI, edge-based processing using Canny detection is applied, followed by an Edge-Snap refinement stage and robust RANSAC-based circle fitting with a Hough-transform fallback to ensure anatomically plausible circle estimation. The resulting circle centers and radii provide stable geometric parameters that can be consistently extracted across images with varying contrast, noise levels, and prosthesis appearances. The applicability of the proposed framework is demonstrated through a case study on hip prosthesis wear analysis, where the automatically detected circle parameters are used to compute medial, superior, and resultant displacement components using established two-dimensional radiographic formulations. Experimental evaluation on AP hip radiographs shows that the YOLOv5 detector achieves high ROI localization performance (mAP@0.5 = 0.971) and that the hybrid pipeline produces consistent circle parameters across longitudinal image sequences. Overall, the proposed method provides an end-to-end automatic solution for robust circle detection in X-ray imagery, with hip prosthesis wear presented solely as a case study without clinical or diagnostic claims. Full article
(This article belongs to the Section Biosignal Processing)
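
The geometric stage combines Canny edge detection, RANSAC circle fitting, and a Hough-transform fallback. The condensed sketch below illustrates that sequence on a synthetic ring image; the thresholds, RANSAC tolerance, and synthetic ROI are assumptions, and the code is not the published pipeline.

```python
# Condensed sketch of the geometric stage only, not the full published
# pipeline: Canny edges inside a stand-in ROI, a simple RANSAC circle fit
# from 3-point samples, and a HoughCircles fallback.
import cv2
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Centre and radius of the circle through three points (None if collinear)."""
    A = np.array([[p2[0]-p1[0], p2[1]-p1[1]],
                  [p3[0]-p1[0], p3[1]-p1[1]]], dtype=float)
    b = 0.5 * np.array([p2[0]**2-p1[0]**2 + p2[1]**2-p1[1]**2,
                        p3[0]**2-p1[0]**2 + p3[1]**2-p1[1]**2])
    if abs(np.linalg.det(A)) < 1e-6:
        return None
    cx, cy = np.linalg.solve(A, b)
    return cx, cy, np.hypot(p1[0]-cx, p1[1]-cy)

def ransac_circle(points, n_iter=500, tol=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        circ = circle_from_3pts(*sample)
        if circ is None:
            continue
        cx, cy, r = circ
        err = np.abs(np.hypot(points[:, 0]-cx, points[:, 1]-cy) - r)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = circ, inliers
    return best

roi = np.zeros((200, 200), np.uint8)                    # synthetic stand-in for a YOLO ROI crop
cv2.circle(roi, (100, 100), 60, 255, 2)
edges = cv2.Canny(roi, 50, 150)
ys, xs = np.nonzero(edges)
pts = np.column_stack([xs, ys]).astype(float)
circle = ransac_circle(pts) if len(pts) >= 3 else None
if circle is None:                                      # Hough-transform fallback
    h = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                         param1=150, param2=30)
    circle = tuple(h[0, 0]) if h is not None else None
print("centre/radius:", circle)
```
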
25 pages, 1558 KB  
Article
Towards Scalable Monitoring: An Interpretable Multimodal Framework for Migration Content Detection on TikTok Under Data Scarcity
by Dimitrios Taranis, Gerasimos Razis and Ioannis Anagnostopoulos
Electronics 2026, 15(4), 850; https://doi.org/10.3390/electronics15040850 - 17 Feb 2026
Viewed by 155
Abstract
Short-form video platforms such as TikTok (TikTok Pte. Ltd., Singapore) host large volumes of user-generated, often ephemeral, content related to irregular migration, where relevant cues are distributed across visual scenes, on-screen text, and multilingual captions. Automatically identifying migration-related videos is challenging due to this multimodal complexity and the scarcity of labeled data in sensitive domains. This paper presents an interpretable multimodal classification framework designed for deployment under data-scarce conditions. We extract features from platform metadata, automated video analysis (Google Cloud Video Intelligence), and Optical Character Recognition (OCR) text, and compare text-only, OCR-only, and vision-only baselines against a multimodal fusion approach using Logistic Regression, Random Forest, and XGBoost. In this pilot study, multimodal fusion consistently improves class separation over single-modality models, achieving an F1-score of 0.92 for the migration-related class under stratified cross-validation. Given the limited sample size, these results are interpreted as evidence of feature separability rather than definitive generalization. Feature importance and SHAP analyses identify OCR-derived keywords, maritime cues, and regional indicators as the most influential predictors. To assess robustness under data scarcity, we apply SMOTE to synthetically expand the training set to 500 samples and evaluate performance on a small held-out set of real videos, observing stable results that further support feature-level robustness. Finally, we demonstrate scalability by constructing a weakly labeled corpus of 600 videos using the identified multimodal cues, highlighting the suitability of the proposed feature set for weakly supervised monitoring at scale. Overall, this work serves as a methodological blueprint for building interpretable multimodal monitoring pipelines in sensitive, low-resource settings. Full article
(This article belongs to the Special Issue Multimodal Learning for Multimedia Content Analysis and Understanding)
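
The framework fuses platform metadata, video-analysis, and OCR features before classification and uses SMOTE to counter class imbalance. The sketch below is a generic illustration of feature-level fusion on synthetic data, not the deployed framework; it uses scikit-learn and the imbalanced-learn package.

```python
# Minimal sketch of the fusion idea, not the deployed framework: concatenate
# per-modality feature vectors and train a classifier, rebalancing with SMOTE.
# Feature dimensions and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
n = 120
meta = rng.normal(size=(n, 5))            # platform metadata features
vision = rng.normal(size=(n, 10))         # video-analysis label scores
ocr = rng.normal(size=(n, 20))            # OCR keyword / TF-IDF features
X = np.hstack([meta, vision, ocr])        # early (feature-level) fusion
y = (rng.random(n) < 0.25).astype(int)    # imbalanced: ~25% migration-related

# Note: in a real pipeline SMOTE should be fit inside each CV fold to avoid leakage.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_res, y_res, cv=5, scoring="f1")
print("stratified CV F1:", scores.round(3))
```
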