Search Results (791)

Search Parameters:
Keywords = automated sample generation
22 pages, 3511 KB  
Article
Automated Mid-Surface Mesh Reconstruction for Automotive Plastic Parts Based on Point Cloud Registration
by Yan Ma, Hongbin Tang, Zehui Huang, Jianjiao Deng, Jingchun Wang, Shibin Wang, Zhiguo Zhang and Zhenjiang Wu
Vehicles 2026, 8(4), 89; https://doi.org/10.3390/vehicles8040089 - 10 Apr 2026
Abstract
In automotive Computer-Aided Engineering (CAE), the fidelity of high-quality shell element meshes is fundamentally governed by the accuracy of mid-surface geometry extraction. Conventional manual extraction for complex automotive plastic components is labor-intensive, error-prone, and often compromises mesh quality. To address these issues, this paper proposes an automated mid-surface mesh reconstruction method based on point cloud registration, establishing an integrated framework comprising “Multimodal Registration—Displacement Binding—Surface Correction.” Using a source part with an ideal mid-surface as a template, the method integrates Random Sample Consensus (RANSAC) and Iterative Closest Point (ICP) for rigid registration and Coherent Point Drift (CPD) for non-rigid registration to achieve high-precision alignment between the target and source outer-surface point clouds. Subsequently, a K-Nearest Neighbor (K-NN) search-based displacement binding mechanism smoothly transfers the outer-surface displacement field to the source mid-surface point cloud. Following position correction and surface smoothing, a complete and high-quality target mid-surface mesh is generated. Experimental results on typical plastic snap-fit components demonstrate that the normal projection error between the generated mid-surface and the manually refined “gold standard” mesh is less than 0.05 mm. The processing time per component is approximately 38 s, representing an efficiency improvement of over 73% compared to manual extraction using commercial CAE software. This method effectively mitigates common issues such as mid-surface distortion and feature loss, offering a high-precision, fully automated solution for automotive CAE pre-processing. Full article
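The K-NN displacement-binding step described in this abstract can be illustrated with a short sketch. The function name, the choice of k, and the inverse-distance weighting below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def bind_displacements(outer_pts, outer_disp, mid_pts, k=4, eps=1e-9):
    """Transfer a displacement field from outer-surface points to
    mid-surface points via inverse-distance-weighted K-NN averaging."""
    tree = cKDTree(outer_pts)
    dists, idx = tree.query(mid_pts, k=k)          # k nearest outer points
    w = 1.0 / (dists + eps)                        # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)              # normalize per query point
    return (w[..., None] * outer_disp[idx]).sum(axis=1)

# Toy example: a uniform +0.1 shift on the outer surface should
# transfer unchanged to any interior (mid-surface) point.
outer = np.random.rand(50, 3)
disp = np.full((50, 3), 0.1)
mid = np.random.rand(5, 3)
moved = mid + bind_displacements(outer, disp, mid)
```

In the paper's pipeline the outer-surface displacements would come from the RANSAC/ICP rigid and CPD non-rigid registration stages; the uniform shift here merely stands in for that field.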

27 pages, 6782 KB  
Article
Development and Evaluation of a Data Glove-Based System for Assisting Puzzle Solving
by Shashank Srikanth Bharadwaj, Kazuma Sato and Lei Jing
Sensors 2026, 26(8), 2341; https://doi.org/10.3390/s26082341 - 10 Apr 2026
Abstract
Many hands-on tasks remain difficult to fully automate because they require human dexterity and flexible object handling. Data gloves offer a promising interface for sensing hand–object interactions, but most prior systems focus on gesture recognition or object classification rather than closed-loop, step-by-step task guidance. In this work, we develop and evaluate a tactile-sensing operation support system using an e-textile data glove with 88 pressure sensors, a tactile pressure sheet for placement verification, and a GUI that provides step-by-step instructions. As a core component, a CNN classifies the grasped state as bare hand or one of four discs with 93.3% accuracy using 16,175 training samples collected from five participants. In a user study on the Tower of Hanoi task as a controlled proxy for multi-step manipulation, the system reduced mean solving time by 51.5% (from 242.6 s to 117.8 s), reduced the number of disc movements (35.4 to 15, about 20 fewer moves on average), and lowered perceived workload (NASA-TLX) by 53.1% (from 68.5 to 32.1), while achieving a SUS score of 75. These results demonstrate the feasibility of tactile-based step verification and guidance in a controlled multi-step task; broader generalization requires evaluation with larger and more diverse participant groups and tasks. Full article
(This article belongs to the Section Intelligent Sensors)
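The glove system's step-by-step guidance hinges on verifying each move against the task's rules. Below is a minimal sketch of the rule check such a verifier needs for the Tower of Hanoi proxy task; function names are hypothetical, and the paper's actual verification additionally uses the CNN's grasp classification and the pressure-sheet placement signal:

```python
def legal_move(pegs, src, dst):
    """Tower of Hanoi rule check: a disc may move from peg `src` to
    peg `dst` only if `src` is non-empty and its top disc is smaller
    than the top disc on `dst` (or `dst` is empty)."""
    if not pegs[src]:
        return False
    return not pegs[dst] or pegs[src][-1] < pegs[dst][-1]

def apply_move(pegs, src, dst):
    """Apply a legal move in place; raise on an illegal one, which is
    where a guidance GUI would flag the step to the user."""
    if not legal_move(pegs, src, dst):
        raise ValueError("illegal move")
    pegs[dst].append(pegs[src].pop())

# Toy 3-disc game: discs encoded by size, top of peg = end of list.
pegs = [[3, 2, 1], [], []]
apply_move(pegs, 0, 2)   # move disc 1 onto the empty third peg
```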

27 pages, 1880 KB  
Article
Hierarchical Acoustic Encoding of Distress in Pigs: Disentangling Individual, Developmental, and Emotional Effects with Subject-Wise Validation
by Irenilza de Alencar Nääs, Danilo Florentino Pereira, Alexandra Ferreira da Silva Cordeiro and Nilsa Duarte da Silva Lima
Animals 2026, 16(8), 1148; https://doi.org/10.3390/ani16081148 - 9 Apr 2026
Abstract
Automated pig-welfare monitoring needs scalable, non-invasive signals that work across ages and individuals. A key methodological contribution of this study is the use of subject-wise validation, which ensures generalization to unseen animals and prevents inflated accuracy caused by growth-related and individual ‘voice’ differences. Vocalizations can help, but growth and individual “voice” differences can confound distress patterns and overstate accuracy without subject-wise validation. In our study, we explicitly accounted for individual variability by including animal identity as a random effect in mixed models and by using grouped cross-validation, where models were tested only on pigs not seen during training. This approach ensures that the reported accuracy reflects generalization across different individuals rather than memorization of specific vocal signatures. We analyzed 2221 vocal samples from 40 pigs (20 males, 20 females) recorded across four growth phases (farrowing, nursery, growing, finishing) under six conditions (pain, hunger, thirst, cold stress, heat stress, normal). Acoustic features extracted in Praat included energy, duration, intensity, pitch, and formants (F1–F4). Using blockwise variance decomposition, we quantified contributions of distress exposure, growth phase, and sex, and estimated the additional variance explained by animal identity. Distress exposure dominated intensity and spectral traits, particularly Formant 2, whereas the growth phase produced systematic shifts in duration and pitch. Animal identity added a modest but consistent increment in explained variance (~+0.02–0.03 R2 beyond sex, phase, and distress). For prediction, we used 5-fold cross-validation grouped by animal. A Random Forest achieved a modest balanced accuracy of 0.609 and macro-F1 of 0.597; pain was most separable (recall 0.825), while other states showed moderate recall, indicating overlap. 
These results support hierarchical acoustic encoding of distress and establish a benchmark for precision welfare monitoring. Furthermore, they highlight that resolving complex physiological overlaps, such as heat stress and resource competition, requires a shift from unimodal acoustic models to multimodal Precision Livestock Farming (PLF) systems that integrate bioacoustics with continuous environmental and behavioral data streams.
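The subject-wise validation this abstract emphasizes amounts to a grouped k-fold split, mirroring scikit-learn's GroupKFold. The function below is a minimal numpy sketch, not the authors' code:

```python
import numpy as np

def grouped_kfold(groups, n_splits=5, seed=0):
    """Yield (train_idx, test_idx) pairs such that no group (animal)
    appears in both the train and test sides of the same fold."""
    uniq = np.unique(groups)
    rng = np.random.default_rng(seed)
    rng.shuffle(uniq)
    for held_out in np.array_split(uniq, n_splits):
        test = np.isin(groups, held_out)
        yield np.where(~test)[0], np.where(test)[0]

# 40 pigs, several vocalizations each: every fold tests unseen animals.
groups = np.repeat(np.arange(40), 3)
train_idx, test_idx = next(grouped_kfold(groups))
```

Because whole animals are held out, any accuracy measured on the test folds reflects generalization across individuals rather than memorized vocal signatures.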

20 pages, 1820 KB  
Article
ID-MSNet: An Enhanced Multi-Scale Network with Convolutional Attention for Pixel-Level Steel Defect Segmentation
by Mohammadreza Saberironaghi, Jing Ren and Alireza Saberironaghi
Algorithms 2026, 19(4), 294; https://doi.org/10.3390/a19040294 - 9 Apr 2026
Abstract
Automated pixel-level detection of steel surface defects is a critical challenge in manufacturing quality control, complicated by the variation in defect size and shape, low contrast with background textures, and the diversity of defect patterns. This paper proposes ID-MSNet, an enhanced version of the UNet3+ architecture, designed specifically for the segmentation of three common steel surface defect types: inclusions, patches, and scratches. The proposed architecture introduces three targeted modifications: (1) a multi-scale feature learning module (MSFLM) in the encoder that uses dilated convolutions at multiple rates to capture contextual features across different scales, combined with DropBlock regularization and batch normalization to improve generalization; (2) an improved down-sampling (IDS) module that replaces standard max-pooling with learnable strided convolutions fused via 1 × 1 convolution, preserving richer feature representations; and (3) a convolutional block attention module (CBAM) integrated into the skip connections to selectively focus the model on spatially and channel-wise relevant defect regions. Experiments on the publicly available SD-saliency-900 dataset demonstrate that ID-MSNet achieved an 86.19% mIoU, outperforming all compared state-of-the-art segmentation models while using only 6.7 million parameters—approximately 75% fewer than the original UNet3+. These results establish ID-MSNet as a strong and efficient baseline for steel surface defect segmentation, with potential applicability to automated quality inspection in broader manufacturing contexts. Full article
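The reported 86.19% mIoU is the standard mean intersection-over-union over classes; a minimal sketch of how it is computed on label masks:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection-over-union across classes, the usual metric
    for comparing pixel-level segmentation models."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 masks with two classes: both classes overlap on 2 of 4 pixels.
pred   = np.array([[0, 1, 1], [0, 0, 1]])
target = np.array([[0, 1, 1], [0, 1, 0]])
```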

32 pages, 7135 KB  
Article
Evolutionary Multi-Objective Prompt Learning for Synthetic Text Data Generation with Black-Box Large Language Models
by Diego Pastrián, Nicolás Hidalgo, Víctor Reyes and Erika Rosas
Appl. Sci. 2026, 16(8), 3623; https://doi.org/10.3390/app16083623 - 8 Apr 2026
Abstract
High-quality training data are essential for the performance and generalization of artificial intelligence systems, particularly in dynamic environments such as adaptive stream processing for disaster response. However, constructing large and representative datasets remains costly and time-consuming, especially in domains where real data are scarce or difficult to obtain. Large Language Models (LLMs) provide powerful capabilities for synthetic text generation, yet the quality of generated data strongly depends on the design of input prompts. Prompt engineering is therefore critical, but it remains largely manual and difficult to scale, particularly in black-box settings where model internals are inaccessible. This work introduces EVOLMD-MO, a multi-objective evolutionary framework for automated prompt learning aimed at generating high-quality synthetic text datasets using black-box LLMs. The proposed approach formulates prompt optimization as a multi-objective search problem in which candidate prompts evolve through genetic operators guided by two complementary objectives: semantic fidelity to reference data and generative diversity of the produced samples. To support scalable optimization, the framework integrates a modular multi-agent architecture that decouples prompt evolution, LLM interaction, and evaluation mechanisms. The evolutionary process is implemented using the NSGA-II algorithm, enabling the discovery of diverse Pareto-optimal prompts that balance semantic preservation and diversity. Experimental evaluation using large-scale disaster-related social media data demonstrates that the proposed approach consistently improves prompt quality across generations while maintaining a stable trade-off between fidelity and diversity. Compared with a single-objective baseline, EVOLMD-MO explores a significantly broader semantic search space and produces more diverse yet semantically coherent synthetic datasets. 
These results indicate that multi-objective evolutionary prompt learning constitutes a promising strategy for black-box LLM-driven data generation, with potential applicability to adaptive data analytics and real-time decision-support systems in highly dynamic environments, pending broader validation across domains and models.
(This article belongs to the Special Issue Resource Management for AI-Centric Computing Systems)
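At the core of the NSGA-II search described in this abstract is non-dominated (Pareto) sorting over the two objectives. A minimal sketch with hypothetical (fidelity, diversity) scores, omitting NSGA-II's crowding distance and the fronts beyond the first:

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated points when every objective is
    maximized. A point is dominated if some other point is >= on
    all objectives and strictly > on at least one."""
    n = len(scores)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (fidelity, diversity) scores for four candidate prompts;
# the last is dominated by the second.
scores = np.array([[0.9, 0.2], [0.7, 0.6], [0.5, 0.8], [0.6, 0.5]])
```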

14 pages, 2118 KB  
Article
AI Method for Classification of Diagnosis of Near-Infrared Breast Lesion Images
by Kaiquan Chen, Fangyang Shen, Honggang Wang, Zhengchao Dong, Jizhong Xiao, Ming Ma, Afroza Aktar, Christopher Chow and Wenxiong Zhang
AI 2026, 7(4), 133; https://doi.org/10.3390/ai7040133 - 7 Apr 2026
Abstract
In near-infrared optical breast lesion screening and diagnosis systems, high-speed four-dimensional scanners can dynamically acquire tens of thousands of lesion images within a five-minute period. Currently, manual computer annotation is required to generate standard samples from these scanned breast lesion images, a process that depends heavily on physicians with clinical expertise. On average, a single physician can annotate only approximately ten samples per working day. As a result, this process is time-consuming and labor-intensive, and the collected samples often suffer from low accuracy, large variability, and limited diagnostic reliability. Several AI-based annotation tools, such as QuPath, HALO AI™, and X-AnyLabeling, have been developed to assist this process. However, these tools are primarily manual or semi-automated and are unable to provide rapid and high-precision recognition. To address these limitations, this study proposes a new AI-based method for the rapid, accurate, and fully automated detection and diagnosis of breast lesions. The proposed approach complements existing AI-based annotation and diagnostic methods by enabling automated detection and classification of breast lesion samples. The proposed system employs a deep learning–based classification framework to construct a professional-level AI diagnostic model. The system automatically generates diagnostic outputs based on the annotation criteria used by professional physicians, including positive/negative classification and accuracy metrics. Compared with conventional manual diagnostic methods, the proposed approach provides faster and more reliable diagnostic estimates for new patients. These results demonstrate the potential of the proposed AI-based method to advance automated breast lesion screening and diagnosis and to contribute to future research and clinical applications in this field. Full article
(This article belongs to the Section AI Systems: Theory and Applications)

21 pages, 2700 KB  
Article
Bridging Stochasticity and Fuzziness: Automated Construction of Triangular Fuzzy Numbers via LLM Temperature Sampling for Managerial Decision Support
by Meng Zhang, Wenjie Bai, Yuanfei Guo, Wenlong Xu, Ranjun Wang, Yingdong Chen and Yuliang Zhao
Information 2026, 17(4), 349; https://doi.org/10.3390/info17040349 - 6 Apr 2026
Abstract
Traditional fuzzy decision-making often relies on manual expert calibration, which is labor-intensive and susceptible to subjective bias. This study addresses these limitations by proposing a novel framework that transforms the intrinsic probabilistic outputs of Large Language Models (LLMs) into Triangular Fuzzy Numbers (TFNs). We introduce a multi-temperature sampling strategy coupled with weighted quantile aggregation and an adaptive interval adjustment mechanism to systematically map model stochasticity to fuzzy possibility distributions. Empirical validation on a structured prototype dataset demonstrates that the proposed method achieves high consistency with expert consensus, with GPT-4.2 exhibiting superior central accuracy and Gemini-2.5 excelling in uncertainty coverage. Furthermore, in complex unstructured scenarios involving business public opinion, the integration of Model Context Protocol (MCP) and Retrieval-Augmented Generation (RAG) significantly corrects cognitive biases and converges uncertainty boundaries. This research establishes a rigorous pathway from generative AI probabilities to fuzzy decision theory, offering a robust automated solution for quantitative risk assessment and intelligent decision support. Full article
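The mapping from repeated LLM samples to a triangular fuzzy number can be sketched with plain quantiles. The paper's weighted quantile aggregation and adaptive interval adjustment are more elaborate, so treat this as the basic idea only; the quantile levels are illustrative assumptions:

```python
import numpy as np

def tfn_from_samples(samples, lo_q=0.05, hi_q=0.95):
    """Collapse repeated LLM scores (sampled at several temperatures)
    into a triangular fuzzy number (l, m, u): the lower/upper quantiles
    bound the support and the median anchors the peak."""
    s = np.asarray(samples, dtype=float)
    l, m, u = np.quantile(s, [lo_q, 0.5, hi_q])
    return l, m, u

def tfn_membership(x, l, m, u):
    """Triangular membership function of the resulting fuzzy number."""
    if x <= l or x >= u:
        return 0.0
    return (x - l) / (m - l) if x <= m else (u - x) / (u - m)
```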

34 pages, 56063 KB  
Article
Deep Learning-Based Intelligent Analysis of Rock Thin Sections: From Cross-Scale Lithology Classification to Grain Segmentation for Quantitative Fabric Characterization
by Wenhao Yang, Ang Li, Liyan Zhang and Xiaoyao Qin
Electronics 2026, 15(7), 1509; https://doi.org/10.3390/electronics15071509 - 3 Apr 2026
Abstract
Quantitative microstructure evaluation of sedimentary rock thin sections is essential for revealing reservoir flow mechanisms and assessing reservoir quality. However, traditional manual identification is inefficient and prone to subjectivity. Although current deep learning approaches have improved efficiency, most remain confined to single tasks and lack a pathway to translate image recognition into quantifiable geological parameters. Moreover, these methods struggle with cross-scale feature extraction and accurate grain boundary localization in complex textures. To overcome these limitations, this study proposes a three-stage automated analysis framework integrating intelligent lithology identification, sandstone grain segmentation, and quantitative analysis of fabric parameters. To address scale discrepancies in lithology discrimination, Rock-PLionNet integrates a Partial-to-Whole Context Fusion (PWC-Fusion) module and the Lion optimizer, which mitigates cross-scale feature inconsistencies and enables accurate screening of target sandstone samples. Subsequently, to correct boundary deviations caused by low contrast and grain adhesion, the PetroSAM-CRF strategy integrates polarization-aware enhancement with dense conditional random field (DenseCRF)-based probabilistic refinement to extract precise grain contours. Based on these outputs, the framework automatically calculates key fabric parameters, including grain size and roundness. Experiments on 3290 original multi-source thin-section images show that Rock-PLionNet achieves a classification accuracy of 96.57% on the test set. Furthermore, PetroSAM-CRF reduces segmentation bias observed in general-purpose models under complex texture conditions, enabling accurate parameter estimation with a roundness error of 2.83%. 
Overall, this study presents an intelligent workflow linking microscopic image recognition with quantitative analysis of geological fabric parameters, providing a practical pathway for digital petrographic evaluation in hydrocarbon exploration.
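The fabric parameters computed from the segmented grain contours can be illustrated with the common area/perimeter proxies; the paper's exact grain size and roundness definitions may differ:

```python
import math

def circularity(area, perimeter):
    """Shape circularity 4*pi*A / P**2: 1.0 for a perfect circle,
    smaller for elongated or ragged grain outlines.  A common proxy
    for roundness, not necessarily the paper's definition."""
    return 4.0 * math.pi * area / perimeter ** 2

def equivalent_diameter(area):
    """Grain size as the diameter of a circle of equal area."""
    return 2.0 * math.sqrt(area / math.pi)

# A circle of radius 10 scores 1.0; a 10 x 40 rectangle scores lower.
r = 10.0
c_circle = circularity(math.pi * r * r, 2 * math.pi * r)
c_rect = circularity(10 * 40, 2 * (10 + 40))
```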

24 pages, 3958 KB  
Article
MEG-RRT*: A Hierarchical Hybrid Path Planning Framework for Warehouse AGVs Using Multi-Objective Evolutionary Guidance
by Qingli Wu, Qichao Tang, Lei Ma, Duo Zhao and Jieyu Lei
Sensors 2026, 26(7), 2221; https://doi.org/10.3390/s26072221 - 3 Apr 2026
Abstract
Autonomous guided vehicle (AGV) navigation in high-density warehouses faces significant challenges due to narrow aisles and complex U-shaped traps. In such environments, traditional sampling-based path planning algorithms often converge slowly and produce suboptimal paths. To solve these issues, a novel hierarchical hybrid planning framework named MEG-RRT* (Multi-objective Evolutionary Guided RRT*) is proposed in this study. The proposed MEG-RRT* integrates an optimization engine based on NSGA-II into the sampling process. It guides exploration direction away from local minima by jointly optimizing convergence efficiency and safety-related objectives. Furthermore, a geometry-aware execution layer is introduced to improve motion through narrow passages and to refine the path structure. This layer includes radar-guided steering, adaptive step-size control, and ancestor shortcut operations. Comparative experiments were conducted in simulated scenarios of complex narrow passages and high-density warehouses to verify the superiority of the proposed MEG-RRT*. In complex narrow passages, the proposed algorithm achieves a 100% success rate; it also reduces convergence time by 43.5% compared to standard RRT* and by 44.9% compared to Informed-RRT*. In warehouse environments, it generates smooth, kinematically favorable paths that are 39% shorter than those produced by RRT-Connect. These results demonstrate that MEG-RRT* balances exploration efficiency and solution optimality, making it well suited for automated logistics applications. Full article
(This article belongs to the Section Vehicular Sensing)
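The "ancestor shortcut" refinement mentioned in this abstract can be sketched as a greedy polyline shortcut pass: skip intermediate waypoints whenever a direct segment is collision-free. The collision predicate and the greedy farthest-first ordering here are illustrative assumptions, not the paper's exact operation:

```python
def shortcut(path, collision_free):
    """Greedy shortcut pass over a waypoint list: from each kept
    waypoint, jump to the farthest later waypoint reachable by a
    collision-free straight segment."""
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1                     # fall back to nearer waypoints
        out.append(path[j])
        i = j
    return out

# In free space, any zig-zag polyline collapses to its endpoints.
straight = shortcut([(0, 0), (1, 2), (2, 0), (3, 3)], lambda a, b: True)
```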

22 pages, 4903 KB  
Article
A Robust Lithium-Ion Battery Capacity Prediction Framework Using Multi-Point Voltage Temporal Features and an OOF-Trained Adaptive Gating Mechanism
by Lun-Yi Lung, Bo-Hao Zhou and Cheng-Chien Kuo
Energies 2026, 19(7), 1745; https://doi.org/10.3390/en19071745 - 2 Apr 2026
Abstract
Accurate capacity prediction is paramount for ensuring the operational safety and reliability of lithium-ion battery management systems (BMS). Nevertheless, contemporary data-driven approaches often grapple with limited feature representation—frequently relying solely on aggregate charging duration or noise measures—which compromises the robustness of these approaches. To address these limitations, this study proposes a robust framework integrating multi-point voltage temporal sampling (MVTS) with an adaptive gated hybrid ensemble learning strategy. The MVTS method is first used to extract high-dimensional geometric features from the constant-current (CC) charging phase (3.9 V–4.15 V), effectively capturing subtle degradation patterns. Subsequently, an unsupervised isolation forest algorithm is incorporated for automated anomaly detection and rectification, thereby augmenting data stability prior to training. In the fusion stage, a heterogeneous hybrid model comprising eXtreme gradient boosting (XGBoost) and long short-term memory (LSTM) is constructed. An adaptive gating mechanism based on random forest (RF) is added to dynamically weight the base learners. To mitigate data leakage during the stacking process, this study employs an out-of-fold (OOF) training strategy based on leave-one-battery-out (LOBO) cross-validation to generate unbiased meta-features for the gating model. This mechanism dynamically modulates fusion weights contingent upon the multi-point voltage features and model discrepancies, thereby accommodating diverse aging stages and capacity degradation patterns. Experimental results from the NASA battery aging dataset demonstrate that the proposed framework significantly outperforms single-model baselines in terms of RMSE and R2, exhibiting superior adaptability and predictive precision. Full article
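The leakage-free out-of-fold (OOF) meta-feature construction described above can be sketched with a leave-one-battery-out loop. The mean predictor below merely stands in for the paper's XGBoost/LSTM base learners; names and shapes are illustrative:

```python
import numpy as np

def oof_meta_features(X, y, groups, fit_predict):
    """Out-of-fold predictions under leave-one-battery-out CV: each
    sample's meta-feature comes from a base model that never saw that
    sample's battery, so nothing leaks into the gating model."""
    meta = np.empty(len(y))
    for g in np.unique(groups):
        test = groups == g
        meta[test] = fit_predict(X[~test], y[~test], X[test])
    return meta

def mean_predictor(X_tr, y_tr, X_te):
    """Trivial base learner: predict the training-set mean capacity."""
    return np.full(len(X_te), y_tr.mean())

# Three hypothetical batteries, two cycles each.
X = np.zeros((6, 1))
y = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
groups = np.array([0, 0, 1, 1, 2, 2])
meta = oof_meta_features(X, y, groups, mean_predictor)
```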

27 pages, 5640 KB  
Article
An Integrated Hardware–Software Platform for Automated Thermodynamic Characterization of Gas–Solid Interfaces Using a Resonant Microcantilever
by Chunfeng Luo, Haitao Yu, Naidong Wang, Fan Long, Hua Hong, Weijie Zhou and Chang Chen
Micromachines 2026, 17(4), 428; https://doi.org/10.3390/mi17040428 - 31 Mar 2026
Abstract
Measurement of material thermodynamic parameters plays a crucial role in understanding the interactions between host materials and guest species. Therefore, developing a general-purpose system for thermodynamic parameter measurement is of great significance. In this work, a complete gas–solid interface thermodynamic parameter measurement platform was developed based on isothermal adsorption and a resonant microcantilever testing platform. Unlike conventional adsorption measurement systems that rely on manual, multi-cycle adsorption–desorption processes, the proposed platform integrates an automated hardware–software architecture together with a stepwise concentration-gradient protocol and on-chip thermal desorption, enabling continuous and efficient acquisition of adsorption isotherms. The study includes: (i) construction of an improved thermodynamic parameter extraction model based on the Sips model, (ii) development of an integrated resonant microcantilever control and acquisition module using a modified Fourier algorithm, and (iii) implementation of an automated testing and data analysis software framework developed in LabVIEW based on the Queued Message Handler (QMH) architecture. The system was validated from both hardware performance and material testing perspectives using CO2 adsorption on H-SSZ-13 as a representative case. The results show that the system achieves a maximum sampling rate of 10,000 pts (points per second), with minimum root-mean-square (RMS) noise levels of 0.0083 Hz for frequency and 0.0109 °C for temperature. The PID temperature-control settling time (0.1%) is 24.9 ms, and the frequency-response settling time (0.01%) is 9.6 ms. Thermodynamic parameters including entropy change (ΔS), enthalpy change (ΔH), and Gibbs free energy change (ΔG) were successfully extracted during CO2 adsorption at 294.15 K under different relative uptakes. 
Reproducibility was verified across three independent samples, yielding a standard deviation of 9.1 J·mol−1 for ΔS at 2% relative uptake and relative standard deviations of 6.85% and 8.12% for ΔH and ΔG, respectively. These results demonstrate that the proposed thermodynamic measurement platform features a simple architecture, superior performance, and high reproducibility in gas–solid interface thermodynamic studies, showing strong potential for future commercialization.
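The extraction model builds on the Sips isotherm, and the Gibbs relation ties the three reported quantities together. A minimal sketch using the standard Sips form and ΔG = ΔH − TΔS; the paper's improved extraction model adds refinements not shown here:

```python
def sips(p, q_max, K, n):
    """Standard Sips isotherm: uptake vs. pressure, reducing to
    Langmuir at n = 1 and Freundlich-like behaviour otherwise."""
    kp = (K * p) ** n
    return q_max * kp / (1.0 + kp)

def gibbs(delta_h, delta_s, T=294.15):
    """Gibbs free energy change (J/mol) from the measured enthalpy
    and entropy changes, at the paper's 294.15 K test temperature."""
    return delta_h - T * delta_s
```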

18 pages, 9267 KB  
Article
Differentiable Automated Design of Automotive Freeform AR-HUD Optical Systems
by Chengxiang Fan, Jihong Zheng, Xinjun Wan, Xiaoxiao Wei and Yunfeng Nie
Photonics 2026, 13(4), 337; https://doi.org/10.3390/photonics13040337 - 30 Mar 2026
Abstract
The automotive augmented reality head-up display (AR-HUD) system projects critical driving information directly into the driver’s line of sight, enhancing driving safety, user experience, and navigation efficiency. However, due to the intrinsic asymmetry of vehicle windshields, existing optical configurations are difficult to use as effective design starting points. The asymmetric transmission region of the windshield causes the AR-HUD optical system to deviate significantly from the YOZ plane, increasing the complexity of system design and optimization. To address these challenges, this paper proposes an automated design method for automotive AR-HUD optical systems. Given the windshield geometry and system design specifications, a normal-guided iterative construction method is first employed to generate a high-performance initial optical structure with low distortion. Subsequently, differentiable ray tracing combined with optimization algorithms is employed to further improve system performance. Based on the proposed method, an AR-HUD optical system with a 130 mm × 50 mm eye-box and a 13° × 4° field of view was designed. The design results indicate that the maximum optical distortion is 0.51%. At five sampled eye positions within the eye-box, the MTF exceeds 0.5 at the spatial frequency of 6 lp/mm, and the dynamic distortion remains below 5.36′. Finally, a complete experimental prototype was established, and the experimental results verified the feasibility and effectiveness of the proposed automated design method. Full article
(This article belongs to the Special Issue Emerging Topics in Freeform Optics)

34 pages, 393 KB  
Article
Symmetry-Aware Dual-Encoder Architecture for Context-Aware Grammatical Error Correction in Chinese Learner English: Toward a Spaced-Repetition Instructional Structure Sensitive to Individual Differences
by Jun Tian
Symmetry 2026, 18(4), 579; https://doi.org/10.3390/sym18040579 - 28 Mar 2026
Viewed by 300
Abstract
Grammatical error correction (GEC) for Chinese learner English is still dominated by sentence-level modeling, which limits discourse-level consistency and weakens adaptation to learner-specific error profiles. From an instructional perspective, these limitations also reduce the value of automated feedback as a basis for spaced-repetition instructional structures sensitive to individual differences. This study proposes a symmetry-aware dual-encoder architecture for context-aware GEC in Chinese learner English. A context encoder captures preceding-sentence information, while a source encoder integrates BERT-based semantic representations with Bi-GRU-based syntactic features for the current sentence. A gated decoder performs asymmetric fusion of local and contextual evidence. To better reflect corpus-level tendencies in Chinese learner English, a CLEC-informed augmentation strategy generates synthetic errors using empirical category frequencies as a coarse sampling prior. Experiments on CoNLL-2014, JFLEG, and CLEC show consistent improvements over strong neural baselines in F0.5 and GLEU under the current desktop-oriented implementation setting. Nevertheless, the integration of BERT, dual encoders, and gated decoding introduces non-negligible computational overhead, and the present system is therefore better suited to desktop writing-support scenarios than to strict real-time or large-scale online deployment. The proposed framework thus provides a practical technical basis for personalized grammar feedback and for future spaced-repetition instructional designs in ESL writing support. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Natural Language Processing)
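The augmentation idea in the abstract — sample an error category from corpus-level frequencies, then inject a synthetic error of that category — can be sketched as follows. The categories, their weights, and the corruption rules below are illustrative placeholders, not the paper's CLEC-derived distribution or its actual rules.

```python
import random

# Illustrative error categories with made-up weights standing in for the
# CLEC-derived empirical frequencies used as a coarse sampling prior.
ERROR_PRIORS = {
    "article_deletion": 0.35,
    "verb_agreement": 0.25,
    "preposition_swap": 0.25,
    "noun_number": 0.15,
}

def corrupt(tokens, category, rng):
    """Apply one rule-based synthetic error of the sampled category."""
    tokens = list(tokens)
    if category == "article_deletion":
        idx = [i for i, t in enumerate(tokens) if t.lower() in ("a", "an", "the")]
        if idx:
            del tokens[rng.choice(idx)]
    elif category == "verb_agreement":
        for i, t in enumerate(tokens):
            if t in ("is", "are"):
                tokens[i] = "are" if t == "is" else "is"
                break
    elif category == "preposition_swap":
        swaps = {"in": "on", "on": "in", "at": "in"}
        for i, t in enumerate(tokens):
            if t in swaps:
                tokens[i] = swaps[t]
                break
    elif category == "noun_number":
        for i, t in enumerate(tokens):
            if t.endswith("s") and len(t) > 3:
                tokens[i] = t[:-1]      # crude singularization
                break
    return tokens

def augment(sentence, rng):
    """Sample a category from the empirical prior, then corrupt the sentence."""
    cats, weights = zip(*ERROR_PRIORS.items())
    category = rng.choices(cats, weights=weights, k=1)[0]
    return " ".join(corrupt(sentence.split(), category, rng)), category

rng = random.Random(0)
noisy, cat = augment("The students are working on the projects", rng)
```

Because categories are drawn with `random.choices` using the frequency weights, synthetic training data reproduces the corpus-level error mix rather than injecting errors uniformly, which is the point of a frequency-informed prior.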
33 pages, 19800 KB  
Article
Leveraging Geospatial Techniques and Publicly Available Datasets to Develop a Cost-Effective, Digitized National Sampling Frame: A Case Study of Armenia
by Saida Ismailakhunova, Avralt-Od Purevjav, Tsenguunjav Byambasuren and Sarchil H. Qader
ISPRS Int. J. Geo-Inf. 2026, 15(4), 145; https://doi.org/10.3390/ijgi15040145 - 26 Mar 2026
Viewed by 353
Abstract
The lack of a reliable national sampling frame poses a major challenge for conducting representative population and household surveys, particularly in developing countries affected by displacement and rapid territorial change. This study addresses this gap by developing Armenia’s first digitized national sampling frame, in a setting where accessible survey frames are severely limited. We introduce an innovative pre-EA tool to semi-automatically construct the digital sampling frame using publicly available datasets. Compared with traditional approaches, this method offers several advantages: it enables rapid, semi-automated frame construction, minimizes resource requirements, eliminates geometric errors associated with manual digitization, and produces pre-census EAs (pre-EAs) that both nest within administrative boundaries and align with visible ground features. The approach also integrates gridded population data to reflect recent urbanization and migration, generating pre-census EAs and urban–rural classifications suitable for national surveys. The sampling frame was successfully applied in the World Bank’s “Listening to Armenia” survey. Overall, the study demonstrates that automated, data-driven approaches can efficiently produce accurate, scalable, and adaptable national sampling frames, offering potential utility in other countries facing similar constraints. Full article
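The core mechanics the abstract describes — grouping gridded population cells into pre-census EAs that nest within administrative units, with an urban–rural tag derived from population density — can be sketched in miniature. The density threshold, target EA size, and flat cell representation below are illustrative assumptions, not the paper's actual parameters or data model.

```python
from collections import defaultdict

URBAN_DENSITY = 1500      # persons per km^2 -- illustrative threshold
TARGET_EA_POP = 600       # illustrative target population per pre-census EA
CELL_AREA_KM2 = 1.0       # assumed area of one population-grid cell

def _make_ea(admin_id, pops):
    density = sum(pops) / (len(pops) * CELL_AREA_KM2)
    return {"admin_id": admin_id,
            "population": sum(pops),
            "type": "urban" if density >= URBAN_DENSITY else "rural"}

def build_pre_eas(cells):
    """Group population-grid cells into pre-census EAs that nest within
    administrative units. `cells` is a list of (admin_id, population) pairs;
    cells are accumulated until the target EA population is reached."""
    by_admin = defaultdict(list)
    for admin_id, pop in cells:
        by_admin[admin_id].append(pop)

    eas = []
    for admin_id, pops in by_admin.items():
        batch, batch_pop = [], 0.0
        for pop in pops:
            batch.append(pop)
            batch_pop += pop
            if batch_pop >= TARGET_EA_POP:   # close the EA at the target size
                eas.append(_make_ea(admin_id, batch))
                batch, batch_pop = [], 0.0
        if batch:                            # remainder forms a final, smaller EA
            eas.append(_make_ea(admin_id, batch))
    return eas
```

Because cells never cross the `admin_id` grouping, every generated EA nests inside its administrative unit by construction — the same nesting property the abstract highlights, though the real tool additionally aligns EA boundaries with visible ground features.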
36 pages, 6193 KB  
Article
Preliminary Research on the Possibility of Automating the Identification of Pollen Grains in Melissopalynology Using AI, with Particular Emphasis on Computer Image Analysis Methods
by Kacper Litwińczyk, Michał Podralski, Paulina Skorynko, Ewa Malinowska, Zuzanna Czarnota, Beata Bąk and Artur Janowski
Sensors 2026, 26(7), 2043; https://doi.org/10.3390/s26072043 - 25 Mar 2026
Viewed by 401
Abstract
Melissopalynological analysis is essential for determining the botanical origin of honey, corbicular pollen and bee bread, as well as detecting adulteration. However, it traditionally relies on labor-intensive and subjective manual pollen identification. As a proof-of-concept preceding full honey analysis, this study evaluates artificial intelligence methods for automated pollen grain recognition under controlled conditions. Hazel (Corylus avellana L.) and dandelion (Taraxacum officinale F.H. Wigg.) were used as model taxa to validate the proposed approach before its application to real varietal honey samples. This study introduces a novel three-stage pipeline that decouples object detection from feature extraction, utilizing YOLOv12m for region-of-interest generation and, for the first time in melissopalynology, DINOv3 ConvNeXt-B for deep feature representation. Microscopic images acquired at 400× magnification yielded 2498 dandelion and 1941 hazel pollen grains. The detector achieved an mAP@0.5 of 0.936 with an F1 score of 0.88, while the classifier reached 98.1% accuracy with good class separability (Silhouette coefficient: 0.407). The primary technical contribution is the systematic optimization of the detection-to-classification interface. Context-aware bounding box expansion (12%) and an optimized IoU-NMS threshold (0.65) significantly improve the stability of morphological feature extraction, as confirmed by ablation studies. Computational cost reporting further supports reproducible, deployment-oriented comparison. The results confirm the feasibility of this AI-based framework as an intermediate step toward automated melissopalynological analysis, with future work focusing on standardized microscopy protocols and expanded pollen databases for varietal honey authentication. Full article
(This article belongs to the Special Issue Sensing and Machine Learning Control: Progress and Applications)
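Two of the interface optimizations the abstract reports — expanding each detector box before feature extraction and suppressing duplicates with an IoU threshold of 0.65 — are standard operations that can be sketched directly. The code below is a generic illustration, not the paper's implementation; in particular, it reads the "12% expansion" as 12% of the box's width and height split across both sides, which is one plausible interpretation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thr=0.65):
    """Greedy non-maximum suppression at the IoU threshold the abstract
    reports as optimal (0.65). Returns indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

def expand(box, ratio=0.12):
    """Grow a detector box by `ratio` of its width/height (half per side)
    so the downstream feature extractor sees context around the grain."""
    x1, y1, x2, y2 = box
    dw, dh = (x2 - x1) * ratio / 2, (y2 - y1) * ratio / 2
    return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)
```

A looser NMS threshold keeps more near-duplicate detections, while a tighter one risks merging adjacent grains; the ablation-backed 0.65 sits between those failure modes, and the context margin from `expand` stabilizes the crops fed to the DINOv3 feature extractor.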