Search Results (167)

Search Parameters:
Keywords = custom handle design

20 pages, 3216 KB  
Review
Stapes Prostheses in Otosclerosis Surgery: Materials, Design Innovations, and Future Perspectives
by Luana-Maria Gherasie, Viorel Zainea, Razvan Hainarosie, Andreea Rusescu, Irina-Gabriela Ionita, Ruxandra-Oana Alius and Catalina Voiosu
Actuators 2025, 14(10), 502; https://doi.org/10.3390/act14100502 - 17 Oct 2025
Viewed by 223
Abstract
Background: Stapes prostheses represent one of the earliest and most widely applied “biomedical actuators” designed to restore hearing in patients with otosclerosis. Unlike conventional actuators, which convert energy into motion, stapes prostheses function as passive or smart micro-actuators, transmitting and modulating acoustic energy through the ossicular chain. Objective: This paper provides a comprehensive analysis of stapes prostheses from an engineering and biomedical perspective, emphasizing design principles, materials science, and recent innovations in smart actuators based on shape-memory alloys combined with surgical applicability. Methods: A narrative review of the evolution of stapes prostheses was consolidated by institutional surgical experience. Comparative evaluation focused on materials (Teflon, Fluoroplastic, Titanium, Nitinol) and design solutions (manual crimping, clip-on, heat-activated prostheses). Special attention was given to endoscopic stapes surgery, which highlights the ergonomic and functional requirements of new device designs. Results: Traditional fluoroplastic and titanium pistons provide reliable sound conduction but require manual crimping, with a higher risk of incus necrosis and displacement. Innovative prostheses, particularly those manufactured from nitinol, act as self-crimping actuators activated by heat, improving coupling precision and reducing surgical trauma. Emerging designs, including bucket-handle and malleus pistons, expand applicability to complex or revision cases. Advances in additive manufacturing and middle ear cement fixation offer opportunities for customized, patient-specific actuators. Conclusions: Stapes prostheses have evolved from simple passive pistons to innovative biomedical actuators exploiting shape-memory and biocompatible materials. Future developments in stapes prosthesis design are closely linked to 3D printing technologies. 
These developments have the potential to enhance acoustic performance, durability, and patient outcomes, thereby bridging the gap between otologic surgery and biomedical engineering. Full article
(This article belongs to the Section Actuators for Medical Instruments)

24 pages, 4022 KB  
Article
Dynamic Vision Sensor-Driven Spiking Neural Networks for Low-Power Event-Based Tracking and Recognition
by Boyi Feng, Rui Zhu, Yue Zhu, Yan Jin and Jiaqi Ju
Sensors 2025, 25(19), 6048; https://doi.org/10.3390/s25196048 - 1 Oct 2025
Viewed by 715
Abstract
Spiking neural networks (SNNs) have emerged as a promising model for energy-efficient, event-driven processing of asynchronous event streams from Dynamic Vision Sensors (DVSs), a class of neuromorphic image sensors with microsecond-level latency and high dynamic range. Nevertheless, challenges persist in optimising training and effectively handling spatio-temporal complexity, which limits their potential for real-time applications on embedded sensing systems such as object tracking and recognition. Targeting this neuromorphic sensing pipeline, this paper proposes the Dynamic Tracking with Event Attention Spiking Network (DTEASN), a novel framework designed to address these challenges by employing a pure SNN architecture, bypassing conventional convolutional neural network (CNN) operations, and reducing GPU resource dependency, while tailoring the processing to DVS signal characteristics (asynchrony, sparsity, and polarity). The model incorporates two innovative, self-developed components: an event-driven multi-scale attention mechanism and a spatio-temporal event convolver, both of which significantly enhance spatio-temporal feature extraction from raw DVS events. An Event-Weighted Spiking Loss (EW-SLoss) is introduced to optimise the learning process by prioritising informative events and improving robustness to sensor noise. Additionally, a lightweight event tracking mechanism and a custom synaptic connection rule are proposed to further improve model efficiency for low-power, edge deployment. The efficacy of DTEASN is demonstrated through empirical results on event-based (DVS) object recognition and tracking benchmarks, where it outperforms conventional methods in accuracy, latency, event throughput (events/s) and spike rate (spikes/s), memory footprint, spike-efficiency (energy proxy), and overall computational efficiency under typical DVS settings. 
By virtue of its event-aligned, sparse computation, the framework is amenable to highly parallel neuromorphic hardware, supporting on- or near-sensor inference for embedded applications. Full article
(This article belongs to the Section Intelligent Sensors)

26 pages, 5143 KB  
Article
SymOpt-CNSVR: A Novel Prediction Model Based on Symmetric Optimization for Delivery Duration Forecasting
by Kun Qi, Wangyu Wu and Yao Ni
Symmetry 2025, 17(10), 1608; https://doi.org/10.3390/sym17101608 - 28 Sep 2025
Viewed by 396
Abstract
Accurate prediction of food delivery time is crucial for enhancing operational efficiency and customer satisfaction in real-world logistics and intelligent dispatch systems. To address this challenge, this study proposes a novel symmetric optimization prediction framework, termed SymOpt-CNSVR. The framework is designed to leverage the strengths of both deep learning and statistical learning models in a complementary architecture. It employs a Convolutional Neural Network (CNN) to extract and assess the importance of multi-feature data. An Enhanced Superb Fairy-Wren Optimization Algorithm (ESFOA) is utilized to optimize the diverse hyperparameters of the CNN, forming an optimal adaptive feature extraction structure. The significant features identified by the CNN are then fed into a Support Vector Regression (SVR) model, whose hyperparameters are optimized using Bayesian optimization, for final prediction. This combination reduces the overall parameter search time and incorporates probabilistic reasoning. Extensive experimental evaluations demonstrate the superior performance of the proposed SymOpt-CNSVR model. It achieves outstanding results with an R2 of 0.9269, MAE of 3.0582, RMSE of 4.1947, and MSLE of 0.1114, outperforming a range of benchmark and state-of-the-art models. Specifically, the MAE was reduced from 4.713 (KNN) and 5.2676 (BiLSTM) to 3.0582, and the RMSE decreased from 6.9073 (KNN) and 6.9194 (BiLSTM) to 4.1947. The results confirm the framework’s powerful capability and robustness in handling high-dimensional delivery time prediction tasks. Full article
(This article belongs to the Section Computer)
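The CNN-to-SVR handoff described above can be sketched with scikit-learn. This is a minimal stand-in, not the authors' implementation: synthetic features replace the ESFOA-tuned CNN stage, and a plain grid search substitutes for Bayesian optimization of the SVR hyperparameters.

```python
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

# Stand-in for the CNN-selected delivery features; in the paper these
# come from an ESFOA-optimized CNN, which is not reproduced here.
X, y = make_regression(n_samples=300, n_features=8, noise=3.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# The paper tunes SVR with Bayesian optimization; an exhaustive grid
# search over a small space serves as a dependency-free substitute.
search = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": ["scale", 0.01]},
    scoring="neg_mean_absolute_error",
    cv=3,
)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print(f"test MAE = {mean_absolute_error(y_te, search.predict(X_te)):.2f}")
```

Swapping `GridSearchCV` for a Bayesian optimizer (e.g. scikit-optimize's `BayesSearchCV`) keeps the rest of the pipeline unchanged.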

23 pages, 11963 KB  
Article
CIRS: A Multi-Agent Machine Learning Framework for Real-Time Accident Detection and Emergency Response
by Sadaf Ayesha, Aqsa Aslam, Muhammad Hassan Zaheer and Muhammad Burhan Khan
Sensors 2025, 25(18), 5845; https://doi.org/10.3390/s25185845 - 19 Sep 2025
Viewed by 907
Abstract
Road traffic accidents remain a leading cause of fatalities worldwide, and the consequences are considerably worsened by delayed detection and emergency response. Although several machine learning-based approaches have been proposed, accident detection systems are not widely deployed, and most existing solutions fail to handle the growing complexity of modern traffic environments. This study introduces Collaborative Intelligence for Road Safety (CIRS), a novel, multi-agent, machine-learning-based framework designed for real-time accident detection, semantic scene understanding, and coordinated emergency response. Each agent in CIRS is designed for a distinct role: perception, classification, description, localization, or decision-making, working collaboratively to enhance situational awareness and response efficiency. These agents integrate advanced models: YOLOv11 for high-accuracy accident detection and VideoLLaMA3 for context-rich scene description. CIRS bridges the gap between low-level visual analysis and high-level situational awareness. Extensive evaluation on a custom dataset comprising 5200 accident and 4800 non-accident frames demonstrates the effectiveness of the proposed approach. YOLOv11 achieves a top-1 accuracy of 86.5% and a top-5 accuracy of 100%, ensuring reliable real-time detection. VideoLLaMA3 outperforms other vision-language models with superior factual accuracy and fewer hallucinations, generating strong results on the BLEU (0.0755), METEOR (0.2258), and ROUGE-L (0.3625) metrics. The decentralized multi-agent architecture of CIRS enables scalability, reduced latency, and the timely dispatch of emergency services while minimizing false positives. Full article
(This article belongs to the Section Intelligent Sensors)

21 pages, 40956 KB  
Article
The apex MCC: Blueprint of an Open-Source, Secure, CCSDS-Compatible Ground Segment for Sounding Rockets, CubeSats, and Small Lander Missions
by Nico Maas, Sebastian Feles and Jean-Pierre de Vera
Eng 2025, 6(9), 246; https://doi.org/10.3390/eng6090246 - 17 Sep 2025
Cited by 1 | Viewed by 643
Abstract
The operation of microgravity research missions, such as sounding rockets, CubeSats, and small landers, typically relies on proprietary mission control infrastructures, which limit reproducibility, portability, and interdisciplinary use. In this work, we present an open-source blueprint for a distributed ground-segment architecture designed to support telemetry, telecommand, and mission operations across institutional and geographic boundaries. The system integrates containerized services, broker bridging for publish–subscribe communication, CCSDS-compliant telemetry and telecommand handling, and secure virtual private networks with two-factor authentication. A modular mission control system based on Yamcs was extended with custom plug-ins for CRC verification, packet reassembly, and command sequencing. The platform was validated during the MAPHEUS-10 sounding rocket mission, where it enabled uninterrupted remote commanding between Sweden and Germany and achieved end-to-end command–response latencies of ~550 ms under flight conditions. To the best of our knowledge, this represents the first open-source ground-segment framework deployed in a space mission. By combining elements from computer science, aerospace engineering, and systems engineering, this work demonstrates how interdisciplinary integration enables resilient, reproducible, and portable mission operations. The blueprint offers a practical foundation for future interdisciplinary research missions, extending beyond sounding rockets to CubeSats, ISS experiments, and planetary landers. This study is part two of a three-part series describing the apex Mk.2/Mk.3 experiments, open-source ground segment, and service module simulator. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)
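The abstract does not detail the CRC plug-in itself. As a hedged illustration, CCSDS telecommand frames conventionally carry a CRC-16 of the CCITT-FALSE family (polynomial 0x1021, initial value 0xFFFF); a minimal bitwise reference implementation looks like this (the variant choice is an assumption, not taken from the paper):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE: polynomial 0x1021, init 0xFFFF, no reflection,
    no final XOR -- the checksum family used by CCSDS telecommand frames."""
    for byte in data:
        crc ^= byte << 8  # feed the next byte into the high half
        for _ in range(8):
            # Shift left; XOR in the polynomial when the MSB falls out.
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

# Standard check value for this variant.
print(hex(crc16_ccitt(b"123456789")))  # 0x29b1
```

A real ground-segment plug-in would compute this over the frame body and compare it against the trailing checksum field before accepting the packet.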

20 pages, 1735 KB  
Article
Multilingual Named Entity Recognition in Arabic and Urdu Tweets Using Pretrained Transfer Learning Models
by Fida Ullah, Muhammad Ahmad, Grigori Sidorov, Ildar Batyrshin, Edgardo Manuel Felipe Riverón and Alexander Gelbukh
Computers 2025, 14(8), 323; https://doi.org/10.3390/computers14080323 - 11 Aug 2025
Viewed by 832
Abstract
The increasing use of Arabic and Urdu on social media platforms, particularly Twitter, has created a growing need for robust Named Entity Recognition (NER) systems capable of handling noisy, informal, and code-mixed content. However, both languages remain significantly underrepresented in NER research, especially in social media contexts. To address this gap, this study makes four key contributions: (1) We introduced a manual entity consolidation step to enhance the consistency and accuracy of named entity annotations. In the original datasets, entities such as person names and organization names were often split into multiple tokens (e.g., first name and last name labeled separately). We manually refined the annotations to merge these segments into unified entities, ensuring improved coherence for both training and evaluation. (2) We selected two publicly available datasets from GitHub—one in Arabic and one in Urdu—and applied two novel strategies to tackle low-resource challenges: a joint multilingual approach and a translation-based approach. The joint approach involved merging both datasets to create a unified multilingual corpus, while the translation-based approach utilized automatic translation to generate cross-lingual datasets, enhancing linguistic diversity and model generalizability. (3) We presented a comprehensive and reproducible pseudocode-driven framework that integrates translation, manual refinement, dataset merging, preprocessing, and multilingual model fine-tuning. (4) We designed, implemented, and evaluated a customized XLM-RoBERTa model integrated with a novel attention mechanism, specifically optimized for the morphological and syntactic complexities of Arabic and Urdu. Based on the experiments, our proposed model (XLM-RoBERTa) achieves 0.98 accuracy across Arabic, Urdu, and multilingual datasets. 
While it shows a 7–8% improvement over a traditional baseline (RF), it also achieves a 2.08% improvement over a deep-learning baseline (BiLSTM, 0.96), highlighting the effectiveness of our cross-lingual, resource-efficient approach to NER in low-resource, code-mixed social media text. Full article
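The manual entity-consolidation step (contribution 1) can be illustrated in a few lines: adjacent tokens of the same entity type, such as a first name and last name annotated as two separate PER entities, are merged into one span. The function below is a hypothetical reconstruction of that idea, not the authors' tooling.

```python
def consolidate(tokens, labels):
    """Merge adjacent entities of the same type into one span.
    Labels follow the BIO scheme; illustrative sketch only."""
    entities = []
    current = None  # (text, entity_type) currently being built
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            if current:
                entities.append(current)
                current = None
            continue
        etype = lab.split("-", 1)[1]
        if current and current[1] == etype:
            current = (current[0] + " " + tok, etype)  # extend the span
        else:
            if current:
                entities.append(current)
            current = (tok, etype)
    if current:
        entities.append(current)
    return entities

# A name split into two separate PER annotations becomes one entity:
print(consolidate(
    ["Fida", "Ullah", "works", "at", "CIC"],
    ["B-PER", "B-PER", "O", "O", "B-ORG"],
))  # → [('Fida Ullah', 'PER'), ('CIC', 'ORG')]
```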

21 pages, 2559 KB  
Article
A Shape-Aware Lightweight Framework for Real-Time Object Detection in Nuclear Medicine Imaging Equipment
by Weiping Jiang, Guozheng Xu and Aiguo Song
Appl. Sci. 2025, 15(16), 8839; https://doi.org/10.3390/app15168839 - 11 Aug 2025
Viewed by 614
Abstract
Manual calibration of nuclear medicine scanners currently relies on handling phantoms containing radioactive sources, exposing personnel to high radiation doses and elevating cancer risk. We designed an automated detection framework for robotic inspection on the YOLOv8n foundation. It pairs a lightweight backbone with a shape-aware geometric attention module and an anchor-free head. Facing a small training set, we produced extra images with a GAN and then fine-tuned a pretrained network on these augmented data. Evaluations on a custom dataset consisting of PET/CT gantry and table images showed that the SAM-YOLOv8n model achieved a precision of 93.6% and a recall of 92.8%. These results demonstrate fast, accurate, real-time detection, offering a safer and more efficient alternative to manual calibration of nuclear medicine equipment. Full article
(This article belongs to the Section Applied Physics General)

15 pages, 3633 KB  
Article
HSS-YOLO Lightweight Object Detection Model for Intelligent Inspection Robots in Power Distribution Rooms
by Liang Li, Yangfei He, Yingying Wei, Hucheng Pu, Xiangge He, Chunlei Li and Weiliang Zhang
Algorithms 2025, 18(8), 495; https://doi.org/10.3390/a18080495 - 8 Aug 2025
Viewed by 536
Abstract
Currently, YOLO-based object detection is widely employed in intelligent inspection robots. However, under interference factors present in dimly lit substation environments, YOLO exhibits issues such as excessively low accuracy, missed detections, and false detections for critical targets. To address these problems, this paper proposes HSS-YOLO, a lightweight object detection model based on YOLOv11. Initially, HetConv is introduced. By combining convolutional kernels of different sizes, it reduces the required number of floating-point operations (FLOPs) and enhances computational efficiency. Subsequently, the integration of Inner-SIoU strengthens the recognition capability for small targets within dim environments. Finally, ShuffleAttention is incorporated to mitigate problems like missed or false detections of small targets under low-light conditions. The experimental results demonstrate that on a custom dataset, the model achieves a precision of 90.5% for critical targets (doors and two types of handles). This represents a 4.6% improvement over YOLOv11, while also reducing parameter count by 10.7% and computational load by 9%. Furthermore, evaluations on public datasets confirm that the proposed model surpasses YOLOv11 in assessment metrics. The improved model presented in this study not only achieves lightweight design but also yields more accurate detection results for doors and handles within dimly lit substation environments. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

24 pages, 3567 KB  
Article
Investigation of the Load-Bearing Capacity of Resin-Printed Components Under Different Printing Strategies
by Brigitta Fruzsina Szívós, Vivien Nemes, Szabolcs Szalai and Szabolcs Fischer
Appl. Sci. 2025, 15(15), 8747; https://doi.org/10.3390/app15158747 - 7 Aug 2025
Viewed by 779
Abstract
This study examines the influence of different printing orientations and infill settings on the strength and flexibility of components produced using resin-based 3D printing, particularly with masked stereolithography (MSLA). Using a common photopolymer resin and a widely available desktop MSLA printer, we produced and tested a series of samples with varying tilt angles and internal structures. To understand their mechanical behavior, we applied a custom bending test combined with high-precision deformation tracking through the GOM ARAMIS digital image correlation system. The results obtained clearly show that both the angle of printing and the density of the internal infill structure play a significant role in how much strain the printed parts can handle before breaking. Notably, a 75° orientation provided the best deformation performance, and infill rates between 60% and 90% offered a good balance between strength and material efficiency. These findings highlight how adjusting print settings can lead to stronger parts while also saving time and resources—an important consideration for practical applications in engineering, design, and manufacturing. Full article
(This article belongs to the Special Issue Sustainable Mobility and Transportation (SMTS 2025))

25 pages, 3903 KB  
Article
An Integrated Multi-Criteria Decision Method for Remanufacturing Design Considering Carbon Emission and Human Ergonomics
by Changping Hu, Xinfu Lv, Ruotong Wang, Chao Ke, Yingying Zuo, Jie Lu and Ruiying Kuang
Processes 2025, 13(8), 2354; https://doi.org/10.3390/pr13082354 - 24 Jul 2025
Viewed by 542
Abstract
Remanufacturing design is a green design model that considers remanufacturability during the design process to improve the reuse of components. However, traditional remanufacturing design scheme decision making focuses on the remanufacturability indicator and does not fully consider the carbon emissions of the remanufacturing process, which erodes the energy-saving and emission-reduction benefits of remanufacturing. In addition, remanufacturing design schemes rarely consider the human ergonomics of the product, which leads to uncomfortable handling of the product by the customer. To reduce remanufacturing carbon emissions and improve customer comfort, it is necessary to select a reasonable design scheme that satisfies both the carbon emission reduction and ergonomics demands; therefore, this paper proposes an integrated multi-criteria decision-making method for remanufacturing design that considers carbon emission and human ergonomics. Firstly, an evaluation system of remanufacturing design schemes is constructed to consider the remanufacturability, cost, carbon emission, and human ergonomics of the product, and the evaluation indicators are quantified by the normalization method and the Kansei engineering (KE) method; meanwhile, the analytic hierarchy process (AHP) and entropy weight (EW) method are used to calculate the subjective and objective weights. Then, a multi-attribute decision-making method based on the combination of the technique for order preference by similarity to ideal solution (TOPSIS) and grey relational analysis (GRA) is proposed to complete the design scheme selection. Finally, the feasibility of the scheme is verified by taking a household coffee machine as an example. This method has been implemented as an application using Visual Studio 2022 and Microsoft SQL Server 2022. The research results indicate that this decision-making method can quickly and accurately generate reasonable remanufacturing design schemes. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
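The TOPSIS stage of such a decision method can be sketched in a few lines of NumPy. The candidate schemes, criteria values, and weights below are invented for illustration (the paper derives weights from AHP and the entropy method, and couples TOPSIS with grey relational analysis, neither of which is shown here).

```python
import numpy as np

# Decision matrix: rows = candidate remanufacturing design schemes,
# columns = criteria (remanufacturability, cost, carbon emission,
# ergonomics). All values and weights are illustrative.
X = np.array([
    [0.80, 120.0, 35.0, 0.70],
    [0.65,  95.0, 28.0, 0.80],
    [0.90, 140.0, 40.0, 0.60],
])
weights = np.array([0.35, 0.25, 0.25, 0.15])   # e.g. combined AHP/EW weights
benefit = np.array([True, False, False, True])  # cost & emission: smaller is better

# 1. Vector-normalize each criterion column, then apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Closeness coefficient: distance to anti-ideal over total distance.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("ranking (best first):", np.argsort(-closeness))
```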

19 pages, 767 KB  
Article
Enhancing SMBus Protocol Education for Embedded Systems Using Generative AI: A Conceptual Framework with DV-GPT
by Chin-Wen Liao, Yu-Cheng Liao, Cin-De Jhang, Chi-Min Hsu and Ho-Che Lai
Electronics 2025, 14(14), 2832; https://doi.org/10.3390/electronics14142832 - 15 Jul 2025
Cited by 1 | Viewed by 844
Abstract
Teaching of embedded systems, including communication protocols such as SMBus, is commonly faced with difficulties providing the students with interactive and personalized, practical learning experiences. To overcome these shortcomings, this report presents a new conceptual framework that exploits generative artificial intelligence (GenAI) via customized DV-GPT. Coupled with prepromises techniques, DV-GPT offers timely targeted support to students and engineers who are studying SMBus protocol design and verification. In contrast to traditional learning, this AI-based tool dynamically adjusts feedback based on the users’ activities, providing greater insight into challenging concepts, including timing synchronization, multi-master arbitration, and error handling. The framework also incorporates the industry de facto standard UVM practices, which helps narrow the gap between education and the professional world. We quantitatively compare with a baseline GPT-4 and show significant improvement in accuracy, specificity, and user satisfaction. The effectiveness and feasibility of the proposed GenAI-enhanced educational approach have been empirically validated through the use of structured student feedback, expert judgment, and statistical analysis. The contribution of this research is a scalable, flexible, interactive model for enhancing embedded systems education that also illustrates how GenAI technologies could find applicability within specialized educational environments. Full article

29 pages, 5459 KB  
Article
Carbon Capture Using Metal Organic Frameworks (MOFs): Novel Custom Ensemble Learning Models for Prediction of CO2 Adsorption
by Zainab Iyiola, Eric Thompson Brantson, Nneoma Juanita Okeke, Kayode Sanni and Promise Longe
Processes 2025, 13(7), 2199; https://doi.org/10.3390/pr13072199 - 9 Jul 2025
Viewed by 1327
Abstract
The accurate prediction of carbon dioxide (CO2) adsorption in metal–organic frameworks (MOFs) is critical for accelerating the discovery of high-performance materials for post-combustion carbon capture. Experimental screening of MOFs is often costly and time-consuming, creating a strong incentive to develop reliable data-driven models. Despite extensive research, most studies rely on standalone models or generic ensemble strategies that fall short in handling the complex, nonlinear relationships inherent in adsorption data. In this study, a novel ensemble learning framework is developed by integrating five distinct regression algorithms: Random Forest, XGBoost, LightGBM, Support Vector Regression, and Multi-Layer Perceptron. These algorithms are combined into four custom ensemble strategies: equal-weighted voting, performance-weighted voting, stacking, and manual blending. A dataset comprising 1212 experimentally validated MOF entries with input descriptors including BET surface area, pore volume, pressure, temperature, and metal center is used to train and evaluate the models. The stacking ensemble yields the highest performance, with an R2 of 0.9833, an RMSE of 1.0016, and an MAE of 0.6630 on the test set. Model reliability is further confirmed through residual diagnostics, prediction intervals, and permutation importance, revealing pressure and temperature to be the most influential features. Ablation analysis highlights the complementary role of all base models, particularly Random Forest and LightGBM, in boosting ensemble performance. This study demonstrates that custom ensemble learning strategies not only improve predictive accuracy but also enhance model interpretability, offering a scalable and cost-effective tool for guiding experimental MOF design. Full article
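The stacking strategy can be sketched with scikit-learn's `StackingRegressor`. The sketch below is not the paper's configuration: the data are synthetic stand-ins for the MOF descriptors (BET surface area, pore volume, pressure, temperature), and the base learners are a reduced, dependency-free subset (XGBoost and LightGBM omitted), so the scores will not match the reported ones.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Synthetic stand-in for the 1212-entry MOF descriptor table.
X, y = make_regression(n_samples=400, n_features=5, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners; hyperparameters are illustrative, not tuned.
base = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("svr", SVR(C=10.0)),
    ("mlp", MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)),
]

# Stacking: the meta-learner is fit on out-of-fold base predictions,
# which lets it learn how to weight each model's strengths.
stack = StackingRegressor(estimators=base, final_estimator=Ridge(), cv=5)
stack.fit(X_train, y_train)
print(f"test R^2 = {r2_score(y_test, stack.predict(X_test)):.3f}")
```

The same pattern extends to the paper's five-model ensemble by appending `xgboost.XGBRegressor` and `lightgbm.LGBMRegressor` to the `base` list.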

24 pages, 13673 KB  
Article
Autonomous Textile Sorting Facility and Digital Twin Utilizing an AI-Reinforced Collaborative Robot
by Torbjørn Seim Halvorsen, Ilya Tyapin and Ajit Jha
Electronics 2025, 14(13), 2706; https://doi.org/10.3390/electronics14132706 - 4 Jul 2025
Cited by 2 | Viewed by 1348
Abstract
This paper presents the design and implementation of an autonomous robotic facility for textile sorting and recycling, leveraging advanced computer vision and machine learning technologies. The system enables real-time textile classification, localization, and sorting on a dynamically moving conveyor belt. A custom-designed pneumatic gripper is developed for versatile textile handling, optimizing autonomous picking and placing operations. Additionally, digital simulation techniques are utilized to refine robotic motion and enhance overall system reliability before real-world deployment. The multi-threaded architecture facilitates the concurrent and efficient execution of textile classification, robotic manipulation, and conveyor belt operations. Key contributions include (a) dynamic and real-time textile detection and localization, (b) the development and integration of a specialized robotic gripper, (c) real-time autonomous robotic picking from a moving conveyor, and (d) scalability in sorting operations for recycling automation across various industry scales. The system progressively incorporates enhancements, such as queuing management for continuous operation and multi-thread optimization. Advanced material detection techniques are also integrated to ensure compliance with the stringent performance requirements of industrial recycling applications. Full article
(This article belongs to the Special Issue New Insights Into Smart and Intelligent Sensors)

25 pages, 1523 KB  
Systematic Review
AI-Enabled Mobile Food-Ordering Apps and Customer Experience: A Systematic Review and Future Research Agenda
by Mohamad Fouad Shorbaji, Ali Abdallah Alalwan and Raed Algharabat
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 156; https://doi.org/10.3390/jtaer20030156 - 1 Jul 2025
Cited by 1 | Viewed by 5227
Abstract
Artificial intelligence (AI) is reshaping mobile food-ordering apps, yet its impact on customer experience (CX) has not been fully mapped. Following systematic review guidelines (PRISMA 2020), a search of SCOPUS, Web of Science, ScienceDirect, and Google Scholar in March 2025 identified 55 studies published between 2022 and 2025. Since 2022, research has expanded from intention-based studies to include real-time app interactions and live app experiments. This shift has helped to identify five key CX dimensions: (1) instrumental usability: how quickly and smoothly users can order; (2) personalization value: AI-generated menus and meal suggestions; (3) affective engagement: emotional appeal of the app interface; (4) data trust and procedural fairness: users’ confidence in fair pricing and responsible data handling; (5) social co-experience: sharing orders and interacting through live reviews. Studies have shown that personalized recommendations and chatbots enhance relevance and enjoyment, while unclear surge pricing, repetitive menus, and algorithmic anxiety reduce trust and satisfaction. Given the limitations of this study, including its reliance on English-only sources, a cross-sectional design, and limited cultural representation, future research should investigate long-term usage patterns across diverse markets. This approach would help uncover nutritional biases, cultural variations, and sustained effects on customer experience.
25 pages, 2723 KB  
Article
A Human-Centric, Uncertainty-Aware Event-Fused AI Network for Robust Face Recognition in Adverse Conditions
by Akmalbek Abdusalomov, Sabina Umirzakova, Elbek Boymatov, Dilnoza Zaripova, Shukhrat Kamalov, Zavqiddin Temirov, Wonjun Jeong, Hyoungsun Choi and Taeg Keun Whangbo
Appl. Sci. 2025, 15(13), 7381; https://doi.org/10.3390/app15137381 - 30 Jun 2025
Cited by 2 | Viewed by 756
Abstract
Face recognition systems often falter when deployed in uncontrolled settings, grappling with low light, unexpected occlusions, motion blur, and the degradation of sensor signals. Most contemporary algorithms chase raw accuracy yet overlook the pragmatic need for uncertainty estimation and multispectral reasoning rolled into a single framework. This study introduces HUE-Net—a Human-centric, Uncertainty-aware, Event-fused Network—designed specifically to thrive under severe environmental stress. HUE-Net marries the visible RGB band with near-infrared (NIR) imagery and high-temporal-event data through an early-fusion pipeline, proven more responsive than serial approaches. A custom hybrid backbone that couples convolutional networks with transformers keeps the model nimble enough for edge devices. Central to the architecture is the perturbed multi-branch variational module, which distills probabilistic identity embeddings while delivering calibrated confidence scores. Complementing this, an Adaptive Spectral Attention mechanism dynamically reweights each stream to amplify the most reliable facial features in real time. Unlike previous efforts that compartmentalize uncertainty handling, spectral blending, or computational thrift, HUE-Net unites all three in a lightweight package. Benchmarks on the IJB-C and N-SpectralFace datasets illustrate that the system not only secures state-of-the-art accuracy but also exhibits unmatched spectral robustness and reliable probability calibration. The results indicate that HUE-Net is well-positioned for forensic missions and humanitarian scenarios where trustworthy identification cannot be deferred.
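The Adaptive Spectral Attention idea — softmax-normalized reliability scores reweighting the RGB, NIR, and event streams before fusion — can be sketched in a few lines. This is a toy illustration of the general technique, not the paper's module; `fuse_streams`, the reliability scores, and the two-dimensional embeddings are all hypothetical.

```python
import math

def softmax(scores):
    # Numerically stable softmax over per-stream reliability scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_streams(features, reliability):
    """Reliability-weighted early fusion of per-stream embeddings.

    features:    dict stream-name -> embedding (equal-length lists)
    reliability: dict stream-name -> scalar reliability score
    Returns the fused embedding and the attention weight per stream.
    """
    streams = list(features)
    weights = softmax([reliability[s] for s in streams])
    dim = len(next(iter(features.values())))
    fused = [0.0] * dim
    for w, s in zip(weights, streams):
        for i, v in enumerate(features[s]):
            fused[i] += w * v
    return fused, dict(zip(streams, weights))

# Toy embeddings; in low light one would expect NIR/event scores to rise.
features = {"rgb": [1.0, 0.0], "nir": [0.0, 1.0], "event": [1.0, 1.0]}
reliability = {"rgb": 2.0, "nir": 0.5, "event": 1.0}
fused, weights = fuse_streams(features, reliability)
```

Because the weights are renormalized per input, a degraded stream (say, RGB at night) is attenuated rather than discarded, which matches the abstract's claim of amplifying the most reliable features in real time.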