Search Results (2,163)

Search Parameters:
Keywords = direct recognition

29 pages, 452 KiB  
Review
The Use of Retrieval Practice in the Health Professions: A State-of-the-Art Review
by Michael J. Serra, Althea N. Kaminske, Cynthia Nebel and Kristen M. Coppola
Behav. Sci. 2025, 15(7), 974; https://doi.org/10.3390/bs15070974 - 17 Jul 2025
Abstract
Retrieval practice, or the active recall of information from memory, is a highly effective learning strategy that strengthens memory and comprehension. This effect is robust and strongly backed by research in cognitive psychology. The health professions—including medicine, nursing, and dentistry—have widely embraced retrieval practice as a learning and study tool, particularly for course exams and high-stakes licensing exams. This state-of-the-art review examines the historical development, current applications, and future directions for the use of retrieval practice in health professions education. While retrieval-based learning has long been used informally in these fields, its formal recognition as a scientifically supported study method gained momentum in the early 2000s and then saw a surge in both research interest and curricular adoption between 2010 and 2025. This historical review explores the key factors driving this growth, such as its alignment with assessment-driven education and the increasing availability of third-party study resources that rely on retrieval practice as a guiding principle. Despite its proven benefits for learning, however, barriers persist to its adoption by students, including in the health professions. This article discusses strategies for overcoming these challenges and for enhancing retrieval practice integration into health professions curricula. Full article
(This article belongs to the Special Issue Educational Applications of Cognitive Psychology)
30 pages, 2023 KiB  
Review
Fusion of Computer Vision and AI in Collaborative Robotics: A Review and Future Prospects
by Yuval Cohen, Amir Biton and Shraga Shoval
Appl. Sci. 2025, 15(14), 7905; https://doi.org/10.3390/app15147905 - 15 Jul 2025
Viewed by 56
Abstract
The integration of advanced computer vision and artificial intelligence (AI) techniques into collaborative robotic systems holds the potential to revolutionize human–robot interaction, productivity, and safety. Despite substantial research activity, a systematic synthesis of how vision and AI are jointly enabling context-aware, adaptive cobot capabilities across perception, planning, and decision-making remains lacking (especially in recent years). Addressing this gap, our review unifies the latest advances in visual recognition, deep learning, and semantic mapping within a structured taxonomy tailored to collaborative robotics. We examine foundational technologies such as object detection, human pose estimation, and environmental modeling, as well as emerging trends including multimodal sensor fusion, explainable AI, and ethically guided autonomy. Unlike prior surveys that focus narrowly on either vision or AI, this review uniquely analyzes their integrated use for real-world human–robot collaboration. Highlighting industrial and service applications, we distill the best practices, identify critical challenges, and present key performance metrics to guide future research. We conclude by proposing strategic directions—from scalable training methods to interoperability standards—to foster safe, robust, and proactive human–robot partnerships in the years ahead. Full article

20 pages, 16333 KiB  
Review
The Burgeoning Importance of Nanomotion Sensors in Microbiology and Biology
by Marco Girasole and Giovanni Longo
Biosensors 2025, 15(7), 455; https://doi.org/10.3390/bios15070455 - 15 Jul 2025
Viewed by 71
Abstract
Nanomotion sensors have emerged as a pivotal technology in microbiology and biology, leveraging advances in nanotechnology, microelectronics, and optics to provide a highly sensitive, label-free detection of biological activity and interactions. These sensors were first limited to nanomechanical oscillators like atomic force microscopy cantilevers, but now they are expanding into new, more intriguing setups. The idea is to convert the inherent nanoscale movements of living organisms—a direct manifestation of their metabolic activity—into measurable signals. This review highlights the evolution and diverse applications of nanomotion sensing. Key methodologies include Atomic Force Microscopy-based sensors, optical nanomotion detection, graphene drum sensors, and optical fiber-based sensors, each offering unique advantages in sensitivity, cost, and applicability. The analysis of complex nanomotion data is increasingly supported by advanced modeling and the integration of artificial intelligence and machine learning, enhancing pattern recognition and automation. The versatility and real-time, label-free nature of nanomotion sensing position it as a transformative tool that could revolutionize diagnostics, therapeutics, and fundamental biological research. Full article
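
The core signal-processing idea behind these sensors, converting nanoscale movement into a measurable activity signal, is often reduced to tracking the fluctuation amplitude (variance) of the deflection trace over time. The sketch below illustrates that generic idea only; the synthetic trace and window length are hypothetical, and it does not reproduce any specific pipeline from the review.

```python
import numpy as np

def sliding_variance(deflection: np.ndarray, window: int) -> np.ndarray:
    """Variance of a nanomotion (deflection) trace in consecutive windows.
    Sustained high variance indicates ongoing activity; a drop in variance
    is commonly read as reduced metabolic activity."""
    n = len(deflection) // window
    segments = deflection[: n * window].reshape(n, window)
    return segments.var(axis=1)

# Hypothetical trace: 10 s at 1 kHz, with activity dropping halfway through.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(scale=1.0, size=5_000),
                        rng.normal(scale=0.2, size=5_000)])
print(np.round(sliding_variance(trace, window=1_000), 2))
```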

13 pages, 665 KiB  
Review
Emerging Technologies for Injury Identification in Sports Settings: A Systematic Review
by Luke Canavan Dignam, Lisa Ryan, Michael McCann and Ed Daly
Appl. Sci. 2025, 15(14), 7874; https://doi.org/10.3390/app15147874 - 14 Jul 2025
Viewed by 164
Abstract
Sport injury recognition is rapidly evolving with the integration of emerging technologies. This systematic review aims to identify and evaluate technologies capable of detecting injuries during sports participation. A comprehensive search of PubMed, SPORTDiscus, Web of Science, and ScienceDirect was conducted following the PRISMA 2020 guidelines. The review was registered on PROSPERO (CRD42024608964). Inclusion criteria focused on prospective studies involving athletes of all ages that evaluated tools used to identify injuries in sports settings. The review included research between 2014 and 2024; retrospective, conceptual, and fatigue-focused studies were excluded. Risk of bias was assessed using the Critical Appraisal Skills Programme (CASP) tool. Of 4283 records screened, 70 full-text articles were assessed, with 21 studies meeting the final inclusion criteria. The technologies were grouped into advanced imaging (Magnetic Resonance Imaging (MRI), Diffusion Tensor Imaging (DTI), and Quantitative Susceptibility Mapping (QSM)), biomarkers (i.e., Neurofilament Light (NfL), Tau protein, Glial Fibrillary Acidic Protein (GFAP), salivary microRNAs, and Immunoglobulin A (IgA)), and sideline assessments (i.e., the King–Devick test, KD-Eye Tracking, the modified Balance Error Scoring System (mBESS), DETECT, ImPACT structured video analysis, and Instrumented Mouth Guards (iMGs)), which demonstrated feasibility for immediate sideline identification of injury. Future research should improve methodological rigour through larger, more diverse samples, controlled designs, and real-world testing environments. Following this guidance, emerging technologies may assist medical staff, coaches, and national governing bodies in identifying injuries in sports settings and providing real-time assessment. Full article
(This article belongs to the Special Issue Sports Injuries: Prevention and Rehabilitation)

21 pages, 4285 KiB  
Article
Federated Learning for Human Pose Estimation on Non-IID Data via Gradient Coordination
by Peng Ni, Dan Xiang, Dawei Jiang, Jianwei Sun and Jingxiang Cui
Sensors 2025, 25(14), 4372; https://doi.org/10.3390/s25144372 - 12 Jul 2025
Viewed by 204
Abstract
Human pose estimation is an important downstream task in computer vision, with significant applications in action recognition and virtual reality. However, data collected in a decentralized manner often exhibit non-independent and identically distributed (non-IID) characteristics, and traditional federated learning aggregation strategies can lead to gradient conflicts that impair model convergence and accuracy. To address this, we propose the Federated Gradient Harmonization aggregation strategy (FedGH), which coordinates update directions by measuring client gradient discrepancies and integrating gradient-projection correction with a parameter-reconstruction mechanism. Experiments conducted on a self-constructed single-arm robotic dataset and the public Max Planck Institute for Informatics (MPII) Human Pose dataset demonstrate that FedGH achieves an average Percentage of Correct Keypoints (PCK) of 47.14% and 66.31% across all keypoints, representing improvements of 1.82 and 0.36 percentage points over the Federated Adaptive Weighting (FedAW) method. On our self-constructed dataset, FedGH attains a PCK of 86.4% for shoulder detection, surpassing other traditional federated learning methods by 20–30%. Moreover, on the self-constructed dataset, FedGH reaches over 98% accuracy in the keypoint heatmap regression model within the first 10 rounds and remains stable between 98% and 100% thereafter. This method effectively mitigates gradient conflicts in non-IID environments, providing a more robust optimization solution for distributed human pose estimation. Full article
(This article belongs to the Section Sensors and Robotics)
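
The gradient-conflict problem and projection-based correction described in this abstract can be illustrated with a short sketch. This is a minimal, PCGrad-style illustration under assumed details: client gradients with a negative inner product are treated as conflicting and projected before averaging. It is not the published FedGH algorithm, and the parameter-reconstruction mechanism is omitted.

```python
import numpy as np

def project_away_conflict(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    """If g_i conflicts with g_j (negative inner product), remove from g_i
    the component pointing against g_j; otherwise return g_i unchanged."""
    dot = float(g_i @ g_j)
    if dot < 0.0:
        g_i = g_i - dot / (g_j @ g_j) * g_j
    return g_i

def harmonized_aggregate(client_grads: list[np.ndarray]) -> np.ndarray:
    """Pairwise-deconflict client gradients, then average them."""
    adjusted = []
    for i, g in enumerate(client_grads):
        g_adj = g.copy()
        for j, other in enumerate(client_grads):
            if i != j:
                g_adj = project_away_conflict(g_adj, other)
        adjusted.append(g_adj)
    return np.mean(adjusted, axis=0)

# Two toy clients with conflicting update directions along the second axis.
g1 = np.array([1.0, 1.0])
g2 = np.array([1.0, -1.5])
print(harmonized_aggregate([g1, g2]))  # opposing components are reduced before averaging
```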

37 pages, 618 KiB  
Systematic Review
Interaction, Artificial Intelligence, and Motivation in Children’s Speech Learning and Rehabilitation Through Digital Games: A Systematic Literature Review
by Chra Abdoulqadir and Fernando Loizides
Information 2025, 16(7), 599; https://doi.org/10.3390/info16070599 - 12 Jul 2025
Viewed by 222
Abstract
The integration of digital serious games into speech learning (rehabilitation) has demonstrated significant potential in enhancing accessibility and inclusivity for children with speech disabilities. This review of the state of the art examines the role of serious games, Artificial Intelligence (AI), and Natural Language Processing (NLP) in speech rehabilitation, with a particular focus on interaction modalities, engagement autonomy, and motivation. We have reviewed 45 selected studies. Our key findings show how intelligent tutoring systems, adaptive voice-based interfaces, and gamified speech interventions can empower children to engage in self-directed speech learning, reducing dependence on therapists and caregivers. The diversity of interaction modalities, including speech recognition, phoneme-based exercises, and multimodal feedback, demonstrates how AI and Assistive Technology (AT) can personalise learning experiences to accommodate diverse needs. Furthermore, the incorporation of gamification strategies, such as reward systems and adaptive difficulty levels, has been shown to enhance children’s motivation and long-term participation in speech rehabilitation. The gaps identified show that despite advancements, challenges remain in achieving universal accessibility, particularly regarding speech recognition accuracy, multilingual support, and accessibility for users with multiple disabilities. This review advocates for interdisciplinary collaboration across educational technology, special education, cognitive science, and human–computer interaction (HCI). Our work contributes to the ongoing discourse on lifelong inclusive education, reinforcing the potential of AI-driven serious games as transformative tools for bridging learning gaps and promoting speech rehabilitation beyond clinical environments. Full article

29 pages, 7197 KiB  
Review
Recent Advances in Electrospun Nanofiber-Based Self-Powered Triboelectric Sensors for Contact and Non-Contact Sensing
by Jinyue Tian, Jiaxun Zhang, Yujie Zhang, Jing Liu, Yun Hu, Chang Liu, Pengcheng Zhu, Lijun Lu and Yanchao Mao
Nanomaterials 2025, 15(14), 1080; https://doi.org/10.3390/nano15141080 - 11 Jul 2025
Viewed by 317
Abstract
Electrospun nanofiber-based triboelectric nanogenerators (TENGs) have emerged as a highly promising class of self-powered sensors for a broad range of applications, particularly in intelligent sensing technologies. By combining the advantages of electrospinning and triboelectric nanogenerators, these sensors offer superior characteristics such as high sensitivity, mechanical flexibility, lightweight structure, and biocompatibility, enabling their integration into wearable electronics and biomedical interfaces. This review presents a comprehensive overview of recent progress in electrospun nanofiber-based TENGs, covering their working principles, operating modes, and material composition. Both pure polymer and composite nanofibers are discussed, along with various electrospinning techniques that enable control over morphology and performance at the nanoscale. We explore their practical implementations in both contact-type and non-contact-type sensing, such as human–machine interaction, physiological signal monitoring, gesture recognition, and voice detection. These applications demonstrate the potential of TENGs to enable intelligent, low-power, and real-time sensing systems. Furthermore, this paper points out critical challenges and future directions, including durability under long-term operation, scalable and cost-effective fabrication, and seamless integration with wireless communication and artificial intelligence technologies. With ongoing advancements in nanomaterials, fabrication techniques, and system-level integration, electrospun nanofiber-based TENGs are expected to play a pivotal role in shaping the next generation of self-powered, intelligent sensing platforms across diverse fields such as healthcare, environmental monitoring, robotics, and smart wearable systems. Full article
(This article belongs to the Special Issue Self-Powered Flexible Sensors Based on Triboelectric Nanogenerators)

31 pages, 529 KiB  
Review
Advances and Challenges in Respiratory Sound Analysis: A Technique Review Based on the ICBHI2017 Database
by Shaode Yu, Jieyang Yu, Lijun Chen, Bing Zhu, Xiaokun Liang, Yaoqin Xie and Qiurui Sun
Electronics 2025, 14(14), 2794; https://doi.org/10.3390/electronics14142794 - 11 Jul 2025
Viewed by 268
Abstract
Respiratory diseases present significant global health challenges. Recent advances in respiratory sound analysis (RSA) have shown great potential for automated disease diagnosis and patient management. The International Conference on Biomedical and Health Informatics 2017 (ICBHI2017) database stands as one of the most authoritative open-access RSA datasets. This review systematically examines 135 technical publications utilizing the database, and a comprehensive and timely summary of RSA methodologies is offered for researchers and practitioners in this field. Specifically, this review covers signal processing techniques including data resampling, augmentation, normalization, and filtering; feature extraction approaches spanning time-domain, frequency-domain, joint time–frequency analysis, and deep feature representation from pre-trained models; and classification methods for adventitious sound (AS) categorization and pathological state (PS) recognition. Current achievements for AS and PS classification are summarized across studies using official and custom data splits. Despite promising technique advancements, several challenges remain unresolved. These include a severe class imbalance in the dataset, limited exploration of advanced data augmentation techniques and foundation models, a lack of model interpretability, and insufficient generalization studies across clinical settings. Future directions involve multi-modal data fusion, the development of standardized processing workflows, interpretable artificial intelligence, and integration with broader clinical data sources to enhance diagnostic performance and clinical applicability. Full article
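
As a concrete illustration of the signal-processing stages the review catalogues (resampling, filtering, and time-frequency feature extraction), the sketch below computes a log-mel spectrogram from a lung-sound recording. The file name, sampling rate, band-pass cut-offs, and mel settings are hypothetical placeholders rather than values taken from any surveyed study.

```python
import numpy as np
import librosa
from scipy.signal import butter, filtfilt

def log_mel_features(path: str, target_sr: int = 4000,
                     band=(100.0, 1800.0), n_mels: int = 64) -> np.ndarray:
    """Load a lung-sound recording, resample, band-pass filter,
    and return a log-mel spectrogram of shape (n_mels, frames)."""
    y, sr = librosa.load(path, sr=target_sr)           # resample on load
    low, high = band
    b, a = butter(4, [low / (sr / 2), high / (sr / 2)], btype="band")
    y = filtfilt(b, a, y)                               # band-pass filter
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)         # log compression

# features = log_mel_features("icbhi_cycle.wav")  # hypothetical file name
```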

10 pages, 1847 KiB  
Case Report
Methadone-Induced Toxicity—An Unexpected Challenge for the Brain and Heart in ICU Settings: Case Report and Review of the Literature
by Cristina Georgiana Buzatu, Sebastian Isac, Geani-Danut Teodorescu, Teodora Isac, Cristina Martac, Cristian Cobilinschi, Bogdan Pavel, Cristina Veronica Andreescu and Gabriela Droc
Life 2025, 15(7), 1084; https://doi.org/10.3390/life15071084 - 10 Jul 2025
Viewed by 239
Abstract
Introduction: Methadone, a synthetic opioid used for opioid substitution therapy (OST), is typically associated with arrhythmias rather than direct myocardial depression. Neurological complications, especially with concurrent antipsychotic use, have also been reported. Acute left ventricular failure in young adults is uncommon and often linked to genetic or infectious causes. We present a rare case of reversible cardiogenic shock and cerebellar insult due to methadone toxicity. Case Presentation: A 37-year-old man with a history of drug abuse on OST with methadone (130 mg/day) was admitted to the ICU with hemodynamic instability, seizures, and focal neurological deficits. Diagnostic workup revealed low cardiac output syndrome and a right cerebellar insult, attributed to methadone toxicity. The patient received individualized catecholamine support. After 10 days in the ICU, he was transferred to a general ward for ongoing cardiac and neurological rehabilitation and discharged in stable condition seven days later. Conclusions: Methadone-induced reversible left ventricular failure, particularly when accompanied by cerebellar insult, is rare but potentially life-threatening. Early recognition and multidisciplinary management are essential for full recovery in such complex toxicological presentations. Full article
(This article belongs to the Special Issue Critical Issues in Intensive Care Medicine)

13 pages, 1697 KiB  
Article
A Real-Time Vision-Based Adaptive Follow Treadmill for Animal Gait Analysis
by Guanghui Li, Salif Komi, Jakob Fleng Sorensen and Rune W. Berg
Sensors 2025, 25(14), 4289; https://doi.org/10.3390/s25144289 - 9 Jul 2025
Viewed by 253
Abstract
Treadmills are a convenient tool to study animal gait and behavior. Traditional animal treadmill designs often entail preset speeds and therefore have reduced adaptability to animals’ dynamic behavior, thus restricting the experimental scope. Fortunately, advancements in computer vision and automation allow circumvention of these limitations. Here, we introduce a series of real-time adaptive treadmill systems utilizing both marker-based visual fiducial systems (colored blocks or AprilTags) and marker-free (pre-trained models) tracking methods powered by advanced computer vision to track experimental animals. We demonstrate their real-time object recognition capabilities in specific tasks by conducting practical tests and highlight the performance of the marker-free method using an object detection machine learning algorithm (FOMO MobileNetV2 network), which shows high robustness and accuracy in detecting a moving rat compared to the marker-based method. Combining this computer vision system with treadmill control overcomes the issues of traditional treadmills by enabling the adjustment of belt speed and direction based on animal movement. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
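
The closed-loop behaviour described above (track the animal each frame, then adjust belt speed to keep it in place) reduces to a simple feedback law. The sketch below uses a proportional controller; the detect_position tracker, camera and treadmill interfaces, gain, and speed limits are hypothetical stand-ins, not the authors' implementation.

```python
def update_belt_speed(current_speed: float, x_animal: float,
                      x_target: float = 0.5, gain: float = 0.8,
                      max_speed: float = 1.5) -> float:
    """Proportional control of belt speed (m/s).

    Positions are normalised to [0, 1] along the belt, with larger x toward
    the front. If the animal pulls ahead of the target point, the belt
    speeds up; if it falls behind, the belt slows down.
    """
    error = x_animal - x_target
    new_speed = current_speed + gain * error
    return max(0.0, min(max_speed, new_speed))

# Pseudo main loop; detect_position(), camera, and treadmill are hypothetical
# stand-ins for the vision tracker and motor interface described in the paper.
# speed = 0.2
# while running:
#     x = detect_position(camera.read())
#     speed = update_belt_speed(speed, x)
#     treadmill.set_speed(speed)
```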

18 pages, 10812 KiB  
Article
Explainable Face Recognition via Improved Localization
by Rashik Shadman, Daqing Hou, Faraz Hussain and M. G. Sarwar Murshed
Electronics 2025, 14(14), 2745; https://doi.org/10.3390/electronics14142745 - 8 Jul 2025
Viewed by 200
Abstract
Biometric authentication has become one of the most widely used tools in the current technological era to authenticate users and to distinguish between genuine users and impostors. The face is the most common form of biometric modality that has proven effective. Deep learning-based face recognition systems are now commonly used across different domains. However, these systems usually operate like black-box models that do not provide necessary explanations or justifications for their decisions. This is a major disadvantage because users cannot trust such artificial intelligence-based biometric systems and may not feel comfortable using them when clear explanations or justifications are not provided. This paper addresses this problem by applying an efficient method for explainable face recognition systems. We use a Class Activation Mapping (CAM)-based discriminative localization (very narrow/specific localization) technique called Scaled Directed Divergence (SDD) to visually explain the results of deep learning-based face recognition systems. We perform fine localization of the face features relevant to the deep learning model for its prediction/decision. Our experiments show that the SDD Class Activation Map (CAM) highlights the relevant face features very specifically and accurately compared to the traditional CAM. The provided visual explanations with narrow localization of relevant features can ensure much-needed transparency and trust for deep learning-based face recognition systems. We also demonstrate the adaptability of the SDD method by applying it to two different techniques: CAM and Score-CAM. Full article
(This article belongs to the Special Issue Explainability in AI and Machine Learning)
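
For context on the mechanism being refined, the sketch below computes a standard Class Activation Map: the final convolutional feature maps are weighted by the classifier weights of the chosen class and summed. This is the baseline CAM the paper compares against, with illustrative tensor shapes; the Scaled Directed Divergence step itself is not reproduced here.

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray,
                         fc_weights: np.ndarray,
                         class_idx: int) -> np.ndarray:
    """Standard CAM: weighted sum of the last conv layer's feature maps.

    feature_maps: (C, H, W) activations from the final conv layer.
    fc_weights:   (num_classes, C) weights of the final linear classifier
                  (a global-average-pooling architecture is assumed).
    Returns an (H, W) map normalised to [0, 1].
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)                      # keep positive evidence
    return cam / (cam.max() + 1e-8)

# Illustrative shapes: 512 channels of 7x7 activations, 1000 identities.
feats = np.random.rand(512, 7, 7)
weights = np.random.rand(1000, 512)
heatmap = class_activation_map(feats, weights, class_idx=42)  # (7, 7)
```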

28 pages, 1602 KiB  
Article
Claiming Space: Domain Positioning and Market Recognition in Blockchain
by Yu-Tong Liu and Eun-Jung Hyun
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 174; https://doi.org/10.3390/jtaer20030174 - 8 Jul 2025
Viewed by 178
Abstract
Prior research has focused on the technical and institutional challenges of blockchain adoption. However, little is known about how blockchain ventures claim categorical space in the market and how such domain positioning influences their visibility and evaluation. This study investigates the relationship between strategic domain positioning and market recognition among blockchain-based ventures, with a particular focus on applications relevant to e-commerce, such as non-fungible tokens (NFTs) and decentralized finance (DeFi). Drawing on research on categorization, legitimacy, and the technology lifecycle, we propose a domain lifecycle perspective that accounts for the evolving expectations and legitimacy criteria across blockchain domains. Using BERTopic, a transformer-based topic modeling method, we classify 9665 blockchain ventures based on their textual business descriptions. We then test the impact of domain positioning on market recognition—proxied by Crunchbase rank—while examining the moderating effects of external validation signals such as funding events, media attention, and organizational age. Our findings reveal that clear domain positioning significantly enhances market recognition, but the strength and direction of this effect vary by domain. Specifically, NFT ventures experience stronger recognition when young and less institutionally validated, suggesting a novelty premium, while DeFi ventures benefit more from conventional legitimacy signals. These results advance our understanding of how categorical dynamics operate in emerging digital ecosystems and offer practical insights for e-commerce platforms, investors, and entrepreneurs navigating blockchain-enabled innovation. Full article
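
The classification step relies on BERTopic's usual workflow: embed each business description with a transformer, cluster the embeddings, and read a venture's dominant topic as its claimed domain. A minimal sketch with the bertopic package follows; the toy corpus and settings are hypothetical and do not reproduce the study's configuration.

```python
from bertopic import BERTopic

# Toy stand-in corpus: a few archetypal descriptions repeated so clustering
# has enough documents (the study used 9,665 real descriptions).
base = [
    "A marketplace for minting and trading NFT collectibles",
    "Decentralized lending and yield farming protocol",
    "Enterprise blockchain infrastructure for supply chain tracking",
    "Layer-2 payments network for low-fee crypto transactions",
]
descriptions = [f"{text} (venture {i})" for i in range(100) for text in base]

topic_model = BERTopic(min_topic_size=10)
topics, probs = topic_model.fit_transform(descriptions)

# Each venture's assigned topic is read as its claimed domain;
# topic -1 marks ventures without a clear categorical position.
print(topic_model.get_topic_info())
```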

16 pages, 1351 KiB  
Article
A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition
by Shokoufeh Davarzani, Simin Masihi, Masoud Panahi, Abdulrahman Olalekan Yusuf and Massood Atashbar
Electronics 2025, 14(14), 2744; https://doi.org/10.3390/electronics14142744 - 8 Jul 2025
Viewed by 311
Abstract
Electroencephalogram (EEG) signals provide a direct and non-invasive means of interpreting brain activity and are increasingly becoming valuable in embedded emotion-aware systems, particularly for applications in healthcare, wearable electronics, and human–machine interactions. Among various EEG-based emotion recognition techniques, deep learning methods have demonstrated superior performance compared to traditional approaches. This advantage stems from their ability to extract complex features—such as spectral–spatial connectivity, temporal dynamics, and non-linear patterns—from raw EEG data, leading to a more accurate and robust representation of emotional states and better adaptation to diverse data characteristics. This study explores and compares deep and shallow neural networks for human emotion recognition from raw EEG data, with the goal of enabling real-time processing in embedded and edge-deployable systems. Deep learning models—specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs)—have been benchmarked against traditional approaches such as the multi-layer perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (kNN) algorithms. This comparative study investigates the effectiveness of deep learning techniques in EEG-based emotion recognition by classifying emotions into four categories based on the valence–arousal plane: high arousal, positive valence (HAPV); low arousal, positive valence (LAPV); high arousal, negative valence (HANV); and low arousal, negative valence (LANV). Evaluations were conducted using the DEAP dataset. The results indicate that both the CNN and RNN models achieve high classification performance in EEG-based emotion recognition, with average accuracies of 90.13% and 93.36%, respectively, significantly outperforming shallow algorithms (MLP, SVM, kNN). Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
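
The four target classes come from DEAP's self-reported valence and arousal ratings (each on a 1 to 9 scale), split into quadrants. A minimal labelling sketch is shown below; the midpoint threshold of 5 is a common convention and an assumption here, not necessarily the authors' exact cut-off.

```python
import numpy as np

def quadrant_label(valence: float, arousal: float, threshold: float = 5.0) -> str:
    """Map DEAP valence/arousal ratings (1-9) to one of four quadrants."""
    v = "P" if valence >= threshold else "N"   # positive / negative valence
    a = "H" if arousal >= threshold else "L"   # high / low arousal
    return f"{a}A{v}V"                         # e.g. "HAPV", "LANV"

ratings = np.array([[7.1, 8.0],   # -> HAPV
                    [6.5, 2.4],   # -> LAPV
                    [2.0, 7.7],   # -> HANV
                    [3.2, 3.0]])  # -> LANV
labels = [quadrant_label(v, a) for v, a in ratings]
print(labels)
```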

18 pages, 1876 KiB  
Review
Deep Learning in Food Image Recognition: A Comprehensive Review
by Detianjun Liu, Enguang Zuo, Dingding Wang, Liang He, Liujing Dong and Xinyao Lu
Appl. Sci. 2025, 15(14), 7626; https://doi.org/10.3390/app15147626 - 8 Jul 2025
Viewed by 338
Abstract
Food not only fulfills basic human survival needs but also significantly impacts health and culture. Research on food-related topics holds substantial theoretical and practical significance, with food image recognition being a core task in fine-grained image recognition. This field has broad applications and promising prospects in smart dining, intelligent healthcare, and smart retail. With the rapid advancement of artificial intelligence, deep learning has emerged as a key technology that enhances recognition efficiency and accuracy, enabling more practical applications. This paper comprehensively reviews the techniques and challenges of deep learning in food image recognition. First, we outline the historical development of food image recognition technologies, categorizing the primary methods into manual feature extraction-based and deep learning-based approaches. Next, we systematically organize existing food image datasets and summarize the characteristics of several representative datasets. Additionally, we analyze typical deep learning models and their performance on different datasets. Finally, we discuss the practical applications of food image recognition in calorie estimation and food safety, identify current research challenges, and propose future research directions. Full article
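
As a concrete instance of the deep learning-based approach the review surveys, the sketch below fine-tunes an ImageNet-pretrained CNN for food image classification with torchvision. The number of classes and the smoke-test tensors are hypothetical placeholders; this is a generic transfer-learning sketch, not a model from the surveyed literature.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FOOD_CLASSES = 101  # hypothetical, e.g. a Food-101-style dataset

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_FOOD_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of (N, 3, 224, 224) food images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors in place of a real food-image dataloader.
print(train_step(torch.randn(4, 3, 224, 224),
                 torch.randint(0, NUM_FOOD_CLASSES, (4,))))
```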

29 pages, 996 KiB  
Article
Enhancing Environmental Cognition Through Kayaking in Aquavoltaic Systems in a Lagoon Aquaculture Area: The Mediating Role of Perceived Value and Facility Management
by Yu-Chi Sung and Chun-Han Shih
Water 2025, 17(13), 2033; https://doi.org/10.3390/w17132033 - 7 Jul 2025
Viewed by 325
Abstract
Tainan’s Cigu, located on Taiwan’s southwestern coast, is a prominent aquaculture hub known for its extensive ponds, tidal flats, and lagoons. This study explored the novel integration of kayaking within aquavoltaic (APV) aquaculture ponds, creating a unique hybrid tourism landscape that merges industrial land use (aquaculture and energy production) with nature-based recreation. We investigated the relationships among facility maintenance and safety professionalism (FM), the perceived value of kayaking training (PV), and green energy and sustainable development recognition (GS) within these APV systems in Cigu, Taiwan. While integrating recreation with renewable energy and aquaculture is an emerging approach to multifunctional land use, the mechanisms influencing visitors’ sustainability perceptions remain underexplored. Using data from 613 kayaking participants and structural equation modeling, we tested a theoretical framework encompassing direct, mediated, and moderated relationships. Our findings reveal that FM significantly influences both PV (β = 0.68, p < 0.001) and GS (β = 0.29, p < 0.001). Furthermore, PV strongly affects GS (β = 0.56, p < 0.001). Importantly, PV partially mediates the relationship between FM and GS, with the indirect effect (0.38) accounting for 57% of the total effect. We also identified significant moderating effects of APV coverage, guide expertise, and operational visibility. Complementary observational data obtained with underwater cameras confirm that non-motorized kayaking causes minimal ecological disturbance to cultured species, exhibiting significantly lower behavioral impacts than motorized alternatives. These findings advance the theoretical understanding of experiential learning in novel technological landscapes and provide evidence-based guidelines for optimizing recreational integration within production environments. Full article
(This article belongs to the Special Issue Aquaculture, Fisheries, Ecology and Environment)
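
The mediation result can be checked with the standard product-of-coefficients arithmetic: the indirect FM-to-GS effect is the product of the FM-to-PV and PV-to-GS paths, and the proportion mediated is the indirect effect divided by the total (direct plus indirect) effect. Assuming that standard formulation, the abstract's reported 0.38 indirect effect and 57% share are reproduced below.

```python
# Path coefficients as reported in the abstract.
beta_fm_pv = 0.68   # FM -> PV
beta_pv_gs = 0.56   # PV -> GS
beta_fm_gs = 0.29   # FM -> GS (direct effect)

indirect = beta_fm_pv * beta_pv_gs      # 0.68 * 0.56 = 0.3808, reported as 0.38
total = beta_fm_gs + indirect           # 0.29 + 0.38 ~ 0.67
proportion_mediated = indirect / total  # ~ 0.57, i.e. 57% of the total effect

print(round(indirect, 2), round(total, 2), round(proportion_mediated, 2))
```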