Search Results (155)

Search Parameters:
Keywords = intelligent vision sensing

23 pages, 20415 KiB  
Article
FireNet-KD: Swin Transformer-Based Wildfire Detection with Multi-Source Knowledge Distillation
by Naveed Ahmad, Mariam Akbar, Eman H. Alkhammash and Mona M. Jamjoom
Fire 2025, 8(8), 295; https://doi.org/10.3390/fire8080295 - 26 Jul 2025
Viewed by 389
Abstract
Forest fire detection is an essential application in environmental surveillance since wildfires cause devastating damage to ecosystems, human life, and property every year. Effective and accurate fire detection is necessary to allow for timely response and efficient disaster management. Traditional techniques for fire detection often suffer from false alarms and delayed responses in various environmental situations. Therefore, developing robust, intelligent, and real-time detection systems has emerged as a central challenge in the remote sensing and computer vision research communities. Despite recent achievements in deep learning, current forest fire detection models still face issues with generalizability, lightweight deployment, and accuracy trade-offs. To overcome these limitations, we introduce a novel technique (FireNet-KD) that makes use of knowledge distillation, a method that transfers the learning of large, complex models (teachers) to a light and efficient model (student). We specifically utilize two complementary teacher networks: a Vision Transformer (ViT), known for its global attention and contextual learning ability, and a Convolutional Neural Network (CNN), valued for its spatial locality and inductive biases. These teacher models guide the learning of a Swin Transformer-based student model that provides hierarchical feature extraction and computational efficiency through shifted-window self-attention, and is thus particularly well suited for scalable forest fire detection. By distilling the combined strengths of the ViT and CNN into the Swin Transformer, FireNet-KD outperforms state-of-the-art methods by a significant margin. Experimental results show that FireNet-KD obtains a precision of 95.16%, recall of 99.61%, F1-score of 97.34%, and mAP@50 of 97.31%, surpassing existing models. These results demonstrate the effectiveness of FireNet-KD in improving both detection accuracy and model efficiency for forest fire detection. Full article
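
The multi-teacher distillation setup described above can be illustrated with a minimal sketch (assuming PyTorch; the loss weighting, temperature, and function names are illustrative assumptions, not the paper's implementation):

```python
# Minimal sketch of multi-teacher knowledge distillation, assuming PyTorch.
# Loss weighting and temperature are common defaults, not the paper's values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels,
                      temperature=4.0, alpha=0.5):
    """Combine a hard-label loss with soft-label KL terms from several teachers."""
    hard = F.cross_entropy(student_logits, labels)
    soft = 0.0
    for t_logits in teacher_logits_list:
        soft += F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(t_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)
    soft /= len(teacher_logits_list)
    return alpha * hard + (1 - alpha) * soft

# Usage: the teachers (e.g., a ViT and a CNN) run in eval mode under no_grad,
# and only the Swin-based student receives gradients.
```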

21 pages, 2514 KiB  
Article
Investigations into Picture Defogging Techniques Based on Dark Channel Prior and Retinex Theory
by Lihong Yang, Zhi Zeng, Hang Ge, Yao Li, Shurui Ge and Kai Hu
Appl. Sci. 2025, 15(15), 8319; https://doi.org/10.3390/app15158319 - 26 Jul 2025
Viewed by 167
Abstract
To address the concerns of contrast deterioration, detail loss, and color distortion in images produced under haze conditions in scenarios such as intelligent driving and remote sensing detection, an image defogging algorithm that combines Retinex theory and the dark channel prior is proposed in this paper. The method builds a two-stage optimization framework: in the first stage, global contrast enhancement is achieved by Retinex preprocessing, which effectively improves the detail information in dark areas and the accuracy of the transmittance map and atmospheric light intensity estimation; in the second stage, a compensation model for the dark channel prior is constructed, and a depth-map-guided transmittance correction mechanism is introduced to obtain a refined transmittance map. At the same time, the atmospheric light intensity is accurately calculated by the Otsu algorithm and edge constraints, which effectively suppresses the halo artifacts and color deviation that the dark channel prior defogging algorithm produces in sky regions. Experiments on self-collected data and public datasets show that the proposed algorithm offers better detail preservation (the visible edge ratio improves by at least 0.1305) and color reproduction (the saturated pixel ratio is reduced to about 0) in the subjective evaluation, and among the objective indexes the average gradient ratio reaches a maximum of 3.8009, an improvement of 36–56% over the classical DCP and Tarel algorithms. The method provides a robust image defogging solution for computer vision systems under complex meteorological conditions. Full article
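
To make the dark-channel step concrete, here is a minimal sketch of the standard dark channel prior and a coarse transmission estimate (assuming OpenCV and NumPy; the patch size and omega are common defaults rather than the paper's values, and the Retinex preprocessing and Otsu-based atmospheric-light estimation are omitted):

```python
# Minimal sketch of the dark channel prior and coarse transmission map.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, followed by a local minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_transmission(img, atmospheric_light, omega=0.95, patch=15):
    """Coarse transmission t(x) = 1 - omega * dark_channel(I / A)."""
    normalized = img / np.maximum(atmospheric_light, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)
```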

12 pages, 206 KiB  
Entry
Spiritual Intelligence: A New Form of Intelligence for a Sustainable and Humane Future
by Gianfranco Cicotto
Encyclopedia 2025, 5(3), 107; https://doi.org/10.3390/encyclopedia5030107 - 25 Jul 2025
Viewed by 486
Definition
Spiritual intelligence (SI) is defined as a unique form of hermeneutic–relational intelligence that enables individuals to integrate cognitive, emotional, and symbolic dimensions to guide their thoughts and actions with reflection, aiming for existential coherence rooted in a transcendent system of meaning. It functions as a metacognitive framework that unites affective, cognitive, and symbolic levels in dialog with a sense of meaning that is considered sacred or transcendent, where “sacred,” in this context, refers inclusively to any symbolic reference or value that a person or culture perceives as inviolable, fundamental, or orienting. It can derive from religious traditions but also from ethical, philosophical, or civil visions. It functions as a horizon of meaning from which to draw coherence and guidance and which orients the understanding of oneself, the world, and action. SI appears as the ability to interpret one’s experiences through the lens of values and principles, maintaining a sense of continuity in meaning even during times of ambiguity, conflict, or discontinuity. It therefore functions as a metacognitive ability that brings together various mental functions into a cohesive view of reality, rooted in a dynamic dialog between the self and a value system seen as sacred. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)
40 pages, 2250 KiB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Viewed by 588
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human-robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies. Full article
(This article belongs to the Section Actuators for Robotics)
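
Among the traditional control approaches listed above, joint-level impedance control is one of the simplest to state; the sketch below illustrates the idea (gains and signal names are illustrative assumptions, not taken from any of the reviewed systems):

```python
def impedance_torque(theta_des, theta, dtheta_des, dtheta,
                     stiffness=40.0, damping=2.0):
    """Assistive joint torque acting like a virtual spring-damper that pulls
    the joint toward a desired trajectory (angles in rad, velocities in rad/s)."""
    return stiffness * (theta_des - theta) + damping * (dtheta_des - dtheta)

# Example: a knee joint lagging behind the desired swing trajectory
# receives a positive assistive torque proportional to the tracking error.
```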

30 pages, 17752 KiB  
Article
DMA-Net: Dynamic Morphology-Aware Segmentation Network for Remote Sensing Images
by Chao Deng, Haojian Liang, Xiao Qin and Shaohua Wang
Remote Sens. 2025, 17(14), 2354; https://doi.org/10.3390/rs17142354 - 9 Jul 2025
Viewed by 385
Abstract
Semantic segmentation of remote sensing (RS) imagery is a pivotal task for intelligent interpretation, with critical applications in urban monitoring, resource management, and disaster assessment. Recent advancements in deep learning have significantly improved RS image segmentation, particularly through the use of convolutional neural networks, which demonstrate remarkable proficiency in local feature extraction. However, due to the inherent locality of convolutional operations, prevailing methodologies frequently struggle to capture long-range dependencies, which constrains their comprehensive semantic understanding. Moreover, the preprocessing of high-resolution remote sensing images by dividing them into sub-images disrupts spatial continuity, further complicating the balance between local feature extraction and global context modeling. To address these limitations, we propose DMA-Net, a Dynamic Morphology-Aware Segmentation Network built on an encoder–decoder architecture. The proposed framework incorporates three primary parts: a Multi-Axis Vision Transformer (MaxViT) encoder, which balances local feature extraction and global context modeling through multi-axis self-attention; a Hierarchy Attention Decoder (HA-Decoder) enhanced with Hierarchy Convolutional Groups (HCG) for precise recovery of fine-grained spatial details; and a Channel and Spatial Attention Bridge (CSA-Bridge) that mitigates the encoder–decoder semantic gap while amplifying discriminative feature representations. Extensive experiments demonstrate the state-of-the-art performance of DMA-Net, which achieves 87.31% mIoU on Potsdam, 83.23% on Vaihingen, and 54.23% on LoveDA, surpassing existing methods. Full article
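
The channel-and-spatial attention idea behind a bridge module of this kind can be sketched as follows (a CBAM-style block in PyTorch; the layer sizes and structure are illustrative assumptions, not the exact CSA-Bridge in DMA-Net):

```python
# Minimal sketch of a combined channel + spatial attention block.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention from global average pooling
        ca = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention from channel-wise mean and max maps
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        sa = torch.sigmoid(self.spatial_conv(sa_in))
        return x * sa
```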

15 pages, 2091 KiB  
Review
AI Roles in 4R Crop Pest Management—A Review
by Hengyuan Yang, Yuexia Jin, Lili Jiang, Jia Lu and Guoqi Wen
Agronomy 2025, 15(7), 1629; https://doi.org/10.3390/agronomy15071629 - 3 Jul 2025
Viewed by 868
Abstract
Insect pests are a major threat to agricultural production, causing significant crop yield reductions annually. Integrated pest management (IPM) is well-studied, but its precise application in farmlands is still challenging due to variable weather, diverse insect behaviors, crop variability, and soil heterogeneity. Recent advancements in Artificial Intelligence (AI) have shown the potential to revolutionize pest management by implementing 4R pest stewardship: right pest identification, right method selection, right control timing, and right action taken. This review explores the roles of AI technologies within the 4R framework, highlighting AI models for accurate pest identification, computer vision systems for real-time monitoring, predictive analytics for optimizing control timing, and tools for selecting and applying pest control measures. Innovations in remote sensing, UAV surveillance, and IoT-enabled smart traps further strengthen pest monitoring and intervention strategies. By integrating AI into 4R pest management, this study underscores the potential of precision agriculture to develop sustainable, adaptive, and highly efficient pest control systems. Despite these advancements, challenges persist in data availability, model generalization, and economic feasibility for widespread adoption. The lack of interpretability in AI models also makes some agronomists hesitant to adopt these technologies. Future research should focus on scalable AI solutions, interdisciplinary collaborations, and real-world validation to enhance AI-driven pest management in field crops. Full article

16 pages, 3114 KiB  
Article
TDA-L: Reducing Latency and Memory Consumption of Test-Time Adaptation for Real-Time Intelligent Sensing
by Rahim Hossain, Md Tawheedul Islam Bhuian and Kyoung-Don Kang
Sensors 2025, 25(12), 3574; https://doi.org/10.3390/s25123574 - 6 Jun 2025
Viewed by 622
Abstract
Vision–language models learn visual concepts from the supervision of natural language. This capability can significantly enhance the generalizability of real-time intelligent sensing, such as analyzing camera-captured real-time images for visually impaired users. However, adapting vision–language models to distribution shifts at test time, caused by factors such as lighting or weather changes, remains challenging. In particular, most existing test-time adaptation methods rely on gradient-based fine-tuning and backpropagation, making them computationally expensive and unsuitable for real-time applications. To address this challenge, the Training-Free Dynamic Adapter (TDA) has recently been introduced as a lightweight alternative that uses a dynamic key–value cache and pseudo-label refinement for test-time adaptation without backpropagation. Building on this, we propose TDA-L, a new framework that integrates Low-Rank Adaptation (LoRA) to reduce the size of feature representations and the related computational overhead at test time using pre-learned low-rank matrices. TDA-L applies LoRA transformations to both query and cached features during inference, cost-efficiently improving robustness to distribution shifts while maintaining the training-free nature of TDA. Experimental results on seven benchmarks show that TDA-L maintains accuracy while achieving lower latency, lower memory consumption, and higher throughput, making it well-suited for AI-based real-time sensing. Full article
(This article belongs to the Special Issue Edge AI for Wearables and IoT)
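
A minimal sketch of the low-rank, training-free transformation idea is shown below (assuming PyTorch; the shapes, rank, and the way the transform is shared between query and cache features are assumptions for illustration, not the authors' code):

```python
# Minimal sketch of applying a pre-learned low-rank (LoRA-style) transform
# to feature vectors at test time.
import torch

def lora_transform(features, A, B, scale=1.0):
    """Apply a pre-learned low-rank update to features.
    features: (N, d); A: (d, r); B: (r, d), with rank r << d."""
    return features + scale * (features @ A) @ B

# At inference, both the incoming query embedding and the entries of the
# key-value cache would pass through the same pre-learned A and B, adding only
# O(d * r) computation per feature while keeping adaptation backpropagation-free.
```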

18 pages, 3976 KiB  
Proceeding Paper
Survey on Comprehensive Visual Perception Technology for Future Air–Ground Intelligent Transportation Vehicles in All Scenarios
by Guixin Ren, Fei Chen, Shichun Yang, Fan Zhou and Bin Xu
Eng. Proc. 2024, 80(1), 50; https://doi.org/10.3390/engproc2024080050 - 30 May 2025
Viewed by 449
Abstract
As an essential part of the low-altitude economy, low-altitude carriers are an important cornerstone of its development and a strategically significant emerging industry. However, existing two-dimensional perception schemes for autonomous road vehicles struggle to meet the needs of general key technologies for all-scene perception, such as global high-precision map construction for low-altitude vehicles in three-dimensional space, perception and identification of local environmental traffic participants, and extraction of key visual information under extreme conditions. It is therefore urgent to explore the development and verification of all-scene universal sensing technology for low-altitude intelligent vehicles. This paper reviews the literature on vision-based urban rail transit and general perception technology in low-altitude flight environments, and summarizes the research status and innovations from five aspects: environment perception algorithms based on visual SLAM, environment perception algorithms based on BEV, environment perception algorithms based on image enhancement, performance optimization of perception algorithms using cloud computing, and rapid deployment of perception algorithms using edge nodes. Future optimization directions for this topic are then put forward. Full article
(This article belongs to the Proceedings of 2nd International Conference on Green Aviation (ICGA 2024))

22 pages, 11179 KiB  
Article
Study on Lightweight Bridge Crack Detection Algorithm Based on YOLO11
by Xuwei Dong, Jiashuo Yuan and Jinpeng Dai
Sensors 2025, 25(11), 3276; https://doi.org/10.3390/s25113276 - 23 May 2025
Cited by 1 | Viewed by 944
Abstract
Bridge crack detection is a key factor in ensuring the safety and extending the lifespan of bridges. Traditional detection methods often suffer from low efficiency and insufficient accuracy. The development of computer vision has gradually made deep-learning-based bridge crack detection methods a research hotspot. In this study, a lightweight bridge crack detection algorithm, YOLO11-Bridge Detection (YOLO11-BD), is proposed based on the optimization of the YOLO11 model. This algorithm uses an efficient multiscale conv all (EMSCA) module to enhance channel and spatial attention, thereby strengthening its ability to extract crack features. Additionally, the algorithm improves detection accuracy without increasing the model size. Furthermore, a lightweight detection head (LDH) is introduced to process feature information from different channels using efficient grouped convolutions. It reduces the model's parameters and computations whilst preserving accuracy, thereby achieving a lightweight model. Experimental results show that, compared with the original YOLO11, the YOLO11-BD algorithm improves mAP50 and mAP50-95 on the bridge crack dataset by 3.1% and 4.8%, respectively, whilst significantly reducing GFLOPs by 19.05%. Its frame rate remains above 500 frames per second, demonstrating excellent real-time detection capability and high computational efficiency. The algorithm proposed in this study provides an efficient and flexible solution for monitoring bridge cracks using remote sensing devices such as drones, and it has significant practical application value. Its lightweight design ensures strong cross-platform adaptability and provides reliable technical support for intelligent bridge management and maintenance. Full article
(This article belongs to the Section Physical Sensors)
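
The grouped-convolution idea behind a lightweight detection head can be sketched as follows (PyTorch; the channel counts, group count, and layer order are illustrative assumptions, not the exact LDH in YOLO11-BD):

```python
# Minimal sketch of a lightweight head built from grouped convolutions.
import torch.nn as nn

class LightweightHead(nn.Module):
    def __init__(self, in_channels, num_outputs, groups=4):
        # in_channels must be divisible by groups.
        super().__init__()
        self.block = nn.Sequential(
            # Grouped 3x3 conv processes channel groups independently,
            # cutting parameters roughly by the group count.
            nn.Conv2d(in_channels, in_channels, 3, padding=1, groups=groups),
            nn.BatchNorm2d(in_channels),
            nn.SiLU(),
            # 1x1 conv mixes information across groups and maps to outputs.
            nn.Conv2d(in_channels, num_outputs, 1),
        )

    def forward(self, x):
        return self.block(x)
```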

30 pages, 10124 KiB  
Review
Innovations in Sensor-Based Systems and Sustainable Energy Solutions for Smart Agriculture: A Review
by Md. Mahadi Hasan Sajib and Abu Sadat Md. Sayem
Encyclopedia 2025, 5(2), 67; https://doi.org/10.3390/encyclopedia5020067 - 20 May 2025
Viewed by 1535
Abstract
Smart agriculture is transforming traditional farming by integrating advanced sensor-based systems, intelligent control technologies, and sustainable energy solutions to meet the growing global demand for food while reducing environmental impact. This review presents a comprehensive analysis of recent innovations in smart agriculture, focusing on the deployment of IoT-based sensors, wireless communication protocols, energy-harvesting methods, and automated irrigation and fertilization systems. Furthermore, the paper explores the role of artificial intelligence (AI), machine learning (ML), computer vision, and big data analytics in monitoring and managing key agricultural parameters such as crop health, pest and disease detection, soil conditions, and water usage. Special attention is given to decision-support systems, precision agriculture techniques, and the application of remote and proximal sensing technologies like hyperspectral imaging, thermal imaging, and NDVI-based indices. By evaluating the benefits, limitations, and emerging trends of these technologies, this review aims to provide insights into how smart agriculture can enhance productivity, resource efficiency, and sustainability in modern farming systems. The findings serve as a valuable reference for researchers, practitioners, and policymakers working towards sustainable agricultural innovation. Full article
(This article belongs to the Section Engineering)
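
As an example of the NDVI-based indices mentioned above, the index itself is a one-line computation over red and near-infrared reflectance bands (a minimal NumPy sketch; the array names are generic placeholders):

```python
# Minimal sketch of computing an NDVI map from red and near-infrared bands.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), clipped to the valid [-1, 1] range."""
    index = (nir - red) / (nir + red + eps)
    return np.clip(index, -1.0, 1.0)
```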

37 pages, 59030 KiB  
Review
Integration of Hyperspectral Imaging and AI Techniques for Crop Type Mapping: Present Status, Trends, and Challenges
by Mohamed Bourriz, Hicham Hajji, Ahmed Laamrani, Nadir Elbouanani, Hamd Ait Abdelali, François Bourzeix, Ali El-Battay, Abdelhakim Amazirh and Abdelghani Chehbouni
Remote Sens. 2025, 17(9), 1574; https://doi.org/10.3390/rs17091574 - 29 Apr 2025
Viewed by 1987
Abstract
Accurate and efficient crop maps are essential for decision-makers to improve agricultural monitoring and management, thereby ensuring food security. The integration of advanced artificial intelligence (AI) models with hyperspectral remote sensing data, which provide richer spectral information than multispectral imaging, has proven highly effective in the precise discrimination of crop types. This systematic review examines the evolution of hyperspectral platforms, from Unmanned Aerial Vehicle (UAV)-mounted sensors to space-borne satellites (e.g., EnMAP, PRISMA), and explores recent scientific advances in AI methodologies for crop mapping. A review protocol was applied to identify 47 studies from databases of peer-reviewed scientific publications, focusing on hyperspectral sensors, input features, and classification architectures. The analysis highlights the significant contributions of Deep Learning (DL) models, particularly Vision Transformers (ViTs) and hybrid architectures, in improving classification accuracy. However, the review also identifies critical gaps, including the under-utilization of hyperspectral space-borne imaging, the limited integration of multi-sensor data, and the need for advanced modeling approaches such as Graph Neural Networks (GNNs)-based methods and geospatial foundation models (GFMs) for large-scale crop type mapping. Furthermore, the findings highlight the importance of developing scalable, interpretable, and transparent models to maximize the potential of hyperspectral imaging (HSI), particularly in underrepresented regions such as Africa, where research remains limited. This review provides valuable insights to guide future researchers in adopting HSI and advanced AI models for reliable large-scale crop mapping, contributing to sustainable agriculture and global food security. Full article

19 pages, 2016 KiB  
Article
A Study on the Driving Mechanism of Future Community Building in China from the Perspective of Resident Participation
by Lianbo Zhu, Yunshu Xie, Xun Liu, Sha Ye and Lingna Lin
Buildings 2025, 15(7), 1203; https://doi.org/10.3390/buildings15071203 - 7 Apr 2025
Viewed by 405
Abstract
A future community is a community whose core mission is realizing people’s vision of a better life, focusing on meeting the all-round life needs of community residents. Residents’ participation in the construction of a future community is also regarded as one of the core driving forces promoting the sustainable development and innovation of future community construction. Therefore, to better facilitate the construction of future communities, this study draws on relevant research at home and abroad and combines questionnaire surveys with expert interviews to identify 20 driving factors across five dimensions: human nature, ecology, intelligence, convenience, and livability. Based on these factors, a system dynamics model is constructed and a simulation analysis is carried out to observe the effects of the driving factors at each level on residents’ sense of belonging and sense of participation, and the results of the analyses are then combined to put forward relevant suggestions for future community building. The results show that residents’ perception of future community construction, their demand for intelligent life, the degree of promotion of a 15 min community living circle, and the degree of improvement of a community’s disaster warning and emergency response mechanism are the key factors driving resident participation in the construction of a future community, with residents’ demand for intelligence at different times being the most central driving factor. The research results provide theoretical references for stimulating resident participation and building livable future communities while offering insights applicable to global contexts, particularly in regions undergoing rapid urbanization and digital transformation. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

38 pages, 18311 KiB  
Article
Design of an Interactive Exercise and Leisure System for the Elderly Integrating Artificial Intelligence and Motion-Sensing Technology
by Chao-Ming Wang, Cheng-Hao Shao and Yu-Ching Lin
Sensors 2025, 25(7), 2315; https://doi.org/10.3390/s25072315 - 5 Apr 2025
Viewed by 688
Abstract
In response to the global trend of population aging, the issue of providing elderly individuals with suitable leisure and entertainment has become increasingly important. This study aims to utilize artificial intelligence (AI) technology to offer the elderly a healthy and enjoyable exercise and leisure experience. A human–machine interactive system is designed using computer vision, a subfield of AI, to promote positive physical adaptation for the elderly. The relevant literature on the needs of the elderly, technology, exercise, leisure, and AI techniques is reviewed. Case studies of interactive exercise and leisure devices for the elderly, both domestic and international, are summarized to establish the prototype concept for the system design. The proposed interactive exercise and leisure system is developed by integrating motion-sensing interfaces and real-time object detection using the YOLO algorithm. The system’s effectiveness is evaluated through questionnaire surveys and participant interviews, with the collected survey data analyzed statistically using IBM SPSS 26 and AMOS 23. Findings indicate that (1) AI technology provides new and enjoyable interactive experiences for the elderly’s exercise and leisure; (2) positive impacts are made on the elderly’s health and well-being; and (3) the system’s acceptance and attractiveness increase when elements related to personal experiences are incorporated into the system. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
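
A minimal sketch of the kind of real-time detection loop such a motion-sensing system could build on is shown below (assuming the ultralytics and OpenCV packages; the model weights and the mapping from detections to game logic are assumptions, not the authors' implementation):

```python
# Minimal sketch of a webcam detection loop using a pretrained YOLO model.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # any pretrained YOLO weights (assumption)
cap = cv2.VideoCapture(0)           # default camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    # Each detected box could be mapped to an on-screen target in the game.
    annotated = results.plot()
    cv2.imshow("exercise-game", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```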

13 pages, 1868 KiB  
Review
Designs and Challenges in Fluid Antenna System Hardware
by Kin-Fai Tong, Baiyang Liu and Kai-Kit Wong
Electronics 2025, 14(7), 1458; https://doi.org/10.3390/electronics14071458 - 3 Apr 2025
Viewed by 1091
Abstract
Fluid Antenna Systems (FASs) have recently emerged as a promising solution to address the demanding key performance indicators (KPIs) and scalability challenges of future 6G mobile communications. By enabling agile control over both radiating position and antenna shape, FAS can significantly improve diversity gain and reduce outage probability through dynamic selection of the optimal radiation point, also known as a port. Numerous theoretical studies have explored novel FAS concepts, often in conjunction with other wireless communication technologies such as multiple-input multiple-output (MIMO), Non-Orthogonal Multiple Access (NOMA), Reconfigurable Intelligent Surfaces (RIS), Integrated Sensing and Communication (ISAC), backscatter communication, and semantic communication. To validate these theoretical concepts, several early-stage FAS hardware prototypes have been developed, including liquid–metal fluid antennas, mechanically movable antennas, pixel-reconfigurable antennas, and meta-fluid antennas. Initial measurements have demonstrated the potential advantages of FAS. This article provides a brief review of these early FAS hardware technologies. Furthermore, we share our vision for future hardware development and the corresponding challenges, aiming to fully realize the potential of FAS and stimulate further research and development within the antenna research community. Full article
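
The diversity gain from selecting the best radiation port can be illustrated with a small Monte Carlo sketch (NumPy; the independent-fading assumption and parameter values are simplifications for illustration, since real FAS analysis accounts for spatial correlation between ports):

```python
# Minimal sketch: outage probability with best-port selection over N ports
# under independent Rayleigh fading.
import numpy as np

rng = np.random.default_rng(0)
n_ports, n_trials = 8, 100_000
snr_db, rate = 10.0, 2.0            # average SNR and target rate (bits/s/Hz)
snr = 10 ** (snr_db / 10)

# Squared channel magnitude per port is exponential under Rayleigh fading.
gains = rng.exponential(scale=1.0, size=(n_trials, n_ports))
best = gains.max(axis=1)            # dynamic selection of the strongest port

outage = np.mean(np.log2(1 + snr * best) < rate)
print(f"Estimated outage probability with best-port selection: {outage:.4f}")
```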

27 pages, 1603 KiB  
Review
Remote Vital Sensing in Clinical Veterinary Medicine: A Comprehensive Review of Recent Advances, Accomplishments, Challenges, and Future Perspectives
by Xinyue Zhao, Ryou Tanaka, Ahmed S. Mandour, Kazumi Shimada and Lina Hamabe
Animals 2025, 15(7), 1033; https://doi.org/10.3390/ani15071033 - 3 Apr 2025
Cited by 2 | Viewed by 2059
Abstract
Remote vital sensing in veterinary medicine is a relatively new area of practice, which involves the acquisition of data without invasion of the body cavities of live animals. This paper reviews several remote vital sensing technologies: infrared thermography, remote photoplethysmography (rPPG), radar, wearable sensors, and computer vision with machine learning. For each of these technologies, we outline its concepts, uses, strengths, and limitations in multiple animal species, and its potential to reshape health surveillance, welfare evaluation, and clinical medicine in animals. The review also discusses the problems associated with applying these technologies, including species differences, external conditions, and questions of reliability and classification. Additional topics covered include future developments such as the use of artificial intelligence, the combination of different sensing methods, and the creation of monitoring solutions tailored to specific animal species. This contribution gives a clear understanding of the status and future possibilities of remote vital sensing in veterinary applications and stresses the importance of these technologies for the development of the veterinary field in terms of animal health and science. Full article
(This article belongs to the Special Issue Advances in Veterinary Surgical, Anesthetic, and Patient Monitoring)
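
To illustrate the rPPG idea mentioned above, here is a minimal sketch of recovering a heart-rate estimate from a per-frame green-channel signal (assuming NumPy and SciPy; the ROI averaging, filter order, and physiological band are simplifying assumptions):

```python
# Minimal sketch of a basic rPPG pipeline: band-pass a per-frame green-channel
# mean and read off the dominant frequency as the heart rate.
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_from_green(green_means, fps, low_hz=0.7, high_hz=4.0):
    """green_means: 1-D array of per-frame mean green intensity over the skin ROI.
    high_hz must stay below the Nyquist frequency (fps / 2)."""
    detrended = green_means - np.mean(green_means)
    b, a = butter(3, [low_hz, high_hz], btype="band", fs=fps)
    filtered = filtfilt(b, a, detrended)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0   # beats per minute
```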
