Search Results (684)

Search Parameters:
Keywords = human visual perception

36 pages, 2732 KB  
Review
Processing of Visual Mirror Symmetry by Human Observers; Mechanisms and Models
by Cayla A. Bellagarda, J. Edwin Dickinson, Jason Bell, Paul V. McGraw and David R. Badcock
Symmetry 2026, 18(2), 247; https://doi.org/10.3390/sym18020247 - 30 Jan 2026
Viewed by 48
Abstract
Mirror symmetry is an important and common feature of the visual world, which has attracted the interest of scientists, artists, and philosophers for centuries. The human visual system is very sensitive to mirror symmetry; symmetry is detected quickly and accurately and influences perception even when not relevant to the task at hand. Neuroimaging studies have identified mirror symmetry-specific haemodynamic and electrophysiological responses in extra-striate regions of the visual cortex, and these findings closely align with behavioural psychophysical findings when only considering the magnitude and sensitivity of the response. However, as we discuss below, the location of these responses is at odds with the locations predicted by psychophysical models based on early visual filters. In attempts to capture and explain mirror symmetry perception, various models have been developed and refined as our understanding of the factors influencing mirror symmetry perception has grown. The current review provides a contemporary overview of the psychophysical and neuroimaging understanding of mirror symmetry perception in human vision. We then consider how new findings align with predominant spatial filtering models of mirror symmetry perception to identify key factors that need to be accounted for in current and future iterations.
(This article belongs to the Section Life Sciences)
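To make the filter-based modelling approach concrete, here is a minimal, hedged sketch (a generic difference-of-Gaussians correlation score, not any specific model discussed in the review): band-pass the image, flip it about the vertical axis, and take the normalized correlation with the original. Symmetric patterns score near 1, unstructured noise near 0; all parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def symmetry_score(image, axis=1, sigma_lo=1.0, sigma_hi=3.0):
    """Normalized correlation between a band-pass filtered image and its
    mirror reflection -- a crude stand-in for filter-based symmetry models."""
    # Difference of Gaussians approximates an early-vision band-pass filter.
    band = gaussian_filter(image, sigma_lo) - gaussian_filter(image, sigma_hi)
    band = band - band.mean()
    mirrored = np.flip(band, axis=axis)
    denom = np.sqrt((band ** 2).sum() * (mirrored ** 2).sum())
    return float((band * mirrored).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
symmetric = (noise + np.flip(noise, axis=1)) / 2  # mirror-symmetric pattern
print(symmetry_score(symmetric), symmetry_score(noise))
```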

20 pages, 1389 KB  
Article
Visual Evaluation Strategies in Art Image Viewing: An Eye-Tracking Comparison of Art-Educated and Non-Art Participants
by Adem Korkmaz, Sevinc Gülsecen and Grigor Mihaylov
J. Eye Mov. Res. 2026, 19(1), 14; https://doi.org/10.3390/jemr19010014 - 30 Jan 2026
Viewed by 49
Abstract
Understanding how tacit knowledge embedded in visual materials is accessed and utilized during evaluation tasks remains a key challenge in human–computer interaction and visual expertise research. Although eye-tracking studies have identified systematic differences between experts and novices, findings remain inconsistent, particularly in art-related visual evaluation contexts. This study examines whether tacit aspects of visual evaluation can be inferred from gaze behavior by comparing individuals with and without formal art education. Visual evaluation was assessed using a structured, prompt-based task in which participants inspected artistic images and responded to items targeting specific visual elements. Eye movements were recorded using a screen-based eye-tracking system. Areas of Interest (AOIs) corresponding to correct-answer regions were defined a priori based on expert judgment and item prompts. Both AOI-level metrics (e.g., fixation count, mean, and total visit and gaze durations) and image-level metrics (e.g., fixation count, saccade count, and pupil size) were analyzed using appropriate parametric and non-parametric statistical tests. The results showed that participants with an art-education background produced more fixations within AOIs, exhibited longer mean and total AOI visit and gaze durations, and demonstrated lower saccade counts than participants without art education. These patterns indicate more systematic and goal-directed gaze behavior during visual evaluation, suggesting that formal art education may shape tacit visual evaluation strategies. The findings also highlight the potential of eye tracking as a methodological tool for studying expertise-related differences in visual evaluation processes.
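As an illustration of the AOI-level metrics the study analyzes, the sketch below computes fixation count and mean/total gaze duration inside a rectangular AOI from a toy fixation table. The column names, coordinates, and AOI bounds are invented for the example, not the study's actual export format.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, screen coordinates in
# pixels and duration in milliseconds (illustrative values only).
fixations = pd.DataFrame({
    "x":   [120, 430, 455, 880, 440],
    "y":   [200, 310, 305, 600, 300],
    "dur": [180, 240, 310, 150, 220],
})

# A priori AOI defined as a rectangle (x_min, y_min, x_max, y_max).
aoi = (400, 280, 500, 340)
inside = (fixations["x"].between(aoi[0], aoi[2])
          & fixations["y"].between(aoi[1], aoi[3]))

aoi_fixation_count = int(inside.sum())                       # fixations in AOI
aoi_total_duration = int(fixations.loc[inside, "dur"].sum()) # total gaze (ms)
aoi_mean_duration = float(fixations.loc[inside, "dur"].mean())
print(aoi_fixation_count, aoi_total_duration, aoi_mean_duration)
```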

25 pages, 4008 KB  
Article
SLD-YOLO11: A Topology-Reconstructed Lightweight Detector for Fine-Grained Maize–Weed Discrimination in Complex Field Environments
by Meichen Liu and Jing Gao
Agronomy 2026, 16(3), 328; https://doi.org/10.3390/agronomy16030328 - 28 Jan 2026
Viewed by 129
Abstract
Precise identification of weeds at the maize seedling stage is pivotal for implementing Site-Specific Weed Management and minimizing herbicide environmental pollution. However, the performance of existing lightweight detectors is severely bottlenecked by unstructured field environments, characterized by the “green-on-green” spectral similarity between crops and weeds, diminutive seedling targets, and complex mutual occlusion of leaves. To address these challenges, this study proposes SLD-YOLO11, a topology-reconstructed lightweight detection model tailored for complex field environments. First, to mitigate the feature loss of tiny targets, a Lossless Downsampling Topology based on Space-to-Depth Convolution (SPD-Conv) is constructed, transforming spatial information into depth channels to preserve fine-grained features. Second, a Decomposed Large Kernel Attention (D-LKA) mechanism is designed to mimic the wide receptive field of human vision. By modeling long-range spatial dependencies with decomposed large-kernel attention, it enhances discrimination under severe occlusion by leveraging global structural context. Third, the DySample operator is introduced to replace static interpolation, enabling content-aware feature flow reconstruction. Experimental results demonstrate that SLD-YOLO11 achieves an mAP@0.5 of 97.4% on a self-collected maize field dataset, significantly outperforming YOLOv8n, YOLOv10n, YOLOv11n, and mainstream lightweight variants. Notably, the model achieves Zero Inter-class Misclassification between maize and weeds, establishing high safety standards for weeding operations. To further bridge the gap between visual perception and precision operations, a Visual Weed-Crop Competition Index (VWCI) is innovatively proposed. By integrating detection bounding boxes with species-specific morphological correction coefficients, the VWCI quantifies field weed pressure with low cost and high throughput. Regression analysis reveals a high consistency (R² = 0.70) between the automated VWCI and manual ground-truth coverage. This study not only provides a robust detector but also offers a reliable decision-making basis for real-time variable-rate spraying by intelligent weeding robots.
(This article belongs to the Section Farming Sustainability)
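The Space-to-Depth Convolution the abstract builds on can be sketched in a few lines of PyTorch. This is the generic SPD-Conv idea (a lossless 2×2 spatial-to-channel rearrangement followed by a stride-1 convolution), not the authors' exact topology; the channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-Depth convolution: replaces strided downsampling with a
    lossless rearrangement of 2x2 spatial blocks into channels, then a
    stride-1 convolution (a generic sketch of the SPD-Conv idea)."""

    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_ch * scale * scale, out_ch,
                              kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # pixel_unshuffle moves each scale x scale block into the channel
        # dimension, halving H and W without discarding any pixels.
        x = nn.functional.pixel_unshuffle(x, self.scale)
        return self.conv(x)

x = torch.randn(1, 32, 80, 80)
print(SPDConv(32, 64)(x).shape)  # torch.Size([1, 64, 40, 40])
```

`pixel_unshuffle` is PyTorch's standard primitive for the space-to-depth rearrangement, which is why no information is lost before the convolution.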

30 pages, 746 KB  
Article
From the Visible to the Invisible: On the Phenomenal Gradient of Appearance
by Baingio Pinna, Daniele Porcheddu and Jurģis Šķilters
Brain Sci. 2026, 16(1), 114; https://doi.org/10.3390/brainsci16010114 - 21 Jan 2026
Viewed by 170
Abstract
Background: By exploring the principles of Gestalt psychology, the neural mechanisms of perception, and computational models, scientists aim to unravel the complex processes that enable us to perceive a coherent and organized world. This multidisciplinary approach continues to advance our understanding of how the brain constructs a perceptual world from sensory inputs. Objectives and Methods: This study investigates the nature of visual perception through an experimental paradigm and method based on a comparative analysis of human and artificial intelligence (AI) responses to a series of modified square images. We introduce the concept of a “phenomenal gradient” in human visual perception, where different attributes of an object are organized syntactically and hierarchically in terms of their perceptual salience. Results: Our findings reveal that human visual processing involves complex mechanisms including shape prioritization, causal inference, amodal completion, and the perception of visible invisibles. In contrast, AI responses, while geometrically precise, lack these sophisticated interpretative capabilities. These differences highlight the richness of human visual cognition and the current limitations of model-generated descriptions in capturing causal, completion-based, and context-dependent inferences. The present work introduces the notion of a ‘phenomenal gradient’ as a descriptive framework and provides an initial comparative analysis that motivates testable hypotheses for future behavioral and computational studies, rather than direct claims about improving AI systems. Conclusions: By bridging phenomenology, information theory, and cognitive science, this research challenges existing paradigms and suggests a more integrated approach to studying visual consciousness.

32 pages, 483 KB  
Review
The Complexity of Communication in Mammals: From Social and Emotional Mechanisms to Human Influence and Multimodal Applications
by Krzysztof Górski, Stanisław Kondracki and Katarzyna Kępka-Borkowska
Animals 2026, 16(2), 265; https://doi.org/10.3390/ani16020265 - 15 Jan 2026
Viewed by 380
Abstract
Communication in mammals constitutes a complex, multimodal system that integrates visual, acoustic, tactile, and chemical signals whose functions extend beyond simple information transfer to include the regulation of social relationships, coordination of behaviour, and expression of emotional states. This article examines the fundamental mechanisms of communication from biological, neuroethological, and behavioural perspectives, with particular emphasis on domesticated and farmed species. Analysis of sensory signals demonstrates that their perception and interpretation are closely linked to the physiology of sensory organs as well as to social experience and environmental context. In companion animals such as dogs and cats, domestication has significantly modified communicative repertoires ranging from the development of specialised facial musculature in dogs to adaptive diversification of vocalisations in cats. The neurobiological foundations of communication, including the activity of the amygdala, limbic structures, and mirror-neuron systems, provide evidence for homologous mechanisms of emotion recognition across species. The article also highlights the role of communication in shaping social structures and the influence of husbandry conditions on the behaviour of farm animals. In intensive production environments, acoustic, visual, and chemical signals are often shaped or distorted by crowding, noise, and chronic stress, with direct consequences for welfare. Furthermore, the growing importance of multimodal technologies such as Precision Livestock Farming (PLF) and Animal–Computer Interaction (ACI) is discussed, particularly their role in enabling objective monitoring of emotional states and behaviour and supporting individualised care. Overall, the analysis underscores that communication forms the foundation of social functioning in mammals, and that understanding this complexity is essential for ethology, animal welfare, training practices, and the design of modern technologies facilitating human–animal interaction.
(This article belongs to the Section Human-Animal Interactions, Animal Behaviour and Emotion)

24 pages, 28157 KB
Article
YOLO-ERCD: An Upgraded YOLO Framework for Efficient Road Crack Detection
by Xiao Li, Ying Chu, Thorsten Chan, Wai Lun Lo and Hong Fu
Sensors 2026, 26(2), 564; https://doi.org/10.3390/s26020564 - 14 Jan 2026
Viewed by 266
Abstract
Efficient and reliable road damage detection is a critical component of intelligent transportation and infrastructure control systems that rely on visual sensing technologies. Existing road damage detection models face challenges such as missed detection of fine cracks, poor adaptability to lighting changes, and false positives under complex backgrounds. In this study, we propose an enhanced YOLO-based framework, YOLO-ERCD, designed to improve the accuracy and robustness of road crack detection from sensor-acquired image data. The datasets used in this work were collected from vehicle-mounted and traffic surveillance camera sensors, representing typical visual sensing systems in automated road inspection. The proposed architecture integrates three key components: (1) a residual convolutional block attention module, which preserves original feature information through residual connections while strengthening spatial and channel feature representation; (2) a channel-wise adaptive gamma correction module that models the nonlinear response of the human visual system to light intensity, adaptively enhancing brightness details for improved robustness under diverse lighting conditions; (3) a visual focus noise modulation module that reduces background interference by selectively introducing noise, emphasizing damage-specific features. These three modules are specifically designed to address the limitations of YOLOv10 in feature representation, lighting adaptation, and background interference suppression, working synergistically to enhance the model’s detection accuracy and robustness, and closely aligning with the practical needs of road monitoring applications. Experimental results on both proprietary and public datasets demonstrate that YOLO-ERCD outperforms recent road damage detection models in accuracy and computational efficiency. The lightweight design also supports real-time deployment on edge sensing and control devices. These findings highlight the potential of integrating AI-based visual sensing and intelligent control, contributing to the development of robust, efficient, and perception-aware road monitoring systems.
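The adaptive gamma idea can be illustrated with a classic non-learned variant: choose a per-channel gamma that maps the channel's mean intensity toward mid-gray, brightening underexposed frames and compressing overexposed ones. This is a generic adaptive gamma correction sketch, not the paper's module.

```python
import numpy as np

def adaptive_gamma(channel: np.ndarray) -> np.ndarray:
    """Per-channel adaptive gamma correction: pick gamma so that the
    channel's mean intensity maps to mid-gray (mean ** gamma == 0.5).
    A generic illustration, not the paper's learned module."""
    norm = channel.astype(np.float64) / 255.0
    mean = np.clip(norm.mean(), 1e-3, 1 - 1e-3)
    gamma = np.log(0.5) / np.log(mean)
    return (norm ** gamma * 255.0).astype(np.uint8)

dark = np.full((4, 4), 40, dtype=np.uint8)  # underexposed patch
print(adaptive_gamma(dark).mean())          # pulled toward ~128
```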

34 pages, 9272 KB  
Article
An Integrated Framework for Architectural Visual Assessment: Validation of Visual Equilibrium Using Fractal Analysis and Subjective Perception
by Mohammed A. Aloshan and Ehab Momin Mohammed Sanad
Buildings 2026, 16(2), 345; https://doi.org/10.3390/buildings16020345 - 14 Jan 2026
Viewed by 300
Abstract
In recent decades, multiple approaches have emerged to assess architectural visual character, including fractal dimension analysis, visual equilibrium calculations, and visual preference surveys. However, the relationships among these methods and their alignment with subjective perception remain unclear. This study applies all three techniques to sample mosques in Riyadh, Saudi Arabia, to evaluate their validity and interconnections. Findings reveal a within-sample tendency toward low visual complexity, with fractal dimensions ranging from 1.2 to 1.547. Within this small, exploratory sample of five large main-road mosques in Riyadh, correlations between computed visual equilibrium and survey results provide preliminary, sample-specific convergent-validity evidence for Larrosa’s visual-forces method, rather than general validation. Within this sample, traditional façades with separate minarets tended to score as more visually balanced than more contemporary compositions. This triangulated approach offers an exploratory framework for architectural visual assessment that integrates objective metrics with human perception.
(This article belongs to the Special Issue Advanced Studies in Urban and Regional Planning—2nd Edition)
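Fractal dimension of a façade is typically estimated by box counting: count the boxes a pattern occupies at several scales and fit the slope of log(count) against log(1/size). The sketch below is the textbook estimator, assuming a pre-extracted binary edge image, not the specific pipeline used in the study.

```python
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    """Box-counting fractal dimension of a binary image: occupied-box
    counts at halving scales, slope of log(count) vs log(1/size)."""
    sizes, counts = [], []
    s = min(binary.shape) // 2
    while s >= 1:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        if occupied:
            sizes.append(s)
            counts.append(occupied)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# A filled square region is (near) 2-D; a thin line would be near 1-D.
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print(round(box_counting_dimension(img), 2))
```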

31 pages, 1884 KB  
Article
Achieving Robotic Data Efficiency Through Machine-Centric FDCT Vision Processing
by Yair Wiseman
Sensors 2026, 26(2), 518; https://doi.org/10.3390/s26020518 - 13 Jan 2026
Viewed by 177
Abstract
To enhance a robot’s capacity to perceive and interpret its environment, an advanced vision system tailored specifically for machine perception was developed, moving away from human-oriented visual processing. This system improves robotic functionality by incorporating algorithms optimized for how computerized devices process visual information. Central to this paper’s approach is an improved Fast Discrete Cosine Transform (FDCT) algorithm, customized for robotic systems, which enhances object and obstacle detection in machine vision. By prioritizing higher frequencies and eliminating less critical lower frequencies, the algorithm sharpens focus on essential details. Instead of adapting the data stream for human vision, the FDCT and quantization tables were adjusted to suit machine vision requirements, achieving a file size reduction to about one-third of the original while preserving highly relevant data for robotic processing. This approach significantly improves robots’ ability to navigate complex environments and to perform tasks such as object recognition, motion detection, and obstacle avoidance with greater accuracy and efficiency.
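The frequency-selection idea (keeping high-frequency DCT content for machines rather than the low-frequency content humans favour) can be illustrated with a toy 8×8 block transform. The band threshold here is arbitrary, and this is not the paper's tuned FDCT or quantization tables.

```python
import numpy as np
from scipy.fft import dctn, idctn

def machine_vision_dct(block: np.ndarray, cut_band: int = 3) -> np.ndarray:
    """Toy inversion of JPEG-style frequency selection: keep the DC term
    plus high-frequency DCT coefficients (edges, texture) and zero out
    the low-frequency band that human-oriented codecs prioritize."""
    coeffs = dctn(block, norm="ortho")
    u, v = np.indices(coeffs.shape)
    low = (u + v > 0) & (u + v <= cut_band)  # low-frequency band (minus DC)
    coeffs[low] = 0.0
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(1)
block = rng.random((8, 8))
out = machine_vision_dct(block)
print(np.abs(block - out).max())  # reconstruction error from the cut band
```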

14 pages, 617 KB  
Article
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Viewed by 241
Abstract
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from μ=3.9 (Challenge phase) to μ=4.6 (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education.

21 pages, 83627 KB  
Article
Research on Urban Perception of Zhengzhou City Based on Interpretable Machine Learning
by Mengjing Zhang, Chen Pan, Xiaohua Huang, Lujia Zhang and Mengshun Lee
Buildings 2026, 16(2), 314; https://doi.org/10.3390/buildings16020314 - 11 Jan 2026
Viewed by 197
Abstract
Urban perception research has long focused on global metropolises but has overlooked many cities with complex functions and spatial structures, resulting in insufficient universality of existing theories when facing diverse urban contexts. This study constructed an analytical framework that integrates street scene images and interpretable machine learning. Taking Zhengzhou City as the research object, it extracted street visual elements based on deep learning technology and systematically analyzed the formation mechanism of multi-dimensional urban perception by combining the LightGBM model and SHAP method. The main findings of the research are as follows: (1) The urban perception of Zhengzhou City shows a significant east–west difference with Zhongzhou Avenue as the boundary. Positive perceptions such as safety and vitality are concentrated in the central business district and historical districts, while negative perceptions are more common in the urban fringe areas with chaotic built environments and single functions. (2) The visibility of greenery, the openness of the sky, and the continuity of the building interface are identified as key visual elements affecting perception, and their directions and intensities of action differ significantly across perception dimensions. (3) The influence of visual elements on perception has a complex mechanism of action. For instance, the promoting effect of greenery visibility on beauty perception tends to level off after reaching a certain threshold. The results of this study can provide a quantitative basis and strategic reference for improving urban space quality and for humanized street design.
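A minimal sketch of the LightGBM + SHAP pipeline the abstract describes: fit a gradient-boosted model on a toy stand-in for the street-view element table, then attribute predictions to elements with a tree explainer. The feature names and data are invented for illustration.

```python
import numpy as np
import lightgbm as lgb
import shap

# Toy stand-in for the street-view feature table: per-image shares of
# visual elements (names are illustrative) and a perception-score target.
rng = np.random.default_rng(0)
X = rng.random((500, 3))  # greenery, sky openness, building continuity
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y, feature_name=["greenery", "sky_openness", "building_continuity"])

# TreeExplainer attributes each prediction to the visual elements.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per element
```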

24 pages, 7205 KB  
Article
Low-Cost Optical–Inertial Point Cloud Acquisition and Sketch System
by Tung-Chen Chao, Hsi-Fu Shih, Chuen-Lin Tien and Han-Yen Tu
Sensors 2026, 26(2), 476; https://doi.org/10.3390/s26020476 - 11 Jan 2026
Viewed by 292
Abstract
This paper proposes an optical three-dimensional (3D) point cloud acquisition and sketching system, which is not limited by the measurement size, unlike traditional 3D object measurement techniques. The system employs an optical displacement sensor for surface displacement scanning and a six-axis inertial sensor (accelerometer and gyroscope) for spatial attitude perception. A microcontroller unit (MCU) is responsible for acquiring, merging, and calculating data from the sensors, converting it into 3D point clouds. Butterworth filtering and Mahony complementary filtering are used for sensor signal preprocessing and calculation, respectively. Furthermore, a human–machine interface is designed to visualize the point cloud and display the scanning path and measurement trajectory in real time. Compared to existing works in the literature, this system has a simpler hardware architecture, more efficient algorithms, and better operation, inspection, and observation features. The experimental results show that the maximum measurement error on 2D planes is 4.7% with a root mean square (RMS) error of 2.1%, corresponding to a reference length of 10.3 cm. For 3D objects, the maximum measurement error is 5.3% with an RMS error of 2.4%, corresponding to a reference length of 9.3 cm. Finally, it was verified that this system can also be applied to outline large-sized 3D objects.
(This article belongs to the Special Issue Imaging and Sensing in Fiber Optics and Photonics: 2nd Edition)
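The Butterworth preprocessing step can be sketched with SciPy; the cutoff, sample rate, and signal below are illustrative, and the subsequent Mahony attitude fusion of the filtered accelerometer and gyroscope streams is not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal: np.ndarray, cutoff_hz: float, fs_hz: float, order: int = 4):
    """Zero-phase Butterworth low-pass, the kind of preprocessing applied
    to IMU streams before attitude estimation (parameters illustrative)."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, signal)

fs = 200.0  # assumed IMU sample rate (Hz)
t = np.arange(0, 2, 1 / fs)
accel = (np.sin(2 * np.pi * 1.5 * t)
         + 0.3 * np.random.default_rng(0).standard_normal(t.size))
smooth = lowpass(accel, cutoff_hz=5.0, fs_hz=fs)
print(smooth.shape)
```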

14 pages, 1061 KB  
Article
Influence of Multi-Cue Interaction on Human Depth Perception in Three-Dimensional Space
by Qiang Liu, Shuai Li, Qiang Yang, Caihong Dai, Shufang He and Hiroaki Shigemasu
Sensors 2026, 26(2), 413; https://doi.org/10.3390/s26020413 - 8 Jan 2026
Viewed by 225
Abstract
Background: With the widespread application of three-dimensional (3D) display technology, enhancing the realism of users’ experience in virtual 3D space has become important. A deep understanding of the mechanisms of human depth perception is therefore crucial. Objective: This study aims to investigate the influence of motion parallax, color, and object position cues on depth perception in 3D space. Method: Random-dot stereograms based on binocular disparity cues were constructed; three experiments were designed, varying the stimulus movement speed, color, and position; two-alternative forced-choice (2AFC) psychophysical paradigms were employed to collect participants’ responses regarding depth perception; and statistical analyses were conducted to examine the influences of these three cues on depth perception specified by binocular disparity. Results: A relatively small amount of motion parallax exerted a certain inhibitory effect on depth perception, whereas a larger amount might enhance perceived depth. Introducing red, green, or blue color to the moving stimuli might also have a certain promoting effect. Furthermore, a significant difference in perceived depth was observed when the positions of the Test Stimulus and the Standard Stimulus differed within a trial, which might involve areas of higher-level brain function (such as visual attention). In conclusion, when multiple visual cues are present concurrently, they exhibit complex interactions that affect human depth perception.
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)
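A minimal construction of the disparity-defined random-dot stimuli the abstract describes: generate identical dot fields for the two eyes, shift a central patch horizontally in one eye's image, and refill the uncovered strip so no monocular cue marks the edge. Sizes, density, and disparity are arbitrary.

```python
import numpy as np

def random_dot_stereogram(size=128, patch=48, disparity_px=4,
                          density=0.5, seed=0):
    """Left/right random-dot images whose central patch is shifted by
    `disparity_px` in the right image, so it appears at a different depth
    under binocular fusion (a minimal sketch, parameters arbitrary)."""
    rng = np.random.default_rng(seed)
    left = (rng.random((size, size)) < density).astype(np.uint8)
    right = left.copy()
    lo = (size - patch) // 2
    hi = lo + patch
    # Shift the central square; refill the uncovered strip with fresh dots.
    right[lo:hi, lo + disparity_px:hi + disparity_px] = left[lo:hi, lo:hi]
    right[lo:hi, lo:lo + disparity_px] = (
        rng.random((patch, disparity_px)) < density)
    return left, right

L, R = random_dot_stereogram()
print(L.shape, R.shape, (L != R).sum())
```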

27 pages, 4932 KB  
Article
Automated Facial Pain Assessment Using Dual-Attention CNN with Clinically Calibrated High-Reliability and Reproducibility Framework
by Albert Patrick Sankoh, Ali Raza, Khadija Parwez, Wesam Shishah, Ayman Alharbi, Mubeen Javed and Muhammad Bilal
Biomimetics 2026, 11(1), 51; https://doi.org/10.3390/biomimetics11010051 - 8 Jan 2026
Viewed by 402
Abstract
Accurate and quantitative pain assessment remains a major challenge in clinical medicine, especially for patients unable to verbalize discomfort. Conventional methods based on self-reports or clinician observation are subjective and inconsistent. This study introduces a novel automated facial pain assessment framework built on a dual-attention convolutional neural network (CNN) that achieves clinically calibrated, high-reliability performance and interpretability. The architecture combines multi-head spatial attention to localize pain-relevant facial regions with an enhanced channel attention block employing triple-pooling (average, max, and standard deviation) to capture discriminative intensity features. Regularization through label smoothing (α = 0.1) and AdamW optimization ensures calibrated, stable convergence. Evaluated on a clinically annotated dataset using subject-wise stratified sampling, the proposed model achieved a test accuracy of 90.19% ± 0.94%, with an average 5-fold cross-validation accuracy of 83.60% ± 1.55%. The model further attained an F1-score of 0.90 and Cohen’s κ = 0.876, with macro- and micro-AUCs of 0.991 and 0.992, respectively. The evaluation covers five pain classes (No Pain, Mid Pain, Moderate Pain, Severe Pain, and Very Pain) using subject-wise splits comprising 5840 total images and 1160 test samples. Comparative benchmarking and ablation experiments confirm each module’s contribution, while Grad-CAM visualizations highlight physiologically relevant facial regions. The results demonstrate a robust, explainable, and reproducible framework suitable for integration into real-world automated pain-monitoring systems. Inspired by biological pain perception mechanisms and human facial muscle responses, the proposed framework aligns with biomimetic sensing principles by emulating how localized facial cues contribute to pain interpretation.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Biomedical Engineering: 2nd Edition)
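The triple-pooling channel attention can be sketched in PyTorch: pool each channel map by mean, max, and standard deviation, then pass the concatenated statistics through a small MLP to produce channel weights. The reduction ratio and layer shapes are assumptions, not the paper's exact block.

```python
import torch
import torch.nn as nn

class TriplePoolChannelAttention(nn.Module):
    """Channel attention driven by average, max, and standard-deviation
    pooling, sketching the 'triple-pooling' idea in the abstract
    (reduction ratio and layer sizes are assumptions)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels * 3, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                   # x: (B, C, H, W)
        flat = x.flatten(2)                 # (B, C, H*W)
        stats = torch.cat([flat.mean(-1), flat.amax(-1), flat.std(-1)], dim=1)
        weights = torch.sigmoid(self.mlp(stats)).unsqueeze(-1).unsqueeze(-1)
        return x * weights                  # reweight channels

x = torch.randn(2, 32, 56, 56)
print(TriplePoolChannelAttention(32)(x).shape)  # torch.Size([2, 32, 56, 56])
```

The label-smoothing regularization mentioned (α = 0.1) corresponds, in PyTorch, to `nn.CrossEntropyLoss(label_smoothing=0.1)`.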

18 pages, 3240 KB  
Article
A Waist-Mounted Interface for Mobile Viewpoint-Height Transformation Affecting Spatial Perception
by Jun Aoki, Hideki Kadone and Kenji Suzuki
Sensors 2026, 26(2), 372; https://doi.org/10.3390/s26020372 - 6 Jan 2026
Viewed by 317
Abstract
Visual information shapes spatial perception and body representation in human augmentation. However, the perceptual consequences of viewpoint-height changes produced by sensor–display geometry are not well understood. To address this gap, we developed an interface that maps a waist-mounted stereo fisheye camera to an eye-level viewpoint on a head-mounted display in real time. Geometric and timing calibration kept latency low enough to preserve a sense of agency and enable stable untethered walking. In a within-subject study comparing head- and waist-level viewpoints, participants approached adjustable gaps, rated passability confidence (1–7), and attempted passage when confident. We also recorded walking speed and assessed post-task body representation using a questionnaire. High gaps were judged passable and low gaps were not, irrespective of viewpoint. At the middle gap, confidence decreased with a head-level viewpoint and increased with a waist-level viewpoint, and walking speed decreased when a waist-level viewpoint was combined with a chest-height gap, consistent with added caution near the decision boundary. Body image reports most often indicated a lowered head position relative to the torso, consistent with visually driven rescaling rather than morphological change. These findings show that a waist-mounted interface for mobile viewpoint-height transformation can reliably shift spatial perception.
(This article belongs to the Special Issue Sensors and Wearables for AR/VR Applications)

21 pages, 7371 KB  
Article
Enhancing Risk Perception and Information Communication: An Evidence-Based Design of Flood Hazard Map Interfaces
by Jia-Xin Guo, Szu-Chi Chen and Meng-Cong Zheng
Smart Cities 2026, 9(1), 8; https://doi.org/10.3390/smartcities9010008 - 2 Jan 2026
Viewed by 477
Abstract
Floods are among the most destructive natural disasters, posing major challenges to human safety, property, and urban resilience. Effective communication of flood risk is therefore crucial for disaster preparedness and the sustainable management of smart cities. This study explores how interface design elements of flood hazard maps, including interaction modes and legend color schemes, influence users’ risk perception, decision support, and usability. An online questionnaire survey (N = 776) and a controlled 2 × 2 experiment (N = 40) were conducted to assess user comprehension, cognitive load, and behavioral responses when interacting with different visualization formats. Results show that slider-based interaction significantly reduces task completion and map-reading times compared with drop-down menus, enhancing usability and information efficiency. Multicolor legends, although requiring higher cognitive effort, improve users’ risk perception, engagement, and memory of flood-related information. These findings suggest that integrating cognitive principles into interactive design can enhance the effectiveness of digital disaster communication tools. By combining human–computer interaction, visual cognition, and smart governance, this study provides evidence-based design strategies for developing intelligent and user-centered flood hazard mapping systems. The proposed framework contributes to the advancement of smart urban resilience and supports the broader goal of building safer and more sustainable cities.
(This article belongs to the Section Smart Urban Energies and Integrated Systems)