Search Results (54)

Search Parameters:
Keywords = multisensory images

22 pages, 6378 KiB  
Article
Cross-Modal Insights into Urban Green Spaces Preferences
by Jiayi Yan, Fan Zhang and Bing Qiu
Buildings 2025, 15(14), 2563; https://doi.org/10.3390/buildings15142563 - 20 Jul 2025
Viewed by 244
Abstract
Urban green spaces (UGSs) and forests play a vital role in shaping sustainable and livable cities, offering not only ecological benefits but also spaces that are essential for human well-being, social interactions, and everyday life. Understanding the landscape features that resonate most with public preferences is essential for enhancing the appeal, accessibility, and functionality of these environments. However, traditional approaches—such as surveys or single-data analyses—often lack the nuance needed to capture the complex and multisensory nature of human responses to green spaces. This study explores a cross-modal methodology that integrates natural language processing (NLP) and deep learning techniques to analyze text and image data collected from public reviews of 19 urban parks in Nanjing. By capturing both subjective emotional expressions and objective visual impressions, this study reveals a consistent public preference for natural landscapes, particularly those featuring evergreen trees, shrubs, and floral elements. Text-based data reflect users’ lived experiences and nuanced perceptions, while image data offers insights into visual appeal and spatial composition. By bridging human-centered insights with data-driven analysis, this research provides a robust framework for evaluating landscape preferences. It also underscores the importance of designing green spaces that are not only ecologically sound but also emotionally resonant and socially inclusive. The findings offer valuable guidance for the planning, design, and adaptive management of urban green infrastructure in ways that support healthier, more responsive, and smarter urban environments. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

22 pages, 2072 KiB  
Article
Does Identifying with Another Face Alter Body Image Disturbance in Women with an Eating Disorder? An Enfacement Illusion Study
by Jade Portingale, David Butler and Isabel Krug
Nutrients 2025, 17(11), 1861; https://doi.org/10.3390/nu17111861 - 29 May 2025
Viewed by 647
Abstract
Background/Objectives: Individuals with eating disorders (EDs) experience stronger body illusions than control participants, suggesting that abnormalities in multisensory integration may underlie distorted body perception in these conditions. These illusions can also temporarily reduce body image disturbance. Given the centrality of the face to identity and social functioning—and emerging evidence of face image disturbance in EDs—this study examined, for the first time, whether individuals with EDs exhibit heightened susceptibility to a facial illusion (the enfacement illusion) and whether experiencing this illusion improves face and/or body image. Methods: White Australian female participants (19 with an ED and 24 controls) completed synchronous and asynchronous facial mimicry tasks to induce the enfacement illusion. Susceptibility was assessed via self-report and an objective self-face recognition task, alongside pre- and post-task measures of perceived facial attractiveness, facial adiposity estimation, and head/body dissatisfaction. Results: The illusion was successfully induced across both groups. Contrary to predictions, ED and control participants demonstrated comparable susceptibility, and neither group experienced improvements in face or body image. Notably, participants with EDs experienced increased head dissatisfaction following the illusion. Conclusions: These findings indicate that the multisensory integration processes underlying self-face perception, unlike those underlying body perception, may remain intact in EDs. Participant reflections suggested that the limited therapeutic benefit of the enfacement illusion for EDs may reflect the influence of maladaptive social-evaluative processing biases inadvertently triggered during the illusion. A novel dual-process model is proposed in which distorted self-face perception in EDs may arise from biased social-cognitive processing rather than sensory dysfunction alone. Full article
(This article belongs to the Special Issue Cognitive and Dietary Behaviour Interventions in Eating Disorders)

22 pages, 3059 KiB  
Review
Rapid Eye Movements in Sleep Furnish a Unique Probe into the Ontogenetic and Phylogenetic Development of the Visual Brain: Implications for Autism Research
by Charles Chong-Hwa Hong
Brain Sci. 2025, 15(6), 574; https://doi.org/10.3390/brainsci15060574 - 26 May 2025
Viewed by 877
Abstract
With positron emission tomography followed by functional magnetic resonance imaging (fMRI), we demonstrated that rapid eye movements (REMs) in sleep are saccades that scan dream imagery. The brain “sees” essentially the same way while awake and while dreaming in REM sleep. As expected, an event-related fMRI study (events = REMs) showed activation time-locked to REMs in sleep (“REM-locked” activation) in the oculomotor circuit that controls saccadic eye movements and visual attention. More crucially, the fMRI study provided a series of unexpected findings, including REM-locked multisensory integration. REMs in sleep index the processing of endogenous visual information and the hierarchical generation of dream imagery through multisensory integration. The neural processes concurrent with REMs overlap extensively with those reported to be atypical in autism spectrum disorder (ASD). Studies on ASD have shown atypical visual processing and multisensory integration, emerging early in infancy and subsequently developing into autistic symptoms. MRI studies of infants at high risk for ASD are typically conducted during natural sleep. Simply timing REMs may improve the accuracy of early detection and identify markers for stratification in heterogeneous ASD patients. REMs serve as a task-free probe useful for studying both infants and animals, who cannot comply with conventional visual activation tasks. Note that REM-probe studies would be easier to implement in early infancy because REM sleep, which is markedly preponderant in the last trimester of pregnancy, is still pronounced in early infancy. The brain may practice seeing the world during REM sleep in utero before birth. The REM-probe controls the level of attention across both the lifespan and typical-atypical neurodevelopment. Longitudinal REM-probe studies may elucidate how the brain develops the ability to “see” and how this goes awry in autism. 
REMs in sleep may allow a straightforward comparison of animal and human data. REM-probe studies of animal models of autism have great potential. This narrative review puts forth every reason to believe that employing REMs as a probe into the development of the visual brain will have far-reaching implications. Full article
(This article belongs to the Special Issue Multimodal Imaging in Brain Development)

22 pages, 1195 KiB  
Article
Harmonizing Sight and Sound: The Impact of Auditory Emotional Arousal, Visual Variation, and Their Congruence on Consumer Engagement in Short Video Marketing
by Qiang Yang, Yudan Wang, Qin Wang, Yushi Jiang and Jingpeng Li
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 69; https://doi.org/10.3390/jtaer20020069 - 8 Apr 2025
Cited by 1 | Viewed by 3210
Abstract
Social media influencers strategically design the auditory and visual features of short videos to enhance consumer engagement. Among these, auditory emotional arousal and visual variation play crucial roles, yet their interactive effects remain underexplored. Drawing on multichannel integration theory, this study applies multimodal machine learning to analyze 12,842 short videos from Douyin, integrating text analysis, sound recognition, and image processing. The results reveal an inverted U-shaped relationship between auditory emotional arousal and consumer engagement, where moderate arousal maximizes interaction while excessively high or low arousal reduces engagement. Visual variation, however, exhibits a positive linear effect, with greater variation driving higher engagement. Notably, audiovisual congruence significantly enhances engagement, as high alignment between arousal and visual variation optimizes consumer information processing. These findings advance short video marketing research by uncovering the multisensory interplay in consumer engagement. They also provide practical guidance for influencers in optimizing voice and visual design strategies to enhance content effectiveness. Full article
(This article belongs to the Topic Interactive Marketing in the Digital Era)
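The inverted U-shaped relationship reported in this abstract is typically tested by adding a quadratic term to an engagement regression: a negative coefficient on the squared term indicates the inverted U, and the engagement-maximizing arousal level sits at the vertex. A minimal sketch on synthetic data (the variable names and data are illustrative, not the paper's 12,842-video Douyin corpus):

```python
import numpy as np

def fit_inverted_u(arousal, engagement):
    """Fit engagement = b0 + b1*arousal + b2*arousal^2 by least squares.

    An inverted U is indicated by b2 < 0; the engagement-maximizing
    arousal level is then -b1 / (2 * b2).
    """
    X = np.column_stack([np.ones_like(arousal), arousal, arousal ** 2])
    coef, *_ = np.linalg.lstsq(X, engagement, rcond=None)
    b0, b1, b2 = coef
    peak = -b1 / (2.0 * b2)
    return b2, peak

# Synthetic data constructed with a known engagement peak at arousal = 0.5
rng = np.random.default_rng(0)
arousal = rng.uniform(0.0, 1.0, 500)
engagement = 1.0 - (arousal - 0.5) ** 2 + rng.normal(0.0, 0.01, 500)

b2, peak = fit_inverted_u(arousal, engagement)  # b2 < 0, peak near 0.5
```

In practice the regression would also include controls (video length, follower count, and so on); the quadratic term is the part that captures "moderate arousal maximizes interaction".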

19 pages, 8304 KiB  
Article
Visualisation of Fossilised Tree Trunks for XR, Using Geospatial Digitisation Techniques Derived from UAS and Terrestrial Data, Aided by Computational Photography
by Charalampos Psarros, Nikolaos Zouros and Nikolaos Soulakellis
Electronics 2025, 14(6), 1146; https://doi.org/10.3390/electronics14061146 - 14 Mar 2025
Viewed by 655
Abstract
The aim of this research is to investigate and use a variety of immersive multisensory media techniques in order to create convincing digital models of fossilised tree trunks for use in XR (Extended Reality). This is made possible through the use of geospatial data derived from aerial imaging using UASs, terrestrial material captured using cameras and the incorporation of both visual and audio elements for better immersion, accessed and explored in 6 Degrees of Freedom (6DoF). Immersiveness is a key factor in producing output that is especially engaging to the user. Both conventional and alternative methods are explored and compared, emphasising the advantages made possible with the help of Machine Learning Computational Photography. The material is collected using both UAS and terrestrial camera devices, including a multi-sensor 3D-360° camera, using stitched panoramas as sources for photogrammetry processing. Difficulties such as capturing large free-standing objects using terrestrial means are overcome using practical solutions involving mounts and remote streaming solutions. The key research contributions are comparisons between different imaging techniques and photogrammetry processes, resulting in significantly higher fidelity outputs. Conclusions indicate that superior fidelity can be achieved with the help of Machine Learning Computational Photography processes, and that higher resolutions and technical specifications of equipment do not necessarily translate into superior outputs. Full article
(This article belongs to the Special Issue AI Synergy: Vision, Language, and Modality)

21 pages, 20898 KiB  
Article
Combining UAV and Sentinel Satellite Data to Delineate Ecotones at Multiscale
by Yuxin Ma, Zhangjian Xie, Xiaolin She, Hans J. De Boeck, Weihong Liu, Chaoying Yang, Ninglv Li, Bin Wang, Wenjun Liu and Zhiming Zhang
Forests 2025, 16(3), 422; https://doi.org/10.3390/f16030422 - 26 Feb 2025
Viewed by 731
Abstract
Ecotones, i.e., transition zones between habitats, are important landscape features, yet they are often ignored in landscape monitoring. This study addresses the challenge of delineating ecotones at multiple scales by integrating multisource remote sensing data, including ultra-high-resolution RGB images, LiDAR data from UAVs, and satellite data. We first developed a fine-resolution landcover map of three plots in Yunnan, China, with accurate delineation of ecotones using orthoimages and canopy height data derived from UAV-LiDAR. These maps were subsequently used as the training set for four machine learning models, from which the most effective model was selected as an upscaling model. The satellite data, encompassing Synthetic Aperture Radar (SAR; Sentinel-1), multispectral imagery (Sentinel-2), and topographic data, functioned as explanatory variables. The Random Forest model performed the best among the four models (kappa coefficient = 0.78), with the red band, shortwave infrared band, and vegetation red edge band as the most significant spectral variables. Using this RF model, we compared landscape patterns between 2017 and 2023 to test the model’s ability to quantify ecotone dynamics. We found that the area of ecotones expanded by 0.287 km² (1.1%) over this period. In sum, this study demonstrates the effectiveness of combining UAV and satellite data for precise, large-scale ecotone detection. This can enhance our understanding of the dynamic relationship between ecological processes and landscape pattern evolution. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
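The upscaling step described in this abstract (train candidate classifiers on the UAV-derived maps with satellite bands as explanatory variables, then score with a kappa coefficient) can be sketched with scikit-learn. Everything below is synthetic and illustrative; the real predictors would be Sentinel-1 SAR backscatter, Sentinel-2 spectral bands, and topography:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic predictor stack: rows are image objects, columns stand in for
# bands such as red, red edge, SWIR, SAR backscatter, elevation, slope.
n = 2000
X = rng.normal(size=(n, 6))

# Synthetic labels: 0/1 are two habitat classes split along a gradient,
# and 2 marks "ecotone" objects lying near the class boundary.
gradient = X[:, 0] + 0.5 * X[:, 1]
y = (gradient > 0).astype(int)
y[np.abs(gradient) < 0.4] = 2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
kappa = cohen_kappa_score(y_te, rf.predict(X_te))
# rf.feature_importances_ then ranks the predictors, analogous to the
# paper's finding that red, SWIR, and red-edge bands mattered most.
```

Kappa is preferred over plain accuracy here because ecotone objects are a minority class, and kappa corrects for agreement expected by chance.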

13 pages, 3960 KiB  
Article
Vestibular Testing Results in a World-Famous Tightrope Walker
by Alexander A. Tarnutzer, Fausto Romano, Nina Feddermann-Demont, Urs Scheifele, Marco Piccirelli, Giovanni Bertolini, Jürg Kesselring and Dominik Straumann
Clin. Transl. Neurosci. 2025, 9(1), 9; https://doi.org/10.3390/ctn9010009 - 17 Feb 2025
Viewed by 780
Abstract
Purpose: Accurate and precise navigation in space and postural stability rely on the central integration of multisensory input (vestibular, proprioceptive, visual), weighted according to its reliability, to continuously update the internal estimate of the direction of gravity. In this study, we examined both peripheral and central vestibular functions in a world-renowned 53-year-old male tightrope walker and investigated the extent to which his exceptional performance was reflected in our findings. Methods: Comprehensive assessments were conducted, including semicircular canal function tests (caloric irrigation, rotatory-chair testing, video head impulse testing of all six canals, dynamic visual acuity) and otolith function evaluations (subjective visual vertical, fundus photography, ocular/cervical vestibular-evoked myogenic potentials [oVEMPs/cVEMPs]). Additionally, static and dynamic posturography, as well as video-oculography (smooth-pursuit eye movements, saccades, nystagmus testing), were performed. The participant’s results were compared to established normative values. High-resolution diffusion tensor magnetic resonance imaging (DT-MRI) was utilized to assess motor tract integrity. Results: Semicircular canal testing revealed normal results except for a slightly reduced response to right-sided caloric irrigation (26% asymmetry ratio; cut-off = 25%). Otolith testing, however, showed marked asymmetry in oVEMP amplitudes, confirmed with two devices (37% and 53% weaker on the left side; cut-off = 30%). Bone-conducted cVEMP amplitudes were mildly reduced bilaterally. Posturography, video-oculography, and subjective visual vertical testing were all within normal ranges. Diffusion tensor MRI revealed no structural abnormalities correlating with the observed functional asymmetry. 
Conclusions: This professional tightrope walker’s exceptional balance skills contrast starkly with significant peripheral vestibular (otolithic) deficits, while MR imaging, including diffusion tensor imaging, remained normal. These findings highlight the critical role of central computational mechanisms in optimizing multisensory input signals and fully compensating for vestibular asymmetries in this unique case. Full article
(This article belongs to the Section Clinical Neurophysiology)
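The asymmetry ratios quoted in this abstract (26% caloric with a 25% cut-off, oVEMP amplitudes with a 30% cut-off) follow the standard form: side difference divided by side sum, times 100. A small sketch; the helper name and example amplitudes are illustrative, not the participant's data:

```python
def asymmetry_ratio(right, left):
    """Percent asymmetry between right- and left-sided response amplitudes.

    Defined as 100 * |right - left| / (right + left); values above a
    test-specific cut-off (e.g. 25% for caloric irrigation or 30% for
    oVEMPs, as in this study) are conventionally considered abnormal.
    """
    return 100.0 * abs(right - left) / (right + left)

# Example: a right oVEMP amplitude of 10.0 and a left of 4.6 (arbitrary
# units) yields roughly 37% asymmetry, exceeding a 30% cut-off.
ar = asymmetry_ratio(10.0, 4.6)
```

The same formula applies symmetrically, so it flags the weaker side regardless of direction; symmetric responses give a ratio of zero.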

16 pages, 1724 KiB  
Article
New Materialist Mapping the Lived Experiencing of Trauma in Perinatal and Infant Mental Health
by Emma van Daal and Ariel Moy
Soc. Sci. 2024, 13(12), 682; https://doi.org/10.3390/socsci13120682 - 16 Dec 2024
Viewed by 1838
Abstract
Contemporary therapeutic trauma practice privileges symptom-based models that overlook the potential of materiality and space in trauma healing. The responsibility for recovery is situated in the individual (i.e., the parent). We suggest that trauma and lived experiencing produce and are produced by the complex relational entanglings of parent, infant, and the dyad with the world. Employing a new materialist orientation to perinatal and infant mental health and trauma, we propose multimodal mapping as an approach that can move with the multisensorial, multidimensional rhythms of trauma and trauma healing as they unfold in a series of now moments; moments that emerge within the context of the parent–infant relationship. This article re-presents the conceptual material and multimodal maps that emerged from our presentation and experiential invitation at the Big Trauma, Big Change Forum, 2024. Organised into two interconnected parts, we begin by emphasising the capacity of multimodal mapping to enable a nuanced translation of lived experiencing for parents and infants, in research and practice, that can transform trauma and potentiate healing. The second part brings focus to a new mapping experiment whereby the audience engaged in a multimodal process of re-configuring the lived experiencing of parent–infant night-time spaces using collage, images, and group process. We include three illustrations of night-time spaces common to parents and infants, exploring the power of materiality, the arts, and objects in transforming the affective, sensory, and embodied affordances that shape mental health. Arts-based mapping interventions can profoundly shape how we understand and respond to trauma, moving us towards a “more-than” conceptualisation of lived experiencing that is sensed and animated in everyday and every “thing” moments. 
Our hope is to inspire the audience in adopting a relational orientation that innovates new processes of discovery by mapping the human and more-than-human elements involved in parent–infant well-being and the unravelling of trauma. Full article

13 pages, 4726 KiB  
Article
Enhancing Multisensory Virtual Reality Environments through Olfactory Stimuli for Autobiographical Memory Retrieval
by Vasilică-Gabriel Sasu, Dragoș Cîrneci, Nicolae Goga, Ramona Popa, Răzvan-Florin Neacșu, Maria Goga, Ioana Podina, Ioan Alexandru Bratosin, Cosmin-Andrei Bordea, Laurențiu Nicolae Pomana, Antonio Valentin Stan and Bianca Popescu
Appl. Sci. 2024, 14(19), 8826; https://doi.org/10.3390/app14198826 - 1 Oct 2024
Cited by 2 | Viewed by 3026
Abstract
This paper examines the use of multisensory virtual reality (VR) as a novel approach in psychological therapy for autobiographical memory retrieval with benefits for cognitive enhancement, stress reduction, etc. Previous studies demonstrated improved outcomes in treating various psychological conditions (affective disorders and PTSD). Technological advancements in VR, such as olfactory integration, can contribute to the realism and therapeutic potential of these environments. The integration of various physical stimuli with VR holds promising potential for psychological therapies and highlights the need for further interdisciplinary research. In this pilot study, we tested the efficacy of a new system for triggering autobiographical memory retrieval. For this, we used images combined with odors in a congruent manner and offered participants the chance to interact with the VR environment by using two virtual hands. We evaluated the efficacy of this system using qualitative methods, with emphasis on the evaluation of the emotions associated with memory recollection and the ease of triggering memories. All participants in our pilot study experienced intense emotions related to childhood or adolescence, and the pleasant feelings they had during the experiment persisted even after it ended. This is an advance over what exists currently and provides original research elements for our paper. Full article

6 pages, 1006 KiB  
Proceeding Paper
Visual and Environmental Stimuli Preferences in Pediatric Spaces
by Anggra Ayu Rucitra, Purwanita Setijanti, Asri Dinapradipta and Ruka Kosuge
Eng. Proc. 2024, 74(1), 49; https://doi.org/10.3390/engproc2024074049 - 4 Sep 2024
Viewed by 708
Abstract
Interior design is considered and practiced as a visual discipline in architecture. The environment and buildings are appreciated through visual representations. We explored how sensory interactions shape a genuine multisensory experience with visual stimulation as a primary focus in architecture and interior design. It is important to consider different factors that contribute to visual stimulation when designing spaces. Visual stimulation is experienced differently, depending on the observer, and it is important to understand how children perceive stimuli. Therefore, we determined the visual factors captured by children. The embedded design method was used for qualitative mapping of visual factors and 3D animation creation for visualization. Eye-tracking experiments were conducted to examine the factors that captured the attention of children. Children were attracted to moving objects such as videos, followed by images on walls, playgrounds, windows, and furniture. Fostering positive distraction is important in designing spaces for children. Full article

19 pages, 9910 KiB  
Article
Defect Identification for Mild Steel in Arc Welding Using Multi-Sensor and Neighborhood Rough Set Approach
by Xianping Zeng, Zhiqiang Feng, Xiaohong Xiang, Xin Li, Xiaohu Huang, Zufu Pan, Bingqian Li and Quan Li
Appl. Sci. 2024, 14(12), 4978; https://doi.org/10.3390/app14124978 - 7 Jun 2024
Cited by 2 | Viewed by 1349
Abstract
Welding technology plays a vital role in the manufacturing process of ships, automobiles, and aerospace vehicles because it directly impacts their operational safety and reliability. Hence, the development of an accurate system for identifying welding defects in arc welding is crucial to enhancing the quality of welding production. In this study, a defect recognition method combining the Neighborhood Rough Set (NRS) with the Dingo Optimization Algorithm Support Vector Machine (DOA-SVM) in a multisensory framework is proposed. A 195-dimensional decision-making system was constructed to integrate multi-source information from molten pool images, welding current, and vibration signals. To optimize the system, it was further refined to a 12-dimensional decision-making setup through outlier processing and feature selection based on the Neighborhood Rough Set. Subsequently, the DOA-SVM is employed for detecting welding defects. Experimental results demonstrate a 98.98% accuracy rate in identifying welding defects using our model. Importantly, this method outperforms comparative techniques in terms of quickly and accurately identifying five common welding defects, thereby affirming its suitability for arc welding. The proposed method not only achieves high accuracy but also simplifies the model structure, enhances detection efficiency, and streamlines network training. Full article
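The overall pipeline shape (reduce a 195-dimensional multi-sensor feature table to 12 informative features, then classify five defect types with an SVM) can be sketched with scikit-learn. Two substitutions are made deliberately: mutual-information ranking stands in for the paper's Neighborhood Rough Set reduction, and a plain RBF SVM for the dingo-optimized one; all data below are synthetic:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Synthetic stand-in for the 195-dimensional feature table built from
# molten-pool images, welding current, and vibration signals; 5 classes
# stand in for the five common weld defect types.
n, d = 1000, 195
X = rng.normal(size=(n, d))
y = rng.integers(0, 5, n)
X[np.arange(n), y] += 3.0  # make 5 of the 195 columns class-informative

# Stand-in for the NRS step: keep the 12 most informative features,
# then classify with an RBF SVM (hyperparameters left at defaults
# rather than tuned by the Dingo Optimization Algorithm).
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=12),
    StandardScaler(),
    SVC(kernel="rbf"),
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

The point of the reduction step survives the substitution: shrinking 195 features to 12 simplifies the model and speeds up both training and inference, which is what the abstract credits the NRS with.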

51 pages, 60499 KiB  
Article
The Body of Christ and the Embodied Viewer in Rubens’s Rockox Epitaph
by Kendra Grimmett
Arts 2023, 12(6), 251; https://doi.org/10.3390/arts12060251 - 13 Dec 2023
Cited by 1 | Viewed by 4485
Abstract
On behalf of the Catholic Church, the Council of Trent (1545–1563) confirmed the usefulness of religious images and multisensory worship practices for engaging the bodies and the minds of congregants, and for moving pious devotees to empathize with Christ. In the center panel of the Rockox Epitaph (c. 1613–1615), a funerary triptych commissioned by the Antwerp mayor Nicolaas Rockox (1560–1640) and his wife Adriana Perez (1568–1619) to hang over their tomb, Peter Paul Rubens (1577–1640) paints an awe-inspiring, hopeful image of the Risen Lord that alludes to the promise of humankind’s corporeal resurrection at the Last Judgment. In the wings, Rockox and Perez demonstrate affective worship with prayer aids and welcome onlookers to gaze upon Christ’s renewed body. Rubens’s juxtaposition of the eternal, incorruptible body of Jesus alongside five mortal figures—the two patrons and the three apostles, Peter, Paul, and John—prompted living viewers to meditate on their relationship with God, to compare their bodies with those depicted, and to contemplate their own embodiment and mortality. Ultimately, the idealized body of Christ reminds faithful audiences of both the corporeal renewal and the spiritual salvation made possible through Jesus’s death and resurrection. Full article
(This article belongs to the Special Issue Affective Art)

15 pages, 1309 KiB  
Article
An Attention-Based Odometry Framework for Multisensory Unmanned Ground Vehicles (UGVs)
by Zhiyao Xiao and Guobao Zhang
Drones 2023, 7(12), 699; https://doi.org/10.3390/drones7120699 - 9 Dec 2023
Viewed by 2265
Abstract
Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. A convolutional neural network (CNN) and a recurrent neural network (RNN) are employed as encoders for different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a novel attention-based fusion module that remaps feature vectors to adapt to various scenes. Evaluations on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) odometry benchmark demonstrate the effectiveness of our framework. Full article
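The attention-based fusion module is described only at a high level, but gated soft fusion over concatenated modality encodings is one common way to realize "remapping feature vectors to adapt to various scenes": a learned gate per feature channel down-weights unreliable modalities. A toy numpy sketch under that assumption (shapes, weights, and the helper name are illustrative; the paper's actual module may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_attention_fuse(visual, lidar, inertial, W, b):
    """Element-wise soft attention over concatenated modality encodings.

    Each channel of the concatenated feature vector is rescaled by a
    learned gate in (0, 1), so the fusion can suppress channels from a
    degraded sensor (e.g. blurred images) scene by scene.
    """
    z = np.concatenate([visual, lidar, inertial])
    gate = sigmoid(W @ z + b)  # one gate per feature channel
    return gate * z            # the "remapped" fused feature vector

d = 4  # toy per-modality feature size; real encoders output far more
visual, lidar, inertial = (rng.normal(size=d) for _ in range(3))
W = 0.1 * rng.normal(size=(3 * d, 3 * d))  # stand-in for learned weights
b = np.zeros(3 * d)

fused = soft_attention_fuse(visual, lidar, inertial, W, b)
```

In the full framework the fused vector would feed the RNN for pose regression, and W and b would be trained end to end with the encoders.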

18 pages, 795 KiB  
Article
A Pilot Multisensory Approach for Emotional Eating: Pivoting from Virtual Reality to a 2-D Telemedicine Intervention during the COVID-19 Pandemic
by Clelia Malighetti, Ciara Kelly Schnitzer, Sophie Lou YorkWilliams, Luca Bernardelli, Cristin D. Runfola, Giuseppe Riva and Debra L. Safer
J. Clin. Med. 2023, 12(23), 7402; https://doi.org/10.3390/jcm12237402 - 29 Nov 2023
Cited by 3 | Viewed by 2074
Abstract
Background and Objectives: Emotional eating (EE), or eating in response to negative emotions or stress, can be understood as a manifestation of difficulties regulating emotions among individuals with eating disorders. To date, many virtual reality treatments for eating disorders have focused on body image or exposure methods and have not exclusively targeted EE. There has been a call made by experts in the field for a “new generation” of virtual reality interventions, capable of utilizing virtual reality’s potential more fully. We developed a novel emotion regulation (ER) intervention based upon virtual reality to improve EE among adults with an eating disorder diagnosis. The study hypothesized that a novel ER protocol utilizing evidence-based strategies, as well as innovative techniques, would be feasible and acceptable and show preliminary signals of effectiveness for EE. Materials and Methods: Due to COVID-19, the study pivoted from the original completely immersive intervention to a 2-D intervention deliverable over telehealth. Twenty-one patients were recruited from the Adult Eating Disorders Program within Stanford University to receive seven weekly one-hour virtual experiences (VEs) focusing on ER. Participants were not randomized but, as part of a pragmatic study design, chose between the novel VE-Emotion Regulation (VE-ER) intervention or continuing their treatment as usual. Before and after the seven sessions, participants completed an assessment by filling out online questionnaires. Results: Overall, VE-ER treatment was feasible, and the participant and therapist acceptability of VE-ER treatment was fairly high. In terms of preliminary effectiveness, the results showed a significant reduction in the frequencies of disordered eating behaviors in both groups, but a greater improvement in EE in the VE-ER group and a significant reduction in emotion dysregulation after the treatment. 
Conclusions: This novel pilot study makes a valuable contribution to the scant literature by demonstrating the feasibility, acceptability, and preliminary effectiveness of combining somatic, multisensory, and cognitive manipulations delivered via telemedicine to help patients with EE manage their emotions. The findings can serve as the basis for larger, controlled studies evaluating the translation of somatic marker theory from the research literature into real-world U.S. clinic settings. Full article
(This article belongs to the Section Mental Health)
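The pre/post questionnaire design described above lends itself to a simple paired analysis of change scores. A minimal sketch using only the standard library; the scores below are fabricated placeholders for illustration, not the study's actual data:

```python
# Paired pre/post comparison sketch for a pilot design like VE-ER.
# All scores are fabricated placeholders, not study data.
import math
import statistics

pre = [28, 31, 25, 30, 27, 33, 29, 26, 32, 28]   # baseline EE scores
post = [22, 27, 21, 24, 23, 28, 25, 22, 27, 23]  # post-treatment scores

# Within-participant change scores (positive = improvement).
diffs = [a - b for a, b in zip(pre, post)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
n = len(diffs)

# Paired t statistic and Cohen's d_z effect size for the change.
t_stat = mean_diff / (sd_diff / math.sqrt(n))
d_z = mean_diff / sd_diff

print(f"mean reduction: {mean_diff:.2f}, t = {t_stat:.2f}, d_z = {d_z:.2f}")
```

A full analysis of a two-arm pragmatic design would additionally model the group-by-time interaction (e.g. with a mixed-effects model), but the paired change score is the core quantity behind the reported pre/post improvements.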

18 pages, 24549 KiB  
Article
Improved Object-Based Mapping of Aboveground Biomass Using Geographic Stratification with GEDI Data and Multi-Sensor Imagery
by Lin Chen, Chunying Ren, Bai Zhang, Zongming Wang, Weidong Man and Mingyue Liu
Remote Sens. 2023, 15(10), 2625; https://doi.org/10.3390/rs15102625 - 18 May 2023
Cited by 9 | Viewed by 2783
Abstract
Aboveground biomass (AGB) mapping using spaceborne LiDAR data and multi-sensor images is essential for efficient carbon monitoring and climate change mitigation actions in heterogeneous forests. The optimal predictors of remote sensing-based AGB vary greatly with geographic stratification, such as topography and forest type, [...] Read more.
Aboveground biomass (AGB) mapping using spaceborne LiDAR data and multi-sensor images is essential for efficient carbon monitoring and climate change mitigation actions in heterogeneous forests. The optimal predictors of remote sensing-based AGB vary greatly with geographic stratification, such as topography and forest type, yet the way in which geographic stratification influences the contributions of predictor variables in object-based AGB mapping remains insufficiently studied. To improve forest AGB mapping through geographic stratification in heterogeneous forests, multi-sensor satellite data from the Global Ecosystem Dynamics Investigation (GEDI), the Advanced Land Observing Satellite (ALOS) series, and Sentinel were integrated. Multi-sensor predictors for the AGB modeling of different forest types were selected using a correlation analysis of variables calculated from topographically stratified objects. Random forest models were built with GEDI-based AGB and geographically stratified predictors to produce wall-to-wall biomass values. The mapped biomass followed a distribution similar to, and agreed closely with, the sampled forest AGB. An accuracy comparison using independent validation samples showed that the geographic stratification approach improved accuracy by 34.79% compared to the unstratified process. Stratification by forest type increased mapped AGB accuracy further than stratification by topography. Topographical stratification strongly influenced the predictors’ contributions to AGB mapping in mixed broadleaf–conifer and broad-leaved forests, but only slightly affected coniferous forests. Optical variables were predominant for deciduous forests, while for evergreen forests, SAR indices outweighed the other predictors.
As a pioneering estimation of forest AGB with geographic stratification using multi-sensor satellite data, this study offers optimal predictors and an advanced method for obtaining carbon maps in heterogeneous regional landscapes. Full article
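The stratified workflow described above — selecting predictors per stratum and fitting a separate random forest model for each topographic or forest-type class before predicting wall-to-wall AGB — can be sketched roughly as follows. The feature set, strata, and synthetic data are illustrative assumptions, not the authors' actual variables or models.

```python
# Sketch of geographically stratified random-forest AGB modeling.
# Features, strata, and data are assumed for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic object-level samples: multi-sensor predictors plus a
# stratum label (e.g. forest type or topographic class).
n = 300
X = rng.random((n, 4))           # e.g. optical indices, SAR backscatter
strata = rng.integers(0, 2, n)   # 0 = coniferous, 1 = broad-leaved
y = 50 + 100 * X[:, 0] + 20 * strata + rng.normal(0, 5, n)  # GEDI-based AGB

# Fit one random forest per stratum, mirroring geographic stratification.
models = {}
for s in np.unique(strata):
    mask = strata == s
    models[s] = RandomForestRegressor(n_estimators=200, random_state=0)
    models[s].fit(X[mask], y[mask])

# Wall-to-wall prediction: route each object to its stratum's model.
pred = np.empty(n)
for s, model in models.items():
    mask = strata == s
    pred[mask] = model.predict(X[mask])

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"in-sample RMSE: {rmse:.2f}")
```

An unstratified baseline would instead fit a single model to all objects; comparing the two on independent validation samples is the kind of accuracy comparison the abstract reports.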
