Search Results (927)

Search Parameters:
Keywords = visual cue

18 pages, 37858 KB  
Article
Seeing Through Sparse Foliage: Quality–Occlusion-Guided RGB–Thermal Fusion for Drone-Based Person Detection
by Ziming Gui, Shaobo Liu, Dong Yang, Tongyuan Zou, Haoran Zhu and Wen Yang
Remote Sens. 2026, 18(5), 774; https://doi.org/10.3390/rs18050774 - 4 Mar 2026
Abstract
Drone-based RGBT person detection facilitates critical applications such as search and rescue, owing to its high maneuverability and inherent capability to mitigate visual occlusion. However, despite the complementary nature of RGBT systems, existing detectors often overlook the specific impact of occlusion during the fusion process, leading to feature contamination and subsequent detection failures. In this work, we address this limitation by formally defining two categories of occlusion: “soft occlusion,” where targets remain partially visible in at least one modality, and “hard occlusion,” which involves complete obstruction. To tackle these challenges, we propose Unveiling Occluded Targets (UOT), a novel multi-modal fusion framework that implements a Quality–Occlusion Arbitration (QOA) mechanism. By leveraging both quality-related and occlusion-related cues, UOT dynamically arbitrates the fusion process to maximize information recovery from the clearer modality. Extensive experiments on the RGBTDronePerson and VTUAV-det datasets demonstrate significant improvements, achieving an mAP50 of 53.42% on all targets and 54.70% on tiny targets in densely occluded scenes. Qualitative analysis further confirms UOT’s robustness in reliably identifying targets obstructed by sparse foliage.
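
To make the arbitration idea concrete, here is a minimal sketch of quality-gated two-stream fusion, assuming per-pixel reliability scores for each modality; the module and its names are illustrative, not the authors' UOT/QOA implementation:

```python
import torch
import torch.nn as nn

class QualityGatedFusion(nn.Module):
    """Fuse RGB and thermal feature maps with per-pixel quality gates."""
    def __init__(self, channels: int):
        super().__init__()
        # One 1x1 conv per modality scores local feature reliability.
        self.score_rgb = nn.Conv2d(channels, 1, kernel_size=1)
        self.score_thr = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f_rgb, f_thr):
        # Normalize the two quality maps across modalities so occluded or
        # degraded regions receive lower fusion weight at each pixel.
        scores = torch.cat([self.score_rgb(f_rgb), self.score_thr(f_thr)], dim=1)
        w = torch.softmax(scores, dim=1)          # (B, 2, H, W)
        return w[:, :1] * f_rgb + w[:, 1:] * f_thr

fused = QualityGatedFusion(64)(torch.randn(1, 64, 32, 32),
                               torch.randn(1, 64, 32, 32))
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```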

14 pages, 1615 KB  
Article
Species-Specific Color Preferences During Foraging in Aedes aegypti, Aedes albopictus, and Culex quinquefasciatus Across Varying Light Conditions
by Fanny Hellhammer, Hella Heidtmann, Fritjof Freise and Stefanie C. Becker
Insects 2026, 17(3), 276; https://doi.org/10.3390/insects17030276 - 3 Mar 2026
Abstract
Mosquitoes are key vectors of numerous infectious diseases, making the study of their behavior essential for effective control strategies. This study investigates the color preferences of Aedes aegypti, Aedes albopictus, and Culex quinquefasciatus during foraging, using an ink-based staining method to assess feeding behavior under varying light intensities (0, 130, and 1600 lx). At 0 lx, no consistent visual preferences emerged, confirming reliance on olfactory cues alone. Under dusk-like illumination (130 lx), diurnal Aedes exhibited a tendency to approach red stimuli (probably perceived as grey) over darker targets, with Ae. albopictus females and males showing a significant preference for red over green, indicating early salience of red contrasts. At high illumination (1600 lx), Aedes shifted preference toward black, especially in males, reflecting the dominance of achromatic contrast and camouflage considerations. In contrast, crepuscular Cx. quinquefasciatus showed strong attraction to black at dusk-like light in both sexes; at high illumination, females’ preferences shifted from black to red, whereas males maintained or reverted to a black preference across assays. These divergent patterns align with differences in photoreceptor sensitivity, contrast processing, and the ecological niches governing host- and swarm-seeking. Identifying how dusk-like versus bright light modulates color-driven behavior provides insights for designing trap colors and illumination regimes optimized for specific mosquito species and sexes, thereby enhancing targeted vector-control strategies.
(This article belongs to the Section Insect Behavior and Pathology)

17 pages, 1932 KB  
Article
Enhancing Immersion in Virtual Reality Martial Arts Training: Toward Realistic and Practical Applications
by Leonie Laskowitz, Karsten Huffstadt and Nicholas Müller
Virtual Worlds 2026, 5(1), 11; https://doi.org/10.3390/virtualworlds5010011 - 2 Mar 2026
Abstract
Immersive virtual reality (VR) offers promising opportunities for skill acquisition in complex motor domains, yet its specific potential for martial arts training remains underexplored. This pilot study examined how visual and auditory feedback are associated with subjective immersion and motor performance during the execution of a standardized martial arts sidekick in VR. Ten technically experienced participants completed four training conditions, while full-body kinematics were captured using a synchronized VR-MoCap setup. Subjective ratings of immersion and presence were collected after each condition, and three expert interviews provided complementary qualitative perspectives. Exploratory analyses indicated that high-fidelity visual feedback elicited higher immersion and more stable chamber-phase posture, while voice feedback was associated with smoother timing and improved kick alignment. Experts highlighted multisensory coherence as a key design principle and pointed to concrete opportunities for VR-supported technique refinement. These convergent findings suggest that immersive VR can support technically relevant performance cues in martial arts training while also highlighting design considerations for future high-precision VR coaching systems. As a pilot study, the results provide methodological groundwork and signal directions for larger, confirmatory investigations.

17 pages, 1184 KB  
Article
Showing Behaviour in One Hundred and One Dogs: Gazing, Breed and Cephalic Index
by Samuele Commauda, Veronica Maglieri, Emanuela Prato-Previde and Elisabetta Palagi
Animals 2026, 16(5), 760; https://doi.org/10.3390/ani16050760 - 1 Mar 2026
Abstract
Dogs exhibit sophisticated interspecific communication skills, including the use of visual signals to indicate the location of inaccessible resources, known as showing behaviour. Previous studies have investigated factors such as age and training, but the effects of breed and cranial morphology remain unclear. Here, we tested a uniquely large sample of 101 pet dogs from 43 different breeds, using a standardized out-of-reach/hidden-object task to assess three key visual behaviours: gaze at the owner, gaze at the reward, and gaze alternation between owner and reward. Dogs were tested in familiar environments without pre-training, and owners were instructed to remain passive to avoid unintentional cues. Our results confirm the importance of gaze alternation and gazing at the reward as central components of showing behaviour, particularly when both owner and reward were present. Contrary to expectations, we found no effect of breed or cephalic index on these behavioural patterns, suggesting that life experience, rather than artificial selection, may shape visual communicative strategies in this specific context. The exceptionally large and diversified sample of this study provides unprecedented insight into the consistency of visual signalling across dog breeds.

20 pages, 1419 KB  
Article
Building Prototype Evolution Pathway for Emotion Recognition in User-Generated Videos
by Yujie Liu, Zhenyang Dong, Yante Li and Guoying Zhao
Big Data Cogn. Comput. 2026, 10(3), 73; https://doi.org/10.3390/bdcc10030073 - 28 Feb 2026
Abstract
Large-scale pretrained foundation models are increasingly essential for affective analysis in user-generated videos. However, current approaches typically reuse generic multi-modal representations directly with task-specific adapters learned from scratch, and their performance is limited by the large affective domain gap and scarce emotion annotations. To address these issues, we introduce a novel paradigm that leverages auxiliary cross-modal priors to enhance unimodal emotion modeling, effectively exploiting modality-shared semantics and modality-specific inductive biases. Specifically, we propose a progressive prototype evolution framework that gradually transforms a neutral prototype into discriminative emotional representations through fine-grained cross-modal interactions with visual cues. The auxiliary prior serves as a structural constraint, reframing the adaptation challenge from a difficult domain shift problem into a more tractable prototype shift within the affective space. To ensure robust prototype construction and guided evolution, we further design category-aggregated prompting and bidirectional supervision mechanisms. Extensive experiments on VideoEmotion-8, Ekman-6, and MusicVideo-6 validate the superiority of our approach, achieving state-of-the-art results and demonstrating the effectiveness of leveraging auxiliary modality priors for foundation-model-based emotion recognition.
(This article belongs to the Special Issue Sentiment Analysis in the Context of Big Data)
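
As a rough illustration of the prototype-evolution idea, here is a minimal sketch in which a neutral prototype is refined step by step by cross-attending to visual tokens; the names and the residual-update scheme are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PrototypeEvolver(nn.Module):
    """Iteratively refines a neutral prototype using visual cues."""
    def __init__(self, dim: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, prototype, visual_tokens):
        # prototype: (B, 1, D) neutral seed; visual_tokens: (B, N, D).
        p = prototype
        for _ in range(self.steps):
            # Each step shifts the prototype toward emotion-discriminative
            # regions of the visual feature space (residual update).
            delta, _ = self.attn(p, visual_tokens, visual_tokens)
            p = p + delta
        return p

p = PrototypeEvolver(256)(torch.zeros(2, 1, 256), torch.randn(2, 16, 256))
print(p.shape)  # torch.Size([2, 1, 256])
```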

13 pages, 230 KB  
Article
Non-Verbal Communication in Nursing Home Settings
by Zunera Khan, Miguel Vasconcelos Da Silva, Daniel Kramarczyk, Lise Birgitte Holteng Austbø, Martha Therese Gjestsen, Ingelin Testad and Clive Ballard
Healthcare 2026, 14(5), 614; https://doi.org/10.3390/healthcare14050614 - 28 Feb 2026
Abstract
Background: People living with dementia in nursing homes commonly experience progressive impairments in cognition, communication, and functional ability, contributing to neuropsychiatric symptoms and reduced quality of life. As verbal communication declines, non-verbal communication (NVC), including facial expressions, gestures, eye contact, posture, and touch, becomes increasingly important for maintaining meaningful interactions. Objectives: This study aims to explore current NVC practices between nursing home (NH) staff and residents living with dementia. Methods: A mixed-methods, cross-sectional design was employed. NH staff completed an anonymous online questionnaire consisting of 13 items assessing NVC use and demographic characteristics. Quantitative items were rated using Likert scales, and qualitative responses were analysed using Giorgi’s phenomenological approach. Results: Quantitative findings showed that residents most frequently relied on facial expressions, reported as used very often in 24 of 33 NHs, followed by eye contact in 17 NHs and touch in 16 NHs. NH staff also reported extensive use of NVC during care interactions, particularly facial expressions (very often in 79% of NHs), eye contact (82%), and hand gestures (76%). Qualitative findings underscored the central role of NVC in interpreting residents’ needs, fostering emotional connection, and managing behavioural and psychological symptoms of dementia through subtle cues, visual prompts, and individualised strategies. Conclusions: Overall, the findings demonstrate that NVC is a fundamental component of communication and care delivery in dementia settings and highlight the need for structured training interventions to support staff in recognising and responding effectively to non-verbal signals.

30 pages, 4182 KB  
Review
Digital Storytelling for Primary Heritage Learning: Early Sustainability Relevant Meaning-Making in an Industrial Heritage Case
by Xin Bian, André Brown and Bruno Marques
Sustainability 2026, 18(5), 2319; https://doi.org/10.3390/su18052319 - 27 Feb 2026
Abstract
Heritage education is increasingly expected to connect past evidence with questions of responsibility, environmental change, and sustainable futures, yet primary learners often encounter heritage through fragmented, visually driven exposure with limited support for interpretation beyond factual recognition. This mixed-methods study applies a Supply–Response–Transformation (SRT) framework to examine early, sustainability-relevant meaning-making in primary heritage learning supported by a short animation-based digital story, with an industrial heritage site serving as the case context. Evidence includes stakeholder interviews (n = 39), a student pre-test (n = 399), a post-viewing survey (n = 452), student drawings (n = 12), and classroom observations. Findings indicate that narrative-visual mediation aligns with students’ reported curiosity and comprehension-related cues under classroom conditions, and that post-viewing responses cluster around four classroom-observable outcome signals: valued historical understanding, responsibility and care, change–consequence–restoration reasoning, and personal and cultural positioning. The study interprets digital storytelling as a classroom-feasible mediation format through which early meaning-making signals, beyond factual recall, become observable, and it provides an interpretable chain for judging the visibility and elaboration of those signals under real classroom constraints.
(This article belongs to the Collection Sustainable Citizenship and Education)

21 pages, 20486 KB  
Article
Semantic–Physical Sensor Fusion for Safe Physical Human–Robot Interaction in Dual-Arm Rehabilitation
by Disha Zhu, Xuefeng Wang and Shaomei Shang
Sensors 2026, 26(5), 1510; https://doi.org/10.3390/s26051510 - 27 Feb 2026
Abstract
Safe physical human–robot interaction (pHRI) in rehabilitation requires reliable perception and low-latency decision making under heterogeneous and unreliable sensor inputs. This paper presents a multimodal sensor-fusion-based safety framework that integrates physical state estimation, semantic information fusion, and an edge-deployed large language model (LLM) for real-time pHRI safety control. A dynamics-based virtual sensing method is introduced to estimate internal joint torques from external force–torque measurements, achieving a normalized mean absolute error of 18.5% in real-world experiments. An asynchronous semantic state pool with a time-to-live mechanism is designed to fuse visual, force, posture, and human semantic cues while maintaining robustness to sensor delays and dropouts. Based on structured multimodal tokens, an instruction-tuned edge LLM outputs discrete safety decisions that are further mapped to continuous compliant control parameters. The framework is trained using a hybrid dataset consisting of limited real-world samples and LLM-augmented synthetic data, and evaluated on unseen real and mixed-condition scenarios. Experimental results show reliable detection of safety-critical events with a low emergency misdetection rate, while maintaining an end-to-end decision latency of approximately 223 ms on edge hardware. Real-world experiments on a rehabilitation robot demonstrate effective responses to impacts, user instability, and visual occlusions, indicating the practical applicability of the proposed approach for real-time pHRI safety monitoring.
(This article belongs to the Section Biomedical Sensors)
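
The time-to-live idea is simple to sketch: each sensor channel's latest semantic value carries a timestamp, and fusion reads only entries younger than their TTL, so a dropped-out sensor fades from the decision input rather than feeding stale data. A minimal illustration, with channel names and TTL values as assumptions rather than the paper's code:

```python
import time
from dataclasses import dataclass, field

@dataclass
class SemanticStatePool:
    ttl: dict                      # per-channel time-to-live, in seconds
    pool: dict = field(default_factory=dict)

    def update(self, channel: str, value) -> None:
        # Asynchronous writers stamp each value on arrival.
        self.pool[channel] = (value, time.monotonic())

    def snapshot(self) -> dict:
        # Fusion sees only entries still within their TTL.
        now = time.monotonic()
        return {ch: v for ch, (v, t) in self.pool.items()
                if now - t <= self.ttl.get(ch, 0.5)}

pool = SemanticStatePool(ttl={"vision": 0.2, "force": 0.05, "posture": 1.0})
pool.update("force", {"external_torque": 1.8})
print(pool.snapshot())  # the 'force' entry expires 50 ms after the update
```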

29 pages, 14318 KB  
Article
A High-Resolution Remote Sensing Building Extraction Network Integrating Multi-Scale Sequence Modeling and Spatial Adaptive Enhancement
by Chang Zuo and Xiaoji Lan
ISPRS Int. J. Geo-Inf. 2026, 15(3), 96; https://doi.org/10.3390/ijgi15030096 - 26 Feb 2026
Abstract
Building extraction from high-resolution remote sensing imagery holds significant value for urban planning, disaster assessment, and geospatial analysis. However, current semantic segmentation models still face limitations when handling complex scenarios characterized by diverse building morphologies, significant scale variations, and blurred boundaries. To address the challenges of insufficient long-range dependency modeling, suboptimal multi-scale feature representation, and weak spatial adaptability, this paper proposes a building extraction network that integrates multi-scale sequence modeling with spatial adaptive enhancement. Adopting UPerNet (equipped with ConvNeXt-Tiny) as the baseline framework, the proposed method introduces a dedicated PyramidSSM-based neck (PyramidSSMNeck) as the primary design for multi-scale feature alignment and fusion, and further integrates three enhancement components (S6 (SSM-based), LSKNet, and SAFM) whose gains appear mainly in boundary delineation. Specifically, PyramidSSMNeck performs structured cross-scale feature projection, alignment, and aggregation to strengthen multi-scale representation; S6 enhances long-range contextual modeling, LSKNet adaptively adjusts spatial receptive fields to accommodate scale variations, and SAFM modulates feature responses with spatial cues to refine boundaries and fine details. Experiments were conducted on the WHU Building, INRIA, and a self-constructed Ganzhou urban dataset; the proposed method achieved IoU scores of 91.29%, 81.96%, and 88.18% on the three datasets, outperforming the baseline UPerNet (ConvNeXt-Tiny) by 2.37, 0.88, and 3.68 percentage points, respectively, with F1-scores consistently exceeding 90%. Importantly, ablation results indicate that the majority of region-level gains (IoU/F1) come from PyramidSSMNeck, whereas the additional modules contribute more prominently to boundary quality, yielding a Boundary IoU increase from 63.29% to 65.63% (+2.34 points) from the neck-only setting to the full model. Visualization results further support the method’s advantages in boundary preservation and detail integrity, and additional cross-domain transfer experiments (zero-shot and few-shot from WHU to Ganzhou) suggest improved robustness under domain shift.
(This article belongs to the Topic Geospatial AI: Systems, Model, Methods, and Applications)
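
Since the ablation leans on Boundary IoU, a short sketch of a thin-band variant of that metric may help: restrict the IoU computation to a band of pixels along each mask's contour. The band width here is an arbitrary choice, and the paper's exact metric settings are not given in the abstract:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_iou(gt: np.ndarray, pred: np.ndarray, width: int = 2) -> float:
    """IoU restricted to a `width`-pixel band along each mask's contour."""
    def band(mask: np.ndarray) -> np.ndarray:
        mask = mask.astype(bool)
        # Boundary band = mask minus its erosion by `width` pixels.
        return mask & ~binary_erosion(mask, iterations=width)
    bg, bp = band(gt), band(pred)
    union = (bg | bp).sum()
    return (bg & bp).sum() / union if union else 1.0
```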

19 pages, 3734 KB  
Protocol
Beyond the Image Frame: An Art-Based Pedagogical Framework for Teaching Diagnostic Reasoning in Breast Ultrasound to Medical Students
by Marcin Śniadecki, Maria Morawska, Patrycja Kijańska, Olga Kondratowicz, Julia Nowakowska, Oliwia Musielak, Abhishek Singla, Ritu Amit Chhabria, Hanaf Alvi, Amelia Banaszak, Lena Grono, Diana Akhmed, Klaudia Kokot, Maksymilian Grzelak, Konrad Duszyński, Katsiaryna Marozik, Patrycja Jaworska, Jakub Majchrzak, Natallia Krupovich, Zuzanna Boyke, Julia Respondek, Weronika Ciećko, Ewa Bandurska, Jakub Szałek, Agata Rutkowska, Martyna Danielkiewicz, Patryk Poniewierza, Ewelina Klimik, Jarosław Meyer-Szary, Cynthia Aristei, Anna Malitowska and on behalf of the Senological Gynecology Working Group
Diagnostics 2026, 16(4), 642; https://doi.org/10.3390/diagnostics16040642 - 23 Feb 2026
Abstract
Breast ultrasound is a key diagnostic method for breast cancer and relies heavily on the interpretation of visual cues. At the same time, medical education is increasingly driven by time constraints, which favors rapid pattern recognition and limits the scope for reflective image analysis in the diagnostic process. The aim of this study was therefore to propose and evaluate an artistic–pedagogical teaching model, inspired by the interpretive practices of Italian High Renaissance painting, as a tool to support the development of diagnostic reasoning in breast ultrasound. The model focuses on careful observation, analysis of the relationship between detail and the overall image, and the conscious transformation of visual cues into clinical meaning. This study was conducted during the four-day ARSA Think Tank Meeting (ARSATTM). Medical students worked in four groups; two groups received methodological training based on visual cue analysis, and two did not. All groups performed identical tasks involving the interpretation of breast ultrasound images and ultrasound examinations of real patients. The results indicate that an artistic–pedagogical teaching model promoting more coherent and reflective diagnostic reasoning in breast ultrasound is feasible, and integrating this approach may be a valuable addition to medical students’ ultrasound education given the realities of limited clinical time.
(This article belongs to the Special Issue Frontline of Breast Imaging)

15 pages, 2211 KB  
Article
Comparison of Toe Clearance Characteristics Between Simulated Obstacle Crossing Using Visual Height Cues and Actual Obstacle Crossing
by Mao Kasai, Yumi Machida, Miku Washizu, Kenichi Sugawara and Tomotaka Suzuki
Brain Sci. 2026, 16(2), 248; https://doi.org/10.3390/brainsci16020248 - 23 Feb 2026
Abstract
Background/Objectives: Tripping is a major cause of falls and necessitates accessible training. This study aimed to fundamentally evaluate the biomechanical fidelity of a simplified simulated obstacle-crossing paradigm using visual height cues. Methods: Two experiments that included healthy young adults evaluated toe clearance (TC) responsiveness during simulated crossing to four visual cue heights (Experiment 1: n = 16) and compared it with actual crossing (4–16% leg length) to assess biomechanical fidelity (Experiment 2: n = 18). Linear mixed models were used to analyze the effects of obstacle height, task condition, and walking course on vertical TC metrics, including minimum and maximum clearance and quartile coefficient of variation (QCV) for both the lead and trail limbs. Results: In Experiment 1, TC parameters scaled systematically with cue height (p < 0.001), confirming that visual cues elicited adaptive gait adjustments. In Experiment 2, although the maximum TC scaled similarly across conditions, the minimum TC was systematically reduced in the simulated condition compared to actual obstacle crossing (p < 0.001). Furthermore, the simulated condition exhibited increased QCV (p < 0.001), particularly for the trail limb at the highest obstacle height. Conclusions: Motor intention and execution precision were dissociated in the simulated obstacle crossing. Without physical risk, the central nervous system appeared to prioritize effort economy over the precise fine-tuning of safety margins. These results suggest that task repetition in risk-free simulations alone may be insufficient for acquiring safe obstacle-crossing strategies and highlight the importance of task-relevant feedback for ensuring biomechanical fidelity in fall-prevention research.
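
For readers unfamiliar with the analysis, a linear mixed model of this kind can be fit in a few lines with statsmodels; the file, column names, and model terms below are illustrative, not the authors' exact specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per crossing trial (hypothetical file and columns).
df = pd.read_csv("toe_clearance.csv")

# Fixed effects: obstacle height, condition (simulated vs. actual), and
# their interaction; a random intercept per participant handles the
# repeated measures on the same subjects.
model = smf.mixedlm("min_tc ~ height * condition",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```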

23 pages, 1013 KB  
Article
Occlusion-Robust Swarm Motion via Pheromone-Modulated Orientation Change
by Liwei Xuan, Mingyong Liu, Guoyuan He and Zhiqiang Yan
J. Mar. Sci. Eng. 2026, 14(4), 399; https://doi.org/10.3390/jmse14040399 - 22 Feb 2026
Abstract
Effective collective motion hinges on the seamless transfer of local information, yet vision-based mechanisms, while potent for generating rapid consensus, are inherently fragile: visual links can be severed instantly by occlusions, leading to a phenomenon characterized as “sensory amnesia.” To address this vulnerability, Pheromone-Modulated Body Orientation Change (PM-BOC) is introduced as a dual-channel framework that fuses transient visual cues with a persistent environmental memory. Rather than treating these inputs in isolation, motion salience is quantified via body orientation change (BOC) and mapped onto a decaying virtual pheromone field, dynamically modulating interaction weights by coupling instantaneous visual projections with local pheromone concentrations. This strategy constructs a temporal buffer that bridges the informational voids left by blind spots. Validation, spanning systematic physics simulations to high-fidelity simulations with a swarm of 50 UUVs, reveals that PM-BOC sustains superior cohesion in obstacle-laden environments where baseline visual models falter. Notably, this coupling suppresses high-frequency sensory noise while inducing resilient, scale-free velocity correlations that scale linearly with system size. By reconciling the trade-off between the immediacy of visual responsiveness and the robustness of environmental memory, this study offers a scalable paradigm for engineering resilient swarm systems capable of navigating the uncertainties of perception-limited environments.
(This article belongs to the Section Ocean Engineering)
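
The dual-channel coupling can be sketched with a simple decaying scalar field: agents deposit motion salience locally, the field decays each step (the temporal buffer), and interaction weights blend the instantaneous visual term with the local pheromone concentration. Grid size, decay rate, and the coupling rule below are assumptions for illustration, not the paper's model:

```python
import numpy as np

class PheromoneField:
    def __init__(self, shape=(100, 100), decay=0.95):
        self.grid = np.zeros(shape)
        self.decay = decay                # exponential decay per time step

    def deposit(self, cell, salience):
        # Agents write motion salience (e.g., body-orientation change) locally.
        self.grid[cell] += salience

    def step(self):
        self.grid *= self.decay           # older cues fade but persist briefly

    def weight(self, cell, visual_weight):
        # Couple the instantaneous visual projection with local memory, so a
        # briefly occluded neighbor still influences the interaction weight.
        return visual_weight + self.grid[cell]

field = PheromoneField()
field.deposit((50, 50), 2.0)
field.step()
print(field.weight((50, 50), visual_weight=0.0))  # memory alone keeps it > 0
```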

35 pages, 776 KB  
Review
Agronomic Applications of Light: Spectral Strategies for Crop Growth, Defense, and Postharvest Quality
by Issoufou Maino, Laure Sandoval, Vincent Gloaguen and Céline Faugeron Girard
AgriEngineering 2026, 8(2), 74; https://doi.org/10.3390/agriengineering8020074 - 22 Feb 2026
Abstract
In the past two decades, important progress has allowed a better understanding of how light signals are perceived by plants, not only as a source of energy for photosynthesis but also as environmental cues that modulate growth, development, and stress responses. These advances open up promising prospects for light-based treatments in agriculture. This review synthesizes recent scientific findings on the application of specific wavelengths (from ultraviolet to infrared) to improve crop yield, quality, and resilience. The analysis focuses on controlled environment agriculture, where most experimental data have been generated and where the integration of lighting strategies is technically more feasible compared to open-field settings. Preharvest, we explore how spectral quality, intensity, and duration can be used to modulate plant growth, photosynthesis, defense pathways, and the accumulation of nutritional compounds. Postharvest, the focus shifts to how light can help maintain visual and nutritional quality, regulate ripening, limit pathogen development, and extend shelf-life. The review emphasizes plant photoreceptors and signal transduction pathways, as well as technical parameters such as spectrum selection, application timing, and lighting configuration. A selection of recent patents illustrates how fundamental research is being translated into deployable, energy-efficient lighting technologies for sustainable crop management.

15 pages, 2261 KB  
Article
Comparative Analysis of Eye Traits and Visual Resolution Among Three Hatchery-Bred Giant Clams (Tridacna crocea, T. squamosa, T. maxima)
by Wanjie Liu, Jun Li, Zhen Zhao, Jinkuan Wei, Jingyue Huang, Qisheng Zheng, Yanping Qin, Haitao Ma, Ziniu Yu, Ying Pan and Yuehuan Zhang
Biology 2026, 15(4), 363; https://doi.org/10.3390/biology15040363 - 21 Feb 2026
Abstract
Bivalves possess a diverse array of photoreceptive organs that are significant for their evolutionary success and systematic classification. Giant clams are the largest bivalve mollusks, with mantle tissue permanently extended in nature to maintain symbiosis with zooxanthellae and perceive environmental cues. Eyes serve as critical sensory organs for these organisms, yet the structural and functional characteristics of tridacnine eyes remain inadequately understood. This study systematically investigated the ocular traits and visual resolution of three ecologically distinct giant clam species (Tridacna crocea, T. squamosa, T. maxima) using morphometric analysis, hematoxylin-eosin (HE) staining, transmission electron microscopy (TEM), and grating stimulation assays. Significant interspecific differences were observed in eye count, diameter, and pupil-to-eye ratio (PER): T. maxima exhibited the highest mean eye count (221 ± 8), T. squamosa the largest mean eye diameter (0.490 ± 0.082 mm), and T. crocea the highest mean PER (0.363 ± 0.041). Eyes were numerically symmetric on the left and right mantles but positionally asymmetric, showing random distribution patterns along the mantle margin without fixed corresponding locations across species. All three species possessed typical pinhole eyes lacking lenses and retinas, primarily composed of filler cells, receptor cells, and sparse neurons, with symbiotic zooxanthellae distributed in the surrounding mantle tissue. Grating stimulation assays revealed resolvable stripe periods of 5.82–11.64° (T. crocea), 8.62–13.16° (T. squamosa), and 10.15–12.26° (T. maxima), confirming T. crocea as the species with the highest visual resolution. These ocular variations are inferred to reflect adaptive evolution driven by ecological niches and habitat-specific factors (water depth or light intensity), while the simplified pinhole morphology is consistent with their sedentary lifestyle and metabolic dependence on symbiotic zooxanthellae. These ocular variations provide potential morphological markers for the systematic classification of Tridacninae and offer valuable insights for researchers studying the evolutionary plasticity of bivalve visual systems.
(This article belongs to the Section Behavioural Biology)

27 pages, 608 KB  
Article
AI-Augmented Authenticity: Multimodal Artificial Intelligence and Trust Formation in Cultural Consumer Evaluation
by Martina Arsić, Ivana Brdar and Aleksandra Vujko
World 2026, 7(2), 30; https://doi.org/10.3390/world7020030 - 20 Feb 2026
Abstract
This study examines how artificial intelligence (AI) contributes to contemporary processes of authenticity evaluation by functioning as a multimodal diagnostic cue in consumer decision-making. Drawing on survey data collected from 468 visitors at Terra Madre Salone del Gusto in Turin, Italy, the study tests a structural model comprising five latent constructs: Authenticity Trust, Perceived AI Usefulness and Diagnosticity, Multimodal Value, User Engagement, and Behavioural Intentions. The findings indicate that heritage-based and institutional authenticity cues remain foundational in consumers’ evaluations but are increasingly complemented by interaction with AI-supported information perceived as credible and diagnostically informative. Multimodal inputs—particularly the integration of textual, visual, and auditory narratives—are positively associated with perceived multimodal value and user engagement within AI-supported evaluation. Experiential enjoyment during interaction with the AI system is positively associated with behavioural intentions to adopt AI-supported evaluation tools, and behavioural intentions encompass both adoption readiness and a stated willingness to pay a premium for products perceived as authentic. Although the use of a convenience sample limits generalisability, the results highlight the broader potential of multimodal AI systems to enhance perceived diagnostic clarity and evaluative confidence in complex cultural and consumer environments. Conceptually, the study advances the notion of augmented authenticity, defined as a hybrid evaluative process in which tradition-based trust mechanisms are interpreted in relation to perceived AI diagnosticity and multimodal coherence. By situating AI within culturally embedded processes of meaning-making rather than purely instrumental evaluation, the findings contribute to interdisciplinary debates on technology-supported trust processes, consumer judgement, and the societal implications of AI-supported decision-making.
(This article belongs to the Special Issue AI-Powered Horizons: Shaping Our Future World)
