Search Results (1,365)

Search Parameters:
Keywords = light perception

19 pages, 1408 KiB  
Article
Self-Supervised Learning of End-to-End 3D LiDAR Odometry for Urban Scene Modeling
by Shuting Chen, Zhiyong Wang, Chengxi Hong, Yanwen Sun, Hong Jia and Weiquan Liu
Remote Sens. 2025, 17(15), 2661; https://doi.org/10.3390/rs17152661 (registering DOI) - 1 Aug 2025
Abstract
Accurate and robust spatial perception is fundamental for dynamic 3D city modeling and urban environmental sensing. High-resolution remote sensing data, particularly LiDAR point clouds, are pivotal for these tasks due to their lighting invariance and precise geometric information. However, processing and aligning sequential LiDAR point clouds in complex urban environments presents significant challenges: traditional point-based or feature-matching methods are often sensitive to urban dynamics (e.g., moving vehicles and pedestrians) and struggle to establish reliable correspondences. While deep learning offers solutions, current approaches for point cloud alignment exhibit key limitations: self-supervised losses often neglect inherent alignment uncertainties, and supervised methods require costly pixel-level correspondence annotations. To address these challenges, we propose UnMinkLO-Net, an end-to-end self-supervised LiDAR odometry framework. Our method is as follows: (1) we efficiently encode 3D point cloud structures using voxel-based sparse convolution, and (2) we model inherent alignment uncertainty via covariance matrices, enabling a novel self-supervised loss based on uncertainty modeling. Extensive evaluations on the KITTI urban dataset demonstrate UnMinkLO-Net’s effectiveness in achieving highly accurate point cloud registration. Our self-supervised approach, eliminating the need for manual annotations, provides a powerful foundation for processing and analyzing LiDAR data within multi-sensor urban sensing frameworks. Full article
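The uncertainty-modeling idea in this abstract can be illustrated with a toy covariance-weighted alignment loss: each residual is penalized by the inverse of its predicted covariance, plus a log-determinant term so a network cannot trivially inflate uncertainty to shrink the loss. This is a minimal sketch under assumed shapes (per-point 3D residuals, predicted 3x3 covariances), not the authors' implementation:

```python
import numpy as np

def uncertainty_weighted_loss(residuals, covariances):
    """Sketch of a self-supervised alignment loss: each point-pair residual
    r_i is weighted by the inverse of its predicted covariance Sigma_i, and
    log|Sigma_i| penalizes over-inflated uncertainty estimates."""
    loss = 0.0
    for r, S in zip(residuals, covariances):
        loss += r @ np.linalg.inv(S) @ r + np.log(np.linalg.det(S))
    return loss / len(residuals)

# With identity covariances the loss reduces to the mean squared residual.
r = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
S = [np.eye(3), np.eye(3)]
print(uncertainty_weighted_loss(r, S))  # (1 + 4) / 2 = 2.5
```

In a real pipeline the covariances would be predicted per point by the network; here they are fixed inputs purely for illustration.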

19 pages, 2733 KiB  
Article
Quantifying Threespine Stickleback Gasterosteus aculeatus L. (Perciformes: Gasterosteidae) Coloration for Population Analysis: Method Development and Validation
by Ekaterina V. Nadtochii, Anna S. Genelt-Yanovskaya, Evgeny A. Genelt-Yanovskiy, Mikhail V. Ivanov and Dmitry L. Lajus
Hydrobiology 2025, 4(3), 20; https://doi.org/10.3390/hydrobiology4030020 - 31 Jul 2025
Viewed by 34
Abstract
Fish coloration plays an important role in reproduction and camouflage, yet capturing color variation under field conditions remains challenging. We present a standardized, semi-automated protocol for measuring body coloration in the popular model fish threespine stickleback (Gasterosteus aculeatus). Individuals are photographed in a controlled light box within minutes of capture, and color is sampled from eight anatomically defined standard sites in human-perception-based CIELAB space. Analyses combine univariate color metrics, multivariate statistics, and the ΔE* perceptual difference index to detect subtle shifts in hue and brightness. Validation on pre-spawning fish shows the method reliably distinguishes males and females well before full breeding colors develop. Although it currently omits ultraviolet signals and fine-scale patterning, the approach scales efficiently to large sample sizes and varying lighting conditions, making it well suited for population-level surveys of camouflage dynamics, sexual dimorphism, and environmental influences on coloration in sticklebacks. Full article
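The ΔE* perceptual difference index mentioned above is a distance in CIELAB space. As a simplified stand-in for whichever variant the protocol uses, the classic CIE76 form is simply the Euclidean distance between two (L*, a*, b*) triples:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB points."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two colors with identical lightness but shifted chromaticity:
print(delta_e76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0)))  # 5.0
```

Later refinements (CIE94, CIEDE2000) add weighting terms for better perceptual uniformity, but the underlying idea of a distance in CIELAB is the same.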

15 pages, 792 KiB  
Article
Koffka Ring Perception in Digital Environments with Brightness Modulation
by Mile Matijević, Željko Bosančić and Martina Hajdek
Appl. Sci. 2025, 15(15), 8501; https://doi.org/10.3390/app15158501 (registering DOI) - 31 Jul 2025
Viewed by 39
Abstract
Various parameters and observation conditions contribute to the emergence of color. This phenomenon poses a challenge in modern visual communication systems, which are continuously being enhanced through new insights gained from research into specific psychophysical effects. One such effect is the psychophysical phenomenon of simultaneous contrast. Nearly 90 years ago, Kurt Koffka described one of the earliest illusions related to simultaneous contrast. This study examined the perception of gray tone variations in the Koffka ring against different background color combinations (red, blue, green) displayed on a computer screen. The intensity of the effect was measured using the lightness difference ΔL00 across light-, medium-, and dark-gray tones. The results were analyzed using descriptive statistics, while statistically significant differences were determined using the Friedman ANOVA and post hoc Wilcoxon tests. The strongest visual effect was observed for the dark-gray tones of the Koffka ring on blue/green and red/green backgrounds, indicating that perceptual organization and spatial parameters influence the illusion’s magnitude. The findings suggest important implications for digital media design, where understanding these effects can help avoid unintended color tone distortions caused by simultaneous contrast. Full article
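The Friedman ANOVA used in this study ranks each observer's scores across the experimental conditions and tests whether the mean ranks differ. A minimal sketch of the test statistic, assuming no tied values within an observer's row (the data below are illustrative, not from the study):

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for n subjects x k conditions.
    Assumes no ties within a subject's row of scores."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # Rank this subject's scores from 1 (smallest) to k (largest).
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 * sum(R * R for R in rank_sums) / (n * k * (k + 1)) - 3 * n * (k + 1)

# Three observers rate three background conditions; every observer orders
# the conditions identically, giving the maximal statistic for n=3, k=3.
data = [[1.2, 2.5, 3.1], [0.9, 2.2, 2.8], [1.0, 2.0, 3.0]]
print(friedman_statistic(data))  # 6.0
```

The statistic is compared against a chi-square distribution with k-1 degrees of freedom; a library routine such as `scipy.stats.friedmanchisquare` handles ties and returns the p-value directly.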

18 pages, 1999 KiB  
Article
Circadian Light Manipulation and Melatonin Supplementation Enhance Morphine Antinociception in a Neuropathic Pain Rat Model
by Nian-Cih Huang and Chih-Shung Wong
Int. J. Mol. Sci. 2025, 26(15), 7372; https://doi.org/10.3390/ijms26157372 - 30 Jul 2025
Viewed by 161
Abstract
Disruption of circadian rhythms by abnormal light exposure and reduced melatonin secretion has been linked to heightened pain sensitivity and opioid tolerance. This study evaluated how environmental light manipulation and exogenous melatonin supplementation influence pain perception and morphine tolerance in a rat model of neuropathic pain induced by partial sciatic nerve transection (PSNT). Rats were exposed to constant darkness, constant light, or a 12 h/12 h light–dark cycle for one week before PSNT surgery. Behavioral assays and continuous intrathecal (i.t.) infusion of morphine, melatonin, or their combination were conducted over a 7-day period beginning immediately after PSNT. On Day 7, after the drug infusions were discontinued, an acute intrathecal morphine challenge (15 µg, i.t.) was administered to assess tolerance expression. Constant light suppressed melatonin levels, exacerbated pain behaviors, and accelerated morphine tolerance. In contrast, circadian-aligned lighting preserved melatonin rhythms and mitigated these effects. Melatonin co-infusion attenuated morphine tolerance and enhanced morphine analgesia. Reduced pro-inflammatory cytokine expression, increased anti-inflammatory cytokine IL-10 levels, and suppressed astrocyte activation were also observed with melatonin co-infusion during morphine tolerance induction. These findings highlight the potential of melatonin and circadian regulation for improving opioid efficacy and reducing morphine tolerance in the management of neuropathic pain. Full article
(This article belongs to the Section Molecular Neurobiology)

22 pages, 3506 KiB  
Review
Spectroscopic and Imaging Technologies Combined with Machine Learning for Intelligent Perception of Pesticide Residues in Fruits and Vegetables
by Haiyan He, Zhoutao Li, Qian Qin, Yue Yu, Yuanxin Guo, Sheng Cai and Zhanming Li
Foods 2025, 14(15), 2679; https://doi.org/10.3390/foods14152679 - 30 Jul 2025
Viewed by 213
Abstract
Pesticide residues in fruits and vegetables pose a serious threat to food safety. Traditional detection methods suffer from complex operation, high cost, and long detection times. Therefore, it is of great significance to develop rapid, non-destructive, and efficient detection technologies and equipment. In recent years, the combination of spectroscopic and imaging techniques with machine learning algorithms has developed rapidly, providing a new way to address this problem. This review focuses on the research progress of combining spectroscopic techniques (near-infrared spectroscopy (NIRS), hyperspectral imaging (HSI), surface-enhanced Raman scattering (SERS), and laser-induced breakdown spectroscopy (LIBS)) and imaging techniques (visible light (VIS) imaging, NIRS imaging, HSI, and terahertz imaging) with machine learning algorithms for the detection of pesticide residues in fruits and vegetables. It also explores the major challenges facing the application of these combined technologies to the intelligent perception of pesticide residues: the performance of machine learning models requires further enhancement, the fusion of imaging and spectral data presents technical difficulties, and the commercialization of hardware devices remains underdeveloped. Finally, this review proposes a method that integrates spectral and image data, enhancing the accuracy of pesticide residue detection through the construction of interpretable machine learning algorithms and providing support for the intelligent sensing and analysis of agricultural and food products. Full article

30 pages, 7259 KiB  
Article
Multimodal Data-Driven Hourly Dynamic Assessment of Walkability on Urban Streets and Exploration of Regulatory Mechanisms for Diurnal Changes: A Case Study of Wuhan City
by Xingyao Wang, Ziyi Peng and Xue Yang
Land 2025, 14(8), 1551; https://doi.org/10.3390/land14081551 - 28 Jul 2025
Viewed by 238
Abstract
The use of multimodal data can effectively compensate for the lack of temporal resolution in streetscape imagery-based studies and achieve hourly refinement in the study of street walkability dynamics. Exploring the 24 h dynamic pattern of urban street walkability and its diurnal variation characteristics is a crucial step in understanding and responding to accelerated urban metabolism. To address the shortcomings of existing studies, which are mostly limited to static assessment or coarse time scales, this study integrates multimodal data such as streetscape images, remote sensing images of nighttime lights, and text-described crowd activity information, and introduces a novel approach to enhance the simulation of pedestrian perception through a visual–textual multimodal deep learning model. A baseline model for the dynamic assessment of walkability, with the street as the spatial unit and the hour as the time granularity, is generated. To explore in depth the dynamic regulation mechanism of street walkability under diurnal shifts, the 24 h dynamic walkability score is calculated, and a quantification system for the diurnal change characteristics of walkability is further proposed. The results of spatio-temporal cluster analysis and quantitative calculations show that the intensity of economic activities and the pedestrian experience significantly shape the diurnal pattern of walkability; for example, high-energy urban areas (e.g., along the riverside) show unique nocturnal activity characteristics and abnormal recovery speeds during the dawn transition. This study fills a gap in the study of hourly street dynamics at the micro-scale, and its multimodal assessment framework and dynamic quantitative index system provide important references for future urban spatial dynamics planning. Full article

25 pages, 19515 KiB  
Article
Towards Efficient SAR Ship Detection: Multi-Level Feature Fusion and Lightweight Network Design
by Wei Xu, Zengyuan Guo, Pingping Huang, Weixian Tan and Zhiqi Gao
Remote Sens. 2025, 17(15), 2588; https://doi.org/10.3390/rs17152588 - 24 Jul 2025
Viewed by 339
Abstract
Synthetic Aperture Radar (SAR) provides all-weather, all-time imaging capabilities, enabling reliable maritime ship detection under challenging weather and lighting conditions. However, most high-precision detection models rely on complex architectures and large-scale parameters, limiting their applicability to resource-constrained platforms such as satellite-based systems, where model size, computational load, and power consumption are tightly restricted. Thus, guided by the principles of lightweight design, robustness, and energy efficiency optimization, this study proposes a three-stage collaborative multi-level feature fusion framework to reduce model complexity without compromising detection performance. Firstly, the backbone network integrates depthwise separable convolutions and a Convolutional Block Attention Module (CBAM) to suppress background clutter and extract effective features. Building upon this, a cross-layer feature interaction mechanism is introduced via the Multi-Scale Coordinated Fusion (MSCF) and Bi-EMA Enhanced Fusion (Bi-EF) modules to strengthen joint spatial-channel perception. To further enhance the detection capability, Efficient Feature Learning (EFL) modules are embedded in the neck to improve feature representation. Experiments on the SAR Ship Detection Dataset (SSDD) show that this method, with only 1.6 M parameters, achieves a mean average precision (mAP) of 98.35% in complex scenarios, including inshore and offshore environments. It resolves the trade-off in traditional methods between detection accuracy and hardware resource requirements, providing a new technical path for real-time SAR ship detection on satellite platforms. Full article
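The parameter savings that motivate depthwise separable convolutions, one of the lightweight building blocks named above, can be sanity-checked with a quick count. The layer sizes below are generic assumptions for illustration, not the paper's actual configuration:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)                 # 73728
dws = depthwise_separable_params(3, 64, 128)  # 576 + 8192 = 8768
print(std, dws, round(std / dws, 1))          # 73728 8768 8.4
```

For a 3x3 layer with 64 input and 128 output channels this is roughly an 8x reduction, which is how sub-2M-parameter detectors become feasible.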

29 pages, 766 KiB  
Article
Interpretable Fuzzy Control for Energy Management in Smart Buildings Using JFML-IoT and IEEE Std 1855-2016
by María Martínez-Rojas, Carlos Cano, Jesús Alcalá-Fdez and José Manuel Soto-Hidalgo
Appl. Sci. 2025, 15(15), 8208; https://doi.org/10.3390/app15158208 - 23 Jul 2025
Viewed by 181
Abstract
This paper presents an interpretable and modular framework for energy management in smart buildings based on fuzzy logic and the IEEE Std 1855-2016. The proposed system builds upon the JFML-IoT library, enabling the integration and execution of fuzzy rule-based systems on resource-constrained IoT devices using a lightweight and extensible architecture. Unlike conventional data-driven controllers, this approach emphasizes semantic transparency, expert-driven control logic, and compliance with fuzzy markup standards. The system is designed to enhance both operational efficiency and user comfort through transparent and explainable decision-making. A four-layer architecture structures the system into Perception, Communication, Processing, and Application layers, supporting real-time decisions based on environmental data. The fuzzy logic rules are defined collaboratively with domain experts and encoded in Fuzzy Markup Language to ensure interoperability and formalization of expert knowledge. While adherence to IEEE Std 1855-2016 facilitates system integration and standardization, the scientific contribution lies in the deployment of an interpretable, IoT-based control system validated in real conditions. A case study is conducted in a realistic indoor environment, using temperature, humidity, illuminance, occupancy, and CO2 sensors, along with HVAC and lighting actuators. The results demonstrate that the fuzzy inference engine generates context-aware control actions aligned with expert expectations. The proposed framework also opens possibilities for incorporating user-specific preferences and adaptive comfort strategies in future developments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
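A fuzzy rule of the kind this framework executes (expert-defined membership functions, Mamdani-style min-inference) can be sketched as follows. The variable names, breakpoints, and rule are illustrative assumptions, not entries from the paper's FML rule base:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# IF temperature is warm AND occupancy is high THEN cooling is medium
def rule_activation(temp, occupancy):
    warm = triangular(temp, 22.0, 27.0, 32.0)        # degrees Celsius
    high = triangular(occupancy, 0.4, 1.0, 1.6)      # fraction of seats occupied
    return min(warm, high)                            # Mamdani min-inference

print(rule_activation(27.0, 1.0))  # 1.0: both antecedents fully satisfied
print(rule_activation(24.5, 0.7))  # 0.5: limited by the partial memberships
```

In the standardized setting, these memberships and rules would be serialized in Fuzzy Markup Language (IEEE Std 1855-2016) rather than hard-coded, which is what gives the system its interoperability.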

17 pages, 1927 KiB  
Article
ConvTransNet-S: A CNN-Transformer Hybrid Disease Recognition Model for Complex Field Environments
by Shangyun Jia, Guanping Wang, Hongling Li, Yan Liu, Linrong Shi and Sen Yang
Plants 2025, 14(15), 2252; https://doi.org/10.3390/plants14152252 - 22 Jul 2025
Viewed by 334
Abstract
To address the challenges of low recognition accuracy and substantial model complexity in crop disease identification models operating in complex field environments, this study proposed a novel hybrid model named ConvTransNet-S, which integrates Convolutional Neural Networks (CNNs) and transformers for crop disease identification tasks. Unlike existing hybrid approaches, ConvTransNet-S introduces three key innovations. First, a Local Perception Unit (LPU) and Lightweight Multi-Head Self-Attention (LMHSA) modules were introduced to synergistically enhance the extraction of fine-grained plant disease details and model global dependency relationships, respectively. Second, an Inverted Residual Feed-Forward Network (IRFFN) was employed to optimize the feature propagation path, thereby enhancing the model’s robustness against interferences such as lighting variations and leaf occlusions. This novel combination of an LPU, LMHSA, and an IRFFN achieves a dynamic equilibrium between local texture perception and global context modeling, effectively resolving the trade-offs inherent in standalone CNNs or transformers. Finally, through a phased architecture design, efficient fusion of multi-scale disease features is achieved, which enhances feature discriminability while reducing model complexity. The experimental results indicated that ConvTransNet-S achieved a recognition accuracy of 98.85% on the PlantVillage public dataset. This model operates with only 25.14 million parameters, a computational load of 3.762 GFLOPs, and an inference time of 7.56 ms. Testing on a self-built in-field complex scene dataset comprising 10,441 images revealed that ConvTransNet-S achieved an accuracy of 88.53%, which represents improvements of 14.22%, 2.75%, and 0.34% over EfficientNetV2, Vision Transformer, and Swin Transformer, respectively.
Furthermore, the ConvTransNet-S model achieved up to 14.22% higher disease recognition accuracy under complex background conditions while reducing the parameter count by 46.8%. This confirms that its unique multi-scale feature mechanism can effectively distinguish disease from background features, providing a novel technical approach for disease diagnosis in complex agricultural scenarios and demonstrating significant application value for intelligent agricultural management. Full article
(This article belongs to the Section Plant Modeling)

20 pages, 4420 KiB  
Article
Perception of Light Environment in University Classrooms Based on Parametric Optical Simulation and Virtual Reality Technology
by Zhenhua Xu, Jiaying Chang, Cong Han and Hao Wu
Buildings 2025, 15(15), 2585; https://doi.org/10.3390/buildings15152585 - 22 Jul 2025
Viewed by 277
Abstract
University classrooms, core to higher education, have indoor light environments that directly affect students’ learning efficiency, visual health, and psychological states. This study integrates parametric optical simulation and virtual reality (VR) to explore light environment perception in ordinary university classrooms. Forty college students (18–25 years, ~1:1 gender ratio) participated in real-versus-virtual comparative experiments. VR scenarios were optimized via real-time rendering and physical calibration. The results showed no significant differences in subjects’ perception evaluations between the two environments (p > 0.05), verifying virtual environments as effective experimental carriers. The analysis of eight virtual conditions (varying window-to-wall ratios and lighting methods) revealed that mixed lighting performed best in light perception, spatial perception, and overall evaluation. Light perception had the greatest influence on overall evaluation (0.905), with glare as the core factor (0.68); the sense of enclosure contributed most to spatial perception (0.45). Structural equation modeling showed that window-to-wall ratio and lighting power density positively correlated with subjective evaluations. Window-to-wall ratio had a 0.412 direct effect on spatial perception and a 0.84 total mediating effect (67.1% of the total effect), exceeding the lighting power density’s 0.57 summed mediating effect. This study confirms mixed lighting and window-to-wall ratio optimization as keys to improving classroom light quality, providing an experimental paradigm and parameter basis for user-perception-oriented design. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
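The reported effect decomposition is internally consistent: in a mediation model the total effect is the direct effect plus the mediated (indirect) effect, so the 67.1% share follows directly from the two path coefficients given in the abstract:

```python
direct = 0.412    # window-to-wall ratio -> spatial perception, direct effect
mediated = 0.84   # total mediating (indirect) effect
total = direct + mediated

# Share of the total effect carried by mediation:
print(round(mediated / total * 100, 1))  # 67.1, matching the reported share
```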

19 pages, 1563 KiB  
Review
Autonomous Earthwork Machinery for Urban Construction: A Review of Integrated Control, Fleet Coordination, and Safety Assurance
by Zeru Liu and Jung In Kim
Buildings 2025, 15(14), 2570; https://doi.org/10.3390/buildings15142570 - 21 Jul 2025
Viewed by 248
Abstract
Autonomous earthwork machinery is gaining traction as a means to boost productivity and safety on space-constrained urban sites, yet the fast-growing literature has not been fully integrated. To clarify current knowledge, we systematically searched Scopus and screened 597 records, retaining 157 peer-reviewed papers (2015–March 2025) that address autonomy, integrated control, or risk mitigation for excavators, bulldozers, and loaders. Descriptive statistics, VOSviewer mapping, and qualitative synthesis show output rising rapidly and peaking at 30 papers in 2024, led by China, Korea, and the USA. Four tightly linked themes dominate: perception-driven machine autonomy, IoT-enabled integrated control systems, multi-sensor safety strategies, and the first demonstrations of fleet-level collaboration (e.g., coordinated excavator clusters and unmanned aerial vehicle and unmanned ground vehicle (UAV–UGV) site preparation). Advances include centimeter-scale path tracking, real-time fusion of vision and light detection and ranging (LiDAR) data, and geofenced safety envelopes, but formal validation protocols and robust inter-machine communication remain open challenges. The review distils five research priorities: adaptive perception and artificial intelligence (AI), digital-twin integration with building information modeling (BIM), cooperative multi-robot planning, rigorous safety assurance, and human–automation partnership. Addressing these is essential to transform isolated prototypes into connected, self-optimizing fleets capable of delivering safer, faster, and more sustainable urban construction. Full article
(This article belongs to the Special Issue Automation and Robotics in Building Design and Construction)

14 pages, 2822 KiB  
Article
Accuracy and Reliability of Smartphone Versus Mirrorless Camera Images-Assisted Digital Shade Guides: An In Vitro Study
by Soo Teng Chew, Suet Yeo Soo, Mohd Zulkifli Kassim, Khai Yin Lim and In Meei Tew
Appl. Sci. 2025, 15(14), 8070; https://doi.org/10.3390/app15148070 - 20 Jul 2025
Viewed by 327
Abstract
Image-assisted digital shade guides are increasingly popular for shade matching; however, research on their accuracy remains limited. This study aimed to compare the accuracy and reliability of color coordination in image-assisted digital shade guides constructed using calibrated images of their shade tabs captured by a mirrorless camera (Canon, Tokyo, Japan) (MC-DSG) and a smartphone camera (Samsung, Seoul, Korea) (SC-DSG), using a spectrophotometer as the reference standard. Twenty-nine VITA Linearguide 3D-Master shade tabs were photographed under controlled settings with both cameras equipped with cross-polarizing filters. Images were calibrated using Adobe Photoshop (Adobe Inc., San Jose, CA, USA). The L* (lightness), a* (red-green chromaticity), and b* (yellow-blue chromaticity) values, which represent the color attributes in the CIELAB color space, were computed at the middle third of each shade tab using Adobe Photoshop. Specifically, L* indicates the brightness of a color (ranging from black [0] to white [100]), a* denotes the position between red (+a*) and green (–a*), and b* represents the position between yellow (+b*) and blue (–b*). These values were used to quantify tooth shade and compare them to reference measurements obtained from a spectrophotometer (VITA Easyshade V, VITA Zahnfabrik, Bad Säckingen, Germany). Mean color differences (∆E00) between MC-DSG and SC-DSG, relative to the spectrophotometer, were compared using an independent t-test. The ∆E00 values were also evaluated against perceptibility (PT = 0.8) and acceptability (AT = 1.8) thresholds. Reliability was evaluated using intraclass correlation coefficients (ICCs), and group differences were analyzed via one-way ANOVA and Bonferroni post hoc tests (α = 0.05). SC-DSG showed significantly lower ΔE00 deviations than MC-DSG (p < 0.001), falling within the clinically acceptable threshold. The L* values from MC-DSG were significantly higher than those from SC-DSG (p = 0.024).
All methods showed excellent reliability (ICC > 0.9). The findings support the potential of smartphone image-assisted digital shade guides for accurate and reliable tooth shade assessment. Full article
(This article belongs to the Special Issue Advances in Dental Materials, Instruments, and Their New Applications)
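The perceptibility and acceptability thresholds used in this study lend themselves to a simple three-way classification of a measured color difference. This is a sketch with the stated thresholds; the function name and labels are made up for illustration:

```python
def clinical_rating(delta_e00, pt=0.8, at=1.8):
    """Classify a CIEDE2000 color difference against the perceptibility (PT)
    and acceptability (AT) thresholds: below PT the difference is invisible;
    between PT and AT it is visible but clinically acceptable."""
    if delta_e00 < pt:
        return "imperceptible"
    return "acceptable" if delta_e00 <= at else "unacceptable"

print(clinical_rating(0.5), clinical_rating(1.2), clinical_rating(2.3))
# imperceptible acceptable unacceptable
```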

17 pages, 3817 KiB  
Article
Molecular Mechanism of Body Color Change in the Ecological Seedling Breeding Model of Apostichopus japonicus
by Lingshu Han, Pengfei Hao, Haoran Xiao, Weiyan Li, Yichen Fan, Wanrong Tian, Ye Tian, Luo Wang, Yaqing Chang and Jun Ding
Biology 2025, 14(7), 873; https://doi.org/10.3390/biology14070873 - 17 Jul 2025
Viewed by 270
Abstract
The mismatch between the rapid expansion of breeding scale and outdated techniques has hindered the development of the sea cucumber (A. japonicus) industry. Our previous work revealed that ecological seedling breeding can produce red-colored A. japonicus, a phenotype not observed in traditional artificial breeding, where individuals are typically green. To investigate the molecular and genetic basis of this novel red coloration, we compared the growth conditions of red sea cucumbers and green sea cucumbers, as well as the differences in the pigment composition, gene expression and metabolites of their body walls. Red individuals showed higher body length and weight, and elevated levels of astaxanthin, lutein, canthaxanthin, and β-carotene in the body wall. Transcriptomic and metabolomic analyses identified differentially expressed genes and metabolites associated with pigmentation. In particular, FMO2 and WDR18, involved in the cytochrome P450 drug metabolism pathway, were significantly upregulated in red individuals and are known to play roles in pigment biosynthesis and light signal perception. Key metabolites such as astaxanthin and fucoxanthin were implicated in body color formation. Moreover, genes in the arachidonic acid metabolism pathway were highly expressed, suggesting that dietary factors may contribute to pigment synthesis and accumulation. These findings provide novel insights into the mechanisms underlying body color variation in A. japonicus and offer potential for improved breeding strategies. Full article
(This article belongs to the Section Marine Biology)
23 pages, 396 KiB  
Article
Navigating Hybrid Work: An Optimal Office–Remote Mix and the Manager–Employee Perception Gap in IT
by Milos Loncar, Jovanka Vukmirovic, Aleksandra Vukmirovic, Dragan Vukmirovic and Ratko Lasica
Sustainability 2025, 17(14), 6542; https://doi.org/10.3390/su17146542 - 17 Jul 2025
Abstract
The transition to hybrid work has become a defining feature of the post-pandemic IT sector, yet organizations lack empirical benchmarks for balancing flexibility with performance and well-being. This study addresses this gap by identifying an optimal hybrid work structure and exposing systematic perception gaps between employees and managers. Grounded in Self-Determination Theory and the Job Demands–Resources model, our research analyses survey data from 1003 employees and 252 managers across 46 countries. The findings identify a hybrid "sweet spot" of 6–10 office days per month. Employees in this window report significantly higher perceived efficiency (Odds Ratio (OR) ≈ 2.12) and marginally lower office-related stress. Critically, the study uncovers a significant perception gap: contrary to the initial hypothesis, managers are nearly twice as likely as employees to rate hybrid work as most efficient (OR ≈ 1.95) and consistently evaluate remote-work resources more favourably (OR ≈ 2.64). This "supervisor-optimism bias" suggests a disconnect between policy design and frontline experience. The study concludes that while a light-to-moderate hybrid model offers clear benefits, organizations must actively address this perceptual divide and remedy resource shortages to fully realize the potential of hybrid work. This research provides data-driven guidelines for creating sustainable, high-performance work environments in the IT sector. Full article
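The odds ratios the abstract reports (e.g., OR ≈ 2.12 for perceived efficiency) compare the odds of an outcome between two groups. A minimal sketch of the computation from a 2×2 contingency table; the counts below are invented for illustration and are not the study's data:

```python
def odds_ratio(group_yes, group_no, control_yes, control_no):
    """Odds ratio of reporting an outcome (e.g. 'high efficiency')
    in a focal group relative to a control group."""
    return (group_yes / group_no) / (control_yes / control_no)

# Hypothetical counts: 115 of 200 respondents in the 6-10-office-day
# window rate hybrid work as efficient, vs. 98 of 252 outside it.
or_value = odds_ratio(115, 85, 98, 154)
print(f"OR = {or_value:.2f}")
```

An OR of 1 means equal odds in both groups; survey studies like this one typically estimate ORs via logistic regression so that covariates can be adjusted for, rather than from raw counts.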
21 pages, 12122 KiB  
Article
RA3T: An Innovative Region-Aligned 3D Transformer for Self-Supervised Sim-to-Real Adaptation in Low-Altitude UAV Vision
by Xingrao Ma, Jie Xie, Di Shao, Aiting Yao and Chengzu Dong
Electronics 2025, 14(14), 2797; https://doi.org/10.3390/electronics14142797 - 11 Jul 2025
Abstract
Low-altitude unmanned aerial vehicle (UAV) vision is critically hindered by the Sim-to-Real Gap, where models trained exclusively on simulation data degrade under real-world variations in lighting, texture, and weather. To address this problem, we propose RA3T (Region-Aligned 3D Transformer), a novel self-supervised framework that enables robust Sim-to-Real adaptation. Specifically, we first develop a dual-branch strategy for self-supervised feature learning, integrating Masked Autoencoders and contrastive learning. This approach extracts domain-invariant representations from unlabeled simulated imagery to enhance robustness against occlusion while reducing annotation dependency. Leveraging these learned features, we then introduce a 3D Transformer fusion module that unifies multi-view RGB and LiDAR point clouds through cross-modal attention. By explicitly modeling spatial layouts and height differentials, this component significantly improves recognition of small and occluded targets in complex low-altitude environments. To address persistent fine-grained domain shifts, we finally design region-level adversarial calibration that deploys local discriminators on partitioned feature maps. This mechanism directly aligns texture, shadow, and illumination discrepancies which challenge conventional global alignment methods. Extensive experiments on UAV benchmarks VisDrone and DOTA demonstrate the effectiveness of RA3T. The framework achieves +5.1% mAP on VisDrone and +7.4% mAP on DOTA over the 2D adversarial baseline, particularly on small objects and sparse occlusions, while maintaining real-time performance of 17 FPS at 1024 × 1024 resolution on an RTX 4080 GPU. Visual analysis confirms that the synergistic integration of 3D geometric encoding and local adversarial alignment effectively mitigates domain gaps caused by uneven illumination and perspective variations, establishing an efficient pathway for simulation-to-reality UAV perception. Full article
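The fusion module described above unifies RGB and LiDAR features "through cross-modal attention": image tokens form the queries, point-cloud tokens the keys and values. A minimal single-head sketch in NumPy, without the learned projection matrices or multi-head splitting a full Transformer (and RA3T itself) would use; token counts and dimensions are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(rgb_tokens, lidar_tokens):
    """RGB tokens query LiDAR tokens: Attention(Q, K, V) with Q from
    image features and K, V from point-cloud features."""
    q, k, v = rgb_tokens, lidar_tokens, lidar_tokens
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)        # (n_rgb, n_lidar) affinities
    return softmax(scores, axis=-1) @ v    # LiDAR context per RGB token

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 64))    # 16 image-patch tokens, dim 64
lidar = rng.standard_normal((32, 64))  # 32 voxel/point tokens, dim 64
fused = cross_modal_attention(rgb, lidar)
print(fused.shape)  # one LiDAR-informed feature per RGB token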
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)