Search Results (139)

Search Parameters:
Keywords = color prior model

19 pages, 16026 KB  
Article
CIR-SSM: A Cross and Inter Resolution State-Space Model for Underwater Image Enhancement
by Fengxian Liu, Ning Ye and Haitao Wang
Appl. Sci. 2026, 16(3), 1297; https://doi.org/10.3390/app16031297 - 27 Jan 2026
Abstract
Underwater images often suffer from strong color casts, low contrast, and blurred textures. It is observed that low resolution can provide globally correct color, so low-resolution priors can guide high-resolution correction. While many recent methods combine Transformer and CNN components, Mamba offers an efficient alternative for global dependency modeling. Motivated by these insights, this paper proposes a cross- and inter-resolution state-space model for underwater image enhancement (CIR-SSM). The method consists of three sub-networks at full, 1/2, and 1/4 resolutions, each stacking color–texture Mamba modules. Each module includes a color Mamba block, a texture Mamba block, and a color–texture fusion Mamba block. The color Mamba block injects low-resolution color priors into the state-space trajectory to steer global color reconstruction in the high-resolution branch. In parallel, the texture Mamba block operates at the native resolution to capture fine-grained structural dependencies for texture restoration. The fusion Mamba block adaptively merges the enhanced color and texture representations within the state-space framework to produce the restored image. Comprehensive quantitative assessments on both the UIEB and SQUID benchmarks show that the proposed framework achieves the highest evaluated scores, outperforming several representative state-of-the-art methods. Full article
(This article belongs to the Special Issue Computational Imaging: Algorithms, Technologies, and Applications)
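The guiding observation, that a heavily downsampled image retains globally correct color statistics, can be illustrated with a toy channel-mean transfer. This is only an illustration of the idea, not the paper's state-space injection; the function name and shapes are hypothetical:

```python
import numpy as np

def apply_lowres_color_prior(high: np.ndarray, low: np.ndarray) -> np.ndarray:
    """Shift each channel of `high` so its mean matches the low-res prior.

    A toy stand-in for guiding high-resolution color correction with
    low-resolution color statistics; CIR-SSM's state-space mechanism is
    far richer than this channel-mean transfer.
    """
    out = high.astype(np.float64).copy()
    for c in range(out.shape[-1]):
        out[..., c] += low[..., c].mean() - out[..., c].mean()
    return np.clip(out, 0.0, 1.0)
```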
19 pages, 2136 KB  
Article
Transformer-Based Multi-Class Classification of Bangladeshi Rice Varieties Using Image Data
by Israt Tabassum and Vimala Nunavath
Appl. Sci. 2026, 16(3), 1279; https://doi.org/10.3390/app16031279 - 27 Jan 2026
Abstract
Rice (Oryza sativa L.) is a staple food for over half of the global population, with significant economic, agricultural, and cultural importance, particularly in Asia. Thousands of rice varieties exist worldwide, differing in size, shape, color, and texture, making accurate classification essential for quality control, breeding programs, and authenticity verification in trade and research. Traditional manual identification of rice varieties is time-consuming, error-prone, and heavily reliant on expert knowledge. Deep learning provides an efficient alternative by automatically extracting discriminative features from rice grain images for precise classification. While prior studies have primarily employed deep learning models such as CNN, VGG, InceptionV3, MobileNet, and DenseNet201, transformer-based models remain underexplored for rice variety classification. This study addresses this gap by applying two transformer-based models, the Swin Transformer and the Vision Transformer (ViT), for multi-class classification of rice varieties using the publicly available PRBD dataset from Bangladesh. Experimental results demonstrate that the ViT model achieved an accuracy of 99.86% with precision, recall, and F1-score all at 0.9986, while the Swin Transformer model obtained an accuracy of 99.44% with a precision of 0.9944, recall of 0.9944, and F1-score of 0.9943. These results highlight the effectiveness of transformer-based models for high-accuracy rice variety classification. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
20 pages, 4309 KB  
Article
Characterization and Optimization of the Ultrasound-Assisted Extraction Process of an Unexplored Amazonian Drupe (Chondrodendron tomentosum): A Novel Source of Anthocyanins and Phenolic Compounds
by Disbexy Huaman-Huaman, Segundo G. Chavez, Laydy Mena-Chacon, José Marcelo-Peña, Hans Minchán-Velayarce and Ralph Rivera-Botonares
Processes 2026, 14(2), 357; https://doi.org/10.3390/pr14020357 - 20 Jan 2026
Abstract
This study presents the first comprehensive physicochemical and bioactive characterization of the fruit of Chondrodendron tomentosum Ruiz & Pav. (Menispermaceae). Biometric and physicochemical parameters were characterized across three fruit ripening stages (green, turning, ripe). Additionally, proximate composition was determined in ripe fruits, and methanol concentration (25–75%), ultrasonic amplitude (30–70%), and time (1–15 min) were optimized using response surface methodology with a Box–Behnken design. During ripening, weight increased by 47.7% (3.89 to 5.74 g; p < 0.0001), TSS by 26.1% (7.00 to 8.83 °Brix), pH decreased by 32.0% (6.28 to 4.27), and acidity increased by 276% (0.25 to 0.94%). The quadratic models demonstrated high predictive accuracy (R² > 96.5%; p < 0.004). Optimal conditions (57% methanol, 70% amplitude, and 15 min) maximized total anthocyanin content (120.71 ± 1.89 mg cyanidin-3-glucoside/L), total phenols (672.46 ± 5.84 mg GAE/100 g), and DPPH radical scavenging capacity (5857.55 ± 60.20 µmol Trolox/100 g) in ripe fruits. Unripe fruits contain no anthocyanins; the content reaches 46.01 mg C3G/L in turning fruits and 120.71 mg/L in ripe fruits (162% higher than in turning fruits). Principal component analysis (90.6% of variance) revealed synchronized co-accumulation of anthocyanins and phenols, enhanced by vacuolar acidification. These results suggest ripe C. tomentosum fruits as a potential source of natural colorants, nutraceuticals, and functional foods, pending prior development of green, human-safe extraction processes. Full article
(This article belongs to the Special Issue Advances in Green Extraction and Separation Processes)
22 pages, 3772 KB  
Article
A Degradation-Aware Dual-Path Network with Spatially Adaptive Attention for Underwater Image Enhancement
by Shasha Tian, Adisorn Sirikham, Jessada Konpang and Chuyang Wang
Electronics 2026, 15(2), 435; https://doi.org/10.3390/electronics15020435 - 19 Jan 2026
Abstract
Underwater image enhancement remains challenging due to wavelength-dependent absorption, spatially varying scattering, and non-uniform illumination, which jointly cause severe color distortion, contrast degradation, and structural information loss. To address these issues, we propose UCS-Net, a degradation-aware dual-path framework that exploits the complementarity between global and local representations. A spatial color balance module first stabilizes the chromatic distribution of degraded inputs through a learnable gray-world-guided normalization, mitigating wavelength-induced color bias prior to feature extraction. The network then adopts a dual-branch architecture, where a hierarchical Swin Transformer branch models long-range contextual dependencies and global color relationships, while a multi-scale residual convolutional branch focuses on recovering local textures and structural details suppressed by scattering. Furthermore, a multi-scale attention fusion mechanism adaptively integrates features from both branches in a degradation-aware manner, enabling dynamic emphasis on global or local cues according to regional attenuation severity. A hue-preserving reconstruction module is finally employed to suppress color artifacts and ensure faithful color rendition. Extensive experiments on UIEB, EUVP, and UFO benchmarks demonstrate that UCS-Net consistently outperforms state-of-the-art methods in both full-reference and non-reference evaluations. Qualitative results further confirm its effectiveness in restoring fine structural details while maintaining globally consistent and visually realistic colors across diverse underwater scenes. Full article
(This article belongs to the Special Issue Image Processing and Analysis)
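The gray-world assumption behind the spatial color balance module can be sketched with the classic non-learnable baseline: scale each channel so its mean equals the global mean intensity. UCS-Net's module is a learnable, guided variant; this is only the textbook rule, with hypothetical names:

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Classic gray-world white balance on an RGB image in [0, 1].

    Scales each channel so its mean matches the mean gray level,
    countering a global color cast such as the blue-green bias of
    underwater scenes. Illustrative baseline only.
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    gray = channel_means.mean()
    return np.clip(img * (gray / channel_means), 0.0, 1.0)
```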
24 pages, 11080 KB  
Article
Graph-Based and Multi-Stage Constraints for Hand–Object Reconstruction
by Wenrun Wang, Jianwu Dang, Yangping Wang and Hui Yu
Sensors 2026, 26(2), 535; https://doi.org/10.3390/s26020535 - 13 Jan 2026
Abstract
Reconstructing hand and object shapes from a single view during interaction remains challenging due to severe mutual occlusion and the need for high physical plausibility. To address this, we propose a novel framework for hand–object interaction reconstruction based on holistic, multi-stage collaborative optimization. Unlike methods that process hands and objects independently or apply constraints as late-stage post-processing, our model progressively enforces physical consistency and geometric accuracy throughout the entire reconstruction pipeline. Our network takes an RGB-D image as input. An adaptive feature fusion module first combines color and depth information to improve robustness against sensing uncertainties. We then introduce structural priors for 2D pose estimation and leverage texture cues to refine depth-based 3D pose initialization. Central to our approach is the iterative application of a dense mutual attention mechanism during sparse-to-dense mesh recovery, which dynamically captures interaction dependencies while refining geometry. Finally, we use a Signed Distance Function (SDF) representation explicitly designed for contact surfaces to prevent interpenetration and ensure physically plausible results. Through comprehensive experiments, our method demonstrates significant improvements on the challenging ObMan and DexYCB benchmarks, outperforming state-of-the-art techniques. Specifically, on the ObMan dataset, our approach achieves hand CDh and object CDo metrics of 0.077 cm² and 0.483 cm², respectively. Similarly, on the DexYCB dataset, it attains hand CDh and object CDo values of 0.251 cm² and 1.127 cm², respectively. Full article
(This article belongs to the Section Sensing and Imaging)
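The role of an SDF in preventing interpenetration can be sketched as a hinge penalty on signed distances of hand-mesh vertices to the object surface. This is a generic formulation, not the paper's contact-surface loss:

```python
import numpy as np

def penetration_penalty(sdf_values: np.ndarray) -> float:
    """Sum of penetration depths for hand vertices against an object SDF.

    `sdf_values` are signed distances of hand-mesh vertices to the object
    surface (negative = inside the object). The hinge on the negative part
    penalizes interpenetration and is zero when no vertex is inside.
    Illustrative sketch only.
    """
    return float(np.maximum(-sdf_values, 0.0).sum())
```

Minimizing this term during mesh refinement pushes penetrating vertices back to the surface while leaving non-contacting vertices untouched.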
21 pages, 58532 KB  
Article
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
by Bingxun Zhao and Yuan Chen
Computers 2026, 15(1), 43; https://doi.org/10.3390/computers15010043 - 10 Jan 2026
Abstract
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation including low contrast, color distortion, and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representations. To overcome these limitations, we propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information from the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. A cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
25 pages, 1225 KB  
Article
Research on the Influence of Interface Visual Design Features of Mobile News on Cognitive Load: A Study of Elderly Users in China
by Chang Liu and Qing-Xing Qu
Behav. Sci. 2026, 16(1), 32; https://doi.org/10.3390/bs16010032 - 23 Dec 2025
Abstract
This study addresses specific gaps in current research on user-experience interface design for news and information apps targeted at elderly users, particularly in the context of human factors and ergonomics. To investigate how interface design features of mobile news clients affect the cognitive load of elderly users, an in-depth analysis was conducted using a combination of objective eye movement tests and subjective evaluation scales. Mobile news client interfaces with systematically varied visual complexity were designed by orthogonally manipulating three core elements identified from top-ranked Chinese news apps and prior literature, and within-subject repeated experiments were performed to collect subjective cognitive load data, objective eye movement data, and behavioral data, validating the proposed hypothesis model. The results indicate that the visual complexity of mobile news client interfaces significantly impacts the cognitive load of elderly users, with keyword color substantially modulating this effect. These findings contribute to the knowledge base on mobile news client interface design for elderly users and provide practical recommendations for designers to create more equitable interfaces, enhancing usability for this demographic. Full article
27 pages, 9039 KB  
Article
Source(s) of the Smooth Caloris Exterior Plains on Mercury: Mapping, Remote Analyses, and Scenarios for Future Testing with BepiColombo Data
by Keenan B. Golder, Bradley J. Thomson, Lillian R. Ostrach, Devon M. Burr, Joshua P. Emery and Harald Hiesinger
Remote Sens. 2026, 18(1), 19; https://doi.org/10.3390/rs18010019 - 20 Dec 2025
Abstract
Mercury hosts widespread smooth plains that are concentrated in the Caloris impact basin, in an annulus surrounding the Caloris basin, and in the adjacent northern smooth plains. The origins of these smooth plains are uncertain, although prior work suggests these plains in the northwestern Caloris annulus might reflect volcanic activity, impact ejecta, or a combination of the two. Deciphering the timing and mode of emplacement of these plains would provide a critical constraint on regional late-stage volcanism or impact effects. In this work, the region northwest of Caloris was investigated using geomorphological and color-based mapping, crater counting techniques, and spectral analyses with the goal of placing constraints on the source of the observed units and identifying the primary emplacement mechanism. Mapping and spectral analyses confirm previous findings of two distinct, yet intermingled, units within these plains, each with similar crater count model ages that postdate the formation of the Caloris impact basin. Mapping, spectral analyses, ages, and the identification of potential flow pathways are more consistent with a predominantly volcanic origin for the smooth plains materials, although these data do not rule out contributions from impact ejecta or impact melt. We propose several hypothetical scenarios, including post-emplacement modification by near-surface volatiles, to explain these observations and clarify the emplacement mechanism for these specific smooth plains regions. Further observations from the BepiColombo mission should provide data to potentially address the outstanding questions from this work. Full article
21 pages, 17206 KB  
Article
Mean-Curvature-Regularized Deep Image Prior with Soft Attention for Image Denoising and Deblurring
by Muhammad Israr, Shahbaz Ahmad, Muhammad Nabeel Asghar and Saad Arif
Mathematics 2025, 13(24), 3906; https://doi.org/10.3390/math13243906 - 6 Dec 2025
Abstract
Sparsity-driven regularization has undergone significant development in single-image restoration, particularly with the transition from handcrafted priors to trainable deep architectures. In this work, a geometric prior-enhanced deep image prior (DIP) framework, termed DIP-MC, is proposed that integrates mean curvature (MC) regularization to promote natural smoothness and structural coherence in reconstructed images. To strengthen the representational capacity of DIP, a self-attention module is incorporated between the encoder and decoder, enabling the network to capture long-range dependencies and preserve fine-scale textures. In contrast to total variation (TV), which frequently produces piecewise-constant artifacts and staircasing, MC regularization leverages curvature information, resulting in smoother transitions while maintaining sharp structural boundaries. DIP-MC is evaluated on standard grayscale and color image denoising and deblurring tasks using benchmark datasets including BSD68, Classic5, LIVE1, Set5, Set12, Set14, and the Levin dataset. Quantitative performance is assessed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics. Experimental results demonstrate that DIP-MC consistently outperformed the DIP-TV baseline with 26.49 PSNR and 0.9 SSIM. It achieved competitive performance relative to BM3D and EPLL models with 28.6 PSNR and 0.87 SSIM while producing visually more natural reconstructions with improved detail fidelity. Furthermore, the learning dynamics of DIP-MC are analyzed by examining update-cost behavior during optimization, visualizing the best-performing network weights, and monitoring PSNR and SSIM progression across training epochs. These evaluations indicate that DIP-MC exhibits superior stability and convergence characteristics. Overall, DIP-MC establishes itself as a robust, scalable, and geometrically informed framework for high-quality single-image restoration. Full article
(This article belongs to the Special Issue Mathematical Methods for Image Processing and Understanding)
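The mean curvature regularizer the abstract contrasts with total variation treats the image as a surface z = u(x, y) and penalizes kappa = div(grad(u) / sqrt(1 + |grad(u)|²)). A finite-difference sketch of that penalty (a generic discretization; DIP-MC's exact scheme may differ):

```python
import numpy as np

def mean_curvature_energy(u: np.ndarray) -> float:
    """Accumulated |mean curvature| of an image viewed as a surface.

    Computes kappa = div( grad(u) / sqrt(1 + |grad(u)|^2) ) with
    finite differences and returns sum |kappa|. Unlike TV, which
    penalizes any gradient and favors piecewise-constant (staircased)
    results, this term vanishes on linear ramps, allowing smooth
    intensity transitions. Illustrative discretization only.
    """
    uy, ux = np.gradient(u)                 # axis 0 = y, axis 1 = x
    norm = np.sqrt(1.0 + ux ** 2 + uy ** 2)
    ky, _ = np.gradient(uy / norm)          # d/dy of the y-flux
    _, kx = np.gradient(ux / norm)          # d/dx of the x-flux
    return float(np.abs(kx + ky).sum())
```

Note that a linear ramp has zero energy under this penalty, whereas its TV cost is proportional to the slope, which is the staircasing distinction the abstract draws.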
12 pages, 795 KB  
Article
Intraocular Cytokine Level Prediction from Fundus Images and Optical Coherence Tomography
by Hidenori Takahashi, Taiki Tsuge, Yusuke Kondo, Yasuo Yanagi, Satoru Inoda, Shohei Morikawa, Yuki Senoo, Toshikatsu Kaburaki, Tetsuro Oshika and Toshihiko Yamasaki
Sensors 2025, 25(23), 7382; https://doi.org/10.3390/s25237382 - 4 Dec 2025
Abstract
The relationship between retinal images and intraocular cytokine profiles remains largely unexplored, and no prior work has systematically compared fundus- and OCT-based deep learning models for cytokine prediction. We aimed to predict intraocular cytokine concentrations using color fundus photographs (CFP) and retinal optical coherence tomography (OCT) with deep learning. Our pipeline consisted of image preprocessing, convolutional neural network–based feature extraction, and regression modeling for each cytokine. Deep learning was implemented using AutoGluon, which automatically explored multiple architectures and converged on ResNet18, reflecting the small dataset size. Four approaches were tested: (1) CFP alone, (2) CFP plus demographic/clinical features, (3) OCT alone, and (4) OCT plus these features. Prediction performance was defined as the mean coefficient of determination (R²) across 34 cytokines, and differences were evaluated using paired two-tailed t-tests. We used data from 139 patients (152 eyes) and 176 aqueous humor samples. The cohort consisted of 85 males (61%) with a mean age of 73 (SD 9.8). Diseases included 64 exudative age-related macular degeneration, 29 brolucizumab-associated endophthalmitis, 19 cataract surgeries, 15 retinal vein occlusion, and 8 diabetic macular edema. Prediction performance was generally poor, with mean R² values below zero across all approaches. The CFP-only model (–0.19) outperformed CFP plus demographics (–24.1; p = 0.0373), and the OCT-only model (–0.18) outperformed OCT plus demographics (–14.7; p = 0.0080). No significant difference was observed between CFP and OCT (p = 0.9281). Notably, VEGF showed low predictability (31st with CFP, 12th with OCT). Full article
(This article belongs to the Section Optical Sensors)
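A mean R² below zero means the models predicted worse than simply outputting the mean cytokine level; the standard definition makes this explicit. A minimal sketch of that formula:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    Negative whenever the model's squared error exceeds that of
    always predicting the mean of y_true, which is how sub-zero
    mean R^2 values such as those reported here should be read.
    """
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```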
11 pages, 550 KB  
Article
The Dermatology Fast Track as a Model for Integrated Care Pathways Between Emergency Medicine and Outpatient Specialty Services
by Edoardo Cammarata, Chiara Airoldi, Elisa Zavattaro, Francesco Gavelli, Ugo Fazzini, Mattia Bellan, Paola Savoia and on behalf of the DFT Group
Medicina 2025, 61(12), 2133; https://doi.org/10.3390/medicina61122133 - 29 Nov 2025
Abstract
Background and Objectives: Dermatological conditions account for a significant proportion of Emergency Department (ED) visits but are often misclassified at triage and managed without timely specialist input. A Dermatology Fast Track (DFT) pathway was implemented to improve diagnostic accuracy, optimize resource use, and enhance integration between ED and dermatology services. Materials and Methods: We conducted a retrospective study of patients referred through the DFT between April 2023 and October 2024. Demographics, triage codes, diagnoses, comorbidities, prior healthcare utilization, treatments, and follow-up were analyzed. Concordance between ED and dermatology-assigned triage codes was assessed using Cohen's kappa, and temporal trends in referrals were explored. Results: Of 621 patients referred, 554 were included (mean age of 47.7 years and balanced sex distribution). Most were triaged green (75.6%) or white (23.1%), and 99.5% were discharged home. Infectious dermatoses (21.1%) and eczema (17.7%) were most frequent, with age-specific variations. Combined topical and systemic therapy was prescribed in 66.1% of cases, and 30.9% were referred for follow-up. Concordance between ED and dermatology triage codes was limited (58.7% agreement; Cohen's kappa 0.25), with frequent down-grading of priority by dermatologists. Seasonal peaks were observed, with higher demand during summer months. Conclusions: The DFT pathway streamlines ED care, ensuring timely management of acute dermatological conditions and reducing overcrowding. Seasonal demand fluctuations and discrepancies in triage highlight the need for targeted staff training, structured follow-up, and resource planning. Overall, the DFT is an effective model for enhancing ED efficiency, diagnostic accuracy, and patient care; however, as outcomes were assessed only in the DFT cohort and the study was conducted in a single center using Italy's color-coded triage system, the generalizability of these findings may be limited. Multicenter studies are needed to confirm its broader applicability. Full article
(This article belongs to the Section Dermatology)
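Cohen's kappa, used here to assess triage-code concordance, corrects raw agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of the standard statistic (the labels in the usage are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items.

    p_o is the observed fraction of agreements; p_e is the agreement
    expected from each rater's marginal label frequencies. A value
    near 0.25, as reported for ED vs. dermatology triage codes,
    indicates only fair agreement beyond chance.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```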
20 pages, 1021 KB  
Article
Combined Effects of Taro Starch-Based Edible Coating, Osmotic Dehydration, and Ultrasonication on Drying Kinetics and Quality Attributes of Pears
by Betül Aslan Yılmaz, Dilek Demirbüker Kavak and Hande Demir
Processes 2025, 13(11), 3695; https://doi.org/10.3390/pr13113695 - 15 Nov 2025
Abstract
The pursuit of efficient drying methods that preserve fruit quality remains a major challenge in food processing. Non-thermal pre-treatments such as ultrasonication (U), edible film coating (F), and osmotic dehydration (O) can improve drying performance but show limited effectiveness when applied individually. This study investigates a combined pre-treatment strategy for pear drying, evaluating a taro starch-based edible coating used alone and in combination with U and O. Pear slices received individual and combined pre-treatments (F, OF, UF, and UOF) prior to drying at temperatures of 60, 70, and 80 °C. The drying kinetics were modeled, and quality parameters such as effective moisture diffusivity (Deff), rehydration capacity, microstructure, color, total phenolic content (TPC), antioxidant activity, and vitamin C were assessed. The Page model fitted the drying data best (R² > 0.9935). UF achieved the shortest drying time and a porous microstructure, thereby enhancing rehydration. OF showed the highest Deff and best color retention, but the lowest rehydration. Conversely, UOF caused the greatest losses in bioactive compounds (TPC: 54.29 mg GAE/100 g; antioxidant activity: 15.39%; 0.48 mg vitamin C/100 g). Unlike single-technology studies, this sequential pre-treatment strategy for pears uniquely tailors the final quality, targeting efficiency, color, bioactivity, or structural properties. Full article
(This article belongs to the Special Issue Feature Papers in the "Food Process Engineering" Section)
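The Page thin-layer drying model referenced in the abstract relates moisture ratio to time as MR = exp(-k t^n). A sketch of fitting it with nonlinear least squares on synthetic data; the parameter values below are illustrative, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# Hypothetical noiseless drying curve (k and n chosen for illustration).
t = np.linspace(0.1, 300.0, 60)      # drying time, min
mr = page_model(t, 0.004, 1.3)       # synthetic moisture-ratio data

# Recover the parameters by nonlinear least squares.
(k_fit, n_fit), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
```

In practice the fit quality is judged by R² between measured and predicted moisture ratios, which is how the abstract's R² > 0.9935 should be read.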
27 pages, 1122 KB  
Review
Molecular Mechanisms Underlying Floral Development Mediated by Blue Light and Other Integrated Signals: Research Findings and Perspectives
by Yun Kong and Youbin Zheng
Crops 2025, 5(5), 72; https://doi.org/10.3390/crops5050072 - 15 Oct 2025
Abstract
Blue light (BL) is a key environmental signal influencing plant flowering, yet its role in floral development beyond the transition phase remains underexplored. This review provides a comprehensive synthesis of current research on BL-mediated floral development, with a particular emphasis on horticultural crops grown in a controlled environment. Unlike prior reviews that focus primarily on floral induction, this article systematically examines BL’s effects on later stages of flowering, including floral organ morphogenesis, sex expression, bud abortion, flower opening, scent emission, coloration, pollination, and senescence. Drawing on evidence from both model plants (e.g., Arabidopsis thaliana) and crop species, this review identifies key photoreceptors, hormonal regulators, and signaling components involved in BL responses. It also highlights species-specific and context-dependent outcomes of BL manipulation, proposes mechanistic hypotheses to explain conflicting findings, and outlines critical knowledge gaps. By integrating molecular, physiological, and environmental perspectives, this review offers a framework for optimizing BL applications to improve flowering traits and postharvest quality in horticultural production systems. Full article
22 pages, 1536 KB  
Article
Hybrid CNN–Transformer with Fusion Discriminator for Ovarian Tumor Ultrasound Imaging Classification
by Donglei Xu, Xinyi He, Ruoyun Zhang, Yinuo Zhang, Manzhou Li and Yan Zhan
Electronics 2025, 14(20), 4040; https://doi.org/10.3390/electronics14204040 - 14 Oct 2025
Abstract
We propose a local–global attention fusion network for benign–malignant discrimination of ovarian tumors in color Doppler ultrasound (CDFI). The framework integrates three complementary modules: a local enhancement module (LEM) to capture fine-grained texture and boundary cues, a Global Attention Module (GAM) to model long-range dependencies with flow-aware priors, and a Fusion Discriminator (FD) to align and adaptively reweight heterogeneous evidence for robust decision-making. The method was evaluated on a multi-center clinical dataset comprising 820 patient cases (482 benign and 338 malignant), ensuring a realistic and moderately imbalanced distribution. Compared with classical baselines including ResNet-50, DenseNet-121, ViT, Hybrid CNN–Transformer, U-Net, and SegNet, our approach achieved an accuracy of 0.923, sensitivity of 0.911, specificity of 0.934, AUC of 0.962, and F1-score of 0.918, yielding improvements of about three percentage points in the AUC and F1-score over the strongest baseline. Ablation experiments confirmed the necessity of each module, with the performance degrading notably when the GAM or the LEM was removed, while the complete design provided the best results, highlighting the benefit of local–global synergy. Five-fold cross-validation further demonstrated stable generalization (accuracy: 0.922; AUC: 0.961). These findings indicate that the proposed system offers accurate and robust assistance for preoperative triage, surgical decision support, and follow-up management of ovarian tumors. Full article
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)
24 pages, 34370 KB  
Article
A Semi-Automatic and Visual Leaf Area Measurement System Integrating Hough Transform and Gaussian Level-Set Method
by Linjuan Wang, Chengyi Hao, Xiaoying Zhang, Wenfeng Guo, Zhifang Bi, Zhaoqing Lan, Lili Zhang and Yuanhuai Han
Agriculture 2025, 15(19), 2101; https://doi.org/10.3390/agriculture15192101 - 9 Oct 2025
Abstract
Accurate leaf area measurement is essential for plant growth monitoring and ecological research; however, it is often challenged by perspective distortion and color inconsistencies resulting from variations in shooting conditions and plant status. To address these issues, this study proposes a visual and semi-automatic measurement system. The system utilizes Hough transform-based perspective transformation to correct perspective distortions and incorporates manually sampled points to obtain prior color information, effectively mitigating color inconsistency. Based on this prior knowledge, the level-set function is automatically initialized. The leaf extraction is achieved through level-set curve evolution that minimizes an energy function derived from a multivariate Gaussian distribution model, and the evolution process allows visual monitoring of the leaf extraction progress. Experimental results demonstrate robust performance under diverse conditions: the standard deviation remains below 1 cm², the relative error is under 1%, the coefficient of variation is less than 3%, and processing time is under 10 s for most images. Compared to the traditional labor-intensive and time-consuming manual photocopy-weighing approach, as well as OpenPheno (which lacks parameter adjustability) and ImageJ 1.54g (whose results are highly operator-dependent), the proposed system provides a more flexible, controllable, and robust semi-automatic solution. It significantly reduces operational barriers while enhancing measurement stability, demonstrating considerable practical application value. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
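The multivariate Gaussian color prior built from manually sampled points can be sketched as a Mahalanobis-distance test on pixel colors. This is a simplified stand-in for the level-set initialization the abstract describes; the function name and threshold are hypothetical:

```python
import numpy as np

def leaf_mask(img: np.ndarray, samples: np.ndarray, thresh: float = 9.0) -> np.ndarray:
    """Mask pixels whose color fits a Gaussian fitted to sampled leaf pixels.

    Fits a multivariate Gaussian (mean + covariance) to user-sampled
    leaf colors and keeps pixels whose squared Mahalanobis distance is
    below `thresh`. The paper evolves a level-set curve from such a
    color model; this sketch only shows the color-prior test itself.
    """
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    inv = np.linalg.inv(cov)
    diff = img.reshape(-1, img.shape[-1]) - mu
    d2 = np.einsum("ij,jk,ik->i", diff, inv, diff)  # squared Mahalanobis
    return (d2 < thresh).reshape(img.shape[:-1])
```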