Search Results (3,231)

Search Parameters:
Keywords = image digital analysis

20 pages, 20579 KB  
Article
A Deep Learning Approach for High-Throughput Multi-Tissue Cell Segmentation and Phenotypic Analysis in Chinese Cabbage Leaf Cross-Sections
by Zhiming Zhang, Jun Zhang, Tianyi Ren, Minggeng Liu and Lei Sun
Agronomy 2026, 16(6), 612; https://doi.org/10.3390/agronomy16060612 - 13 Mar 2026
Abstract
Quantitative analysis of leaf cell microstructure is crucial for deciphering agronomic traits in Chinese cabbage, including photosynthetic efficiency, stress tolerance, and yield potential. Traditional manual observation methods are inefficient and highly subjective, failing to meet the demands of large-scale breeding for high-throughput, reproducible microscopic phenotyping. To transition breeding practices from experience-driven to data-driven, there is an urgent need to establish automated, standardized systems for acquiring cell-scale phenotypes. Therefore, this study proposes an automated instance segmentation and phenotyping analysis framework for multi-tissue cells in Chinese cabbage leaf cross-sections. This framework systematically optimizes Mask R-CNN by introducing an attention mechanism to enhance cellular feature responses in complex backgrounds. It employs weighted multi-scale feature fusion to process densely distributed small-scale cells and integrates a refined boundary optimization module to improve recognition accuracy in adherent and blurred regions. On a microscopic image dataset spanning multiple varieties, this method achieves high-precision predictions in instance segmentation tasks. Based on the predicted cell masks, an interactive phenotyping analysis tool was further developed to automatically extract standardized single-cell morphological parameters, including area, perimeter, and Feret’s diameter. The measured parameters exhibit high consistency with manual annotations (correlation coefficients (r) all exceed 0.97). This framework enables high-throughput, standardized phenotypic analysis at the cellular level of leaf cross-sections, providing a reliable method for the digital and automated interpretation of crop microscopic traits. 
This technical solution not only supports the systematic integration of microscopic phenotypes in Chinese cabbage breeding but also offers a scalable solution for cellular-scale phenotypic research in other crops. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
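The single-cell parameters named in this abstract (area, perimeter, Feret's diameter) are standard binary-region measurements. A minimal pure-NumPy sketch of how they can be computed from one cell mask follows; this is illustrative only, not the authors' tool, and in practice a library routine such as scikit-image's `regionprops` provides all three directly.

```python
import numpy as np
from itertools import combinations

def region_parameters(mask):
    """Area, approximate perimeter, and maximum Feret diameter of a
    single binary region (illustrative sketch; not the paper's tool)."""
    ys, xs = np.nonzero(mask)
    coords = list(zip(ys.tolist(), xs.tolist()))
    area = len(coords)  # pixel count

    # Boundary pixels (at least one background 4-neighbour); their
    # count is a crude pixel-grid approximation of the perimeter.
    padded = np.pad(mask, 1)
    neighbours = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1]
                  + padded[1:-1, :-2] + padded[1:-1, 2:])
    perimeter_px = int(np.sum(mask & (neighbours < 4)))

    # Maximum Feret diameter: largest pairwise distance between
    # region pixel centres (brute force; fine for small cells).
    feret = max(float(np.hypot(y0 - y1, x0 - x1))
                for (y0, x0), (y1, x1) in combinations(coords, 2))
    return area, perimeter_px, feret

mask = np.zeros((10, 10), dtype=bool)
mask[2:7, 2:7] = True                      # one 5x5 "cell"
area, perim, feret = region_parameters(mask)
```

For the 5x5 block this gives an area of 25 pixels, 16 boundary pixels, and a Feret diameter of sqrt(32), about 5.66 pixels.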

21 pages, 2017 KB  
Article
CNN-Based Classification of Façade Motifs in Market-Developed Housing: A Computational Approach to Tel Aviv’s 1980s–1990s Urban Fabric
by Yiftach Ashkenazi, Dana Silverstein-Duani, Yasha Jacob Grobman and Yael Allweil
Land 2026, 15(3), 460; https://doi.org/10.3390/land15030460 - 13 Mar 2026
Abstract
This study applies deep learning to classify façade features in Tel Aviv’s market-developed apartment housing (1980s–1990s), a vast landscape typically excluded from architectural history due to its non-iconic character. We constructed a curated corpus of 877 expert-labeled high-resolution façade images and evaluated whether convolutional neural networks can detect historically meaningful patterns at urban scale. Focusing on the “staggered balcony” motif—linked to national regulation 5442/1992—we show that a ConvNeXt-Tiny model achieved robust classification performance (96.6% accuracy, 90.3% F1) after rigorous dataset curation and expert relabeling. Initial experiments on noisier data produced inconsistent results, underscoring the importance of domain expertise in operationalizing historical categories. Rather than treating machine learning as definitive classification, we present an iterative workflow where architectural historians use model outputs to refine categories, test morphological hypotheses, and identify overlooked variations. The findings demonstrate how CNN-based analysis can advance empirical research on non-iconic built environments and open methodological pathways for cultural heritage studies and digital architectural humanities. Full article
(This article belongs to the Special Issue Landscape Governance in the Age of Social Media, 3rd Edition)
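The accuracy and F1 figures quoted above follow the standard confusion-matrix definitions. A minimal sketch (not the authors' evaluation code) for a binary label list where 1 marks the staggered-balcony class:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy and F1 for binary labels (positive class = 1),
    computed from confusion-matrix counts."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, f1

# e.g. 4 of 5 facades classified correctly, one positive missed
acc, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```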

20 pages, 3878 KB  
Article
A Hybrid Multimodal Cancer Diagnostic Framework Integrating Deep Learning of Histopathology and Whispering Gallery Mode Optical Sensors
by Shereen Afifi, Amir R. Ali, Nada Haytham Abdelbasset, Youssef Poulis, Yasmin Yousry, Mohamed Zinal, Hatem S. Abdullah, Miral Y. Selim and Mohamed Hamed
Diagnostics 2026, 16(6), 848; https://doi.org/10.3390/diagnostics16060848 - 12 Mar 2026
Abstract
Background/Objectives: Biopsy examination remains the gold standard for cancer diagnosis, relying on histopathological assessment of tissue samples to identify malignant changes. However, manual interpretation of histopathological slides is time-consuming, subjective, and susceptible to inter-observer variability. The digitization of histopathological images enables automated analysis and offers opportunities to support clinicians with more consistent and objective diagnostic tools. This study aims to enhance cancer diagnosis by proposing a hybrid framework that integrates deep-learning-based histopathological image analysis with Whispering Gallery Mode (WGM) optical sensing for complementary tissue characterization. Methods: The proposed framework combines automated tumor classification from histopathological images with biochemical signal analysis obtained from WGM optical sensors. Deep learning models, including EfficientNet-B0, InceptionV3, and Vision Transformer (ViT), were employed for binary and multi-class tumor classification using the BreakHis dataset. To address class imbalance, a Deep Convolutional Generative Adversarial Network (DCGAN) was utilized to generate synthetic histopathological images alongside conventional data augmentation techniques. In parallel, WGM optical sensors were incorporated to capture subtle tissue-specific signatures, with machine learning algorithms enabling automated feature extraction and classification of the acquired signals. Results: In multi-class classification, InceptionV3 combined with DCGAN-based augmentation achieved an accuracy of 94.45%, while binary classification reached 96.49%. Fine-tuned Vision Transformer models achieved a higher classification accuracy of 98% on the BreakHis dataset. The integration of WGM optical sensing provided additional biochemical information, offering complementary insights to image-based analysis and supporting more robust diagnostic decision-making. 
Conclusions: The proposed hybrid framework demonstrates the potential of combining deep-learning-based histopathological image analysis with WGM optical sensing to improve the accuracy and reliability of cancer classification. By integrating morphological and biochemical information, the framework offers a promising approach for enhanced, objective, and supportive cancer diagnostic systems. Full article

20 pages, 13678 KB  
Data Descriptor
MultiPolar: A Benchmark Dataset for Digital Photoelasticity Using a Pixelated Polarization Camera
by Juan Camilo Hernández-Gómez, Juan Carlos Briñez-de León, Mateo Rico-García, José López-Prado and Hermes Fandiño-Toro
Data 2026, 11(3), 55; https://doi.org/10.3390/data11030055 - 12 Mar 2026
Abstract
Digital photoelasticity enables non-contact, full-field stress analysis through optical fringe patterns, yet its practical deployment is often constrained by experimental complexity and the limited availability of open, standardized datasets. The emergence of multi-polarizer array cameras provides polarization-resolved measurements with high information content, enabling advanced analysis strategies beyond conventional single-image approaches. This work presents a public experimental dataset composed of synchronized image sequences acquired using a polarizer array camera and a conventional RGB camera under incremental mechanical loading. The dataset comprises nine experiments, including four benchmark specimens and five bio-inspired geometries, each recorded over 720 load steps. In total, the dataset releases 25,920 polarization-resolved images and 6480 RGB images, all provided in lossless format and accompanied by experiment-specific segmentation templates. Although classical and hybrid load-stepping methods are used to demonstrate the utility of the dataset, its scope is not limited to this application. The dataset is intended as a flexible platform for exploring a wide range of photoelastic analysis techniques that leverage polarization information, while enabling direct comparison with conventional color demodulation techniques. Full article
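A pixelated polarization camera samples the scene through analysers at 0, 45, 90, and 135 degrees within each 2x2 superpixel. The standard linear Stokes reduction of those four intensities is shown below; this uses textbook relations, not code from the dataset itself, whose demodulation pipeline may differ.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear
    polarization from the four analyser intensities (standard
    relations; not the dataset's own pipeline)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 degree preference
    s2 = i45 - i135                      # 45/135 degree preference
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians
    return s0, s1, s2, dolp, aolp

# Fully polarized light at 0 degrees: Malus's law gives I = cos^2(theta)
s0, s1, s2, dolp, aolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
```

For this input the degree of linear polarization is 1 and the angle is 0, as expected for fully polarized light aligned with the first analyser.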

16 pages, 1171 KB  
Article
Three-Dimensional Quantitative Analysis of Maxillary Arch Morphology Across Sagittal and Vertical Skeletal Patterns
by Reem M. Al-Eryani, R. Lale Taner, K. Müfide Dinçer and Orhan Özdiler
Appl. Sci. 2026, 16(6), 2708; https://doi.org/10.3390/app16062708 - 12 Mar 2026
Abstract
Background: Contemporary three-dimensional morphometric analysis emphasizes quantitative modeling of anatomical interactions. However, the interplay between sagittal and vertical dimensions in determining maxillary transverse morphology remains inadequately characterized. This study introduces the Sagittal Modulation Effect (SME) framework—a model characterizing how sagittal relationships modify the association between vertical pattern and maxillary arch morphology. Methods: A retrospective cross-sectional analysis was conducted on 165 skeletally mature adults (mean age: 25.4 ± 4.8 years), stratified into skeletal Class I, II, and III groups (n = 55 each). Lateral cephalometric analysis and 3D maxillary digital models were obtained. A validated automated algorithm performed arch morphometric analyses. The primary hypothesis was tested using multiple linear regression with interaction terms, validated via bootstrap analysis and cross-validation. Results: A significant SME was identified (p < 0.001). The inverse correlation between SN-MP and maxillary width intensified progressively: minimal in Class I (r = −0.047, p_adj = 0.891), moderate in Class II (r = −0.387, p_adj = 0.024), and strong in Class III (r = −0.645, p_adj < 0.001). Regression confirmed significant interaction effects (SN-MP × Class III: β = −0.45, p < 0.001; SN-MP × Class II: β = −0.31, p = 0.003). Exploratory analysis identified cohort-specific statistical descriptors associated with narrower arches: SN-MP > 34.2° in Class III (AUC = 0.84) and SN-MP > 36.5° in Class II (AUC = 0.78). These require external validation. Conclusions: This study provides evidence that sagittal classification modulates the vertical–transverse relationship. The SME framework offers class-specific coefficients and exploratory stratification tools for future research pending multi-center validation. Full article
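"Multiple linear regression with interaction terms" of the kind described can be sketched with ordinary least squares on synthetic data. Every number below is hypothetical, merely shaped like the reported SN-MP x Class interactions; this is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 165
snmp = rng.normal(32.0, 4.0, n)          # SN-MP angle, degrees
cls = np.repeat([0, 1, 2], 55)           # 0 = Class I, 1 = II, 2 = III
c2, c3 = (cls == 1).astype(float), (cls == 2).astype(float)

# Synthetic arch widths with class-dependent slopes (hypothetical,
# shaped like the reported pattern: steepest decline in Class III)
width = (40.0 - 0.05 * snmp - 0.31 * snmp * c2 - 0.45 * snmp * c3
         + rng.normal(0.0, 1.0, n))

# Design matrix: intercept, main effects, and interaction terms
X = np.column_stack([np.ones(n), snmp, c2, c3, snmp * c2, snmp * c3])
beta, *_ = np.linalg.lstsq(X, width, rcond=None)
# beta[4] and beta[5] estimate the SN-MP x Class II / III interactions
```

A dedicated statistics package (e.g. statsmodels with a formula like `width ~ snmp * C(cls)`) would also report standard errors and p-values for each interaction coefficient.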

47 pages, 8613 KB  
Review
2D-to-3D Image Reconstruction in Agriculture: A Review of Methods, Challenges, and AI-Driven Opportunities
by Hemanth Reddy Sankaramaddi, Won Suk Lee, Kyoungchul Kim and Youngki Hong
Sensors 2026, 26(6), 1775; https://doi.org/10.3390/s26061775 - 11 Mar 2026
Abstract
Agriculture is rapidly becoming a data-driven field where automation relies on transforming 2D images into accurate 3D models. However, selecting the most effective method remains challenging due to the unconstrained nature of the environment. This review assesses the effectiveness of geometry-based, sensor-based, and learning-based reconstruction methodologies in agricultural settings. We analyze photogrammetric pipelines, active sensing, and neural rendering methods based on their geometric accuracy, data processing speed, and field performance against wind or occlusion. Our analysis indicates that while Light Detection and Ranging (LiDAR) is highly accurate, it is too expensive for widespread adoption. Conversely, geometry-based methods are inexpensive but struggle with complex biological structures. Learning-based methods, especially 3D Gaussian Splatting (3DGS), have revolutionized the field by enabling a balance between visual fidelity and real-time inference speed. We conclude that the best chance for scalability and accuracy lies in hybrid pipelines that integrate Vision Foundation Models (VFMs) with geometric priors. We believe that “hybrid intelligence” systems, such as edge-native 3D Gaussian Splatting combined with semantic priors, are the future of 3D reconstruction. These systems will enable the creation of real-time, spatiotemporal (4D) digital twins that drive automated decision-making in precision agriculture. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2025)

37 pages, 2901 KB  
Review
Organs-on-Chips in Drug Development: Engineering Foundations, Artificial Intelligence, and Clinical Translation
by Nilanjan Roy and Luca Cucullo
Biosensors 2026, 16(3), 155; https://doi.org/10.3390/bios16030155 - 11 Mar 2026
Abstract
Organ-on-a-chip (OoC) technologies, also termed microphysiological systems (MPSs), integrate microfluidics, engineered biomaterials, human-derived cells, and on-chip biosensing to model human physiology in microscale devices that deliver quantitative, time-resolved readouts. This review surveys the 2010–2025 literature, emphasizing how sensing, standardized sampling, and analytics enable clinical concordance and fit-for-purpose regulatory use. We synthesize advances in (i) materials, fabrication, and microfluidic design; (ii) organ- and disease-focused case studies; and (iii) translational benchmarks that align chip outputs with clinical pharmacokinetics, toxicology, and biomarker datasets. Across organ systems, platforms increasingly incorporate vascularization, immune components, and organoid hybrids, paired with real-time measurements of barrier integrity, metabolism, electrophysiology, and secreted biomarkers using impedance (TEER), electrochemical, and optical modalities. Representative benchmarking studies report cardiac OoCs achieving AUROC ≥ 0.85 for torsadogenic risk classification, and renal chips improving prediction of transporter-mediated clearance relative to conventional in vitro assays. We summarize validation approaches and regulatory developments relevant to new approach methodologies, including the FDA Modernization Act 2.0, and discuss how AI and multi-omics can automate signal and image analysis, harmonize cross-platform datasets, and support digital-twin workflows that couple OoC measurements to in silico models. Overall, biosensor-enabled OoCs are progressing toward quantitatively benchmarked platforms for safety pharmacology, ADME/PK–PD, and precision medicine. Full article

17 pages, 6516 KB  
Article
Algorithmic Resistance Through Material Praxis: Exhibiting Post-Extractive Futures in Digital Capitalism’s Shadow
by Adina-Iuliana Deacu
Arts 2026, 15(3), 53; https://doi.org/10.3390/arts15030053 - 11 Mar 2026
Abstract
Digital capitalism has generated new forms of extractivism that extend beyond natural resources to encompass data, attention, affect, and planetary materials. This article examines how exhibition practices can function as forms of algorithmic resistance by foregrounding material praxis, embodied engagement, and curatorial strategies of care. Drawing on a practice-based research approach, the paper develops a theoretical framework around extractivism, materiality, and relational ethics, and applies it to two case studies: the author’s exhibition Nature Reclaims: Images of Healing, which cultivates regenerative imaginaries through urban rewilding photography, tactile installations, and trauma-informed reflective tools; and Fossil Fables, curated by the Global Extraction Observatory (GEO), which exposes the infrastructural, political, and ideological architectures sustaining extractive industries and digital technologies. Through comparative analysis, the article introduces the concept of symbiotic curation to describe a post-extractive curatorial method that holds critical exposure and regenerative proposition in sustained tension. The findings illustrate how exhibitions can reorganize perception, recalibrate temporality, and render hidden infrastructures visible, while also cultivating embodied relations of care, ecological attunement, and collective reflection. By positioning curatorial practice as an epistemic process in which theoretical propositions are tested through spatial, material, and affective decisions, the article identifies transferable principles for post-extractive cultural work. It argues that exhibitions can operate as laboratories for algorithmic resistance and as sites for rehearsing alternative relations between humans, technologies, and more-than-human worlds. Full article

13 pages, 1641 KB  
Article
Ki-67 Proliferation Index in Pulmonary Neuroendocrine Neoplasms: Interobserver Agreement Among Pathologists and Comparison of Two Artificial Intelligence-Based Image Analysis Systems
by Gizem Teoman, Zeynep Turkmen Usta, Zeynep Sagnak Yilmaz and Safak Ersoz
Biomedicines 2026, 14(3), 627; https://doi.org/10.3390/biomedicines14030627 - 11 Mar 2026
Abstract
Background/Objectives: Although Ki-67 is not formally incorporated into the grading system of pulmonary neuroendocrine neoplasms (PNENs), it is widely used as an adjunct marker to reflect proliferative activity and support diagnostic stratification. Manual Ki-67 assessment is subject to interobserver variability and methodological limitations. This study aimed to evaluate the reliability and performance of two artificial intelligence (AI)-based image analysis systems in Ki-67 index assessment and to compare their results with expert pathologist evaluation in pulmonary neuroendocrine tumors. Methods: A total of 63 pulmonary neuroendocrine neoplasm cases, including typical carcinoid (n = 29), atypical carcinoid (n = 13), and large cell neuroendocrine carcinoma (n = 21), were retrospectively analyzed. Ki-67 proliferation indices were independently assessed by four pathologists within predefined hotspot regions, counting approximately 2000 tumor cells per case. The same regions were analyzed using two AI-based image analysis systems (Roche uPath Ki-67 and Virasoft Virasight Ki-67). Interobserver agreement among pathologists was evaluated using the intraclass correlation coefficient (ICC), and concordance between manual and AI-based assessments was assessed using Spearman’s correlation and linear regression analyses. To account for potential scanner/platform effects, slides were digitized using two different whole-slide scanners (VENTANA DP® 600 and Leica Aperio AT2), and color normalization and quality control procedures were applied prior to AI-based analysis. For clinical interpretability, Ki-67 indices were stratified into categorical groups based on tumor subtype-specific thresholds (0–<10%: low, 10–25%: intermediate, >25%: high), and agreement between manual and AI-based categorical scoring was evaluated using Cohen’s kappa coefficient. 
Results: Among the 63 pulmonary neuroendocrine neoplasm cases, Ki-67 proliferation indices varied across tumor subtypes, with typical carcinoids showing low, atypical carcinoids intermediate, and large cell neuroendocrine carcinomas high proliferative activity. Interobserver agreement among four pathologists was excellent (ICC = 0.998, 95% CI: 0.996–0.998). Strong correlations were observed between manual Ki-67 assessments and AI-derived indices, with Spearman correlation coefficients of 0.961 (95% CI: 0.918–0.982) for Roche AI and 0.904 (95% CI: 0.821–0.949) for Virasoft AI, and 0.926 (95% CI: 0.842–0.968) between the two AI systems. Bland–Altman analyses demonstrated minimal mean differences and most cases within the 95% limits of agreement, indicating high concordance without systematic bias. Categorical agreement analysis, using subtype-specific Ki-67 thresholds (0–<10%: low; 10–25%: intermediate; >25%: high), showed excellent concordance between manual and AI-based scoring (Cohen’s kappa 0.877 for Roche AI and 0.827 for Virasoft AI; p < 0.001), confirming the clinical interpretability and reproducibility of AI-based Ki-67 assessment. Conclusions: AI-based Ki-67 index assessment shows strong concordance with expert pathologist evaluation and reflects biologically relevant differences among pulmonary neuroendocrine neoplasm subtypes. These results suggest that AI-assisted Ki-67 analysis may serve as a reproducible and objective adjunct to routine diagnostic practice in pulmonary neuroendocrine tumors. Full article
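The agreement statistics named above have compact definitions. Minimal pure-Python sketches follow; in practice `scipy.stats.spearmanr` and `sklearn.metrics.cohen_kappa_score` would be used, and this is not the study's code.

```python
def average_ranks(xs):
    """1-based ranks, ties receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected categorical agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    expected = sum((a.count(c) / n) * (b.count(c) / n)
                   for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Toy Ki-67 indices (%) from a pathologist vs. an AI system
rho = spearman_rho([5.2, 1.1, 18.0, 30.5], [4.8, 1.5, 20.1, 28.9])
kappa = cohen_kappa(["low", "low", "high", "high"],
                    ["low", "low", "high", "low"])
```

The toy pairs are perfectly rank-concordant (rho = 1.0), while the categorical raters agree on 3 of 4 cases, giving kappa = 0.5 after correcting for chance agreement.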

25 pages, 9221 KB  
Article
Research on Building Recognition in Ethnic Minority Villages Based on Multi-Feature Fusion
by Xiaoqiong Sun, Jiafang Yang, Wei Li, Ting Luo and Dongdong Xie
Buildings 2026, 16(6), 1099; https://doi.org/10.3390/buildings16061099 - 10 Mar 2026
Abstract
As a unique cultural heritage of Chinese ethnic minorities, Dong architecture provides rich historical and cultural information. Rapid and accurate extraction of ethnic building information from remote sensing images in complex terrain and high-density settlement environments is highly important for the protection of architectural heritage and the management of rural space. Huanggang Dong Village in Liping County, Guizhou Province, China, is taken as a case study. This paper develops a multifeature fusion machine learning framework for the automatic recognition of Dong ethnic architecture based on centimeter-level visible images captured by UAV. First, the vegetation index, HSI color features and texture features based on the gray level co-occurrence matrix are extracted from the UAV visible light orthophoto image. Through the random forest feature importance ranking and correlation test, six key features, namely, the VDVI, HSI-S, HSI-I, mean, variance and contrast, are selected to construct a multifeature space. This step constitutes the feature construction stage of the proposed methodology and provides the basis for subsequent classification. Second, on the basis of a support vector machine (SVM) and random forest (RF), classification models are constructed. The effects of different feature combinations and different algorithms on classification accuracy are systematically compared, and the results are evaluated in terms of overall accuracy (OA), the kappa coefficient, user accuracy (UA) and producer accuracy (PA). This second part highlights the classification phase of the methodology, which tests the feature space using different algorithms and evaluates the performance of the models. 
The experimental results show that with a single feature type, the SVM model dominated by texture features performs best, with an OA of 85.33% and a kappa of 0.799; with multifeature fusion, the RF algorithm integrates multisource features more effectively. Building recognition accuracy based on the full and dimensionality-reduced feature spaces is particularly strong: with the full feature space, the overall accuracy reaches 89.00%, the kappa coefficient is 0.850, and the UA and PA reach 89.66% and 94.55%, respectively. The comparative analysis shows that the vegetation index–color–texture multifeature fusion and machine learning classification framework based on UAV visible-light images achieves high-precision extraction of Dong architecture without relying on high-cost sensors. It effectively alleviates confusion between water bodies and shadows and between dark roofs and vegetation, and separates traditional Dong architecture from roads, vegetation and other elements. This provides a low-cost, feasible route to digital archiving, dynamic monitoring and protection management of the traditional village architectural heritage of ethnic minorities. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
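Of the six selected features, the VDVI is the simplest to reproduce from a visible-light orthophoto. A minimal sketch assuming the standard definition (2G - R - B) / (2G + R + B); the paper's exact preprocessing is not shown in the abstract.

```python
import numpy as np

def vdvi(rgb):
    """Visible-band Difference Vegetation Index for an H x W x 3
    float RGB image: (2G - R - B) / (2G + R + B). Standard
    definition; assumed, not taken from the paper's code."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2.0 * g - r - b) / np.maximum(2.0 * g + r + b, 1e-12)

pixel = np.array([[[0.2, 0.6, 0.1]]])   # a green, vegetation-like pixel
v = vdvi(pixel)                         # strongly positive for vegetation
```

The HSI color features and GLCM texture features (mean, variance, contrast) named in the abstract would be stacked with this index to build the multifeature space fed to the SVM and RF classifiers.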

26 pages, 23966 KB  
Article
ClearScope: A Fully Integrated Light-Sheet Theta Microscope for Sub-Micron-Resolution Imaging Without Lateral Size Constraints
by Matthew G. Fay, Peter J. Lang, David S. Denu, Nathan J. O’Connor, Benjamin Haydock, Jeffrey Blaisdell, Nicolas Roussel, Alissa Wilson, Sage R. Aronson, Veronica Pessino, Paul J. Angstman, Cheng Gong, Tanvi Butola, Orrin Devinsky, Jayeeta Basu, Raju Tomer and Jacob R. Glaser
J. Imaging 2026, 12(3), 118; https://doi.org/10.3390/jimaging12030118 - 10 Mar 2026
Abstract
Three-dimensional (3D) ex vivo imaging of cleared tissue from intact brains from animal models, human brain surgical specimens, and large postmortem human and non-human primate brain specimens is essential for understanding physiological neural connectivity and pathological alterations underlying neurological and neuropsychiatric disorders. Contemporary light-sheet microscopy enables rapid, high-resolution imaging of large, cleared samples but is limited by the orthogonal arrangement of illumination and detection optics, which constrains specimen size. Light-sheet theta microscopy (LSTM) overcomes this limitation by employing two oblique illumination paths while maintaining a perpendicular detection geometry. Here, we report the development of a next-generation, fully integrated and user-friendly LSTM system that enables uniform subcellular-resolution imaging (with subcellular resolution determined by the lateral performance of the system) throughout large specimens without constraining lateral (XY) dimensions. The system provides a seamless workflow encompassing image acquisition, data storage, pre- and post-processing, enhancement and quantitative analysis. Performance is demonstrated by high-resolution 3D imaging of intact mouse brains and human brain samples, including complete downstream analyses such as digital neuron tracing, vascular reconstruction and design-based stereological analysis. This enhanced and accessible LSTM implementation enables rapid quantitative mapping of molecular and cellular features in very large biological specimens. Full article
(This article belongs to the Section Neuroimaging and Neuroinformatics)

15 pages, 2613 KB  
Article
Intra-Crown Microclimatic Heterogeneity and Phenological Buffering: A High-Resolution UAV Study of Flowering and Autumn Leaf Senescence
by Min-Kyu Park, Hun-Gi Choi, Yun-Young Kim and Dong-Hak Kim
Forests 2026, 17(3), 342; https://doi.org/10.3390/f17030342 - 10 Mar 2026
Viewed by 195
Abstract
While climate change shifts plant phenology, conventional satellite-based studies often overlook intra-individual variation because of spatial averaging. This study used high-resolution UAV imagery and Digital Surface Models (DSMs) to investigate how intra-crown microclimatic heterogeneity affects the spatiotemporal patterns of flowering and autumn leaf senescence. Rhododendron yedoense f. poukhanense (H.Lév.) M.Sugim. (RY) and Acer triflorum Kom. (AT) were monitored at the Korea National Arboretum, with 23 time-series images acquired between April and November 2025. Cumulative solar duration was calculated for 0.5 m intra-crown grids, and phenological events were detected using derivative analysis of vegetation indices (Red Chromatic Coordinate [RCC] and Green Chromatic Coordinate [GCC]). The results confirmed asynchrony in phenological events within single individuals depending on crown sector. However, the linear relationship between intra-crown microclimatic heterogeneity and phenological duration was statistically weak (p > 0.05), suggesting that strong physiological buffering mitigates the direct impact of spatial light variation. Despite this buffering, species-specific response patterns were observed: RY exhibited spatially independent flowering responses, whereas AT maintained relatively higher synchrony. Furthermore, AT showed a "phenological velocity" gap, in which sunlit sectors tended to undergo senescence approximately 1.12 days later than shaded sectors, while RY showed no significant directional lag. This research demonstrates that phenological responses can be spatially dispersed even within a single individual, and that buffering mechanisms against environmental variability differ by crown structure and growth form. These findings highlight the necessity of individual-level spatial resolution for understanding plant responses to climate change. Full article
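The chromatic coordinates and derivative-based event detection described in this abstract can be sketched roughly as follows (function names and the example time series are hypothetical; the article's actual processing chain is not reproduced here):

```python
import numpy as np

def chromatic_coords(r, g, b):
    """Green and red chromatic coordinates from mean RGB digital numbers:
    GCC = G/(R+G+B), RCC = R/(R+G+B)."""
    total = r + g + b
    return g / total, r / total  # (GCC, RCC)

def detect_event(doy, index):
    """Date a phenological transition as the day-of-year with the steepest
    change in a vegetation-index time series (first-derivative extremum).
    Handles unevenly spaced acquisition dates via np.gradient."""
    d = np.gradient(np.asarray(index, dtype=float),
                    np.asarray(doy, dtype=float))
    return doy[int(np.argmax(np.abs(d)))]

# Hypothetical GCC series across the season: green-up, plateau, senescence
doy = [100, 120, 140, 160, 250, 270, 290, 310]
gcc_series = [0.33, 0.36, 0.42, 0.44, 0.43, 0.40, 0.34, 0.33]
print(detect_event(doy, gcc_series))
```

In practice the same detector would be run per 0.5 m crown grid cell, and the spread of detected dates across cells quantifies intra-crown asynchrony.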

18 pages, 2888 KB  
Article
Assessing RGB Color Reliability via Simultaneous Comparison with Hyperspectral Data on Pantone® Fabrics
by Cindy Lorena Gómez-Heredia, Jose David Ardila-Useda, Andrés Felipe Cerón-Molina, Jhonny Osorio-Gallego and Jorge Andrés Ramírez-Rincón
J. Imaging 2026, 12(3), 116; https://doi.org/10.3390/jimaging12030116 - 10 Mar 2026
Viewed by 188
Abstract
Accurate color property measurements are critical for advancing artificial vision in real-time industrial applications. RGB imaging remains widely used because of its practicality, accessibility, and high spatial resolution. However, significant uncertainties in extracting chromatic information highlight the need to define when conventional digital images can reliably provide accurate color data. This work compares six chromatic properties across 700 Pantone® TCX fabric samples, using optical data acquired simultaneously from hyperspectral (HSI) and digital (RGB) cameras. The results indicate that the accurate interpretation of optical information from RGB (sRGB and REC2020) images is significantly influenced by lightness (L*) values. Samples with bright, unsaturated colors (L* > 50) reach ratio-of-performance-to-deviation (RPD) values above 2.5 for four properties (L*, a*, b*, and hab), indicating a good correlation between HSI and RGB information. Absolute color difference comparisons (Ea) between HSI and RGB images yield values exceeding 5.5 units for red, yellow, and green samples and up to 9.0 units for blue and purple tones. In contrast, relative color difference (Er) comparisons show a marked decrease, with values falling below 3.0 at all lightness values, indicating the practical equivalence of the two methodologies according to Two One-Sided Test (TOST) statistical analysis. These results confirm that RGB imagery achieves reliable color consistency when evaluated against a practical reference. Full article
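For orientation, a CIELAB color difference and the RPD statistic referenced above can be computed as follows. This is a hedged sketch with hypothetical sample values; the article may use a different ΔE formula (e.g. CIEDE2000) and its own RPD conventions:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between CIELAB triplets."""
    return math.dist(lab1, lab2)

def rpd(reference, predicted):
    """Ratio of performance to deviation: SD of the reference values over
    the RMSE of the predictions. RPD > 2.5 is commonly read as a good
    agreement between methods."""
    n = len(reference)
    mean = sum(reference) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in reference) / (n - 1))
    rmse = math.sqrt(sum((r - p) ** 2
                         for r, p in zip(reference, predicted)) / n)
    return sd / rmse

# Hypothetical HSI (reference) vs RGB (predicted) L* values for five fabrics
hsi_L = [30.0, 45.0, 55.0, 70.0, 85.0]
rgb_L = [31.5, 44.0, 56.0, 69.0, 86.5]
print(round(rpd(hsi_L, rgb_L), 2))  # → 17.46
```

RPD rewards a predictor whose errors are small relative to the natural spread of the reference data, which is why it is a natural yardstick for comparing a cheap sensor (RGB) against a reference instrument (HSI).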

21 pages, 4339 KB  
Article
Radiation Dose Metrics and Local Diagnostic Reference Levels in Low-Dose Stent-Assisted Coiling of Intracranial Aneurysms
by Mariusz Stanisław Sowa, Joanna Sowa, Kamil Adam Węglarz and Maciej Budzanowski
J. Clin. Med. 2026, 15(5), 2059; https://doi.org/10.3390/jcm15052059 - 8 Mar 2026
Viewed by 161
Abstract
Background/Objectives: Operator experience, the use of low frame rates during both fluoroscopy and digital subtraction angiography (DSA), and modern angiographic systems are essential for maintaining diagnostic image quality while minimizing ionizing radiation exposure during stent-assisted endovascular treatment of intracranial aneurysms. At the study center, a low-dose protocol is employed, using the lowest available fluoroscopy frame rate (3.125 frames per second) and a nominal acquisition rate of 2 frames per second for DSA, three-dimensional (3D) rotational angiography, 2D/3D mapping, and roadmapping. Methods: A retrospective analysis was performed on 132 stent-assisted procedures conducted at a single tertiary center between 2018 and 2024. For each procedure, data were collected on dose-area product (DAP), reference air kerma (Ka,r), fluoroscopy time (FT), and the total number of DSA frames. Local diagnostic reference levels (DRLs; 75th percentile [P75]) and typical values (50th percentile [P50]) were established and compared with values reported in the literature. Results: For all patients, the P75 values, representing DRLs, were 19.89 Gy·cm² for DAP, 332 mGy for Ka,r, 25 min 32 s for FT, and 354 DSA frames. The P50 values were 13.71 Gy·cm² for DAP, 219.5 mGy for Ka,r, 20 min 36 s for FT, and 277 DSA frames. Conclusions: In this single-center cohort, dose metrics for stent-assisted coil embolization were within the lower range of published values. Cross-study comparisons remain descriptive and require cautious interpretation. The proposed local DRLs may support quality assurance, dose optimization, and patient safety in similar clinical settings. Further multicenter and multi-operator studies are necessary to assess transferability and applicability beyond coil-only procedures. Limitations include the retrospective single-center design (single operator) and the lack of a contemporaneous control group and formal image-quality/outcome assessment. Full article
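The local DRL and typical-value definitions used in this abstract (75th and 50th percentiles of a dose metric across procedures) can be reproduced with a few lines of standard-library code; the sample DAP values below are hypothetical:

```python
import statistics

def local_drl(values):
    """Return (local DRL, typical value) for one dose metric:
    the 75th percentile and the median of the per-procedure values."""
    # 'inclusive' interpolates between observed data points (spreadsheet-style
    # percentiles); with ~132 procedures the method choice is negligible.
    q = statistics.quantiles(values, n=4, method="inclusive")
    return q[2], statistics.median(values)

# Hypothetical per-procedure DAP values (Gy·cm²)
dap = [8.2, 11.5, 13.7, 15.1, 19.9, 22.4, 30.3]
drl, typical = local_drl(dap)
print(drl, typical)
```

In a real audit the same function would be applied separately to DAP, Ka,r, FT, and DSA frame counts, giving the four P75/P50 pairs reported above.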
(This article belongs to the Section Nuclear Medicine & Radiology)

22 pages, 4806 KB  
Article
GPU-Accelerated Fractal Compression Dimension Estimation
by Ángel Díaz-Herrezuelo and Pedro Chamorro-Posada
Fractal Fract. 2026, 10(3), 174; https://doi.org/10.3390/fractalfract10030174 - 6 Mar 2026
Viewed by 174
Abstract
Fractal dimension is widely used as a quantitative descriptor of structural complexity in digital images. However, its practical implementation often involves methodological and computational trade-offs. The compression-based estimator provides an information-theoretic formulation that operates directly on grayscale images without mandatory binarization. Although the method is theoretically grounded and has been applied in real-world scenarios, its implementation-level behavior and computational characteristics have not been systematically analyzed under controlled conditions. To address this gap, this work presents a structured GPU-enabled validation framework for this estimator using synthetic Julia sets with known theoretical fractal dimensions. By focusing on their planar boundaries, which enable direct ground-truth comparison across multiple resolutions, numerical accuracy, statistical stability, and execution time are jointly evaluated across CPU and GPU implementations. Furthermore, additional experiments assess sensitivity to progressive Gaussian blur and exploratory behavior on grayscale textures from the Brodatz dataset, revealing that boundary-dominated fractals consistently yield dimensions between 1 and 2, whereas volumetric textures produce values greater than 2 without modifying the estimation framework. Performance profiling identifies distinct computational regimes and highlights a trade-off between robustness and execution time in the double-compression GPU configuration. This approach establishes a reproducible evaluation framework that supports the practical deployment of compression-based fractal dimension estimation in large-scale and time-constrained image analysis systems. Full article
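As a rough, self-contained illustration of the compression-based principle (compressed file size as an information-theoretic stand-in for box counts), the sketch below fits a scaling exponent to compressed sizes across subsampled resolutions. It is a didactic analogue under stated assumptions, not the estimator evaluated in the article:

```python
import zlib
import numpy as np

def compression_dimension(image, scales=(1, 2, 4, 8)):
    """Estimate a fractal-style scaling exponent from how the losslessly
    compressed size of an image grows with its linear resolution.
    Didactic sketch only: coarse-graining by subsampling and zlib as the
    compressor are illustrative choices."""
    sizes, lengths = [], []
    for s in scales:
        coarse = image[::s, ::s]  # coarse-grain by subsampling
        payload = coarse.astype(np.uint8).tobytes()
        lengths.append(len(zlib.compress(payload, 9)))
        sizes.append(coarse.shape[0])
    # slope of log(compressed bytes) vs log(linear resolution)
    slope, _ = np.polyfit(np.log(sizes), np.log(lengths), 1)
    return float(slope)

# Hypothetical inputs: an incompressible random binary image should scale
# roughly like an area (exponent near 2); a constant image scales far slower.
rng = np.random.default_rng(0)
noise = (rng.random((256, 256)) < 0.5).astype(np.uint8)
flat = np.zeros((256, 256), dtype=np.uint8)
print(compression_dimension(noise), compression_dimension(flat))
```

The fixed per-stream overhead of the compressor biases the smallest scales, which is one reason a careful implementation (like the GPU-enabled framework above) must validate against sets with known theoretical dimensions.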
(This article belongs to the Section Engineering)
