Search Results (4,284)

Search Parameters:
Keywords = projection imaging

23 pages, 13707 KB  
Article
Phase-Domain Peak-Based Correspondence Extraction for Robust Structured-Light Imaging
by Andrijana Ćurković, Milan Ćurković and Alen Grebo
J. Imaging 2026, 12(5), 182; https://doi.org/10.3390/jimaging12050182 - 23 Apr 2026
Abstract
Standard fringe-based structured-light processing estimates wrapped phase from phase-shifted sinusoidal images and commonly relies on phase unwrapping to obtain a globally consistent phase representation. In practical measurements, this approach may become unstable on reflective objects and under low or non-uniform illumination, where the recorded fringe signal is distorted and the recovered phase becomes unreliable. To address these limitations, we propose a correspondence extraction method based on subpixel peak localization performed directly on phase-domain images. The wrapped phase is transformed into absolute-value phase profiles, Φ = |φ_w|, whose local structure follows the projected fringe pattern and is less affected by object-dependent intensity variations. The proposed method reformulates correspondence extraction as a local signal-based estimation problem in the phase domain, thereby reducing reliance on global phase-consistency constraints at the correspondence stage. A practical advantage observed in the evaluated examples is that the method remained usable in some regions where the phase became locally flat because of low modulation, saturation, or reflective surface effects. In such regions, conventional processing relies on sufficiently reliable phase gradients and subsequent unwrapping, whereas the proposed method uses local peak geometry in the transformed phase representation. In the implementation used here, Gray-code information is employed only for pixel-wise phase extension and reference indexing, not as a spatial phase-unwrapping mechanism. The method does not require machine learning models or training data and can be integrated as a correspondence analysis stage in practical structured-light systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
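The correspondence stage described in the abstract above rests on subpixel peak localization in the absolute wrapped-phase profile. As a generic illustration (not the authors' implementation; the phase ramp, sample count, and peak test below are invented), three-point parabolic interpolation refines an integer local maximum to a fractional index:

```python
import numpy as np

def subpixel_peak(profile, i):
    """Refine a local maximum at integer index i with three-point
    parabolic interpolation; returns a fractional (subpixel) index."""
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(i)
    return i + 0.5 * (y0 - y2) / denom

# Toy stand-in for Φ = |φ_w|: the absolute wrapped phase of a ramp,
# which peaks near the fringe transitions (φ ≈ π, 3π, 5π).
phi = np.linspace(0.0, 6.0 * np.pi, 200)
profile = np.abs(np.angle(np.exp(1j * phi)))
peaks = [i for i in range(1, len(profile) - 1)
         if profile[i] >= profile[i - 1] and profile[i] > profile[i + 1]]
refined = [subpixel_peak(profile, i) for i in peaks]
```

The parabolic refinement is exact for quadratic peaks and is a common first-order choice; the paper's peak model may differ.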
23 pages, 11430 KB  
Article
Symmetry-Aware Gradient Coordination for Physics-Guided Non-Line-of-Sight Imaging
by Yijun Ling, Wenjin Zhao, Mengjia Zhao and Jie Yang
Symmetry 2026, 18(5), 711; https://doi.org/10.3390/sym18050711 - 23 Apr 2026
Abstract
Physics-guided computational imaging typically aggregates data fidelity, geometric reconstruction, and sensor consistency into a single scalar loss. In low signal-to-noise ratio (low-SNR) non-line-of-sight imaging, this centralized approach creates asymmetric gradient conflicts where the dominant constraints suppress physically meaningful updates. We propose treating multi-constraint training as a gradient coordination problem rather than scalar balancing. Our framework coordinates heterogeneous objectives through branch-wise gradient routing: soft conflict projection (PCGrad), hard physical constraint enforcement (PhysGuard), learnable sensor calibration, and a staged training protocol that decouples representation learning from nuisance parameter estimation. On held-out test scenes, the fully staged model improved the peak signal-to-noise ratio (PSNR) from 19.09 dB to 20.49 dB and the structural similarity index (SSIM) from 0.67 to 0.71 over the baseline, with consistent gains across the 48, 28, and 25 dB SNR levels. Qualitative evaluation on seven real-world scenes indicates sharper structure recovery and fewer artifacts. In this NLOS setting, gradient-level coordination is more reliable than scalar aggregation under heterogeneous constraints. Full article
(This article belongs to the Section Computer)
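The "soft conflict projection (PCGrad)" step mentioned above has a simple core: when two task gradients conflict (negative dot product), one is projected onto the plane orthogonal to the other, so it no longer opposes that objective. A minimal two-gradient sketch with invented gradient values:

```python
import numpy as np

def pcgrad_pair(g1, g2):
    """PCGrad-style conflict resolution for two objectives: if the
    gradients conflict (negative dot product), project g1 onto the
    plane orthogonal to g2; otherwise return g1 unchanged."""
    dot = float(g1 @ g2)
    if dot < 0.0:
        return g1 - (dot / float(g2 @ g2)) * g2
    return g1

# Data-fidelity and physics-constraint gradients pulling in
# conflicting directions (values are illustrative):
g_fidelity = np.array([1.0, 1.0])
g_physics = np.array([-1.0, 0.5])
g_proj = pcgrad_pair(g_fidelity, g_physics)  # orthogonal to g_physics
```

In the full method this pairwise projection is applied branch-wise across all conflicting objective gradients each step, alongside the hard constraint enforcement the abstract calls PhysGuard.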
30 pages, 98630 KB  
Article
A Method for Paired Comparisons of Glo Germ Quantity in Images of Hands Before and After Washing
by Jordan Ali Rashid and Stuart Criley
J. Imaging 2026, 12(4), 178; https://doi.org/10.3390/jimaging12040178 - 21 Apr 2026
Viewed by 196
Abstract
We present a reproducible pipeline that converts color images into quantitative fluorescence maps by combining spectral measurement with a linear mixture model. The method is designed specifically for quantitative comparisons of Glo Germ™ on images of hands taken under different experimental conditions with controlled illumination. The emission spectrum of Glo Germ is measured using a spectral photometer and normalized to obtain its spectral power density function. This spectrum is projected into CIE XYZ coordinates and incorporated into a linear mixture model in which each pixel contains contributions from white light, UV-illuminated skin reflectance, and fluorophore emission. Component magnitudes are estimated with non-negative least squares, yielding a grayscale image whose intensity is a monotonic proxy for local fluorophore density. Spatial integration provides an image-level summary proportional to total detected material. Compared with single-channel proxies, the observer suppresses background structure, improves contrast, and remains radiometrically interpretable. Because the method depends only on measurable spectra and linear transforms, it can be reproduced across cameras and extended to other fluorophores. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
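The per-pixel estimation step described above is standard non-negative least squares over a small mixing matrix. A toy sketch using SciPy's `nnls`, with invented (not measured) XYZ signatures for the three components:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative (not measured) XYZ signatures of the three components:
# columns = white light, UV-lit skin reflectance, fluorophore emission.
E = np.array([[0.95, 0.40, 0.30],
              [1.00, 0.45, 0.55],
              [1.09, 0.35, 0.10]])

def unmix_pixel(xyz):
    """Non-negative least-squares estimate of per-component magnitudes
    for one pixel; the fluorophore channel is the fluorescence proxy."""
    coeffs, _residual = nnls(E, xyz)
    return coeffs

# A synthetic pixel composed of 0.2 white + 0.0 skin + 0.7 fluorophore:
pixel = E @ np.array([0.2, 0.0, 0.7])
weights = unmix_pixel(pixel)
```

Summing the fluorophore channel over all pixels gives the image-level quantity the abstract describes; the real pipeline derives the columns of the mixing matrix from measured spectra projected into CIE XYZ.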
38 pages, 11665 KB  
Article
Eccentricity Correction Methods for Circular Targets in Perspective Projection
by Frank Liebold and Hans-Gerd Maas
Metrology 2026, 6(2), 28; https://doi.org/10.3390/metrology6020028 - 20 Apr 2026
Viewed by 104
Abstract
In a perspective projection, a circular target appears as an ellipse under an oblique view. Here, the ellipse center obtained from image coordinate measurement operators differs from the projection of the circle center. This discrepancy is called eccentricity and may lead to systematic errors. This article documents the significance of these discrepancies and discusses five different correction methods that can be applied in the image space or as a model adaptation. Two of the methods include the determination of the circle radius and thus also offer a possibility to define the scale. The eccentricity correction procedures are validated in a series of experiments, which show that even extreme eccentricity effects can be fully compensated. In the experiment on the approaches including scale determination, the precision and accuracy of the scale definition are investigated, yielding relative accuracies of 0.5–1%. Full article
(This article belongs to the Special Issue Advances in Optical 3D Metrology)
15 pages, 3994 KB  
Article
Three-Dimensional Shape Measurement Using Speckle-Assisted Phase-Order Lines Without Phase Unwrapping
by Ziyou Zhang and Weipeng Yang
Sensors 2026, 26(8), 2534; https://doi.org/10.3390/s26082534 - 20 Apr 2026
Viewed by 231
Abstract
Achieving high-accuracy and high-speed 3D shape measurement remains a significant challenge. This paper presents a novel technique using phase-order lines (POLs), which eliminates the need for phase unwrapping in a binocular system. By combining phase-shifting for high resolution and speckle projection for robust features, our method extracts POLs directly from the wrapped phase. The speckle patterns are then used to establish robust POL correspondences between stereo images. These matched POLs serve as reliable seeds to guide dense, sub-pixel matching directly on the wrapped phase, thus bypassing the complex phase unwrapping process. This approach significantly reduces the number of required patterns. The experimental results demonstrate that our method achieves a root-mean-square (RMS) error of 0.058 mm using only five patterns, delivering accuracy comparable to a 12-pattern temporal phase unwrapping (TPU) method while being significantly faster. Full article
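For context, the wrapped phase that the POL extraction starts from is obtained with the standard N-step phase-shifting relation, φ_w = arctan2(−Σ Iₙ sin δₙ, Σ Iₙ cos δₙ) with shifts δₙ = 2πn/N. A self-contained sketch on synthetic fringes (scene size, modulation, and the phase ramp are all illustrative, not from the paper):

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally spaced phase-shifted fringe images.
    images has shape (N, H, W); the n-th image carries shift 2*pi*n/N."""
    n = images.shape[0]
    delta = 2.0 * np.pi * np.arange(n) / n
    num = (images * np.sin(delta)[:, None, None]).sum(axis=0)
    den = (images * np.cos(delta)[:, None, None]).sum(axis=0)
    return np.arctan2(-num, den)

# Synthetic 4-step fringes with a known phase ramp:
H, W = 8, 32
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, W) * np.ones((H, 1))
shifts = 2.0 * np.pi * np.arange(4) / 4
imgs = np.stack([0.5 + 0.4 * np.cos(phi_true + d) for d in shifts])
phi_w = wrapped_phase(imgs)  # recovers phi_true up to 2*pi wrapping
```

The paper's contribution is what happens after this step: matching phase-order lines between the stereo views via speckle, so that no unwrapping of φ_w is ever needed.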
18 pages, 2182 KB  
Article
Quantitative Evaluation of Pectoral Muscle Visualisation as an Indicator of Positioning Quality in Screening Mammography
by Maja Karić, Doris Šegota Ritoša and Petra Valković Zujić
Diagnostics 2026, 16(8), 1218; https://doi.org/10.3390/diagnostics16081218 - 19 Apr 2026
Viewed by 204
Abstract
Background/Objectives: Image quality of mammograms in breast cancer screening is strongly operator-dependent, particularly in the mediolateral oblique (MLO) projection where adequate visualisation of the pectoralis major muscle serves as a surrogate marker of posterior tissue inclusion. Current positioning assessment is predominantly qualitative and subject to inter-observer variability. This study aimed to quantitatively evaluate pectoral muscle visualisation and compression force variability among radiographers participating in a national screening programme. Methods: A retrospective observational study was conducted at Clinical Hospital Center Rijeka in January and February 2020. A total of 464 digital MLO mammograms were analysed. Images from nine radiographers were randomly retrieved from the institutional Picture Archiving and Communication System (PACS). Pectoral muscle length and width were measured using a standard clinical workstation with an integrated distance measurement tool. Additional variables included radiographer gender, breast side (LMLO vs. RMLO), imaging order, and applied compression force. Statistical analyses included Welch’s ANOVA, one-way ANOVA, t-tests, and appropriate post hoc comparisons. Results: Across all MLO projections, the combined mean pectoral muscle width was 41.0 ± 11.4 mm and the mean length was 134.3 ± 21.7 mm. Significant inter-operator differences were observed in pectoral muscle width (p < 0.001) and length (p = 0.023). Mean muscle width ranged from 35.0 mm to 54.2 mm, and mean length from 126.5 mm to 139.4 mm across radiographers. No significant differences were found with respect to radiographer gender, breast side, or imaging order (all p > 0.05). Compression force differed significantly among radiographers (p < 0.001), ranging from 117.0 ± 18.3 N to 184.8 ± 33.9 N. 
Conclusions: This study demonstrates significant inter-operator variability in both pectoral muscle visualisation and applied compression force during MLO mammography. These findings indicate that important technical aspects of mammographic examination remain strongly operator-dependent and highlight the need for more consistent positioning practices within screening programmes. Quantitative measurement of pectoral muscle dimensions may serve as a practical and objective approach for monitoring positioning quality and supporting quality assurance in routine clinical practice. Full article
(This article belongs to the Special Issue Recent Advances in Breast Cancer Imaging 2026)
18 pages, 4266 KB  
Article
Global Calibration of a Collaborative Multi-Line-Scan Camera Measurement System
by Yuanshen Xie, Nanhui Wu, Yueqiao Hou, Weixin Xu, Jiangjie Yu, Zichao Yin and Dapeng Tan
Sensors 2026, 26(8), 2498; https://doi.org/10.3390/s26082498 - 17 Apr 2026
Viewed by 172
Abstract
Multi-line-scan camera systems provide high-frequency sampling and wide field-of-view coverage, making them valuable for three-dimensional measurement and dynamic reconstruction. However, their one-dimensional projection property introduces scale ambiguity and strong parameter coupling during calibration, which limits the consistency and stability of local optimization in multi-camera systems. To address this issue, this paper proposes a global calibration method based on physical constraints and hierarchical optimization. A unified imaging and motion model is constructed by incorporating physical scale constraints and structural priors, and geometric scale information is introduced into the joint optimization to reduce scale ambiguity and parameter coupling. Parameter normalization and staged optimization are further adopted to improve numerical stability for variables of different magnitudes and enable consistent estimation of multi-camera parameters within a unified framework. Simulation and experimental results show that the method achieves stable convergence under focal-length initialization perturbation, baseline deviation, and noise interference, with a three-dimensional reconstruction error below 0.67 mm and a convergence probability of at least 99.7%. These results indicate that the proposed method effectively reduces calibration uncertainty in multi-line-scan camera systems and supports high-precision online measurement and dynamic three-dimensional perception. Full article
(This article belongs to the Section Sensing and Imaging)
19 pages, 2980 KB  
Article
Artificial Intelligence to Predict Major Arrhythmic Events Based on Left Ventricular Electroanatomic Mapping Data
by Yari Valeri, Paolo Compagnucci, Marialucia Narducci, Paolo Veri, Emanuele Pecorari, Isabel Concetti, Giuliano Santagata, Giovanni Volpato, Francesca Campanelli, Leonardo D’Angelo, Martina Apicella, Vincenzo Schillaci, Giuseppe Sgarito, Sergio Conti, Roberto Scacciavillani, Francesco Solimene, Gemma Pelargonio, Antonio Dello Russo, Francesco Piva and Michela Casella
J. Clin. Med. 2026, 15(8), 3078; https://doi.org/10.3390/jcm15083078 - 17 Apr 2026
Viewed by 238
Abstract
Background/Objectives: Electroanatomic mapping (EAM) provides high-resolution spatial and electrogram information, but the prognostic utility of quantitative EAM features has not been systematically evaluated with contemporary artificial intelligence (AI) methods. We investigated whether an AI analysis of quantitative EAM exports from the CARTO system enhances the prediction of major arrhythmic events (MAEs). Methods: In this retrospective, multicenter cohort study, 248 consecutive patients undergoing left ventricular EAM at four tertiary electrophysiology centers were analyzed. Numerical EAM descriptors (spatial coordinates, unipolar/bipolar voltages, local activation time, impedance) were transformed into derived metrics, including local activation heterogeneity (GR), late-potential extent (LAT), bipolar–unipolar discrepancy (VLT), and low-amplitude scar extent (Scar Areas), and were spatially normalized via spherical projection. Clinical, anamnestic, and imaging variables were integrated. Machine learning and deep learning models were trained with an 80:20 train/test split and evaluated using three-fold cross-validation. Performance metrics included area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and precision. Results: Models incorporating both clinical and AI-processed EAM features achieved high discriminatory performance (test AUC up to 0.92; accuracy up to 0.896). Specificity was consistently high (≈0.97–0.998), whereas sensitivity remained modest (≈0.39–0.58). Among the EAM-derived features, GR was the most consistently informative predictor across algorithms and analyses; VLT, LAT, and Scar Areas also contributed substantially. Regionally, basal sub-mitral, subaortic, and posterolateral basal-to-mid zones exhibited the strongest associations with MAEs. 
Conclusions: AI-driven quantitative analysis of left ventricular EAM exports augments risk stratification for MAEs beyond conventional clinical and binary EAM descriptors. Reflecting local conduction heterogeneity, GR emerged as the dominant EAM predictor. Prospective validation in larger, disease-specific cohorts and real-time integration within EAM platforms are warranted. Full article
(This article belongs to the Special Issue Cardiac Electrophysiology: Focus on Clinical Practice)
21 pages, 6338 KB  
Article
Asymmetric Cross-Modal Prototypical Networks for Few-Shot Image Classification
by Shengyu Xie, Guobin Deng, Xingxing Yang, Jie Zhou, Jinyun Tang and Ke-Jing Huang
Symmetry 2026, 18(4), 670; https://doi.org/10.3390/sym18040670 - 17 Apr 2026
Viewed by 238
Abstract
Few-shot image classification requires models to generalize from limited labeled examples. While metric-based approaches such as Prototypical Networks have demonstrated strong performance, they rely exclusively on visual features and ignore the rich semantic information encoded in class names. This paper presents a systematic empirical study investigating the interaction between visual and semantic modalities in few-shot learning. We present Asymmetric Cross-Modal Prototypical Networks (ACM-ProtoNet), a controlled experimental framework that augments standard prototypical learning with frozen CLIP text encoders to incorporate zero-cost linguistic priors. Our method explicitly models the symmetric relationship between visual and semantic modalities through learnable projection heads that map both image and text features into a shared embedding space. Image and text prototypes are fused via a learnable scalar gate α ∈ (0, 1), allowing adaptive balancing of modalities. Under our experimental setup (frozen CLIP encoders, scalar fusion gate, simple template-based prompts), we observe an asymmetric pattern in comprehensive ablation studies on miniImageNet: cross-modal integration yields a statistically significant improvement in five-shot (+2.12 pp, p=0.03125, Wilcoxon signed-rank test over five seeds) but not in one-shot (0.09 pp, n.s.) learning. Our key contribution is not achieving state-of-the-art accuracy but rather providing controlled empirical evidence about cross-modal interaction patterns under specific design constraints.
Further analysis shows that (1) structured semantic information is essential: random text features harm performance by 7.4–8.1 percentage points; (2) projection heads provide asymmetric benefits, more critical in one-shot (2.85 pp when removed) than in five-shot learning (0.74 pp); (3) text-only prototypes achieve near-random performance (≈20%), suggesting that semantics alone are insufficient in our setup; (4) shuffled-class-name ablation confirms genuine semantic binding, where randomly permuting class-name assignments causes consistent degradation (five-shot: 5.74 pp, p<0.001; one-shot: 3.83 pp, p<0.001 across five seeds). These findings, specific to our simple fusion design, reveal an asymmetric pattern that is equally consistent with two hypotheses: (i) semantic priors may require sufficient visual context to be useful, or (ii) our scalar fusion gate may lack the capacity to leverage text in the extreme low-data regime of one-shot learning. This ambiguity motivates future work with more expressive fusion mechanisms and stronger text representations. Full article
(This article belongs to the Section Computer)
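The scalar fusion gate described above reduces to a convex combination of the two prototypes with α = sigmoid(logit). A minimal sketch with invented prototype vectors (in the paper the gate is learned end-to-end; here the logit is fixed by hand):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_prototypes(img_proto, txt_proto, gate_logit):
    """Fuse class prototypes with a scalar gate alpha in (0, 1).
    gate_logit stands in for the learned gate parameter."""
    alpha = sigmoid(gate_logit)
    return alpha * img_proto + (1.0 - alpha) * txt_proto

img_p = np.array([1.0, 0.0])   # visual prototype (illustrative)
txt_p = np.array([0.0, 1.0])   # CLIP text prototype (illustrative)
fused = fuse_prototypes(img_p, txt_p, 0.0)  # logit 0 -> alpha = 0.5
```

Because α is a single scalar shared across all classes and dimensions, the gate's limited capacity is exactly hypothesis (ii) in the abstract's discussion of the one-shot result.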
25 pages, 18342 KB  
Article
Parameter- and Compute-Efficient Spatial–Spectral Transformer Framework for Pixel-Level Classification of Foreign Plastic Objects on Broiler Meat Using NIR–Hyperspectral Imaging
by Zirak Khan, Seung-Chul Yoon and Suchendra M. Bhandarkar
Sensors 2026, 26(8), 2459; https://doi.org/10.3390/s26082459 - 16 Apr 2026
Viewed by 326
Abstract
Foreign plastic objects (FPOs) in poultry products present significant food safety risks and cause economic losses for the industry. Conventional detection methods, including X-rays and color imaging, often struggle to identify small or low-density plastics. Hyperspectral imaging (HSI) offers both spatial and spectral information but suffers from high computational cost when applied for FPO identification in industrial environments. This study introduces a parameter-efficient and computationally efficient spatial–spectral transformer framework for pixel-level classification of FPOs on broiler meat using NIR-HSI (1000–1700 nm). The framework integrates three innovations: (1) center-focused linear attention (CFLA) to reduce computational complexity from O(n²) to O(n); (2) patch-local mixed-axis 2D rotary position embedding to preserve geometric relationships within hyperspectral patches; and (3) low-rank factorized projection (LRP) matrices to reduce parameters by approximately 50% within projection weight matrices. The framework was trained and evaluated on a dataset of 52 chicken fillets, comprising 295,340 labeled target hyperspectral pixels from 12 common polymer types and 1 fillet class. The model achieved 99.39% overall accuracy, 99.57% average accuracy, and a Kappa coefficient of 99.31 across 248,540 test pixels. Per-class precision, recall, and F1-score exceeded 98.05%, 98.59%, and 98.76%, respectively, across all classes. Efficiency analyses showed an 83% reduction in multiply–accumulate operations (MACs), a 22% reduction in trainable parameters, and a model size reduction from 1.72 MB to 1.35 MB relative to the baseline configuration. These gains also translated into practical inference benefits, with the final model achieving a throughput of 212,971.5 hyperspectral patch cubes/s and a 4.19× speedup over the baseline.
These results demonstrate that the proposed framework combines strong classification performance with high efficiency, supporting high-throughput inference for real-time monitoring and enabling contamination source traceability and preventive quality control in industrial poultry processing. The approach provides a benchmark for applying transformer-based models to food safety inspection tasks. Full article
(This article belongs to the Section Sensing and Imaging)
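The low-rank factorized projection (LRP) idea above replaces a dense d×d weight matrix W with two factors A (d×r) and B (r×d), cutting parameters from d² to 2dr. A sketch with illustrative dimensions, not the paper's (choosing r = d/4 gives the roughly 50% reduction the abstract cites for the projection matrices):

```python
import numpy as np

d, r = 64, 16  # r = d/4 -> 2*d*r = d*d/2, i.e. ~50% fewer weights
rng = np.random.default_rng(0)
A = rng.standard_normal((d, r)) / np.sqrt(r)  # factor 1 of W ~= A @ B
B = rng.standard_normal((r, d)) / np.sqrt(d)  # factor 2

def low_rank_proj(x):
    """Apply the factorized projection in two cheap steps:
    (n, d) @ (d, r) then (n, r) @ (r, d), instead of one (d, d) matmul."""
    return (x @ B.T) @ A.T

tokens = rng.standard_normal((10, d))
projected = low_rank_proj(tokens)  # same shape as a dense projection
```

The two-step application is also where the MAC savings come from: each token costs 2dr multiply–accumulates instead of d².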
7 pages, 2549 KB  
Interesting Images
Anterior Segment OCT in Fulminant Pseudomonas aeruginosa Corneal Ulcer with Stromal Melting Requiring Emergency Penetrating Keratoplasty
by Wojciech Luboń, Monika Sarnat-Kucharczyk and Mariola Dorecka
Diagnostics 2026, 16(8), 1189; https://doi.org/10.3390/diagnostics16081189 - 16 Apr 2026
Viewed by 166
Abstract
Rapidly progressive infectious keratitis may involve the anterior uveal tract and lead to anterior segment inflammation, resulting in severe structural damage of the cornea and potentially causing corneal perforation or endophthalmitis if not promptly treated. We report the case of a 63-year-old male admitted to the Emergency Ophthalmology Department of the University Clinical Center in Katowice, Poland, with a rapidly progressive corneal ulcer of the left eye that had not responded to two weeks of outpatient topical antibiotic therapy. The condition developed after ocular trauma sustained while chopping wood. At presentation, visual acuity was limited to light perception with preserved projection. Multimodal imaging, including slit-lamp examination, anterior segment optical coherence tomography (AS-OCT), and in vivo confocal microscopy, revealed extensive corneal ulceration with severe stromal destruction, progressive corneal melting, and marked anterior segment inflammation, with an imminent risk of perforation. Microbiological cultures identified Pseudomonas aeruginosa. Despite intensive empiric topical antimicrobial therapy targeting both bacterial infection and a possible fungal component related to trauma with organic material, rapid clinical deterioration necessitated emergency therapeutic penetrating keratoplasty (PK). The procedure resulted in rapid resolution of inflammation and improvement in visual acuity, with best-corrected visual acuity (BCVA) reaching 0.3 logMAR during follow-up. At the three-month follow-up, the corneal graft remained clear with stable visual acuity and no recurrence of infection. The patient remains under regular long-term follow-up, with ongoing monitoring of graft clarity, intraocular pressure (IOP), and visual function. This case differs from routine presentations of infectious keratitis by demonstrating exceptionally rapid stromal melting despite promptly initiated empiric topical therapy. 
Multimodal imaging, particularly AS-OCT, provided clinically meaningful information by revealing structural instability and an imminent risk of perforation not fully appreciable on slit-lamp examination, thereby supporting the timely decision for urgent keratoplasty. These findings highlight the practical diagnostic value of imaging-based assessment in advanced infectious keratitis and underscore its role in guiding surgical decision-making in eyes at high risk of corneal perforation. Full article
(This article belongs to the Special Issue Diagnostic Imaging in Ocular Surface)
18 pages, 24719 KB  
Article
Auto-Focusing Imaging and Performance Analysis of Ka-Band Carrier-Frequency-Agility SAR
by Yushan Zhou, Yijiang Nan, Da Liang, Zhiyuan Xue, Yuesheng Chen, Haiwei Zhou and Yawei Zhao
Remote Sens. 2026, 18(8), 1197; https://doi.org/10.3390/rs18081197 - 16 Apr 2026
Viewed by 246
Abstract
Ka-band carrier-frequency-agility (CFA) synthetic aperture radar (SAR) employs pulse-to-pulse random wide-range frequency hopping to enhance anti-interference capability. However, the random hopping disrupts the azimuth phase continuity, and the millimeter-wave wavelength of the Ka band makes the imaging quality extremely sensitive to motion errors. To address these challenges, this paper proposes an auto-focusing imaging framework and performs a performance analysis for Ka-band CFA SAR. First, a back-projection (BP)-based imaging model is derived to restore the coherent phase history from the hopped echoes. Second, to compensate for the residual phase errors inevitable in high-resolution millimeter-wave imaging, an auto-focusing framework is developed. This framework incorporates a dynamic sub-aperture strategy and an adaptive spectral notching mechanism to ensure precise phase error estimation in complex scattering environments. Furthermore, the imaging performance under different frequency-selection modes is analyzed to provide a guideline for the parameter selection of the Ka-band CFA SAR. Experiments with a vehicle-mounted Ka-band SAR system demonstrate that the proposed method achieves well-focused images with 5 cm resolution. Full article
(This article belongs to the Section Remote Sensing Image Processing)
24 pages, 2737 KB  
Article
Impact of Sowing Space and Depth on Canopy Architecture and Vertical Leaf Traits in Dryland Wheat
by Haima Haider Asha, Yulun Chen, Qishou Ding, Linqian Fu, Edwin O. Amisi and Gaoming Xu
Agriculture 2026, 16(8), 877; https://doi.org/10.3390/agriculture16080877 - 15 Apr 2026
Viewed by 221
Abstract
Sowing space and depth critically influence wheat canopy architecture, yet their layer-specific effects remain poorly understood. This two-year field study evaluated the effects of three sowing spaces (1.5, 3.0, 4.5 cm) and three sowing depths (2, 3, 6 cm) on canopy projection area, leaf inclination angle, leaf area distribution, and leaf area index (LAI) of dryland wheat (Triticum aestivum ‘Ningmai 13’) in Luhe, Nanjing, China, using image-based phenotyping with manual validation. Narrow spacing (1.5 cm) with intermediate depth (3 cm) produced the largest canopy projection area (0.239–0.245 m²) and an increase in leaf erectness in the middle canopy layer (+23% above average). The highest LAI values (4.23–4.28 m² m⁻²) were achieved with narrow spacing (A1B1, A1B2), demonstrating that dense canopies can be established under dryland conditions. Grain yield (g/plant) was measured as a supporting agronomic indicator; the highest yield per plant (14.36 g/plant) was observed in A3B1. Image-based measurements showed excellent agreement with manual methods (R2 > 0.97 for all traits), validating the phenotyping pipeline. These findings contribute to a deeper understanding of how sowing parameters shape wheat canopies in dryland systems. Full article
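The two image-derived traits reported above can be illustrated with a toy computation: canopy projection area from a nadir binary mask, and LAI as one-sided leaf area per unit ground area. The mask, quadrat size, and leaf areas below are made-up example values, not the study's phenotyping pipeline:

```python
import numpy as np

# Toy illustration of two canopy traits: projection area from a
# nadir image mask, and leaf area index (LAI) in m^2 m^-2.
# All dimensions and measurements are hypothetical examples.

# Hypothetical 100x100 nadir image covering a 0.5 m x 0.5 m quadrat.
ground_area = 0.5 * 0.5                      # m^2
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 25:85] = True                    # "canopy" pixels

pixel_area = ground_area / mask.size
projection_area = mask.sum() * pixel_area    # m^2 of canopy cover

# LAI from sampled leaves: sum of one-sided leaf areas (m^2)
# divided by the ground area they grew on.
leaf_areas = np.array([0.012, 0.015, 0.010, 0.018])   # m^2 per leaf
lai = leaf_areas.sum() / ground_area                   # m^2 m^-2

print(round(projection_area, 3), round(lai, 2))
```

The manual-validation step reported in the abstract amounts to comparing such image-derived values against direct measurements of the same plots, e.g. via regression (hence the R2 > 0.97 agreement).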
(This article belongs to the Section Crop Production)
48 pages, 10331 KB  
Article
The ABC of Avant-Garde Bridge Construction, or, How Henry Miller & Vladimir Mayakovsky’s Bridges Were Built
by Andrey Astvatsaturov and Feodor Dviniatin
Arts 2026, 15(4), 81; https://doi.org/10.3390/arts15040081 - 15 Apr 2026
Abstract
The article discusses contexts of Henry Miller’s works (“Black Spring”, “Tropic of Capricorn”) and the poem Brooklyn Bridge by Vladimir Mayakovsky, which share the theme and imagery of the Bridge and the avant-garde era of their creation. The authors analyze not so much the “intersection” as the “union” of Miller and Mayakovsky: not so much coincidences and affinities as complements that allow the full breadth of the avant-garde literary project to be traced. In Henry Miller’s works, the semantics of the bridge image, which refers to Nietzsche’s Thus Spake Zarathustra, is noted and analyzed first. In the analysis of Mayakovsky’s poem, special attention is paid to the verse and thematic composition of the text; metaphors; sound repetitions and echoes and their semantics; the specific historicism; and the important concept of reconstruction from traces, remains, and reflexes. In turning to this concept, Mayakovsky comes close to Charles S. Peirce (abduction), unknown to him, and to Carlo Ginzburg (clues), who was not yet born in the year the text was written. Full article
16 pages, 1911 KB  
Article
Development of 28 nm CMOS Front-End Channels for the Readout of Hybrid Pixel Sensors in Future Colliders and Photon Science Applications
by Luigi Gaioni, Simone Gerardin, Valerio Re and Gianluca Traversi
Electronics 2026, 15(8), 1641; https://doi.org/10.3390/electronics15081641 - 14 Apr 2026
Abstract
This paper describes two front-end architectures developed in a 28 nm CMOS process for the readout of pixel detectors in future high-energy physics (HEP) colliders and advanced X-ray imaging instrumentation. The front-end channels have been developed in the framework of the PiHEX project, funded by the Italian Ministry of University and Research. PiHEX aims to improve the state of the art of pixel readout chip technology in high-luminosity colliders and X-ray imagers in the next generation of free electron lasers (FELs) by developing, in 28 nm CMOS technology, the fundamental microelectronic building blocks for pixel readout chips. These blocks, which also implement innovative circuit ideas, will enable the future integration of large-scale readout chips meeting a set of challenging requirements: high spatial resolution, high signal-to-noise ratio, very wide dynamic range, and the capability to withstand unprecedented radiation levels. Two different front-end channels were designed, integrated into two prototype chips, and tested. One architecture, featuring a pixel size of 25 µm × 100 µm, was optimized for tracking applications in high-energy physics experiments, such as those at CERN in the high-luminosity upgrade of the Large Hadron Collider (LHC), while the second, featuring a pixel size of 110 µm × 55 µm, was devised for X-ray imaging applications in FELs. Full article
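As a rough illustration of the signal levels such a pixel front-end handles, an ideal charge-sensitive amplifier converts an input charge Q into an output voltage step Q/Cf. The feedback capacitance and input charge below are assumed example values, not the PiHEX design figures:

```python
# Back-of-the-envelope for a pixel front-end: the output step of an
# ideal charge-sensitive amplifier is Q / Cf. Values are illustrative.
q_e = 1.602e-19                  # electron charge (C)
cf = 5e-15                       # assumed 5 fF feedback capacitance
q_in = 10000 * q_e               # ~10 ke- input charge (example hit)
v_out = q_in / cf                # output voltage step (V)
print(round(v_out * 1e3, 1))     # output step in mV
```

Scaling Cf trades conversion gain against dynamic range, which is one way the wide-dynamic-range requirement cited above constrains the front-end design.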
(This article belongs to the Special Issue New Trends in CMOS: Devices, Technologies, and Applications)