
Search Results (1,511)

Search Parameters:
Keywords = single-pixel

29 pages, 20184 KB  
Article
Estimation of Canopy Traits and Yield in Maize–Soybean Intercropping Systems Using UAV Multispectral Imagery and Machine Learning
by Li Wang, Shujie Jia, Jinguang Zhao, Canru Liang and Wuping Zhang
Agriculture 2026, 16(4), 487; https://doi.org/10.3390/agriculture16040487 - 22 Feb 2026
Abstract
Strip intercropping of maize and soybean is a key practice for improving land productivity and ensuring food and oil security in the hilly regions of the Loess Plateau. However, complex interspecific interactions generate highly heterogeneous canopy structures, making it difficult for traditional linear models to capture yield variability within mixed pixels. Based on a single-season (2025) field experiment, this study developed a UAV multispectral imagery-based yield estimation framework integrating multiple machine-learning algorithms. Shapley additive explanations (SHAP) and partial dependence plots (PDP) were used to interpret the spectral–yield relationships under different spatial configurations. The predictive performance of linear regression and eight nonlinear algorithms was compared using 20 spectral features. Ensemble learning outperformed linear approaches in all intercropping scenarios. In the maize–soybean 3:2 pattern, the GBDT model delivered the highest accuracy (R2 = 0.849; NRMSE = 9.28%), whereas in the 4:2 pattern with stronger shading stress on soybean, the random forest model showed the greatest robustness (R2 = 0.724). Interpretation results indicated that yield in monoculture systems was mainly driven by physiological traits characterized by visible-band indices, while yield in intercropping systems was dominated by structural and stress-response traits represented by near-infrared and soil-adjusted vegetation indices. The generated centimeter-scale yield maps revealed clear strip-like spatial variability driven by interspecific competition. Overall, explainable machine learning combined with UAV multispectral data shows promise for within-season yield estimation in intercropping systems and can support spatially differentiated precision management under the sampled conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
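The abstract reports R2 and NRMSE as the accuracy metrics for the yield models. A minimal sketch of how these two metrics can be computed, assuming (as is common, though the paper may normalize differently) that NRMSE is the RMSE expressed as a percentage of the observed mean:

```python
import numpy as np

def r2_nrmse(y_true, y_pred):
    """R^2 and normalized RMSE (percent of the observed mean),
    the two accuracy metrics reported in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    nrmse = 100.0 * rmse / y_true.mean()             # assumption: mean-normalized
    return r2, nrmse
```

With yields in the same units for truth and prediction, an NRMSE of 9.28% (as for the GBDT model above) means the typical error is under a tenth of the mean observed yield.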

18 pages, 2109 KB  
Article
An FPGA-Based YOLOv5n Accelerator for Online Multi-Track Particle Localization
by Zixuan Song, Wangwang Tang, Wendi Deng, Hongxia Wang, Guangming Huang, Haoran Wu, Yueting Guo, Jun Liu, Kai Jin and Zhiyuan Ma
Electronics 2026, 15(4), 810; https://doi.org/10.3390/electronics15040810 - 13 Feb 2026
Abstract
Reliability testing for Single Event Effects (SEEs) requires accurate localization of heavy-ion tracks from projection images. Conventional localization often relies on handcrafted features and geometric fitting, which is sensitive to noise and difficult to accelerate in hardware. This paper presents a lightweight detector based on YOLOv5n that treats charge tracks in Topmetal pixel sensor projections as distinct objects and directly regresses the track angle and intercept, along with bounding boxes, in a single forward pass. On a synthetic dataset, the model achieves a precision of 0.9626 and a recall of 0.9493, with line-parameter errors of 0.3930° in angle and 0.4842 pixels in intercept. On experimental krypton beam data, the detector reaches a precision of 0.92 and a recall of 0.96, with a position resolution of 52.05 μm. We further deploy the model on an Xilinx Alveo U200, achieving an average per-frame accelerator latency of 3.1 ms while preserving measurement quality. This approach enables accurate, online track localization for SEE monitoring on Field-Programmable Gate Array (FPGA) platforms. Full article
(This article belongs to the Section Industrial Electronics)
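The reported line-parameter error of 0.3930° presumes a distance between line angles. A minimal sketch of one plausible metric, assuming (this is not stated in the abstract) that tracks are treated as undirected lines, so angles are compared modulo 180°:

```python
def angle_error_deg(pred_deg, true_deg):
    """Smallest absolute difference between two line angles in degrees,
    treating tracks as undirected lines (180-degree periodicity)."""
    d = abs(pred_deg - true_deg) % 180.0
    return min(d, 180.0 - d)
```

Without the wrap-around, a predicted angle of 179° against a ground truth of 1° would score a 178° error instead of the geometrically correct 2°.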

22 pages, 4890 KB  
Article
Super-Resolution Reconstruction and Detector Geometric Error Correction for Parallel-Beam Low-Resolution Multi-Detector SPECT: A Proof of Concept
by Zhibiao Cheng, Jun Zhang, Ping Chen and Junhai Wen
Tomography 2026, 12(2), 23; https://doi.org/10.3390/tomography12020023 - 12 Feb 2026
Abstract
Objectives: Due to collimator limitations, Single-Photon Emission Computed Tomography (SPECT) suffers from relatively low spatial resolution, which hampers the detection of small lesions. This study proposes a super-resolution (SR) reconstruction algorithm for a parallel-beam, low-resolution (LR) multi-detector SPECT system and employs a neural network to estimate and correct for geometric errors in the LR detectors. Methods: A parallel-beam LR multi-detector SPECT system is presented, in which the detectors perform relative sub-pixel shifts. At each sampling angle, an SR reconstruction algorithm synthesizes high-resolution (HR) SPECT images from LR projections acquired by four offset LR detectors. To correct for geometric errors among these detectors, a randomly distributed gamma point source was designed to generate training data. A neural network was then employed to estimate the geometric errors, thereby refining the SR reconstruction. Results: Numerical simulation demonstrated that the proposed neural network could accurately identify the displacement-based geometric errors of the LR detectors. Utilizing these estimated parameters to correct the SR reconstruction process yielded results comparable to those obtained from direct reconstruction of HR projections, achieving a two-fold resolution improvement. Conclusions: Preliminary proof-of-principle for SR reconstruction in a parallel-beam LR multi-detector SPECT system was established. Further validation of the hardware performance is warranted. Full article
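The system synthesizes HR images from four LR detectors with relative sub-pixel shifts. As a toy illustration of the underlying idea only (not the paper's reconstruction algorithm, which operates on projections and corrects learned geometric errors), classic shift-and-add interleaving of four half-pixel-offset LR images onto a 2x grid looks like this:

```python
import numpy as np

def interleave_sr(lr00, lr01, lr10, lr11):
    """Combine four LR images acquired with relative half-pixel detector
    shifts into one 2x HR image by interleaving their sample grids."""
    h, w = lr00.shape
    hr = np.empty((2 * h, 2 * w), dtype=lr00.dtype)
    hr[0::2, 0::2] = lr00   # reference detector, no shift
    hr[0::2, 1::2] = lr01   # half-pixel shift in x
    hr[1::2, 0::2] = lr10   # half-pixel shift in y
    hr[1::2, 1::2] = lr11   # diagonal half-pixel shift
    return hr
```

This is why uncorrected geometric errors in the detector offsets directly corrupt the HR grid: each LR image is assumed to land exactly on its sub-pixel lattice position.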

18 pages, 7882 KB  
Article
Denoising of Binary Built-Up Maps Using Multi-Temporal Image Processing Thresholding
by Sarah J. Becker and Nicole M. Wayant
Land 2026, 15(2), 271; https://doi.org/10.3390/land15020271 - 6 Feb 2026
Abstract
Accurate identification of built-up land from remotely sensed imagery is essential for urban planning, environmental monitoring, and disaster response. However, binary built-up maps derived from single-date classifications often contain semantic noise—misclassified pixels resulting from shadows, bare soil confusion, or seasonal conditions. Common denoising methodologies, such as smoothing or filtering, are designed for continuous imagery and can distort small or fragmented features and fail to correct underlying classification errors. To overcome these limitations, this study evaluated a multi-date summation and thresholding workflow as a denoising alternative. Five Sentinel-2 images per site were classified as built-up maps, summed into a composite “built-up frequency” raster, and thresholded using Otsu, adaptive, and voting methods to produce refined binary maps. The results across nine international study sites show that the Otsu thresholding method outperformed the other methods in most locations when comparing their accuracies using the Matthews Correlation Coefficient (MCC), showing that using multiple images can improve identification of built-up land. Full article
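The workflow sums five single-date binary maps into a "built-up frequency" raster and thresholds it. A minimal sketch of the Otsu variant on such an integer-valued raster (a generic implementation under stated assumptions, not the authors' code):

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's method on an integer-valued 'built-up frequency' raster
    (values 0..n_dates): pick the cut maximizing between-class variance."""
    values = np.asarray(values).ravel()
    vmax = int(values.max())
    hist = np.bincount(values, minlength=vmax + 1).astype(float)
    p = hist / hist.sum()                     # per-level probabilities
    levels = np.arange(vmax + 1)
    best_t, best_var = 0, -1.0
    for t in range(vmax):                     # threshold between t and t+1
        w0 = p[: t + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[: t + 1] * p[: t + 1]).sum() / w0
        mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Sum five single-date binary maps into a frequency raster, then threshold.
maps = np.stack([np.array([[0, 1], [1, 1]])] * 5)   # toy 5-date stack
freq = maps.sum(axis=0)                              # values in 0..5
refined = freq > otsu_threshold(freq)                # denoised binary map
```

Pixels misclassified as built-up on only one or two dates fall below the threshold, which is the denoising effect the multi-date summation exploits.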

30 pages, 25344 KB  
Article
PTU-Net: A Polarization-Temporal U-Net for Multi-Temporal Sentinel-1 SAR Crop Classification
by Feng Tan, Xikai Fu, Huiming Chai and Xiaolei Lv
Remote Sens. 2026, 18(3), 514; https://doi.org/10.3390/rs18030514 - 5 Feb 2026
Abstract
Accurate crop type mapping remains challenging in regions where persistent cloud cover limits the availability of optical imagery. Multi-temporal dual-polarization Sentinel-1 SAR data offer an all-weather alternative, yet existing approaches often underutilize polarization information and rely on single-scale temporal aggregation. This study proposes PTU-Net, a polarization–temporal U-Net designed specifically for pixel-wise crop segmentation from SAR time series. The model introduces a Polarization Channel Attention module to construct physically meaningful VV/VH combinations and adaptively enhance their contributions. It also incorporates a Multi-Scale Temporal Self-Attention mechanism to model pixel-level backscatter trajectories across multiple spatial resolutions. Using a 12-date Sentinel-1 stack over Kings County, California, and high-quality crop-type reference labels, the model was trained and evaluated under a spatially independent split. Results show that PTU-Net outperforms GRU, ConvLSTM, 3D U-Net, and U-Net–ConvLSTM baselines, achieving the highest overall accuracy and mean IoU among all tested models. Ablation studies confirm that both polarization enhancement and multi-scale temporal modeling contribute substantially to performance gains. These findings demonstrate that integrating polarization-aware feature construction with scale-adaptive temporal reasoning can substantially improve the effectiveness of SAR-based crop mapping, offering a promising direction for operational agricultural monitoring. Full article

17 pages, 3823 KB  
Article
Advancing Leafy Vegetable Yield Estimation Through Image Inpainting to Mitigate Occlusion Effects
by Dan Xu, Shuoguo Li, Zhuopeng Gu, Guanyun Xi and Juncheng Ma
Agronomy 2026, 16(3), 368; https://doi.org/10.3390/agronomy16030368 - 2 Feb 2026
Abstract
Non-destructive estimation of leafy vegetable fresh weight is crucial for precision management in both greenhouse and open-field production. However, mutual occlusion between plants in dense canopies poses a significant challenge to image-based estimation accuracy. This study systematically investigates the potential of deep learning-based image inpainting methods to reconstruct occluded regions in RGB lettuce images, thereby improving input data quality for downstream weight estimation models. Three state-of-the-art inpainting models—Vision Transformer-based Denoising Autoencoder (ViT-DAE), Aggregated Contextual–Transformation Generative Adversarial Network (AOT-GAN), and a conditional Diffusion Model (CDM)—were implemented and evaluated. A dataset comprising 503 individual lettuce images with artificially generated random occlusions was used for training and testing. Performance was assessed using pixel-level metrics (PSNR, SSIM) and, more importantly, by evaluating the fresh weight estimation accuracy (R2, NRMSE, MAPE) of a pre-trained CNN model (CNN_284) using the inpainted images. Results indicated that AOT-GAN achieved the best overall performance, with an SSIM of 0.9379 and an R2 of 0.8480 for weight estimation after inpainting under single-direction occlusion, closely matching the performance using original non-occluded images (R2 = 0.8365). In complex multi-direction occlusion scenarios, AOT-GAN demonstrated superior robustness, maintaining an R2 of 0.7914 and an MAPE of 12.02% for weight prediction, significantly outperforming the other models. This study demonstrates that advanced inpainting techniques, particularly AOT-GAN, can effectively mitigate the impact of occlusion, enhancing the reliability of vision-based leafy vegetable biomass estimation in practical production. Full article
(This article belongs to the Special Issue Application of Machine Learning and Modelling in Food Crops)
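PSNR is one of the pixel-level metrics used to score the inpainted images. A minimal sketch, assuming 8-bit imagery (peak value 255):

```python
import numpy as np

def psnr(original, inpainted, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and its
    inpainted reconstruction (one of the pixel-level metrics above)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(inpainted, float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM, the other reported metric, additionally compares local luminance, contrast, and structure, which is why it correlates better with perceived inpainting quality than PSNR alone.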

17 pages, 4838 KB  
Article
Unseen Hazard Recognition in Autonomous Driving Using Vision–Language and Sensor-Based Temporal Models
by Faisal Mehmood, Sajid Ur Rehman, Asif Mehmood and Young-Jin Kim
Appl. Sci. 2026, 16(3), 1503; https://doi.org/10.3390/app16031503 - 2 Feb 2026
Abstract
Autonomous driving (AD) systems remain vulnerable to rare, ambiguous, and out-of-label (OOL) hazards that are insufficiently represented in conventional training datasets. This work investigates perception robustness under such conditions using the Challenge of Out-Of-Label (COOOL) benchmark dataset, which consists of 200 dashcam video sequences annotated with both common and uncommon traffic hazards. We analyze the behavior of widely used perception components and present a multimodal pipeline that integrates YOLO11x for object detection, the Hough Transform for lane estimation, GPT-4o for scene description, and Long Short-Term Memory (LSTM) networks for temporal modeling. On the COOOL benchmark, YOLO11x achieves an mAP@0.5 of 54.1% on the common object categories, whereas the detection of rare and OOL hazards remains challenging, with a recall of 72.6%. Incorporating temporal risk modeling improves hazard recall to 71.8%, indicating a modest but consistent gain in recognizing uncommon events. The Hough Transform shows stable behavior in standard conditions for lane estimation, with a mean lateral deviation of 8.9 pixels in daylight scenes and 13.4 pixels under low-light conditions. The temporal anomaly detection module attains an AUROC of 0.65, reflecting limited but meaningful discrimination between nominal and anomalous driving situations. For interpretability, the GPT-4o scene description module generates context-aware textual explanations with an object coverage score of 0.72 and a factual consistency rate of 78%, as assessed through manual inspection. The end-to-end pipeline operates at approximately 10–12 frames per second on a single GPU, supporting near-real-time analysis. Our results confirm that state-of-the-art perception models struggle with OOL hazards and that multimodal vision–language–temporal integration provides incremental improvements in robustness and interpretability under standardized out-of-distribution conditions. Full article
(This article belongs to the Special Issue Autonomous Vehicles and Robotics—2nd Edition)

17 pages, 4768 KB  
Article
On Segment-Aware Monocular Depth Estimation Using Vision Transformers
by Vasileios Arampatzakis, George Pavlidis, Nikolaos Mitianoudis and Nikos Papamarkos
Information 2026, 17(2), 145; https://doi.org/10.3390/info17020145 - 2 Feb 2026
Abstract
Monocular Depth Estimation (MDE) infers per-pixel scene geometry from a single RGB image. Despite recent progress, global MDE models often blur depth discontinuities at object boundaries and fail to capture object-level structure. Segment-aware depth estimation addresses this limitation by exploiting semantic segmentation to decompose depth prediction into simpler, class-specific subproblems. In this work, we study semantic-aware MDE in a multi-branch design where each semantic class is handled by a lightweight Vision Transformer (ViT) branch that predicts dense depth for its class while suppressing interference from other regions. We further examine fusion strategies that merge the branch outputs into a single prediction: (i) a learnable cross-attention fusion module that predicts depth from the stack of per-class proposals and masks, and (ii) a parameter-free stitched summation that sums mask-gated outputs. The proposed architecture is simple, scalable, end-to-end trainable, and compatible with arbitrary transformer backbones. Experiments on Virtual KITTI 2, where ground-truth depth and semantic labels are available, show that segment-aware modeling produces sharper depth boundaries and improves standard error metrics compared to a single-branch baseline (AbsRel 0.243→0.152; RMSE 11.952→9.101). Finally, we find that the parameter-free summation matches, and in most cases improves upon, the accuracy of learned fusion while adding no computational overhead. Full article
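The parameter-free stitched summation fuses the branches by summing their mask-gated depth proposals. A minimal sketch, under the assumption that the per-class masks (softly) partition the image:

```python
import numpy as np

def stitched_summation(depth_proposals, masks):
    """Parameter-free fusion: sum each branch's dense depth proposal
    gated by its semantic mask.

    depth_proposals, masks: arrays of shape (n_classes, H, W); masks are
    assumed to partition (or softly cover) the image, so each pixel's
    fused depth comes from the branch(es) responsible for its class."""
    depth_proposals = np.asarray(depth_proposals, dtype=float)
    masks = np.asarray(masks, dtype=float)
    return (depth_proposals * masks).sum(axis=0)
```

Because the fusion is a fixed elementwise operation, it adds no learnable parameters or compute beyond the branches themselves, which is the overhead claim made above.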

20 pages, 4691 KB  
Article
Two-Stage Extraction of Large-Area Water Bodies Based on Multi-Modal Remote Sensing Data
by Lisheng Li, Weitao Han and Qinghua Qiao
Sustainability 2026, 18(3), 1362; https://doi.org/10.3390/su18031362 - 29 Jan 2026
Abstract
Current remote sensing-based water body extraction research mostly relies on single data sources, is limited to specific water body types or regions, fails to leverage the advantages of multi-source data, and has difficulty achieving large-scale, high-precision, rapid extraction. To address these limitations, this paper integrates optical images and Synthetic Aperture Radar (SAR) data and adopts an adaptive threshold segmentation method, proposing a technical approach for efficient, high-precision water body extraction at a monthly scale over large regions. Taking Beijing as the study area, the monthly spatial distribution of water bodies from 2019 to 2020 was extracted, and pixel-level accuracy verification was carried out using the JRC Global Surface Water Dataset from the European Commission’s Joint Research Centre. The experimental results show good extraction performance: the precision is generally higher than 0.8 and in most cases exceeds 0.95. Finally, the method was applied to extract and analyze water body changes caused by heavy rainfall in Beijing in July 2025, further confirming the effectiveness, accuracy, and practical utility of the proposed method. Full article

20 pages, 2389 KB  
Article
A Monocular Depth Estimation Method for Autonomous Driving Vehicles Based on Gaussian Neural Radiance Fields
by Ziqin Nie, Zhouxing Zhao, Jieying Pan, Yilong Ren, Haiyang Yu and Liang Xu
Sensors 2026, 26(3), 896; https://doi.org/10.3390/s26030896 - 29 Jan 2026
Abstract
Monocular depth estimation is one of the key tasks in autonomous driving: it derives depth information of the scene from a single image and is a fundamental component for vehicle decision-making and perception. However, current approaches face challenges such as visual artifacts, scale ambiguity, and occlusion handling. These limitations lead to suboptimal performance in complex environments, reducing model efficiency and generalization and hindering broader use in autonomous driving and other applications. To address these challenges, this paper introduces a Neural Radiance Field (NeRF)-based monocular depth estimation method for autonomous driving. It introduces a Gaussian probability-based ray sampling strategy to effectively solve the problem of massive sampling points in large complex scenes and reduce computational costs. To improve generalization, a lightweight spherical network incorporating a fine-grained adaptive channel attention mechanism is designed to capture detailed pixel-level features. These features are subsequently mapped to 3D spatial sampling locations, resulting in diverse and expressive point representations that improve the generalizability of the NeRF model. Our approach exhibits remarkable performance on the KITTI benchmark, surpassing traditional methods in depth estimation tasks. This work contributes significant technical advancements for practical monocular depth estimation in autonomous driving applications. Full article

27 pages, 20812 KB  
Article
A Lightweight Radar–Camera Fusion Deep Learning Model for Human Activity Recognition
by Minkyung Jeon and Sungmin Woo
Sensors 2026, 26(3), 894; https://doi.org/10.3390/s26030894 - 29 Jan 2026
Abstract
Human activity recognition in privacy-sensitive indoor environments requires sensing modalities that remain robust under illumination variation and background clutter while preserving user anonymity. To this end, this study proposes a lightweight radar–camera fusion deep learning model that integrates motion signatures from FMCW radar with coarse spatial cues from ultra-low-resolution camera frames. The radar stream is processed as a Range–Doppler–Time cube, where each frame is flattened and sequentially encoded using a Transformer-based temporal model to capture fine-grained micro-Doppler patterns. The visual stream employs a privacy-preserving 4×5-pixel camera input, from which a temporal sequence of difference frames is extracted and modeled with a dedicated camera Transformer encoder. The two modality-specific feature vectors—each representing the temporal dynamics of motion—are concatenated and passed through a lightweight fully connected classifier to predict human activity categories. A multimodal dataset of synchronized radar cubes and ultra-low-resolution camera sequences across 15 activity classes was constructed for evaluation. Experimental results show that the proposed fusion model achieves 98.74% classification accuracy, significantly outperforming single-modality baselines (single-radar and single-camera). Despite its performance, the entire model requires only 11 million floating-point operations (11 MFLOPs), making it highly efficient for deployment on embedded or edge devices. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)

14 pages, 3940 KB  
Article
A Low-Noise and High-Integration Readout IC with Pixel-Level Single-Ended CDS for Short-Wave Infrared Focal Plane Arrays
by Hongyi Wang, Songlei Huang, Zhenghua Peng, Song Jing, Runze Xia, Yu Chen, Panjie Dai and Jiaxiong Fang
Sensors 2026, 26(3), 847; https://doi.org/10.3390/s26030847 - 28 Jan 2026
Abstract
Improving sensitivity in short-wave infrared (SWIR) detection is crucial for low-signal applications, such as astronomy and hyperspectral imaging, which demand readout integrated circuits (ROICs) with minimal noise and high density. However, conventional differential pixels with correlated double sampling (CDS) are difficult to integrate due to spatial limitations. In order to tackle this issue, we propose a compact, pixel-level, single-ended charge-domain architecture. It integrates single-ended CDS within each pixel, guaranteeing compatibility with the integrate-while-read (IWR) mode while suppressing reset and 1/f noise. A capacitor reuse technique is also proposed to enable the integration capacitor to function as an auxiliary load, which optimizes the noise–area trade-off. Fabricated in 180 nm CMOS, our 1296 × 256 ROIC attains a noise floor of 0.50 mV (achieving a reduction of approximately 70% compared to conventional architectures under identical conditions), consumes under 200 mW, and operates at frequencies exceeding 200 Hz. It also exhibits great linearity (0.9999) and supports both integrate-then-read (ITR) mode and integrate-while-read (IWR) mode, while also providing a row-level gain selecting function. Validated at 15 μm pitch, this design provides an effective option for high-density SWIR systems. Full article
(This article belongs to the Section Electronic Sensors)

19 pages, 7297 KB  
Article
Single-Die-Level MEMS Post-Processing for Prototyping CMOS-Based Neural Probes Combined with Optical Fibers for Optogenetic Neuromodulation
by Gabor Orban, Alberto Perna, Matteo Vincenzi, Raffaele Adamo, Gian Nicola Angotzi, Luca Berdondini and João Filipe Ribeiro
Micromachines 2026, 17(2), 159; https://doi.org/10.3390/mi17020159 - 26 Jan 2026
Abstract
The integration of complementary metal–oxide–semiconductor (CMOS) and micro-electromechanical systems (MEMSs) technologies for miniaturized biosensor fabrication enables unprecedented spatiotemporal resolution in monitoring the bioelectrical activity of the nervous system. Wafer-level CMOS technology incurs high costs, but multi-project wafer (MPW) runs mitigate this by allowing multiple users to share a single wafer. Still, monolithic CMOS biosensors require specialized surface materials or device geometries incompatible with standard CMOS processes. Performing MEMS post-processing on the few square millimeters available in MPW dies remains a significant challenge. In this paper, we present a MEMS post-processing workflow tailored for CMOS dies that supports both surface material modification and layout shaping for intracortical biosensing applications. To address lithographic limitations on small substrates, we optimized spray-coating photolithography methods that suppress edge effects and enable reliable patterning and lift-off of diverse materials. We fabricated a needle-like, 512-channel simultaneous neural recording active pixel sensor (SiNAPS) technology based neural probe designed for integration with optical fibers for optogenetic studies. To mitigate photoelectric effects induced by light stimulation, we incorporated a photoelectric shield through simple modifications to the photolithography mask. Optical bench testing demonstrated >96% light-shielding effectiveness at 3 mW of light power applied directly to the probe electrodes. In vivo experiments confirmed the probe’s capability for high-resolution electrophysiological measurements. Full article
(This article belongs to the Special Issue CMOS-MEMS Fabrication Technologies and Devices, 2nd Edition)

17 pages, 2764 KB  
Article
Radiomics as a Decision Support Tool for Detecting Occult Periapical Lesions on Intraoral Radiographs
by Barbara Obuchowicz, Joanna Zarzecka, Marzena Jakubowska, Rafał Obuchowicz, Michał Strzelecki, Adam Piórkowski, Joanna Gołda, Karolina Nurzynska and Julia Lasek
J. Clin. Med. 2026, 15(3), 971; https://doi.org/10.3390/jcm15030971 - 25 Jan 2026
Abstract
Background: Periapical lesions are common consequences of pulp necrosis but may remain undetectable on conventional intraoral radiographs, becoming evident only on cone-beam computed tomography (CBCT). Improving lesion recognition on plain radiographs is therefore of high clinical relevance. Methods: This retrospective, single-center study analyzed 56 matched pairs of intraoral periapical radiographs (RVG) and CBCT scans. A total of 109 regions of interest (ROIs) were included, which were classified as CBCT-positive/RVG-negative (onlyCBCT, n = 64) or true negative (noLesion, n = 45). Radiomic texture features were extracted from circular ROIs on RVG images using PyRadiomics. Feature distributions were compared using Mann–Whitney U tests with false discovery rate correction, and classification was performed using a logistic regression model with nested cross-validation. Results: Forty-four radiomic texture features showed statistically significant differences between onlyCBCT and noLesion ROIs, predominantly with small to medium effect sizes. For a 40-pixel ROI radius, the classifier achieved a mean area under the ROC curve of 0.71, mean accuracy of 68%, and mean sensitivity of 73%. Smaller ROIs (20–40 pixels) yielded higher AUCs and substantially better accuracy than larger sampling regions (≥60 pixels). Conclusions: Quantifiable radiomic signatures of periapical pathology are present on conventional radiographs even when lesions are visually occult. Radiomics may serve as a complementary decision support tool for identifying CBCT-only periapical lesions in routine clinical imaging. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
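Feature screening above used Mann–Whitney U tests with false discovery rate correction. A minimal sketch of Benjamini–Hochberg FDR control over a vector of p-values (the standard step-up procedure; the authors' exact correction variant may differ):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR control: return a boolean mask of which
    hypotheses (e.g. per-feature Mann-Whitney U tests) are rejected
    at level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Step-up bound: compare the k-th smallest p-value to alpha * k / m.
    thresh = alpha * np.arange(1, m + 1) / m
    below = ranked <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank meeting the bound
        reject[order[: k + 1]] = True      # reject all smaller p-values too
    return reject
```

Unlike a Bonferroni correction, which controls the family-wise error rate, this controls the expected fraction of false discoveries, which is appropriate when screening dozens of radiomic features.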

9 pages, 1364 KB  
Communication
Multiband Infrared Photodetection Based on Colloidal Quantum Dot
by Yingying Xu, Xiaomeng Xue, Lixiong Wu, Zhikai Gan, Menglu Chen and Qun Hao
Photonics 2026, 13(1), 89; https://doi.org/10.3390/photonics13010089 - 20 Jan 2026
Abstract
Multispectral infrared detection plays a crucial role in advanced applications spanning environmental monitoring, military surveillance, and biomedical diagnostics, offering superior target identification accuracy compared to single-band imaging techniques. In this work, we synthesized four distinct bands of colloidal quantum dots (CQDs)—specifically, a cut-off of 1.3 µm with PbS CQDs and 1.8 µm, 2.6 µm, and 3.5 µm with HgTe CQDs—and employed them to construct planar multiband infrared photodetectors. The device exhibited a clear photoresponse at room temperature from 0.8 µm to 3.5 µm, with responsivity of 5.39 A/W and specific detectivity of 2.01 × 1011 Jones at 1.8 µm. This materials–device co-design strategy integrates wavelength-selective CQD synthesis with planar pixel-level patterning, providing a versatile pathway for developing low-cost, solution-processed, multiband infrared photodetectors. Full article
(This article belongs to the Special Issue New Perspectives in Micro-Nano Optical Design and Manufacturing)
