
Sensors

Sensors is an international, peer-reviewed, open access journal on the science and technology of sensors, published semimonthly online by MDPI. 
Indexed in PubMed | Quartile Ranking JCR - Q2 (Instruments and Instrumentation | Chemistry, Analytical | Engineering, Electrical and Electronic)

All Articles (75,869)

Recent Progress in Silicon-Based On-Chip Integrated Infrared Photodetectors

Yu He, Hongling Peng, Qiandong Zhuang, and 5 more authors

Infrared (IR) photodetectors are indispensable to modern optoelectronic systems, with applications ranging from night-vision imaging, surveillance, and industrial process control to environmental monitoring and medical diagnostics. However, traditional detectors based on bulk semiconductors are constrained by prohibitive fabrication costs and the stringent requirement for bulky cryogenic cooling, which severely hinders their widespread deployment in Size, Weight, and Power (SWaP)-sensitive scenarios. Silicon-based on-chip integration, leveraging compatibility with mature CMOS processes, has emerged as a transformative paradigm. It enables fully functional photonic integrated circuits (PICs) capable of on-chip sensing and high-speed data transmission, offering a pathway toward miniaturized, cost-effective architectures. This article reviews recent progress in silicon-based infrared photodetectors across three core material systems: Group IV (Ge/GeSn), III–V compounds, and two-dimensional (2D) materials. Finally, we offer an outlook on the development trends of next-generation intelligent sensing systems driven by optoelectronic convergence.

9 February 2026

(a) Structure of the U-shaped electrode PDs. (b) Measured eye diagrams. (c) I–V characteristics of the two kinds of PDs. (d) Photocurrent with the input optical power at −1 V. (e) Normalized S21 parameters of the PDs. Reproduced with permission from Ref. [32].
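The quantities in panels (d) and (e) map onto two standard photodetector figures of merit: responsivity (photocurrent per unit optical power) and the 3 dB opto-electrical bandwidth read off the normalized S21 curve. The short Python sketch below illustrates both calculations on synthetic values; the photocurrent, optical power, and roll-off corner are illustrative assumptions, not data from Ref. [32].

```python
import numpy as np

# Illustrative values only; the actual measurements are reported in Ref. [32].
photocurrent_a = 0.85e-3            # photocurrent at -1 V bias (A), assumed
optical_power_w = 1.0e-3            # input optical power (W), assumed

# Responsivity R = I_ph / P_opt.
responsivity = photocurrent_a / optical_power_w
print(f"Responsivity: {responsivity:.2f} A/W")

# 3 dB bandwidth: frequency where the normalized S21 response falls to -3 dB.
# A first-order roll-off with an assumed 40 GHz corner models the curve shape.
freq_ghz = np.linspace(0.1, 60.0, 600)
s21_db = 10.0 * np.log10(1.0 / (1.0 + (freq_ghz / 40.0) ** 2))
bw_3db = freq_ghz[np.argmax(s21_db <= -3.0)]
print(f"Extracted 3 dB bandwidth: {bw_3db:.1f} GHz")
```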

Individual-specific steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCIs) rely on individual calibration data and therefore achieve satisfactory performance. However, existing individual-specific SSVEP-BCIs employ generalized channels and task-related subspaces, which severely limit their potential advantages and lead to suboptimal solutions. In this study, AS-TRCA was proposed to develop a purely individual-specific SSVEP-BCI by fully exploiting individual-specific knowledge. AS-TRCA involves optimal channel learning and selection (OCLS) as well as optimal subspace selection (OSS). OCLS picks the optimal subject-specific channels by employing sparse learning with spatial distance constraints, while OSS adaptively determines the appropriate number of optimal subject-specific task-related subspaces by maximizing profile likelihood. Extensive experimental results demonstrate that, unlike traditional methods, AS-TRCA acquires meaningful channels and determines the proper number of task-related subspaces for each subject. Furthermore, combining AS-TRCA with existing advanced calibration-based SSVEP decoding methods, including deep learning methods, to establish a purely individual-specific SSVEP-BCI further enhances their decoding performance. Specifically, AS-TRCA improved the average accuracy of TRCA by 7.21%, SSCOR by 7.61%, TRCA-R by 6.58%, msTRCA by 7.70%, scTRCA by 4.47%, TDCA by 2.91%, and bi-SiamCA by 3.23%. AS-TRCA is promising for further advancing the performance of SSVEP-BCIs and promoting their practical applications.
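For readers unfamiliar with the TRCA family that AS-TRCA extends, the sketch below shows the core spatial-filter computation: maximizing inter-trial covariance via a generalized eigenproblem. It implements only plain TRCA on random placeholder epochs, not the OCLS channel selection or OSS subspace selection proposed in the paper; the array shapes and the `trca_filter` helper are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def trca_filter(trials: np.ndarray) -> np.ndarray:
    """Plain TRCA spatial filter for one stimulus class.

    trials: (n_trials, n_channels, n_samples) EEG epochs.
    Returns the leading spatial filter w of shape (n_channels,).
    """
    n_trials, n_channels, _ = trials.shape
    trials = trials - trials.mean(axis=2, keepdims=True)   # center each channel

    # Q: covariance of all trials concatenated in time.
    concat = trials.transpose(1, 0, 2).reshape(n_channels, -1)
    Q = concat @ concat.T

    # S: sum of cross-trial covariances, the quantity TRCA maximizes:
    # sum_{i != j} X_i X_j^T = (sum_i X_i)(sum_i X_i)^T - sum_i X_i X_i^T
    total = trials.sum(axis=0)
    S = total @ total.T - sum(x @ x.T for x in trials)

    # Generalized eigenproblem S w = lambda Q w; eigh sorts eigenvalues
    # ascending, so the last eigenvector maximizes the covariance ratio.
    _, eigvecs = eigh(S, Q)
    return eigvecs[:, -1]

# Hypothetical usage with random placeholders for recorded SSVEP epochs.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((6, 9, 250))       # 6 trials, 9 channels, 1 s @ 250 Hz
w = trca_filter(epochs)
components = np.einsum("c,tcs->ts", w, epochs)  # one task-related component per trial
```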

9 February 2026

Maximum classification accuracy in the grid search at a signal length of 0.5 s: (a,c) show the results on the benchmark dataset, while (b,d) present the results on the BETA dataset. (a,b) correspond to channel selection parameter optimization, while (c,d) show subspace feature fusion parameter optimization. The blue circle marks the location of the maximum classification accuracy.

Conventional subtractive manufacturing methods, such as cutting, often result in material waste and limitations in geometric complexity. To address these challenges, Wire Arc Additive Manufacturing (WAAM), in which components are built through successive weld bead deposition, has attracted increasing attention across various industrial fields. However, WAAM-fabricated components typically exhibit significant surface irregularities, necessitating additional post-processing that reduces overall productivity. Improving productivity therefore requires effective control and optimization of deposition parameters. This task is particularly challenging in multilayer WAAM processes, as the geometry of previously deposited layers varies with operating conditions. To address this challenge, this study proposes an AI-based framework for controlling surface roughness by rapidly identifying near-optimal process parameters in response to evolving bead geometry. A large-scale simulation dataset was generated by applying a pre-trained deep neural network (DNN) surface roughness predictor to one million bead geometry variations under 72 process parameter combinations. The resulting optimal parameter labels were used to train a classification model that recommends process conditions based on the current bead geometry. Model performance was evaluated using predictor-estimated surface roughness values, achieving Weighted Precision, Recall, and F1-score of 0.98, with an average AUC of 0.977. Five previously generated WAAM specimens were used to compare AI-recommended and conventional process conditions; the comparison relied on the previously developed and validated surface roughness prediction model rather than direct physical measurements. This predictor-based feasibility analysis showed that the AI-recommended conditions consistently reduced the predicted surface roughness, indicating the potential of AI-driven process optimization to improve surface quality in WAAM and reduce reliance on post-processing.
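The labeling-then-classification workflow described above can be sketched compactly. In the toy Python example below, a synthetic linear surrogate stands in for the pre-trained DNN roughness predictor, small random vectors stand in for the one million bead geometries, and a random forest stands in for the paper's classification model (whose architecture is not specified here); only the overall workflow of labeling each geometry with its roughness-minimizing parameter set and then training a recommender follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical stand-ins: random bead geometries replace the paper's one
# million simulated variations (72 parameter combinations kept as stated).
N_GEOM, N_PARAMS = 20_000, 72
geometries = rng.uniform(size=(N_GEOM, 4))        # e.g., bead height/width/overlap/tilt
param_grid = rng.uniform(size=(N_PARAMS, 3))      # e.g., current, travel speed, wire feed
w_geom, w_par = rng.standard_normal(4), rng.standard_normal(3)

def predicted_roughness(geom, params):
    """Toy linear surrogate for the pre-trained DNN roughness predictor."""
    return np.abs(geom @ w_geom + params @ w_par)

# Label every geometry with the parameter combination that minimizes the
# predicted roughness, mirroring the labeling step described in the abstract.
scores = predicted_roughness(geometries[:, None, :], param_grid[None, :, :])
labels = scores.argmin(axis=1)                    # class index in [0, 72)

# Train a classifier to recommend process conditions from bead geometry.
X_tr, X_te, y_tr, y_te = train_test_split(geometries, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Weighted F1:", round(f1_score(y_te, clf.predict(X_te), average="weighted"), 3))
```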

9 February 2026

(a) WAAM product surface between the second and third layers. (b) Reconstruction of the WAAM product between the second and third layers using a CMM [7].

To address the challenges of fire detection in vision-based sensor networks, this study proposes a novel detection framework combining multi-feature fusion with an Adaptive Support Vector Machine (A-SVM). First, a high-dimensional feature vector is constructed by fusing HSI color-space statistics, Local Binary Pattern (LBP) dynamic textures, and wavelet-transform shape features; a baseline SVM classifier is then trained on source-domain data. Second, to overcome the difficulty of acquiring labeled samples in target domains (e.g., strong daytime interference or low nighttime illumination), a small-sample domain adaptation mechanism is introduced that fine-tunes the source model parameters via regularization constraints using only a few labeled target-domain samples. Experimental results demonstrate that, compared with traditional color-thresholding methods and unadapted baseline SVMs, the proposed method increases the F1-score by 19% and 30% in typical daytime and nighttime cross-domain scenarios, respectively. The method achieves low-cost, high-precision, and robust cross-scenario fire detection, making it well suited for deployment on resource-constrained edge computing nodes within smart sensor networks.
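A minimal sketch of the feature-fusion front end and few-shot adaptation loop appears below, under several assumptions: HSV statistics approximate the paper's HSI color features, a uniform-LBP histogram and Haar wavelet sub-band magnitudes stand in for the dynamic texture and shape features, and hinge-loss SGD with continued training on target samples is only a loose analog of the A-SVM's regularized fine-tuning, whose exact constraint is not given here.

```python
import numpy as np
import pywt
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.linear_model import SGDClassifier

def fused_features(img_rgb: np.ndarray) -> np.ndarray:
    """Fuse color, texture, and shape cues from one uint8 RGB frame.

    Hypothetical recipe: HSV statistics approximate the HSI color features.
    """
    hsv = rgb2hsv(img_rgb)
    color = np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])  # 6 dims

    gray = img_rgb.mean(axis=2).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")           # values 0..9
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)   # 10 dims

    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")               # Haar sub-bands
    shape = np.array([np.abs(c).mean() for c in (cA, cH, cV, cD)])         # 4 dims

    return np.concatenate([color, texture, shape])                         # 20-dim vector

# Source-domain training: a linear SVM via hinge-loss SGD, chosen so the model
# can later be updated incrementally (placeholder features used throughout).
rng = np.random.default_rng(1)
X_src, y_src = rng.random((200, 20)), rng.integers(0, 2, 200)
svm = SGDClassifier(loss="hinge", alpha=1e-3, random_state=0).fit(X_src, y_src)

# Few-shot target-domain adaptation: continue training on a handful of labeled
# target samples; the L2 penalty (alpha) is the regularizer that keeps the
# adapted weights from drifting to large-norm solutions.
X_tgt, y_tgt = rng.random((10, 20)), rng.integers(0, 2, 10)
for _ in range(20):
    svm.partial_fit(X_tgt, y_tgt)
```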

9 February 2026

The overall signal processing architecture of the proposed visual-sensor-based fire detection framework, incorporating multi-feature fusion and the A-SVM domain adaptation mechanism. The green, yellow, and purple blocks represent the extraction of color, texture, and shape features, respectively; the blue block denotes the pre-trained source domain model, and the red block indicates the target domain model being adapted.

Sensors - ISSN 1424-8220