Search Results (259)

Search Parameters:
Keywords = perfect matching

28 pages, 16157 KB  
Article
A Robust Skeletonization Method for High-Density Fringe Patterns in Holographic Interferometry Based on Parametric Modeling and Strip Integration
by Sergey Lychev and Alexander Digilov
J. Imaging 2026, 12(2), 54; https://doi.org/10.3390/jimaging12020054 - 24 Jan 2026
Viewed by 38
Abstract
Accurate displacement field measurement by holographic interferometry requires robust analysis of high-density fringe patterns, which is hindered by speckle noise inherent in any interferogram, no matter how perfect. Conventional skeletonization methods, such as edge detection algorithms and active contour models, often fail under these conditions, producing fragmented and unreliable fringe contours. This paper presents a novel skeletonization procedure that simultaneously addresses three fundamental challenges: (1) topology preservation—by representing the fringe family within a physics-informed, finite-dimensional parametric subspace (e.g., Fourier-based contours), ensuring global smoothness, connectivity, and correct nesting of each fringe; (2) extreme noise robustness—through a robust strip integration functional that replaces noisy point sampling with Gaussian-weighted intensity averaging across a narrow strip, effectively suppressing speckle while yielding a smooth objective function suitable for gradient-based optimization; and (3) sub-pixel accuracy without phase extraction—leveraging continuous bicubic interpolation within a recursive quasi-optimization framework that exploits fringe similarity for precise and stable contour localization. The method’s performance is quantitatively validated on synthetic interferograms with controlled noise, demonstrating significantly lower error compared to baseline techniques. Practical utility is confirmed by successful processing of a real interferogram of a bent plate containing over 100 fringes, enabling precise displacement field reconstruction that closely matches independent theoretical modeling. The proposed procedure provides a reliable tool for processing challenging interferograms where traditional methods fail to deliver satisfactory results. Full article
(This article belongs to the Special Issue Image Segmentation: Trends and Challenges)
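The strip-integration idea above can be illustrated with a minimal sketch (an assumption-laden illustration, not the authors' implementation): intensity is averaged across a narrow band orthogonal to a candidate fringe contour with Gaussian weights, replacing noisy point sampling with a smooth objective. The function name, parameters, and use of SciPy's bicubic interpolation are illustrative choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def strip_integral(image, contour_xy, normals, half_width=3.0, n_samples=7, sigma=1.5):
    """Gaussian-weighted average of interferogram intensity across a narrow strip
    centred on a candidate fringe contour (illustrative sketch only).

    image      : 2D array of interferogram intensities
    contour_xy : (N, 2) array of (x, y) points along the candidate contour
    normals    : (N, 2) unit normals to the contour at those points
    """
    offsets = np.linspace(-half_width, half_width, n_samples)   # strip cross-section
    weights = np.exp(-0.5 * (offsets / sigma) ** 2)
    weights /= weights.sum()

    total = 0.0
    for w, d in zip(weights, offsets):
        pts = contour_xy + d * normals                          # shift along the normal
        # bicubic interpolation (order=3) gives sub-pixel sampling
        vals = map_coordinates(image, [pts[:, 1], pts[:, 0]], order=3, mode="nearest")
        total += w * vals.mean()
    return total  # smooth objective to optimise over the contour parameters

```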
24 pages, 1423 KB  
Article
Probing Threshold Behavior of Adaptive Cascaded Quantum Codes Under Variable Biased Noise for Practical Fault-Tolerant Quantum Computing
by Yongnan Chen, Zaixu Fan, Haopeng Wang, Cewen Tian and Hongyang Ma
Electronics 2026, 15(2), 436; https://doi.org/10.3390/electronics15020436 - 19 Jan 2026
Viewed by 83
Abstract
This paper proposes a resource-optimized cascaded quantum surface–repetition code architecture integrated with a Union-Find (UF)-enhanced hybrid decoder, which suppresses biased noise and improves the scalability of quantum error correction through synergistic inner–outer code collaboration. The hybrid architecture employs inner quantum repetition codes for local error suppression and outer rotated surface codes for topological robustness, reducing auxiliary qubits by 12.5% via shared stabilizers and compact lattice embedding. An optimized UF decoder employing path compression and adaptive cluster merging achieves near-linear time complexity O(n·α(n)), outperforming minimum-weight perfect matching (MWPM) decoders at O(n^2.5). Under Z-biased noise with bias ratio η = 10, simulations demonstrate a 28.2% error threshold, 2.6% higher than standard surface codes, and 15% lower logical error rates via dynamic boundary expansion. At code distance d = 7, resource savings reach 9.3% with a maximum relative error below 8.5%, fulfilling fault-tolerance criteria. The UF decoder exhibits a 38% threshold advantage over MWPM at low bias (η ≤ 10³) and 15% less degradation at high noise (p = 0.5), enabling scalable real-time decoding. This framework bridges theoretical thresholds with practical resource constraints, offering a noise-adaptive QEC solution for near-term quantum devices, including the photonic repetition-cat-qubit systems referenced in the paper's background. Full article
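The near-linear O(n·α(n)) complexity quoted above comes from the classic union–find (disjoint-set) structure with path compression and union by rank; the sketch below shows that core data structure on its own (it is not the paper's decoder, which additionally grows and merges syndrome clusters on the code lattice).

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank,
    the core of Union-Find decoders (illustrative sketch only)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # path compression: point every visited node at the root
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False            # already in the same cluster
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra        # union by rank keeps trees shallow
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```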

15 pages, 1028 KB  
Article
Who Am I? Eyebrow Follicles Minimize Donor-Derived DNA for Germline Testing After Hematopoietic Stem Cell Transplantation
by Matthias Mertens, Mona Sadlo, Jörn-Sven Kühl, Klaus Metzeler, Louisa Zschenderlein, Jeanett Edelmann, Claudia Lehmann, Sarah Thull, Mert Karakaya, Clara Velmans, Theresa Tumewu, Matthias Böhme, Christina Klötzer, Anne Weigert, Vladan Vucinic, Julia Hentschel and Mareike Mertens
Int. J. Mol. Sci. 2026, 27(2), 744; https://doi.org/10.3390/ijms27020744 - 12 Jan 2026
Viewed by 207
Abstract
Germline genetic testing plays a critical role in diagnosing inherited predispositions and increasingly guides therapeutic and surveillance choices—but becomes technically challenging after allogeneic hematopoietic stem cell transplantation (HSCT), when donor-derived DNA contaminates host tissues. To address this, we compared donor-derived DNA across three accessible tissues—buccal swab, nail, and eyebrow follicles—in recipients after hematopoietic stem cell transplantation using two orthogonal assays (34-SNP next-generation sequencing and a 27-marker short tandem repeat panel) and modeled clinical covariates that influence chimerism. Eyebrow follicles showed consistently low donor DNA (median 1% by NGS; 3% by STR) whereas buccal swabs and nails carried substantially higher donor fractions (+25 and +22 percentage points versus eyebrow, respectively; both p < 0.01). Across methods, STR yielded on average ≈6 percentage points higher donor fractions than NGS at low-level chimerism. Several transplant covariates correlated with chimerism: matched-related donors and a perfect HLA match (10/10) were each associated with lower donor DNA (≈12–14 and 15–20 percentage points, respectively); longer times since hematopoietic stem cell transplantation correlated with lower levels for nail samples, and donor–recipient sex match correlated with higher donor DNA (~7–8 percentage points). Even low-level chimerism can distort germline variant interpretation. We propose a pragmatic protocol for post-hematopoietic stem cell transplantation germline testing that prioritizes eyebrow follicles as the default tissue. An SNP-based quality control assay is used to flag unsafe donor fractions (≥ 5–10%) before comprehensive germline analysis, reducing the risk that chimeric donor DNA distorts germline variant interpretation. Full article
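As a rough illustration of how a donor fraction can be read off sequencing data (the marker selection and depth threshold below are hypothetical, not the study's 34-SNP assay): at markers where the recipient is homozygous reference and the donor homozygous alternate, the alternate-allele fraction approximates the donor-derived DNA fraction.

```python
def estimate_donor_fraction(allele_counts):
    """Estimate the donor-derived DNA fraction from informative SNPs.

    allele_counts : list of (ref_reads, alt_reads) at markers where the
                    recipient is hom-REF and the donor is hom-ALT
                    (hypothetical informative-marker selection).
    Returns the mean alternate-allele fraction as a chimerism proxy.
    """
    fractions = []
    for ref, alt in allele_counts:
        depth = ref + alt
        if depth >= 50:                      # assumed minimum depth for a stable estimate
            fractions.append(alt / depth)
    if not fractions:
        raise ValueError("no informative markers with sufficient coverage")
    return sum(fractions) / len(fractions)

# e.g. three informative markers at 2-3% ALT reads -> ~2.5% donor DNA
print(estimate_donor_fraction([(980, 20), (970, 30), (975, 25)]))
```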

33 pages, 1529 KB  
Article
An SQL Query Description Problem with AI Assistance for an SQL Programming Learning Assistant System
by Ni Wayan Wardani, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Zihao Zhu, I Nyoman Darma Kotama, Putu Sugiartawan and I Nyoman Agus Suarya Putra
Information 2026, 17(1), 65; https://doi.org/10.3390/info17010065 - 9 Jan 2026
Viewed by 303
Abstract
Today, relational databases are widely used in information systems, and SQL (Structured Query Language) is taught extensively in universities and professional schools worldwide as the language for managing and accessing their data. Previously, we studied a web-based programming learning assistant system (PLAS) to help novice students learn popular programming languages on their own by solving various types of exercises. For SQL programming, we implemented the grammar-concept understanding problem (GUP) and the comment insertion problem (CIP) as initial studies. In this paper, we propose the SQL Query Description Problem (SDP) as a new exercise type in which students describe the SQL query for a specified request in a MySQL database system. To reduce teachers' preparation workload, we integrate a generative AI-assisted SQL query generator that automatically produces a new SDP instance from a given dataset. An SDP instance consists of a table, a set of questions, and the corresponding queries. Answer correctness is determined by enhanced string matching against an answer module that includes multiple semantically equivalent canonical queries. For evaluation, we generated 11 SDP instances on basic topics using the generator and found that Gemini 3.0 Pro exhibited higher pedagogical consistency than ChatGPT-5.0, achieving perfect scores on the Sensibleness, Topicality, and Readiness metrics. We then assigned the generated instances to 32 undergraduate students at the Indonesian Institute of Business and Technology (INSTIKI). The results showed an average correct answer rate of 95.2% and a mean SUS score of 78, demonstrating strong initial student performance and system acceptance. Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
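A minimal sketch of the answer-checking idea (the normalisation rules and example queries are assumptions, not the system's actual matching module): the student's query is normalised and compared against several semantically equivalent canonical queries stored with the SDP instance.

```python
import re

def normalize_sql(query: str) -> str:
    """Light-weight SQL normalisation for string matching (illustrative only)."""
    q = query.strip().rstrip(";")
    q = re.sub(r"\s+", " ", q)          # collapse whitespace and newlines
    q = re.sub(r"\s*,\s*", ", ", q)     # uniform comma spacing
    return q.lower()

def is_correct(student_query: str, canonical_queries: list[str]) -> bool:
    """Accept the answer if it matches any stored equivalent canonical query."""
    normalized = normalize_sql(student_query)
    return any(normalized == normalize_sql(c) for c in canonical_queries)

canonical = [
    "SELECT name FROM employee WHERE salary > 50000;",
    "SELECT name FROM employee WHERE 50000 < salary;",
]
print(is_correct("select  name\nfrom employee where salary > 50000", canonical))  # True
```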

18 pages, 4821 KB  
Article
Automated Baseline-Correction and Signal-Detection Algorithms with Web-Based Implementation for Thermal Liquid Biopsy Data Analysis
by Karl C. Reger, Gabriela Schneider, Keegan T. Line, Alagammai Kaliappan, Robert Buscaglia and Nichola C. Garbett
Cancers 2026, 18(1), 60; https://doi.org/10.3390/cancers18010060 - 24 Dec 2025
Viewed by 380
Abstract
Background/Objectives: Differential scanning calorimetry (DSC) analysis of blood plasma, also known as thermal liquid biopsy (TLB), is a promising approach for disease detection and monitoring; however, its wider adoption in clinical settings has been hindered by labor-intensive data processing workflows, particularly baseline correction. Methods: We developed and tested two automated algorithms to address critical bottlenecks in TLB analysis: (1) a baseline-correction algorithm utilizing rolling-variance analysis for endpoint detection, and (2) a signal-detection algorithm that applies auto-regressive integrated moving average (ARIMA)-based stationarity testing to determine whether a profile contains interpretable thermal features. Both algorithms are implemented in ThermogramForge, an open-source R Shiny web application providing an end-to-end workflow for data upload, processing, and report generation. Results: The baseline-correction algorithm demonstrated excellent performance on plasma TLB data (characterized by high heat capacity), matching the quality of rigorous manual processing. However, its performance was less robust for low signal biofluids, such as urine, where weak thermal transitions reduce the reliability of baseline estimation. To address this, a complementary signal-detection algorithm was developed to screen for TLB profiles with discernable thermal transitions prior to baseline correction, enabling users to exclude non-informative data. The signal-detection algorithm achieved near-perfect classification accuracy for TLB profiles with well-defined thermal transitions and maintained a low false-positive rate of 3.1% for true noise profiles, with expected lower performance for borderline cases. The interactive review interface in ThermogramForge further supports quality control and expert refinement. Conclusions: The automated baseline-correction and signal-detection algorithms, together with their web-based implementation, substantially reduce analysis time while maintaining quality, supporting more efficient and reproducible TLB research. Full article
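The rolling-variance endpoint idea can be sketched as follows (window size, threshold factor, and function name are illustrative assumptions rather than the published algorithm's tuned values): the thermogram is scanned from each end, and the baseline region is taken to end where the rolling variance of the signal first exceeds a multiple of the edge noise level.

```python
import numpy as np

def find_baseline_endpoints(temperature, heat_capacity, window=25, factor=5.0):
    """Locate lower/upper baseline endpoints via rolling variance (sketch only).

    Scans from each end of the thermogram; an endpoint is the first index
    where the rolling variance exceeds `factor` times the edge noise level.
    """
    cp = np.asarray(heat_capacity, dtype=float)
    n = len(cp)
    roll_var = np.array([cp[max(0, i - window):i + window].var() for i in range(n)])

    noise = min(roll_var[:window].mean(), roll_var[-window:].mean())  # assumes flat edges
    threshold = factor * noise

    lower = next((i for i in range(n) if roll_var[i] > threshold), 0)
    upper = next((i for i in range(n - 1, -1, -1) if roll_var[i] > threshold), n - 1)
    return temperature[lower], temperature[upper]
```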

27 pages, 8990 KB  
Article
A Non-Embedding Watermarking Framework Using MSB-Driven Reference Mapping for Distortion-Free Medical Image Authentication
by Osama Ouda
Electronics 2026, 15(1), 7; https://doi.org/10.3390/electronics15010007 - 19 Dec 2025
Viewed by 302
Abstract
Ensuring the integrity of medical images is essential to securing clinical workflows, telemedicine platforms, and healthcare IoT environments. Existing watermarking and reversible data-hiding approaches often modify pixel intensities, reducing diagnostic fidelity, introducing embedding constraints, or causing instability under compression and format conversion. This work proposes a distortion-free, non-embedding authentication framework that leverages the inherent stability of the most significant bit (MSB) patterns in the Non-Region of Interest (NROI) to construct a secure and tamper-sensitive reference for the diagnostic Region of Interest (ROI). The ROI is partitioned into fixed blocks, each producing a 256-bit SHA-256 signature. Instead of embedding this signature, each hash bit is mapped to an NROI pixel whose MSB matches the corresponding bit value, and only the encrypted coordinates of these pixels are stored externally in a secure database. During verification, hashes are recomputed and compared bit-by-bit with the MSB sequence extracted from the referenced NROI coordinates, enabling precise block-level tamper localization without modifying the image. Extensive experiments conducted on MRI (OASIS), X-ray (ChestX-ray14), and CT (CT-ORG) datasets demonstrate the following: (i) perfect zero-distortion fidelity; (ii) stable and deterministic MSB-class mapping with abundant coordinate diversity; (iii) 100% detection of intentional ROI tampering with no false positives across the six clinically relevant manipulation types; and (iv) robustness to common benign Non-ROI operations. The results show that the proposed scheme offers a practical, secure, and computationally lightweight solution for medical image integrity verification in PACS systems, cloud-based archives, and healthcare IoT applications, while avoiding the limitations of embedding-based methods. Full article
(This article belongs to the Special Issue Advances in Cryptography and Image Encryption)
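A minimal sketch of the mapping step (data layout and helper names are assumptions; coordinate encryption and database storage are omitted): each ROI block is hashed with SHA-256, and every hash bit is paired with the coordinate of an NROI pixel whose most significant bit equals that bit.

```python
import hashlib
import numpy as np

def build_reference_map(roi_block: np.ndarray, nroi_pixels):
    """Map each bit of the block's SHA-256 digest to an NROI pixel whose MSB
    matches that bit (illustrative sketch; coordinate encryption omitted).

    roi_block   : 2D uint8 array (one fixed-size ROI block)
    nroi_pixels : list of ((row, col), value) pairs taken from the Non-ROI
    """
    digest = hashlib.sha256(roi_block.tobytes()).digest()
    bits = [(byte >> (7 - k)) & 1 for byte in digest for k in range(8)]  # 256 bits

    # split NROI coordinates by MSB class (MSB = bit 7 of an 8-bit pixel)
    pool = {0: [], 1: []}
    for coord, value in nroi_pixels:
        pool[(value >> 7) & 1].append(coord)

    reference = []
    for bit in bits:
        if not pool[bit]:
            raise ValueError("not enough NROI pixels in this MSB class")
        reference.append(pool[bit].pop())   # one matching coordinate per hash bit
    return reference                        # stored externally; image left unchanged
```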

18 pages, 842 KB  
Article
Model-Embedded Lightweight Network for Joint I/Q Imbalance and CFO Estimation in NB-IoT
by Yijun Ling and Yue Meng
Symmetry 2025, 17(12), 2157; https://doi.org/10.3390/sym17122157 - 15 Dec 2025
Viewed by 281
Abstract
Narrowband Internet of Things (NB-IoT) was designed as a key Low-Power Wide-Area Network technology when 5G networks were established. The ideal quadrature demodulation in NB-IoT relies on the fundamental symmetry between the in-phase (I) and quadrature (Q) branches, characterized by a perfect 90-degree phase shift and matched amplitude. However, practical hardware imperfections in mixers, filters, and ADCs break this symmetry, leading to I/Q imbalances. Moreover, I/Q imbalance is coupled with carrier frequency offset (CFO), which arises from asymmetry in the frequency of the transceiver oscillator. In this paper, we propose a model-embedded lightweight network for joint CFO and I/Q imbalance estimation for NB-IoT systems. An I/Q imbalance compensation model is embedded as a layer to connect two subnetworks, I/Q estimation network (IQENET) and CFO estimation network (CFOENET). By embedding the physical model, the network gains the capability to learn the features of coupling effects during the training process, as the image signals caused by I/Q imbalance are removed before CFO estimation. A phased training strategy is also proposed. In the first phase, the two subnetworks are pre-trained independently. In the second phase, they are fine-tuned jointly to deal with the coupling effects. Simulation results show that the proposed network achieves high estimation accuracy while maintaining low complexity. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Wireless Sensor Networks)
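The compensation layer rests on the standard baseband I/Q-imbalance model y = a·x + b·x*; the sketch below applies and then exactly inverts that model with known parameters (in practice the parameters are unknown and must be estimated, which is what the proposed network learns; the parameterisation and numerical values here are illustrative).

```python
import numpy as np

def apply_iq_imbalance(x, gain=1.05, phase_deg=3.0):
    """One common I/Q imbalance model: y = a*x + b*conj(x), with
    a = (1 + g*exp(j*phi))/2 and b = (1 - g*exp(j*phi))/2
    (g = 1, phi = 0 corresponds to a perfectly balanced front end)."""
    phi = np.deg2rad(phase_deg)
    a = 0.5 * (1 + gain * np.exp(1j * phi))
    b = 0.5 * (1 - gain * np.exp(1j * phi))
    return a * x + b * np.conj(x), (a, b)

def compensate_iq(y, a, b):
    """Exact inversion: x = (conj(a)*y - b*conj(y)) / (|a|^2 - |b|^2)."""
    return (np.conj(a) * y - b * np.conj(y)) / (abs(a) ** 2 - abs(b) ** 2)

x = np.exp(1j * 2 * np.pi * 0.01 * np.arange(64))   # clean complex tone
y, (a, b) = apply_iq_imbalance(x)
x_hat = compensate_iq(y, a, b)
print(np.max(np.abs(x_hat - x)))                    # ~1e-16: image signal removed
```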

12 pages, 2043 KB  
Article
On Vertex Magic 3-Regular Graphs with a Perfect Matching
by Tao-Ming Wang
Mathematics 2025, 13(24), 3969; https://doi.org/10.3390/math13243969 - 12 Dec 2025
Viewed by 552
Abstract
Let G = (V, E) be a finite simple graph with p = |V| vertices and q = |E| edges, without isolated vertices or isolated edges. A vertex magic total labeling is a bijection f from V ∪ E to the consecutive integers 1, 2, …, p+q with the property that, for every vertex u ∈ V, f(u) + Σ_{uv ∈ E} f(uv) = k for some magic constant k. The vertex magic total labeling is called E-super if, furthermore, f(E) = {1, 2, …, q}. A graph is called (E-super) vertex magic if it admits an (E-super) vertex magic total labeling. In this paper, we verify the existence of E-super vertex magic total labelings for a class of 3-regular graphs with a perfect matching, and we confirm the existence of such labelings for general regular graphs of odd degree containing particular classes of 3-factors, which yields both known and new examples. Note that Harary graphs are among the popular models used in communication networks. In 2012, G. Marimuthu and M. Balakrishnan conjectured that if n > 4, n ≡ 0 (mod 4), and m is odd, then the Harary graph H_{m,n} admits an E-super vertex magic labeling. Among other results, we verify this conjecture except for the single case m = 3 and n ≡ 4 (mod 8). Full article
(This article belongs to the Special Issue Graph Theory and Applications, 3rd Edition)
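The defining condition is straightforward to check by computer; the sketch below (a constructed toy example on the cycle C3, not a graph from the paper) verifies whether a given total labeling is vertex magic and E-super.

```python
def is_vertex_magic(vertices, edges, label):
    """Check a vertex magic total labeling.

    vertices : iterable of vertex names
    edges    : iterable of frozenset({u, v}) edges
    label    : dict mapping each vertex and each edge to a label in 1..p+q
    Returns (is_magic, is_E_super, magic_constant_or_None).
    """
    vertices, edges = list(vertices), list(edges)
    p, q = len(vertices), len(edges)
    assert sorted(label.values()) == list(range(1, p + q + 1)), "not a bijection onto 1..p+q"

    weights = {
        v: label[v] + sum(label[e] for e in edges if v in e)   # f(u) + sum of incident edge labels
        for v in vertices
    }
    values = set(weights.values())
    magic = len(values) == 1
    e_super = {label[e] for e in edges} == set(range(1, q + 1))
    return magic, magic and e_super, values.pop() if magic else None

# toy example: an E-super vertex magic labeling of the cycle C3 with magic constant 9
V = ["a", "b", "c"]
E = [frozenset("ab"), frozenset("bc"), frozenset("ca")]
f = {"a": 5, "b": 6, "c": 4, frozenset("ab"): 1, frozenset("bc"): 2, frozenset("ca"): 3}
print(is_vertex_magic(V, E, f))   # (True, True, 9)
```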

20 pages, 5100 KB  
Article
A Supervised Learning Approach for Accurate and Efficient Identification of Chikungunya Virus Lineages and Signature Mutations
by Miao Miao, Yameng Fan, Jiao Tan, Xiaobin Hu, Yonghong Ma, Guangdi Li and Ke Men
Biology 2025, 14(12), 1736; https://doi.org/10.3390/biology14121736 - 4 Dec 2025
Viewed by 515
Abstract
Chikungunya virus (CHIKV) poses a significant public health threat, and its continuous evolution necessitates high-resolution genomic surveillance. Current methods lack the speed and resolution to efficiently discriminate sub-lineages. To address this, we developed CHIKVGenotyper, an interpretable machine learning framework for high-resolution CHIKV lineage classification. This study leveraged a comprehensive dataset of 6886 CHIKV genome sequences, from which a high-quality set of 3014 sequences was established for model development. A hierarchical assignment pipeline that integrated a probability-based sequence matching model, machine learning refinement, and phylogenetic validation was developed to assign high-confidence labels across eight CHIKV lineages, thereby constructing a reliable dataset for subsequent analysis. Multiple machine learning models were trained and evaluated, with the optimal Random Forest model achieving near-perfect accuracy (F1-score: 99.53%) on high-coverage whole-genome test data and maintaining robust performance (F1-score: 96.50%) on an independent low-coverage set. The E2 glycoprotein alone yielded comparable accuracy (F1-score: 99.52%), highlighting its discriminative power. SHapley Additive exPlanations (SHAP) analysis identified key lineage-defining amino acid mutations, such as E1-K211E and E2-V264A, for the Indian Ocean Lineage, which were corroborated by established biological knowledge. This work provides an accurate, scalable, and interpretable tool for CHIKV molecular epidemiology, offering insights into viral evolution and aiding outbreak response. Full article
(This article belongs to the Section Bioinformatics)
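As a generic illustration of the classification step only (toy data, placeholder lineage names, and default hyperparameters, not the CHIKVGenotyper pipeline): aligned sequence positions are one-hot encoded and a Random Forest is scored with a macro F1.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

# toy stand-in: 200 "sequences", 50 aligned positions, nucleotides A/C/G/T/N
rng = np.random.default_rng(0)
seqs = rng.choice(list("ACGTN"), size=(200, 50))
labels = rng.choice(["ECSA", "IOL", "Asian", "WA"], size=200)   # placeholder lineages

X = OneHotEncoder(handle_unknown="ignore").fit_transform(seqs)  # one-hot per position
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
# on random toy data this hovers near chance; it only shows the plumbing
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```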

22 pages, 5508 KB  
Article
A Generative AI-Enhanced Robotic Desktop Automation Framework for Multi-System Nephrology Data Entry in Government Healthcare Platforms
by Sumalee Sangamuang, Perasuk Worragin, Kitti Puritat, Phichete Julrode and Kannikar Intawong
Technologies 2025, 13(12), 558; https://doi.org/10.3390/technologies13120558 - 29 Nov 2025
Viewed by 558
Abstract
This study introduces a Generative AI-Enhanced Robotic Data Automation (AI-ERDA) framework designed to improve accuracy, efficiency, and adaptability in healthcare data workflows. Conducted over a two-month, real-world experiment across three government health platforms—one web-based (NHSO) and two PC-based systems (CHi and TRT)—the study compared the performance of AI-ERDA against a conventional RDA system in terms of usability, automation accuracy, and resilience to user interface (UI) changes. Results demonstrated notable improvements in both usability and reliability. The AI-ERDA achieved a mean System Usability Scale (SUS) score of 80, compared with 68 for the traditional RDA, while Field Exact Match Accuracy increased by 1.8 percent in the web system and by 0.2 to 0.3 percent in the PC systems. During actual UI modifications, the AI-ERDA maintained near-perfect accuracy, with rapid self-correction within one day, whereas the baseline RDA required several days of manual reconfiguration and assistance from the development team to resolve issues. These findings indicate that generative and adaptive automation can effectively reduce manual workload, minimize downtime, and maintain high data integrity across heterogeneous systems. By integrating adaptive learning, semantic validation, and human-in-the-loop oversight, the AI-ERDA framework advances sustainable digital transformation and reinforces transparency, trust, and accountability in healthcare data management. Full article
(This article belongs to the Special Issue AI-Enabled Smart Healthcare Systems)

3845 KB  
Proceeding Paper
Exploring the Application of UAV-Multispectral Sensors for Proximal Imaging of Agricultural Crops
by Tarun Teja Kondraju, Rabi N. Sahoo, Selvaprakash Ramalingam, Rajan G. Rejith, Amrita Bhandari, Rajeev Ranjan and Devanakonda Venkata Sai Chakradhar Reddy
Eng. Proc. 2025, 118(1), 91; https://doi.org/10.3390/ECSA-12-26542 - 7 Nov 2025
Viewed by 169
Abstract
UAV-mounted multispectral sensors are widely used to study crop health. Using the same cameras to capture close-up images of crops can significantly improve crop health evaluations through multispectral technology. Unlike RGB cameras, which only detect visible light, these sensors also capture spectral bands in the red-edge and near-infrared (NIR) ranges, enabling early detection of diseases, pests, and deficiencies through the calculation of various spectral indices. In this work, the suitability of UAV multispectral sensors for close-proximity imaging of crops was studied. Images of plants were taken with a MicaSense RedEdge-MX from top and side views at a distance of 1 m. The camera has five sensors that independently capture blue, green, red, red-edge, and NIR light, and their slight misalignment produces a shift between bands that must be corrected before a proper layer stack can be created for further processing. The Oriented FAST and Rotated BRIEF (ORB) method was used to detect features in each image, and random sample consensus (RANSAC) was used during feature matching to find corresponding features between the slave images and the master image (the green band). The slave images were then warped onto the master image with the estimated homography. After alignment, the images were stacked and the alignment accuracy was visually checked using true-colour composites. The side-view images of the plants were well aligned, while the top-view images showed errors, particularly in pixels far from the centre. This study demonstrates that UAV-mounted multispectral sensors can capture close-range images of plants effectively, provided the plant is centred in the frame and occupies a relatively small area of the image. Full article
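The described band-to-band alignment maps onto a standard OpenCV pipeline; this sketch (assumed parameter values and 8-bit single-channel inputs, not the authors' exact settings) detects ORB features, matches them, estimates a homography with RANSAC, and warps a slave band onto the master (green) band.

```python
import cv2
import numpy as np

def align_band(master: np.ndarray, slave: np.ndarray, n_features=4000):
    """Warp a slave band onto the master band using ORB + RANSAC homography
    (illustrative sketch; inputs assumed to be 8-bit single-channel arrays)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_m, des_m = orb.detectAndCompute(master, None)
    kp_s, des_s = orb.detectAndCompute(slave, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Hamming for binary ORB descriptors
    matches = sorted(matcher.match(des_s, des_m), key=lambda m: m.distance)[:500]

    src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    h, w = master.shape[:2]
    return cv2.warpPerspective(slave, H, (w, h))                 # slave in master geometry

# e.g. stacked = np.dstack([green, align_band(green, red), align_band(green, nir)])
```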

12 pages, 3653 KB  
Proceeding Paper
CMOS-Compatible Narrow Bandpass MIM Metamaterial Absorbers for Spectrally Selective LWIR Thermal Sensors
by Moshe Avraham, Mikhail Klinov and Yael Nemirovsky
Eng. Proc. 2025, 118(1), 1; https://doi.org/10.3390/ECSA-12-26501 - 7 Nov 2025
Viewed by 201
Abstract
The growing demand for compact, low-power infrared (IR) sensors necessitates advanced solutions for on-chip spectral selectivity, particularly for integration with Thermal Metal-Oxide-Semiconductor (TMOS) devices. This paper investigates the design and analysis of CMOS-compatible metal–insulator–metal (MIM) metamaterial absorbers tailored for selective absorption in the long-wave infrared (LWIR) region. We present a design methodology based on an equivalent-circuit model, which provides intuitive physical insight into the absorption mechanism and significantly reduces computational cost compared to full-wave electromagnetic simulations. Our results demonstrate that the resonance wavelength of these absorbers can be precisely tuned across the LWIR spectrum by engineering the geometric parameters of the top metallic patterns and, critically, by optimizing the dielectric substrate's refractive index and thickness, which enables the small-period MIM absorber unit cells that are important for infrared thermal sensor pixels. In particular, the selection of silicon as the dielectric material, owing to its high refractive index and low losses, facilitates compact designs with high quality factors. The transmission-line model provides intuitive insight into how near-perfect absorption is achieved when the absorber's input impedance matches the free-space impedance. This work presents a new approach to the methodology of designing MIM absorbers in the mid-infrared and LWIR regions, utilizing the intuitive insights provided by equivalent-circuit modeling, and validates a highly efficient design approach for high-performance, spectrally selective MIM absorbers for LWIR radiation, paving the way for their monolithic integration with TMOS sensors to enable miniaturized, cost-effective, and functionally enhanced IR sensing systems. Full article
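The impedance-matching statement can be made concrete with the textbook relations for a ground-backed (zero-transmission) absorber, A = 1 − |Γ|² with Γ = (Z_in − Z₀)/(Z_in + Z₀); the input impedances below are illustrative values, not results from the paper.

```python
ETA_0 = 376.73                      # free-space impedance, ohms

def absorptance(z_in: complex) -> float:
    """Absorption of a ground-backed (no-transmission) absorber from its
    input impedance: A = 1 - |Gamma|^2, Gamma = (Zin - Z0) / (Zin + Z0)."""
    gamma = (z_in - ETA_0) / (z_in + ETA_0)
    return 1.0 - abs(gamma) ** 2

print(absorptance(376.73 + 0j))     # perfect match to free space -> A = 1.0
print(absorptance(250 + 80j))       # mismatched input impedance   -> A < 1
```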

16 pages, 1762 KB  
Article
Relationship Between Internal and External Load in Under-16 Soccer Players: Heart Rate, Rating of Perceived Exertion, and GPS-Derived Variables
by Krisztián Havanecz, Sándor Sáfár, Csaba Bartha, Bence Kopper, Tamás Horváth, Péter János Tóth, Gabriella P. Szabó, Zoltán Szalánczi and Gábor Géczi
Sports 2025, 13(11), 376; https://doi.org/10.3390/sports13110376 - 3 Nov 2025
Viewed by 1595
Abstract
Heart rate (HR) monitoring is a practical method for assessing internal load (IL). However, it remains unclear for which age groups HR is an appropriate indicator of IL given its relationship with external load (EL). Thus, this study aims to evaluate the relevance and applicability of HR monitoring by exploring the relationship between EL and IL among U16 soccer players. EL was measured using global positioning system (GPS) data, while IL was assessed through training impulse (TRIMP), Edwards' TRIMP (eTRIMP), HR exertion, rating of perceived exertion (RPE), and session-RPE (s-RPE). Nineteen (N = 19) male footballers from an elite football academy participated, with data collected from 50 training sessions and 11 matches. In the analysis of the training sessions, TRIMP demonstrated a near-perfect correlation with total distance (TD) (p < 0.001), and eTRIMP correlated strongly with TD (r = 0.82) and player load (PL) (r = 0.79). HR exertion also correlated significantly with TD, medium-speed running, decelerations, inertial movement analysis (IMA) events, and player load (p < 0.001). In matches, a large correlation was observed between TRIMP and TD (r = 0.73), while the strongest correlations were between RPE and s-RPE with TD and PL (p < 0.001). Furthermore, TD emerged as the best GPS-derived predictor of both TRIMP and HR exertion in training contexts. These findings provide evidence for the validity and usability of heart rate-based and RPE-based measures as indicators of IL in U16 soccer players. Future research should consider contextual factors when exploring the relationship between EL and IL. Full article
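For readers unfamiliar with the internal-load metrics, here is a minimal sketch of Banister's TRIMP and Edwards' TRIMP in their standard formulations (the heart-rate numbers are illustrative, not the study's data).

```python
import math

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max):
    """Banister TRIMP with the standard male coefficients 0.64 and 1.92."""
    dhr = (hr_avg - hr_rest) / (hr_max - hr_rest)        # fractional HR reserve
    return duration_min * dhr * 0.64 * math.exp(1.92 * dhr)

def edwards_trimp(minutes_in_zone):
    """Edwards TRIMP: minutes in five %HRmax zones (50-60 ... 90-100), weighted 1-5."""
    return sum(w * t for w, t in zip(range(1, 6), minutes_in_zone))

print(round(banister_trimp(90, 155, 60, 200), 1))        # a 90-min session example
print(edwards_trimp([10, 25, 30, 15, 5]))                # 10*1 + 25*2 + 30*3 + 15*4 + 5*5 = 235
```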

17 pages, 2738 KB  
Article
Success from the Spot: Insights into Penalty Performance in Elite Women’s Football
by Pablo Cidre-Fuentes, Alfonso Gutiérrez-Santiago and Iván Prieto-Lage
Appl. Sci. 2025, 15(21), 11678; https://doi.org/10.3390/app152111678 - 31 Oct 2025
Viewed by 870
Abstract
This study analyzed penalty kick performance in elite women’s football, focusing on contextual, situational, and technical factors across the Spanish Liga F and the English Women’s Super League during the 2022/23 to 2024/25 seasons. Using an observational methodology, 240 penalties were systematically coded according to twelve criteria and fifty-five categories. Chi-square goodness-of-fit and independence tests were applied to examine distributions and associations. The overall conversion rate was 80.4%, with 15.4% of attempts saved and 4.2% missed. Home teams achieved significantly higher success than away teams (85.6% vs. 72.7%), while penalties taken when leading were less effective (69.9%) compared with those executed while drawing or losing (>84%). Temporal effects were also observed, with lower effectiveness around halftime (71.8%). Laterality and goalkeeper actions showed no significant influence, although some league-specific tendencies were noted. Shot placement emerged as the strongest determinant of success, with upper and central zones achieving near-perfect results, whereas medium-height shots were least effective. These findings extend existing knowledge by providing the first longitudinal evidence from elite women’s domestic leagues in Spain and England. Practical implications include emphasizing accuracy toward optimal zones, reinforcing psychological preparation when leading, and addressing performance drops during specific match periods. Full article
(This article belongs to the Special Issue Advances in Sports Science and Biomechanics)
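The association tests reported above (e.g., venue versus penalty outcome) follow the standard chi-square test of independence; the contingency table below is a hypothetical illustration, not the study's data.

```python
from scipy.stats import chi2_contingency

# rows: home / away; columns: scored / not scored (hypothetical counts)
table = [[95, 16],
         [98, 31]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```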

18 pages, 2599 KB  
Article
Rapid FTIR Spectral Fingerprinting of Kidney Allograft Perfusion Fluids Distinguishes DCD from DBD Donors: A Pilot Machine Learning Study
by Luis Ramalhete, Rúben Araújo, Miguel Bigotte Vieira, Emanuel Vigia, Ana Pena, Sofia Carrelha, Anibal Ferreira and Cecília R. C. Calado
Metabolites 2025, 15(11), 702; https://doi.org/10.3390/metabo15110702 - 29 Oct 2025
Viewed by 585
Abstract
Background/Objectives: Rapid, objective phenotyping of donor kidneys is needed to support peri-implant decisions. Label-free Fourier-transform infrared (FTIR) spectroscopy of static cold-storage Celsior® perfusion fluid can discriminate kidneys recovered from donation after circulatory death (DCD) versus donation after brain death (DBD). Methods: Preservation solution from isolated kidney allografts (n = 10; 5 DCD/5 DBD) matched on demographics was analyzed in the Amide I and fingerprint regions. Several spectral preprocessing steps were applied, and feature extraction was based on the Fast Correlation-Based Filter. Support vector machines and Naïve Bayes were evaluated. Unsupervised structure was assessed based on cosine distance, multidimensional scaling, and hierarchical clustering. Two-dimensional correlation spectroscopy (2D-COS) was used to examine band co-variation. Results: Donor cohorts were well balanced, except for higher terminal serum creatinine in DCD. Quality metrics were comparable, indicating no systematic technical bias. In Amide I, derivatives improved classification, but performance remained modest (e.g., second derivative with feature selection yielded an area under the curve (AUC) of 0.88 and an accuracy of 0.90 for support vector machines; Naïve Bayes reached an AUC of 0.92 with an accuracy of 0.70). The fingerprint window was most informative. Naïve Bayes with second derivative plus feature selection identified bands at ~1202, ~1203, ~1342, and ~1413 cm−1 and achieved an AUC of 1.00 and an accuracy of 1.00. Unsupervised analyses showed coherent grouping in the fingerprint region, and 2D correlation maps indicated coordinated multi-band changes. Conclusions: Performance in this 10-sample pilot should be interpreted cautiously, as perfect leave-one-out cross-validation (LOOCV) estimates are vulnerable to overfitting. The findings are preliminary and hypothesis-generating, and they require confirmation in larger, multicenter cohorts with a pre-registered analysis pipeline and external validation. Full article
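A minimal sketch of the reported preprocessing-and-validation pattern, second-derivative spectra evaluated with a leave-one-out cross-validated classifier (synthetic spectra, an SVM stand-in, and arbitrary filter settings, not the study's pipeline):

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
spectra = rng.normal(size=(10, 450))          # 10 samples x 450 wavenumber points (toy data)
labels = np.array([0] * 5 + [1] * 5)          # 5 "DCD" vs 5 "DBD" placeholders

# Savitzky-Golay second derivative, a common FTIR preprocessing step
d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

scores = cross_val_score(SVC(kernel="linear"), d2, labels, cv=LeaveOneOut())
print("LOOCV accuracy:", scores.mean())       # near chance on random data, as expected
```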
