Search Results (19,946)

Search Parameters:
Keywords = similarity measurement

12 pages, 10952 KB  
Article
Therapeutic Outcomes of Fingolimod and Interferon Beta-1a in Relapsing–Remitting Multiple Sclerosis: A Real-World Study from Jordan
by Arwa Al Anber, Ola Abu Al Karsaneh, Dua Abuquteish, Osama Abdallah, Mohammad A. Issa, Mohammad Sa’adeh and Dena Kilani
Medicina 2026, 62(1), 203; https://doi.org/10.3390/medicina62010203 (registering DOI) - 18 Jan 2026
Abstract
Background and Objectives: Multiple sclerosis (MS) is a chronic autoimmune disease of the central nervous system with rising prevalence in the Middle East. Real-world comparative data on disease-modifying therapies from this region remain limited. This retrospective study compared the clinical outcomes and tolerability of fingolimod and interferon beta-1a (IFN-β1a) among patients with relapsing–remitting multiple sclerosis treated at a large public referral hospital in Jordan. Materials and Methods: All eligible RRMS patients received fingolimod or IFN-β1a at a single tertiary hospital. The annualized relapse rate (ARR), Expanded Disability Status Scale (EDSS) scores, and adverse effect frequencies were analyzed using descriptive and inferential statistics. A full-cohort inclusion approach was applied instead of sample-size calculation, as all available cases at Al-Basheer Hospital (Amman, Jordan) were included. Results: Fingolimod-treated patients showed a significantly higher ARR than those on IFN-β1a (0.51 vs. 0.26, p = 0.016), an association likely influenced by treatment sequencing and baseline disease activity. EDSS distributions were similar between treatment groups, with most patients demonstrating mild disability (EDSS ≤ 3.5). IFN-β1a was linked to injection site reactions, while fingolimod was better tolerated. Conclusions: The higher observed relapse rate among fingolimod-treated patients possibly reflects treatment sequencing and underlying disease severity rather than pharmacologic efficacy, as fingolimod was commonly prescribed as an escalation therapy. These findings highlight the importance of individualized treatment selection and underscore the need for prospective studies incorporating standardized baseline disease activity measures to better inform multiple sclerosis care in Jordan and the wider Middle Eastern region. Full article
(This article belongs to the Section Pharmacology)
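The headline outcome in this entry, the annualized relapse rate, is simply relapses per patient-year of follow-up. A minimal sketch with hypothetical counts (not the study's data; the function name is ours):

```python
def annualized_relapse_rate(total_relapses, total_followup_years):
    """Annualized relapse rate (ARR): relapses per patient-year of follow-up."""
    if total_followup_years <= 0:
        raise ValueError("follow-up time must be positive")
    return total_relapses / total_followup_years

# Hypothetical cohorts chosen to mirror the reported contrast:
# 46 relapses over 90 patient-years vs. 26 relapses over 100 patient-years.
print(round(annualized_relapse_rate(46, 90), 2))    # 0.51
print(round(annualized_relapse_rate(26, 100), 2))   # 0.26
```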

23 pages, 1505 KB  
Article
Loss Prediction and Global Sensitivity Analysis for Distribution Transformers Based on NRBO-Transformer-BiLSTM
by Qionglin Li, Yi Wang and Tao Mao
Electronics 2026, 15(2), 420; https://doi.org/10.3390/electronics15020420 (registering DOI) - 18 Jan 2026
Abstract
As distributed energy resources and nonlinear loads are integrated into power grids on a large scale, power quality issues have grown increasingly prominent, triggering a substantial rise in distribution transformer losses. Traditional approaches struggle to accurately forecast transformer losses under complex power quality conditions and lack quantitative analysis of the influence of various power quality indicators on losses. This study presents a data-driven methodology for transformer loss prediction and sensitivity analysis in such environments. First, an experimental platform is designed and built to measure transformer losses under composite power quality conditions, enabling the collection of actual measurement data when multi-source disturbances exist. Second, a high-precision loss prediction model—dubbed Newton-Raphson-Based Optimizer-Transformer-Bidirectional Long Short-Term Memory (NRBO-Transformer-BiLSTM)—is developed on the basis of an enhanced deep neural network. Finally, global sensitivity analysis methods are utilized to quantitatively evaluate the impact of different power quality indicators on transformer losses. Experimental results reveal that the proposed prediction model achieves an average error rate of less than 0.18% and a similarity coefficient of over 0.9989. Among all power quality indicators, voltage deviation has the most significant impact on transformer losses (with a sensitivity of 0.3268), followed by three-phase unbalance (sensitivity: 0.0109) and third harmonics (sensitivity: 0.0075). This research offers a theoretical foundation and technical support for enhancing the energy efficiency of distribution transformers and implementing effective power quality management. Full article
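The global sensitivity analysis step described above can be illustrated with a first-order Sobol index estimator (Saltelli pick-freeze scheme). This is a generic sketch on a toy additive model, not the paper's NRBO-Transformer-BiLSTM pipeline; the model and its coefficients are illustrative:

```python
import numpy as np

def sobol_first_order(model, n_inputs, n_samples=100_000, seed=0):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y), estimated
    with the Saltelli (2010) pick-freeze scheme: two independent sample
    matrices A and B, where AB_i equals A except column i taken from B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    y_a, y_b = model(A), model(B)
    var_y = np.var(np.concatenate([y_a, y_b]))
    s = np.empty(n_inputs)
    for i in range(n_inputs):
        AB = A.copy()
        AB[:, i] = B[:, i]          # freeze input i, resample the rest
        y_ab = model(AB)
        s[i] = np.mean(y_b * (y_ab - y_a)) / var_y
    return s

# Toy additive "loss" model: input 0 dominates, inputs 1 and 2 are weak,
# mirroring how one power quality indicator can dwarf the others.
toy = lambda X: 10.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2]
s = sobol_first_order(toy, 3)
```

For this additive model the indices are analytically 100/101.25, 1/101.25, and 0.25/101.25, so the estimate for input 0 lands near 0.99.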
30 pages, 1142 KB  
Article
Entropy and Normalization in MCDA: A Data-Driven Perspective on Ranking Stability
by Ewa Roszkowska
Entropy 2026, 28(1), 114; https://doi.org/10.3390/e28010114 (registering DOI) - 18 Jan 2026
Abstract
Normalization is a critical step in Multiple-Criteria Decision Analysis (MCDA) because it transforms heterogeneous criterion values into comparable information. This study examines normalization techniques through the lens of entropy, highlighting how criterion data structure shapes normalization behavior and ranking stability within TOPSIS (Technique for Order Preference by Similarity to Ideal Solution). Seven widely used normalization procedures are analyzed regarding mathematical properties, sensitivity to extreme values, treatment of benefit and cost criteria, and rank reversal. Normalization is treated as a source of uncertainty in MCDA outcomes, as different schemes can produce divergent rankings under identical decision settings. Shannon entropy is employed as a descriptive measure of information dispersion and structural uncertainty, capturing the heterogeneity and discriminatory potential of criteria rather than serving as a weighting mechanism. An illustrative experiment with ten alternatives and four criteria (two high-entropy, two low-entropy) demonstrates how entropy mediates normalization effects. Seven normalization schemes are examined, including vector, max, linear Sum, and max–min procedures. For vector, max, and linear sum, cost-type criteria are treated using either linear inversion or reciprocal transformation, whereas max–min is implemented as a single method. This design separates the choice of normalization form from the choice of cost-criteria transformation, allowing a cleaner identification of their respective contributions to ranking variability. The analysis shows that normalization choice alone can cause substantial differences in preference values and rankings. High-entropy criteria tend to yield stable rankings, whereas low-entropy criteria amplify sensitivity, especially with extreme or cost-type data. 
These findings position entropy as a key mediator linking data structure with normalization-induced ranking variability and highlight the need to consider entropy explicitly when selecting normalization procedures. Finally, a practical entropy-based method for choosing normalization techniques is introduced to enhance methodological transparency and ranking robustness in MCDA. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
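The interplay this entry describes, entropy as a descriptor of criterion dispersion and normalization as a source of ranking variability, can be sketched as follows. The decision matrix is invented for illustration and is not the paper's ten-alternative experiment:

```python
import numpy as np

def shannon_entropy(col):
    """Normalized Shannon entropy of a positive criterion column: values
    near 1 mean evenly spread scores, lower values mean concentration."""
    p = col / col.sum()
    return float(-np.sum(p * np.log(p)) / np.log(len(col)))

def vector_norm(col):
    """Vector (Euclidean) normalization."""
    return col / np.sqrt(np.sum(col ** 2))

def max_min_norm(col):
    """Max-min normalization onto [0, 1]."""
    return (col - col.min()) / (col.max() - col.min())

# Hypothetical 4 x 2 decision matrix: criterion 0 is evenly spread
# (high entropy); criterion 1 contains one extreme value (lower entropy),
# which is where the choice of normalization matters most.
X = np.array([[10.0, 1.0],
              [20.0, 1.1],
              [30.0, 1.2],
              [40.0, 9.0]])
e0 = shannon_entropy(X[:, 0])   # ≈ 0.923 (high entropy)
e1 = shannon_entropy(X[:, 1])   # ≈ 0.632 (extreme value lowers entropy)
```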
18 pages, 3490 KB  
Article
Research on Seafloor 3D Reconstruction Method Based on Sparse Measurement Points
by Erliang Xiao, Lang Qin, Zhipeng Chi, Haiqing Gu, Yunsong Hua, Hui Yang and Ran Li
Sensors 2026, 26(2), 639; https://doi.org/10.3390/s26020639 (registering DOI) - 18 Jan 2026
Abstract
Seafloor 3D reconstruction is a core technology for seafloor topography and deformation monitoring. Due to the complexity of the deep-sea environment and the high requirements for measurement devices, long-term monitoring can only acquire low-resolution and limited seafloor topography data. This leads to difficulties for existing 3D reconstruction algorithms in handling details and accuracy, especially with complex variations in seafloor terrain, which poses higher demands on 3D reconstruction algorithms. This study proposes a “fractal–Gaussian process” hybrid model, leveraging the fractal self-similarity property to precisely capture complex local details of the seafloor terrain, combined with the Bayesian global optimization ability of the Gaussian process model, to achieve high-resolution modeling of seafloor 3D reconstruction. Finally, Perlin noise is introduced to enhance the naturalness and detail representation of the terrain. Experiments show that under sparse data conditions, the proposed method significantly outperforms traditional interpolation methods, with average errors reduced by 30–40% and an R2 value of 0.9836. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
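The Gaussian-process half of the hybrid model above can be sketched with a plain RBF-kernel GP interpolating sparse depth samples. This is a generic numpy sketch under assumed hyperparameters (length scale 0.5, tiny noise), not the authors' fractal-GP implementation, and the "bathymetry" is synthetic:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    """Squared-exponential kernel between point sets A (n, d) and B (m, d)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * np.maximum(d2, 0.0) / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """GP posterior mean at X_test given sparse measurements (zero-mean prior)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf_kernel(X_test, X_train) @ np.linalg.solve(K, y_train)

# Synthetic smooth depth field sampled at 30 sparse (x, y) points.
rng = np.random.default_rng(1)
X = rng.random((30, 2))
depth = np.sin(3 * X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])
grid = rng.random((5, 2))        # query points for the reconstruction
pred = gp_predict(X, depth, grid)
```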

12 pages, 616 KB  
Article
The Role of Docosahexaenoic Acid in the Development of Preeclampsia and Perinatal Outcomes
by Nalan Kuruca, Senol Senturk, Ilknur Merve Ayazoglu, Medeni Arpa, Mehmet Kagıtcı, Sibel Dogan Polat and Bülent Yılmaz
Diagnostics 2026, 16(2), 305; https://doi.org/10.3390/diagnostics16020305 (registering DOI) - 17 Jan 2026
Abstract
Background/Objectives: Preeclampsia is a leading cause of maternal and perinatal morbidity worldwide, yet its underlying mechanisms remain unclear. Polyunsaturated fatty acids, particularly docosahexaenoic acid (DHA), are essential for placental development and vascular function, but evidence on their role in preeclampsia is inconsistent. This study aimed to compare serum DHA levels between women with preeclampsia and normotensive pregnant women and to examine their association with disease severity and maternal and perinatal outcomes. Methods: A total of 145 pregnant women aged 18–40 years were enrolled, including 47 with newly diagnosed preeclampsia (PE) and 98 normotensive controls. PE was defined according to the ACOG 2019 criteria. Serum DHA levels were measured using ELISA in fasting blood samples collected at the first visit. Results: Maternal serum DHA levels did not differ significantly between preeclampsia and control groups (p = 0.571); they were similar across control, mild PE, and severe PE groups. DHA showed a negative correlation with neutrophil-to-lymphocyte ratio (r = −0.305) and maternal hospitalization duration (r = −0.334). Independent predictors of PE included nulliparity (OR: 4.43), advanced age (OR: 1.14), elevated BMI (OR: 1.29), and low albumin (OR: 0.77). After adjusting for age and BMI, DHA was an independent negative predictor of IUGR (OR: 0.65). Conclusions: Placental and/or fetal DHA metabolism may be impaired in patients with preeclampsia. Although DHA was not associated with the development of PE, it was a negative predictor of IUGR. DHA may reduce the length of maternal hospital stay through its anti-inflammatory effect. Full article
37 pages, 1276 KB  
Review
Versatility of Transcranial Magnetic Stimulation: A Review of Diagnostic and Therapeutic Applications
by Massimo Pascuzzi, Nika Naeini, Adam Dorich, Marco D’Angelo, Jiwon Kim, Jean-Francois Nankoo, Naaz Desai and Robert Chen
Brain Sci. 2026, 16(1), 101; https://doi.org/10.3390/brainsci16010101 (registering DOI) - 17 Jan 2026
Abstract
Transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation technique that utilizes magnetic fields to induce cortical electric currents, enabling both the measurement and modulation of neuronal activity. Initially developed as a diagnostic tool, TMS now serves dual roles in clinical neurology, offering insight into neurophysiological dysfunctions and the therapeutic modulation of abnormal cortical excitability. This review examines key TMS outcome measures, including motor thresholds (MT), input–output (I/O) curves, cortical silent periods (CSP), and paired-pulse paradigms such as short-interval intracortical inhibition (SICI), short-interval intracortical facilitation (SICF), intracortical facilitation (ICF), long interval cortical inhibition (LICI), interhemispheric inhibition (IHI), and short-latency afferent inhibition (SAI). These biomarkers reflect underlying neurotransmitter systems and can aid in differentiating neurological conditions. Diagnostic applications of TMS are explored in Parkinson’s disease (PD), dystonia, essential tremor (ET), Alzheimer’s disease (AD), and mild cognitive impairment (MCI). Each condition displays characteristic neurophysiological profiles, highlighting the potential for TMS-derived biomarkers in early or differential diagnosis. Therapeutically, repetitive TMS (rTMS) has shown promise in modulating cortical circuits and improving motor and cognitive symptoms. High- and low-frequency stimulation protocols have demonstrated efficacy in PD, dystonia, ET, AD, and MCI, targeting the specific cortical regions implicated in each disorder. Moreover, the successful application of TMS in differentiating and treating AD and MCI underscores its clinical utility and translational potential across all neurodegenerative conditions. 
As research advances, increased attention and investment in TMS could facilitate similar diagnostic and therapeutic breakthroughs for other neurological disorders that currently lack robust tools for early detection and effective intervention. Moreover, this review also aims to underscore the importance of maintaining standardized TMS protocols. By highlighting inconsistencies and variability in outcomes across studies, we emphasize that careful methodological design is critical for ensuring the reproducibility, comparability, and reliable interpretation of TMS findings. In summary, this review emphasizes the value of TMS as a distinctive, non-invasive approach to probing brain function and highlights its considerable promise as both a diagnostic and therapeutic modality in neurology—roles that are often considered separately. Full article

15 pages, 1053 KB  
Article
Training and Competency Gaps for Shipping Decarbonization in the Era of Disruptive Technology: The Case of Panama
by Javier Eloy Diaz Jimenez, Eddie Blanco-Davis, Rosa Mary de la Campa Portela, Sean Loughney, Jin Wang and Ervin Vargas Wilson
Sustainability 2026, 18(2), 958; https://doi.org/10.3390/su18020958 (registering DOI) - 17 Jan 2026
Abstract
The maritime sector is undergoing a profound transformation driven by disruptive technologies and global decarbonization objectives, placing new demands on Maritime Education and Training (MET) systems. Equipping maritime professionals with competencies for low-carbon shipping is now as critical as technological advancement itself. This study examines how disruptive technologies can be effectively integrated into MET frameworks to support environmental sustainability, using Panama as a representative case study of a major flag and maritime service state. A mixed-methods approach was adopted, combining a structured literature review, expert surveys, and a multi-criteria decision-making analysis based on the Analytic Hierarchy Process (AHP). The findings reveal a significant misalignment between existing MET curricula and the competencies required for decarbonized maritime operations. Key gaps include limited training in alternative fuels, emissions measurement and reporting, energy-efficient technologies, digital analytics, and regulatory compliance. Stakeholders also reported fragmented training provision, uneven access to emerging technologies, and weak coordination between academia, industry, and regulators, particularly in developing contexts. The results highlight the urgent need for curriculum reform and stronger cross-sector collaboration to align MET with evolving technological and regulatory demands. The study provides an applied, evidence-based framework for MET reform, with insights transferable to other systems facing similar decarbonization challenges. Full article
(This article belongs to the Special Issue Sustainable Energy Systems and Renewable Generation—Second Edition)

24 pages, 12718 KB  
Article
Proposed Methodology for Correcting Fourier-Transform Infrared Spectroscopy Field-of-View Scene-Change Artifacts
by Kody A. Wilson, Michael L. Dexter, Benjamin F. Akers and Anthony L. Franz
Remote Sens. 2026, 18(2), 317; https://doi.org/10.3390/rs18020317 (registering DOI) - 17 Jan 2026
Abstract
Fourier-transform spectrometers are widely used for spectral measurements. Changes in the field of view during measurement introduce oscillations into the measured spectra known as scene-change artifacts. Field-of-view changes also introduce uncertainty about which target the measured spectrum represents. Though scene-change artifacts are often present in dynamic data, their significance is disputed in the current literature. This work presents a theoretical framework and experimental validation for scene-change artifacts. Field-of-view changes introduce variable interferogram offsets, which standard processing techniques assume are constant. The error between the interferogram offset and its estimate is Fourier-transformed, yielding scene-change artifacts, often confused with noise, in the calibrated spectrum. Previous theoretical models ignored the effect of the interferogram offset in generating SCAs, leading to an underestimation of the scene-change artifact significance. Smooth offset correction removes these artifacts by estimating the variable interferogram offset using locally weighted scatter-plot smoothing. Updating the interferogram offset estimate resulted in the same accuracy expected for static conditions. The resulting spectra resemble the zero path difference spectra, similar to earlier theoretical predictions. These results indicate that Fourier-transform spectroscopy accuracy with variable scenes can be significantly improved with minor modifications to data processing. Full article
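The "locally weighted scatter-plot smoothing" step named above can be sketched with a minimal LOWESS (tricube weights, local linear fits). The drifting-offset signal here is synthetic, and this is a generic smoother, not the authors' processing chain:

```python
import numpy as np

def lowess(x, y, frac=0.3):
    """Minimal LOWESS: at each point, fit a weighted local line to the
    nearest frac*n neighbors using tricube weights, then evaluate it."""
    n = len(x)
    k = max(2, int(frac * n))
    smoothed = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                      # k nearest neighbors
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube weights
        W = np.diag(w)
        A = np.stack([np.ones(k), x[idx]], axis=1)   # local linear design
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        smoothed[i] = beta[0] + beta[1] * x[i]
    return smoothed

# Synthetic drifting interferogram offset: slow linear trend plus fringes.
x = np.linspace(0, 1, 200)
offset = 0.5 * x + 0.02 * np.sin(40 * x)
est = lowess(x, offset, frac=0.2)   # recovers the slow trend, rejects fringes
```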

31 pages, 1363 KB  
Review
A Review of the Parameters Controlling Crack Growth in AM Steels and Its Implications for Limited-Life AM and CSAM Parts
by Rhys Jones, Andrew Ang, Nam Phan, Michael R. Brindza, Michael B. Nicholas, Chris Timbrell, Daren Peng and Ramesh Chandwani
Materials 2026, 19(2), 372; https://doi.org/10.3390/ma19020372 (registering DOI) - 16 Jan 2026
Abstract
This paper reviews the fracture mechanics parameters associated with the variability in the crack growth curves of forty-two different tests that range from additively manufactured (AM) steels to cold spray additively manufactured (CSAM) 316L steel. As a result of this review, it is found that, to a first approximation, the effects of different building processes and R-ratios on the relationship between ΔK and the crack growth rate (da/dN) can be captured by allowing for changes in the fatigue threshold and the apparent cyclic toughness in the Schwalbe crack driving force (Δκ). Whilst this observation, when taken in conjunction with similar findings for AM Ti-6Al-4V, Inconel 718, Inconel 625, and Boeing Space Intelligence and Weapon Systems (BSI&WS) laser powder bed (LPBF)-built Scalmalloy®, as well as for a range of CSAM pure metals, goes a long way in making the point, it is NOT a mathematical proof; it is merely empirical evidence. As a result, this review highlights that for AM and CSAM materials, it is advisable to plot the crack growth rate (da/dN) against both ΔK and Δκ. The observation that, for the AM and CSAM steels examined in this study, the da/dN versus Δκ curves are similar, when coupled with similar observations for a range of other AM materials, supports a prior study that suggested using fracture toughness measurements in conjunction with the flight load spectrum and the operational life requirement to guide the choice of the building process for AM Ti-6Al-4V parts. The observations outlined in this study, when taken together with related findings given in the open literature for AM Ti-6Al-4V, AM Inconel 718, AM Inconel 625, and BSI&WS LPBF-built Scalmalloy®, as well as for a range of CSAM-built pure metals, have implications for the implementation and certification of limited-life AM parts. Full article
20 pages, 16586 KB  
Article
A Deep Transfer Learning Framework for Speed-of-Sound Aberration Correction in Full-Ring Photoacoustic Tomography
by Jie Yin, Yingjie Feng, Qi Feng, Junjun He and Chao Tao
Sensors 2026, 26(2), 626; https://doi.org/10.3390/s26020626 (registering DOI) - 16 Jan 2026
Abstract
Speed-of-sound (SoS) heterogeneities introduce pronounced artifacts in full-ring photoacoustic tomography (PAT), degrading imaging accuracy and constraining its practical use. We introduce a transfer learning-based deep neural framework that couples an ImageNet-pretrained ResNet-50 encoder with a tailored deconvolutional decoder to perform end-to-end artifact correction on photoacoustic tomography reconstructions. We propose a two-phase curriculum learning protocol, initial pretraining on simulations with uniform SoS mismatches, followed by fine-tuning on spatially heterogeneous SoS fields, to improve generalization to complex aberrations. Evaluated on numerical models, physical phantom experiments and in vivo experiments, the framework provides substantial gains over conventional back-projection and U-Net baselines in mean squared error, structural similarity index measure, and Pearson correlation coefficient, while achieving an average inference time of 17 ms per frame. These results indicate that the proposed approach can reduce the sensitivity of full-ring PAT to SoS inhomogeneity and improve full-view reconstruction quality. Full article
(This article belongs to the Section Sensing and Imaging)
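The three reported metrics, mean squared error, structural similarity, and Pearson correlation, can be computed as below. The SSIM here is the single-window (global) variant rather than the usual windowed average, and the images are random test data rather than PAT reconstructions:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a - b) ** 2)

def pearson(a, b):
    """Pearson correlation coefficient over flattened pixel values."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def global_ssim(a, b, data_range=1.0):
    """Single-window SSIM (whole image as one window); the standard
    metric averages this quantity over small local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

# Random "reference" image and a noise-degraded copy.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + 0.1 * rng.standard_normal((64, 64))
```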

14 pages, 1436 KB  
Article
Triplane Left Atrial Reservoir Strain in Cardiac Amyloidosis: A Comparative Study with Rhythm-Matched Controls
by Marina Leitman, Vladimir Tyomkin and Shmuel Fuchs
Clin. Pract. 2026, 16(1), 17; https://doi.org/10.3390/clinpract16010017 - 16 Jan 2026
Abstract
Background: Cardiac amyloidosis is characterized by progressive myocardial and atrial infiltration, leading to atrial mechanical dysfunction, atrial fibrillation, and thromboembolic complications. Left atrial (LA) strain is an established marker of atrial function; however, data on triplane LA strain in cardiac amyloidosis are limited. Methods: We evaluated transthoracic echocardiographic examinations of 24 patients with cardiac amyloidosis and 24 age-, sex-, rhythm-, and ejection fraction-matched control subjects (9 with atrial fibrillation in each group). Among amyloidosis patients, 21 had transthyretin and 3 had light-chain cardiac amyloidosis. All examinations were performed during 2025. Triplane and biplane LA reservoir strain were assessed using speckle-tracking echocardiography. Two-way analysis of variance tested the effects of disease (amyloidosis vs. control) and rhythm (sinus rhythm vs. atrial fibrillation). Agreement between triplane and biplane measurements was evaluated using Pearson correlation and Bland–Altman analyses. Results: Triplane LA reservoir strain was significantly lower in patients with cardiac amyloidosis compared with controls (6.7 ± 2.7% vs. 16.2 ± 8.3%, p < 0.001). Even in sinus rhythm, amyloidosis patients demonstrated markedly impaired LA strain, with mean values similar to those observed in control subjects with atrial fibrillation. Two-way ANOVA revealed significant main effects of disease (F = 68.9, p < 0.0001) and rhythm (F = 45.0, p < 0.0001), as well as a significant disease–rhythm interaction (F = 26.5, p < 0.0001). Triplane and biplane LA strain showed strong correlation (r = 0.90, p < 0.0001) with good agreement. Reproducibility was excellent (intra-observer ICC = 0.97; inter-observer ICC = 0.94). Conclusions: Triplane LA reservoir strain is markedly reduced in cardiac amyloidosis and enables comprehensive visualization of atrial mechanical dysfunction. 
The technique demonstrates high reproducibility and strong agreement with biplane analysis, supporting its use as a complementary tool for characterizing amyloid atriopathy. Full article
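The Bland–Altman agreement analysis mentioned above reduces to a bias (mean difference) and 95% limits of agreement on paired differences. The paired strain readings below are invented for illustration, not the study's measurements:

```python
import numpy as np

def bland_altman(m1, m2):
    """Bland-Altman agreement between two measurement methods:
    returns (bias, lower limit, upper limit) with 95% limits = bias ± 1.96 SD."""
    diff = np.asarray(m1) - np.asarray(m2)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired LA reservoir strain readings (%): triplane vs. biplane.
tri = np.array([6.1, 7.0, 15.8, 16.5, 10.2, 12.4])
bi  = np.array([6.4, 7.3, 15.2, 17.0, 10.0, 12.9])
bias, lo, hi = bland_altman(tri, bi)
```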

24 pages, 1911 KB  
Article
Non-Destructive Detection of Heat Stress in Tobacco Plants Using Visible-Near-Infrared Spectroscopy and Aquaphotomics Approach
by Daniela Moyankova, Petya Stoykova, Antoniya Petrova, Nikolai K. Christov, Petya Veleva, Gergana Savova and Stefka Atanassova
AgriEngineering 2026, 8(1), 33; https://doi.org/10.3390/agriengineering8010033 - 16 Jan 2026
Abstract
Non-destructive estimation of high-temperature stress effects on tobacco plants is crucial for both scientific research and practical applications. The normalized difference vegetation index (NDVI), chlorophyll index, and spectra in the 900–1700 nm range of Burley, Oriental, and Virginia tobacco plants were measured under control and high-temperature stress conditions using portable instruments. NDVI and chlorophyll index measurements indicate that young leaves of all tobacco types are tolerant to high temperatures. In contrast, older leaves (the fifth leaf) showed increased sensitivity to heat stress: their chlorophyll content decreased by 40–60% after five days of stress, and by the seventh day the reduction reached 80% or more in all plants. The vegetative index of the fifth leaf also decreased on the seventh day of stress in all tobacco types. Differences in near-infrared spectra were observed between control, stressed, and recovered plants, as well as across stress days and among tobacco lines, with the most pronounced differences in the 1300–1500 nm range. This work provides the first aquaphotomics-based characterization of heat-induced changes in the molecular structure of water in tobacco leaves. Models for determining the number of days of high-temperature treatment from near-infrared spectra achieved a standard error of cross-validation (SECV) of 0.49 to 0.62 days. The total accuracy of Soft Independent Modeling of Class Analogy (SIMCA) classification models of control, stressed, and recovered plants ranged from 91.0 to 93.6% using leaf spectra from the first five days of high-temperature stress, and from 90.7 to 97.7% using spectra of only the fifth leaf. Similar accuracy was obtained using Partial Least Squares–Discriminant Analysis (PLS-DA).
Near-infrared spectroscopy and aquaphotomics can thus serve as a fast, non-destructive approach for early detection of stress and as an additional tool for investigating high-temperature tolerance in tobacco plants.
12 pages, 307 KB  
Article
Blockwise Exponential Covariance Modeling for High-Dimensional Portfolio Optimization
by Congying Fan and Jacquline Tham
Symmetry 2026, 18(1), 171; https://doi.org/10.3390/sym18010171 - 16 Jan 2026
Abstract
This paper introduces a new framework for high-dimensional covariance matrix estimation, the Blockwise Exponential Covariance Model (BECM), which extends the traditional block-partitioned representation to the log-covariance domain. By exploiting the block-preserving properties of the matrix logarithm and exponential transformations, the proposed model guarantees strict positive definiteness while substantially reducing the number of parameters to be estimated through a blockwise log-covariance parameterization, without imposing any rank constraint. Within each block, intra- and inter-group dependencies are parameterized through interpretable coefficients and kernel-based similarity measures of factor loadings, enabling a data-driven representation of nonlinear groupwise associations. Using monthly stock return data from the U.S. stock market, we conduct extensive rolling-window tests to evaluate the empirical performance of the BECM in minimum-variance portfolio construction. The results reveal three main findings. First, the BECM consistently outperforms the Canonical Block Representation Model (CBRM) and the naïve 1/N benchmark in terms of out-of-sample Sharpe ratios and risk-adjusted returns. Second, adaptive determination of the number of clusters through cross-validation effectively balances structural flexibility and estimation stability. Third, the model maintains numerical robustness under fine-grained partitions, avoiding the loss of positive definiteness common in high-dimensional covariance estimators. Overall, the BECM offers a theoretically grounded and empirically effective approach to modeling complex covariance structures in high-dimensional financial applications.
(This article belongs to the Section Mathematics)
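The abstract's core guarantee rests on two linear-algebra facts: the matrix exponential of any symmetric matrix is symmetric positive definite, and the exponential of a block-diagonal matrix preserves the block structure. A minimal sketch of that idea follows; the block sizes and random log-covariance entries are purely hypothetical, not the paper's estimator.

```python
import numpy as np
from scipy.linalg import expm, block_diag

# Hypothetical illustration: parameterize the covariance in the log domain,
# blockwise, then map back with the matrix exponential. The result is
# guaranteed symmetric positive definite and keeps the block structure.
rng = np.random.default_rng(0)

def random_symmetric(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2

# Log-covariance blocks for three asset groups (sizes are illustrative).
log_blocks = [random_symmetric(k) for k in (4, 3, 5)]
log_cov = block_diag(*log_blocks)   # blockwise log-covariance (12 x 12)
cov = expm(log_cov)                 # eigenvalues exp(lambda_i) > 0 always

eigvals = np.linalg.eigvalsh(cov)
assert np.all(eigvals > 0)          # strict positive definiteness
# Off-diagonal blocks remain zero: expm preserves block diagonality.
assert np.allclose(cov[:4, 4:], 0.0)
```

Because positive definiteness holds for any symmetric log-domain parameter values, no eigenvalue clipping or shrinkage repair is needed, which is the property the third finding above relies on.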
12 pages, 271 KB  
Article
Assessment of Eating Behavior and Genetic Risk Factors for Metabolic Syndrome
by Ainur Turmanbayeva, Karlygash Sadykova, Gulnaz Nuskabayeva, Ainash Oshibayeva, Ugilzhan Tatykayeva, Yusuf Ozkul, Dinara Azizkhojayeva, Dilbar Aidarbekova, Dinara Nemetova, Dana Kaldarkhan, Bibigul Tastemirova and Kanatzhan Kemelbekov
J. Clin. Med. 2026, 15(2), 739; https://doi.org/10.3390/jcm15020739 - 16 Jan 2026
Abstract
Background: Metabolic syndrome (MetS) is influenced by behavioral and genetic factors, yet evidence on eating behavior patterns and related genetic polymorphisms in Central Asian populations remains limited. Aim: The aim of this study was to assess eating behaviors among adults with and without MetS and evaluate their associations with clinical indicators and ADIPOQ rs266729 and MC4R rs17782313 variants. Methods: A cross-sectional study of 200 adults (115 non-MetS, 85 MetS) was conducted using the Dutch Eating Behavior Questionnaire (DEBQ), standardized clinical measurements, and PCR-RFLP genotyping. Results: Participants with MetS were older than non-MetS adults (52 vs. 47 years; p = 0.004) and had substantially higher systolic blood pressure (126 vs. 114 mmHg; p < 0.001), diastolic blood pressure (83 vs. 74 mmHg; p < 0.001), and BMI (32.2 vs. 25.9 kg/m²; p < 0.001). Waist circumference, hip circumference, triglycerides, total cholesterol, and LDL were also significantly higher, while HDL was lower (1.13 ± 0.40 vs. 1.58 ± 1.50 mmol/L; p = 0.008). DEBQ restrained, emotional, and external eating scores showed no differences between groups (all p > 0.05). Eating behavior distribution was similar (p = 0.291). ADIPOQ genotypes (CC/CG/GG) did not differ by MetS status (p = 0.227), nor did MC4R variants (p = 0.679). Among MetS participants, clinical indicators did not vary across eating behavior categories, and no associations were observed between eating behavior and either polymorphism. Conclusions: Despite clear clinical and metabolic differences between the MetS and non-MetS groups, neither eating behavior patterns nor ADIPOQ and MC4R variants were associated with metabolic measures within the MetS group.
(This article belongs to the Section Endocrinology & Metabolism)
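The genotype-by-status comparisons reported above (e.g. p = 0.227 for ADIPOQ CC/CG/GG) are the kind of result a chi-square test of independence on a contingency table produces. A minimal sketch follows; the counts below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical sketch: do genotype frequencies (CC/CG/GG) differ by MetS
# status? Rows are groups, columns are genotypes; counts are invented.
counts = np.array([
    [50, 45, 20],   # non-MetS: CC, CG, GG
    [35, 35, 15],   # MetS:     CC, CG, GG
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With a 2 × 3 table the test has (2−1)(3−1) = 2 degrees of freedom; a p-value above the chosen threshold, as in the study, means the genotype distribution cannot be distinguished between groups.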
13 pages, 2486 KB  
Article
Influence of Density, Temperature, and Moisture Content on the Dielectric Properties of Pedunculate Oak (Quercus robur L.)
by Dario Pervan, Stjepan Pervan, Miljenko Klarić, Jure Žigon and Aleš Straže
Forests 2026, 17(1), 120; https://doi.org/10.3390/f17010120 - 15 Jan 2026
Abstract
This study examines the effects of temperature, relative humidity, moisture content, and density on the dielectric constant (ε′) and dielectric loss tangent (tan δ) of oak wood lamellae within a frequency range of 0.079 MHz to 25.1 MHz. The hypothesis tested was that increased temperature and moisture content enhance both dielectric polarization and loss, while density acts as a dominant structural determinant of dielectric behaviour. Oak lamellae were conditioned above saturated salt solutions at 20 °C and measured using an Agilent 4285A LCR meter according to ASTM D150-22. Multiple linear regression was used to demonstrate the statistically significant influence of temperature, relative humidity, moisture content, and density on the tested electrical properties of the lamellae. The results showed that the dielectric properties increase with higher sample density and higher air humidity. Temperature also had an influence, but it was considerably smaller, though still statistically significant (p < 0.05). Changes in dielectric properties were most pronounced at frequencies below 1 MHz, suggesting that dipolar and interfacial polarization are stronger at lower frequencies. These findings provide a basis for optimizing high-frequency dielectric heating prior to bending of oak and similar hardwoods.
(This article belongs to the Section Wood Science and Forest Products)
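The multiple-linear-regression approach described above can be sketched with ordinary least squares on one response and four predictors. Everything below is synthetic: the "true" coefficients, predictor ranges, and noise level are assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Hypothetical sketch: regress a dielectric property on temperature,
# relative humidity, moisture content, and density via least squares.
rng = np.random.default_rng(2)
n = 120
temperature = rng.uniform(10, 40, n)    # degrees C
humidity = rng.uniform(30, 90, n)       # % relative humidity
moisture = rng.uniform(6, 20, n)        # % moisture content
density = rng.uniform(550, 750, n)      # kg/m^3

# Assumed "true" linear model plus measurement noise (illustrative only).
eps = (1.5 + 0.01 * temperature + 0.02 * humidity
       + 0.05 * moisture + 0.002 * density
       + rng.normal(0, 0.05, n))

# Design matrix with an intercept column; solve by ordinary least squares.
X = np.column_stack([np.ones(n), temperature, humidity, moisture, density])
beta, *_ = np.linalg.lstsq(X, eps, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((eps - pred) ** 2) / np.sum((eps - eps.mean()) ** 2)
print("coefficients:", np.round(beta, 4))
print(f"R^2 = {r2:.3f}")
```

In practice one would also report per-coefficient p-values (e.g. via statsmodels OLS) to show which predictors are significant, mirroring the p < 0.05 result quoted for temperature.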