Search Results (4,007)

Search Parameters:
Keywords = model-checking

18 pages, 16946 KB  
Article
Layer-Stripping Velocity Analysis Method for GPR/LPR Data
by Nan Huai, Tao Lei, Xintong Liu and Ning Liu
Appl. Sci. 2026, 16(3), 1228; https://doi.org/10.3390/app16031228 (registering DOI) - 25 Jan 2026
Abstract
Diffraction-based velocity analysis is a key data interpretation technique in geophysical exploration, typically relying on the geometric characteristics, energy distribution, or propagation paths of diffraction waves. The hyperbola-based method is a classical strategy in this category, which extracts depth-dependent velocity (or dielectric properties) by correlating the hyperbolic shape of diffraction events with subsurface parameters for characterizing subsurface structures and material compositions. In this study, we propose a layer-stripping velocity analysis method applicable to ground-penetrating radar (GPR) and lunar-penetrating radar (LPR) data, with two main innovations: (1) replacing traditional local optimization algorithms with an intuitive parallelism check scheme, eliminating the need for complex nonlinear iterations; (2) performing depth-progressive velocity scanning of radargram diffraction signals, where shallow-layer velocity analysis constrains deeper-layer calculations. This strategy avoids misinterpretations of deep geological objects’ burial depth, morphology, and physical properties caused by a single average velocity or independent deep-layer velocity assumptions. The workflow of the proposed method is first demonstrated using a synthetic rock-fragment layered model, then applied to derive the near-surface dielectric constant distribution (down to 27 m) at the Chang’e-4 landing site. The estimated values range from 2.55 to 6, with the depth-dependent profile revealing lunar regolith stratification and interlayer material property variations. Consistent with previously reported results for the Chang’e-4 region, our findings confirm the method’s applicability to LPR data, providing a new technical framework for high-resolution subsurface structure reconstruction. Full article
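As a rough illustration of the hyperbola relation this kind of velocity analysis builds on, the sketch below fits a single diffraction event to t(x) = (2/v)·sqrt(d² + x²) with ordinary least squares. The picked travel times are invented, and the plain curve_fit step stands in for, rather than reproduces, the paper's parallelism-check and layer-stripping scheme.

```python
# Minimal illustration (not the paper's parallelism-check method): estimate a
# single-layer velocity v and diffractor depth d from picked diffraction-apex
# travel times using the classic hyperbola t(x) = (2/v) * sqrt(d^2 + x^2).
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, v, d):
    # two-way travel time (ns) for a point diffractor at depth d (m), velocity v (m/ns)
    return 2.0 * np.sqrt(d**2 + x**2) / v

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])       # antenna offsets (m), hypothetical picks
t = np.array([18.9, 14.9, 13.3, 14.9, 18.9])    # picked two-way travel times (ns)

(v, d), _ = curve_fit(hyperbola, x, t, p0=[0.1, 0.8])
eps_r = (0.3 / v) ** 2                          # relative permittivity, c = 0.3 m/ns
print(f"v = {v:.3f} m/ns, depth = {d:.2f} m, eps_r = {eps_r:.2f}")
```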

29 pages, 2666 KB  
Article
Explainable Ensemble Learning for Predicting Stock Market Crises: Calibration, Threshold Optimization, and Robustness Analysis
by Eddy Suprihadi, Nevi Danila, Zaiton Ali and Gede Pramudya Ananta
Information 2026, 17(2), 114; https://doi.org/10.3390/info17020114 (registering DOI) - 25 Jan 2026
Abstract
Forecasting stock market crashes is difficult because such events are rare, highly nonlinear, and shaped by latent structural and behavioral forces. This study introduces a calibrated and interpretable Random Forest framework for detecting pre-crash conditions through structural feature engineering, early-warning calibration, and model explainability. Using daily data on global equity indices and major large-cap stocks from the U.S., Europe, and Asia, we construct a feature set that captures volatility expansion, moving-average deterioration, Bollinger Band width, and short-horizon return dynamics. Probability-threshold optimization significantly improves sensitivity to rare events and yields an operating point at a crash-probability threshold of 0.33. Compared with econometric and machine learning benchmarks, the calibrated model attains higher precision while maintaining competitive F1 and MCC scores, and it delivers meaningful early-warning signals with an average lead-time of around 60 days. SHAP analysis indicates that predictions are anchored in theoretically consistent indicators, particularly volatility clustering and weakening trends, while robustness checks show resilience to noise, structural perturbations, and simulated flash crashes. Taken together, these results provide a transparent and reproducible blueprint for building operational early-warning systems in financial markets. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)
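The probability-threshold optimization quoted above (operating point 0.33) can be illustrated generically: sweep candidate thresholds over held-out predicted probabilities and keep the one that maximizes a rare-event metric such as F1. The synthetic data and model settings below are placeholders, not the paper's feature set or tuning procedure.

```python
# Generic sketch of probability-threshold optimization for a rare-event
# Random Forest classifier (synthetic data, not the paper's crash indicators).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)  # ~5% positive ("crash") class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Sweep thresholds and keep the one with the best F1 on held-out data.
thresholds = np.linspace(0.05, 0.95, 91)
best = max(thresholds, key=lambda t: f1_score(y_te, proba >= t))
pred = proba >= best
print(f"best threshold = {best:.2f}, F1 = {f1_score(y_te, pred):.3f}, "
      f"MCC = {matthews_corrcoef(y_te, pred):.3f}")
```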

23 pages, 1800 KB  
Article
Adaptive Data-Driven Framework for Unsupervised Learning of Air Pollution in Urban Micro-Environments
by Abdelrahman Eid, Shehdeh Jodeh, Raghad Eid, Ghadir Hanbali, Abdelkhaleq Chakir and Estelle Roth
Atmosphere 2026, 17(2), 125; https://doi.org/10.3390/atmos17020125 (registering DOI) - 24 Jan 2026
Abstract
(1) Background: Urban traffic micro-environments show strong spatial and temporal variability. Short and intensive campaigns remain a practical approach for understanding exposure patterns in complex environments, but they need clear and interpretable summaries that are not limited to simple site or time segmentation. (2) Methods: We carried out a multi-site campaign across five traffic-affected micro-environments, where measurements covered several pollutants, gases, and meteorological variables. A machine learning framework was introduced to learn interpretable operational regimes as recurring multivariate states using clustering with stability checks, and then we evaluated their added explanatory value and cross-site transfer using a strict site hold-out design to avoid information leakage. (3) Results: Five regimes were identified, representing combinations of emission intensity and ventilation strength. Incorporating regime information increased the explanatory power of simple NO2 models and allowed the imputation of a missing H2S day using a regime-aware random forest with an R2 near 0.97. Regime labels remained identifiable using reduced sensor sets, while cross-site forecasting transferred well for NO2 but was limited for PM, indicating stronger local effects for particles. (4) Conclusions: Operational-regime learning can transform short multivariate campaigns into practical and interpretable summaries of urban air pollution, while supporting data recovery and cautious model transfer. Full article
(This article belongs to the Section Air Quality)
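A minimal sketch of the regime-learning idea, assuming standardized multivariate readings, k-means clustering, and a bootstrap label-stability check; the variables, number of regimes, and stability criterion used in the paper may differ.

```python
# Sketch of "operational regime" learning: cluster standardized multivariate
# readings into k regimes and check label stability across bootstrap resamples.
# Synthetic data; the paper's variables and validation pipeline are not reproduced.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))          # placeholder: pollutant + meteorology channels
Xs = StandardScaler().fit_transform(X)

k = 5
ref = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)

# Stability check: refit on bootstrap samples, compare labels on the full set.
scores = []
for seed in range(20):
    idx = rng.integers(0, len(Xs), len(Xs))
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xs[idx])
    scores.append(adjusted_rand_score(ref, km.predict(Xs)))
print(f"mean ARI across bootstraps: {np.mean(scores):.2f}")
```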

15 pages, 458 KB  
Article
Feedback Structures Generating Policy Exposure, Gatekeeping, and Care Disruption in Transgender and Gender Expansive Healthcare
by Braveheart Gillani, Rem Martin, Augustus Klein, Meagan Ray-Novak, Alyssa Roberts, Dana Prince, Laura Mintz and Scott Emory Moore
Systems 2026, 14(1), 112; https://doi.org/10.3390/systems14010112 - 21 Jan 2026
Viewed by 105
Abstract
Transgender and gender-expansive (TGE) communities face persistent health inequities that are reproduced through everyday administrative and clinical encounters across care systems. A feedback-focused lens can clarify how those inequities are generated and sustained. Objective: To identify and validate feedback loops that create policy exposure and institutional gatekeeping in TGE healthcare and to surface leverage points to stabilize their continuity of care. Methods: Two facilitated, Zoom-based Group Model Building (GMB) sessions were conducted in March 2021 with eight TGE participants (mean age 38 years; range 22–63; transfeminine and transmasculine identities; multiracial, White, and SWANA racial identities) recruited through a Lesbian Gay Bisexual and Transgender (LGBT) community center, followed by a participant member-checking session to validate loop structure, causal direction, and interpretive accuracy. Analysis focused explicitly on identifying reinforcing and balancing feedback structures, rather than isolated barriers, to explain how policy exposure and institutional gatekeeping are generated over time. Results: Participants co-constructed a nine-variable Causal Loop Diagram (CLD) with six feedback structures (four reinforcing and two balancing) that interact dynamically to amplify or dampen policy exposure, institutional gatekeeping, and continuity of care, which were organized across structural, institutional/clinical, and individual/community tiers. Reinforcing dynamics linked structural stigma, exclusion from formal employment, institutionalized provider bias, and enacted stigma to degraded care experience, increased trauma and distrust, and disrupted continuity, manifesting as policy exposure (e.g., coverage volatility, denials) and gatekeeping (e.g., discretionary documentation, referral hurdles). Community-based supports and peer/elder navigation functioned as balancing loops that reduced trauma, improved continuity and encounters, and, over time, dampened provider bias. A salient theme was the visibility/invisibility paradox: symbolic inclusion without workflow redesign can inadvertently increase exposure and reinforce harmful loops. Full article
(This article belongs to the Section Systems Practice in Social Science)

48 pages, 8070 KB  
Article
ResQConnect: An AI-Powered Multi-Agentic Platform for Human-Centered and Resilient Disaster Response
by Savinu Aththanayake, Chemini Mallikarachchi, Janeesha Wickramasinghe, Sajeev Kugarajah, Dulani Meedeniya and Biswajeet Pradhan
Sustainability 2026, 18(2), 1014; https://doi.org/10.3390/su18021014 - 19 Jan 2026
Viewed by 133
Abstract
Effective disaster management is critical for safeguarding lives, infrastructure and economies in an era of escalating natural hazards like floods and landslides. Despite advanced early-warning systems and coordination frameworks, a persistent “last-mile” challenge undermines response effectiveness: transforming fragmented and unstructured multimodal data into timely and accountable field actions. This paper introduces ResQConnect, a human-centered, AI-powered multimodal multi-agent platform that bridges this gap by directly linking incident intake to coordinated disaster response operations in hazard-prone regions. ResQConnect integrates three key components. It uses an agentic Retrieval-Augmented Generation (RAG) workflow in which specialized language-model agents extract metadata, refine queries, check contextual adequacy and generate actionable task plans using a curated, hazard-specific knowledge base. The contribution lies in structuring the RAG for correctness, safety and procedural grounding in high-risk settings. The platform introduces an Adaptive Event-Triggered (AET) multi-commodity routing algorithm that decides when to re-optimize routes, balancing responsiveness, computational cost and route stability under dynamic disaster conditions. Finally, ResQConnect deploys a compressed, domain-specific language model on mobile devices to provide policy-aligned guidance when cloud connectivity is limited or unavailable. Across realistic flood and landslide scenarios, ResQConnect improved overall task-quality scores from 61.4 to 82.9 (+21.5 points) over a standard RAG baseline, reduced solver calls by up to 85% compared to continuous re-optimization while remaining within 7–12% of optimal response time, and delivered fully offline mobile guidance with sub-500 ms response latency and 54 tokens/s throughput on commodity smartphones. Overall, ResQConnect demonstrates a practical and resilient approach to AI-augmented disaster response. From a sustainability perspective, the proposed system contributes to Sustainable Development Goal (SDG) 11 by improving the speed and coordination of disaster response. It also supports SDG 13 by strengthening adaptation and readiness for climate-driven hazards. ResQConnect is validated using real-world flood and landslide disaster datasets, ensuring realistic incidents, constraints and operational conditions. Full article
(This article belongs to the Section Environmental Sustainability and Applications)
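The adaptive event-triggered idea described above (re-optimize routes only when conditions have drifted enough, rather than after every update) can be sketched generically. The drift metric, the 15% trigger, and the trivial "solver" below are illustrative placeholders, not ResQConnect's AET multi-commodity routing algorithm.

```python
# Sketch of an event-triggered re-optimization loop: re-run the (expensive)
# routing solver only when observed edge costs have drifted past a threshold.
import random

def solve_routes(costs):
    # Stand-in for a multi-commodity routing solver call.
    return sorted(costs, key=costs.get)

def relative_drift(planned, current):
    return max(abs(current[e] - planned[e]) / planned[e] for e in planned)

costs = {("depot", "A"): 10.0, ("depot", "B"): 14.0, ("A", "B"): 6.0}
plan, planned_costs, solver_calls = solve_routes(costs), dict(costs), 1

random.seed(0)
for step in range(100):
    # Simulated streaming updates to edge travel times.
    costs = {e: c * random.uniform(0.97, 1.05) for e, c in costs.items()}
    if relative_drift(planned_costs, costs) > 0.15:   # trigger condition
        plan, planned_costs = solve_routes(costs), dict(costs)
        solver_calls += 1
print(f"solver calls: {solver_calls} out of 100 updates")
```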

29 pages, 2003 KB  
Article
The Impact of Metropolitan Area Integration Policies on Urban Industrial Structure Upgrading: Evidence from China
by Kan Liu and Jinjun Duan
Land 2026, 15(1), 177; https://doi.org/10.3390/land15010177 - 17 Jan 2026
Viewed by 192
Abstract
As global production networks become increasingly regionalized, diversified, and resilience-oriented, metropolitan areas (MAs) have emerged as important spatial platforms for industrial development. This study examines whether China’s national-level metropolitan area integration policies promote urban industrial structure upgrading and, if so, through which channels. We first develop a set of conceptual mechanisms and hypotheses, and then test them using panel data for 281 prefecture-level cities in China from 2012 to 2022. A staggered difference-in-differences (DID) model, complemented by a series of robustness checks, is employed to identify the policy effects. The baseline estimates indicate that the industrial structure of MA member cities is, on average, about 2.43 percentage points more advanced than that of non-MA cities. Mechanism analysis shows that the policies foster urban industrial upgrading through unified market formation, technological improvement, and optimization of factor endowments. However, the policies have only a very limited impact on breakthroughs in cutting-edge or frontier technologies. Based on these findings, we propose targeted policy recommendations to address the identified shortcomings. Full article
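A staggered difference-in-differences design with two-way fixed effects can be written as upgrade_it = α_i + λ_t + β·MA_it + ε_it, with β the policy effect and standard errors clustered by city. The sketch below estimates such a specification on a synthetic panel with statsmodels; the variable names, adoption schedule, and sample are placeholders, not the paper's 281-city data.

```python
# Minimal two-way fixed-effects (staggered DID) sketch with city and year fixed
# effects and city-clustered standard errors, on synthetic panel data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
cities, years = range(60), range(2012, 2023)
df = pd.DataFrame([(c, y) for c in cities for y in years], columns=["city", "year"])

adopt = {c: rng.choice([2016, 2018, 2020, 9999]) for c in cities}  # 9999 = never treated
df["treated"] = (df["year"] >= df["city"].map(adopt)).astype(int)
df["upgrade"] = 0.02 * df["treated"] + rng.normal(0, 0.05, len(df))

m = smf.ols("upgrade ~ treated + C(city) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]})
print(f"DID estimate = {m.params['treated']:.4f} (se {m.bse['treated']:.4f})")
```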

22 pages, 6241 KB  
Article
Using Large Language Models to Detect and Debunk Climate Change Misinformation
by Zeinab Shahbazi and Sara Behnamian
Big Data Cogn. Comput. 2026, 10(1), 34; https://doi.org/10.3390/bdcc10010034 - 17 Jan 2026
Viewed by 282
Abstract
The rapid spread of climate change misinformation across digital platforms undermines scientific literacy, public trust, and evidence-based policy action. Advances in Natural Language Processing (NLP) and Large Language Models (LLMs) create new opportunities for automating the detection and correction of misleading climate-related narratives. This study presents a multi-stage system that employs state-of-the-art large language models such as Generative Pre-trained Transformer 4 (GPT-4), Large Language Model Meta AI (LLaMA) version 3 (LLaMA-3), and RoBERTa-large (Robustly optimized BERT pretraining approach large) to identify, classify, and generate scientifically grounded corrections for climate misinformation. The system integrates several complementary techniques, including transformer-based text classification, semantic similarity scoring using Sentence-BERT, stance detection, and retrieval-augmented generation (RAG) for evidence-grounded debunking. Misinformation instances are detected through a fine-tuned RoBERTa–Multi-Genre Natural Language Inference (MNLI) classifier (RoBERTa-MNLI), grouped using BERTopic, and verified against curated climate-science knowledge sources using BM25 and dense retrieval via FAISS (Facebook AI Similarity Search). The debunking component employs RAG-enhanced GPT-4 to produce accurate and persuasive counter-messages aligned with authoritative scientific reports such as those from the Intergovernmental Panel on Climate Change (IPCC). A diverse dataset of climate misinformation categories covering denialism, cherry-picking of data, false causation narratives, and misleading comparisons is compiled for evaluation. Benchmarking experiments demonstrate that LLM-based models substantially outperform traditional machine-learning baselines such as Support Vector Machines, Logistic Regression, and Random Forests in precision, contextual understanding, and robustness to linguistic variation. Expert assessment further shows that generated debunking messages exhibit higher clarity, scientific accuracy, and persuasive effectiveness compared to conventional fact-checking text. These results highlight the potential of advanced LLM-driven pipelines to provide scalable, real-time mitigation of climate misinformation while offering guidelines for responsible deployment of AI-assisted debunking systems. Full article
(This article belongs to the Special Issue Natural Language Processing Applications in Big Data)
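One component of a pipeline like this, Sentence-BERT semantic similarity between a claim and candidate evidence passages, can be sketched in a few lines. The checkpoint name is a common public model and the texts are invented; the paper's actual models, retrieval corpus, and prompts may differ.

```python
# Sketch of one pipeline component: score a claim against candidate evidence
# passages with Sentence-BERT cosine similarity, keeping the top passages for
# a downstream retrieval-augmented debunking step.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # public checkpoint, illustrative

claim = "Global warming stopped in 1998."
evidence = [
    "Each of the last four decades has been successively warmer than any decade before it.",
    "Short windows starting at the strong 1998 El Niño can mask the long-term warming trend.",
    "Sea ice extent varies seasonally.",
]

claim_emb = model.encode(claim, convert_to_tensor=True)
ev_emb = model.encode(evidence, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, ev_emb)[0]

for passage, score in sorted(zip(evidence, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {passage}")
```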

36 pages, 575 KB  
Article
In Silico Proof of Concept: Conditional Deep Learning-Based Prediction of Short Mitochondrial DNA Fragments in Archosaurs
by Dimitris Angelakis, Dionisis Cavouras, Dimitris Th. Glotsos, Spiros A. Kostopoulos, Emmanouil I. Athanasiadis, Ioannis K. Kalatzis and Pantelis A. Asvestas
AI 2026, 7(1), 27; https://doi.org/10.3390/ai7010027 - 14 Jan 2026
Viewed by 176
Abstract
This study presents an in silico proof of concept exploring whether deep learning models can perform conditional mitochondrial DNA (mtDNA) sequence prediction across species boundaries. A CNN–BiLSTM model was trained under a leave-one-species-out (LOSO) scheme on complete mitochondrial genomes from 21 vertebrate species, primarily archosaurs. Model behavior was evaluated through multiple complementary tests. Under context-conditioned settings, the model performed next-nucleotide prediction using overlapping 200 bp windows to assemble contiguous 2000 bp fragments for held-out species; the resulting high token-level accuracy (>99%) under teacher forcing is reported as a diagnostic of conditional modeling capacity. To assess leakage-free performance, a two-flank masked-span imputation task was conducted as the primary evaluation, requiring free-running reconstruction of 500 bp interior spans using only distal flanking context; in this setting, the model consistently outperformed nearest-neighbor and demonstrated competitive performance relative to flank-copy baselines. Additional robustness analyses examined sensitivity to window placement, genomic region (coding versus D-loop), and random initialization. Biological plausibility was further assessed by comparing predicted fragments to reconstructed ancestral sequences and against composition-matched null models, where observed identities significantly exceeded null expectations. Using the National Center for Biotechnology Information (NCBI) BLAST web interface, BLASTn species identification was performed solely as a biological plausibility check, recovering the correct species as the top hit in all cases. Although limited by dataset size and the absence of ancient DNA damage modeling, these results demonstrate the feasibility of conditional mtDNA sequence prediction as an initial step toward more advanced generative and evolutionary modeling frameworks. Full article
(This article belongs to the Special Issue Transforming Biomedical Innovation with Artificial Intelligence)
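A compact sketch of the model family named in the abstract, a CNN feeding a bidirectional LSTM with a softmax head over the four bases, predicting the next nucleotide from a 200 bp context. Layer sizes and the window length are illustrative; the authors' exact architecture and LOSO training protocol are not reproduced here.

```python
# Compact CNN-BiLSTM for next-nucleotide prediction over a fixed context window
# (PyTorch). Integer-encoded bases A,C,G,T -> 0..3; sizes are illustrative only.
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, n_bases=4, emb=16, conv=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_bases, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=7, padding=3)
        self.lstm = nn.LSTM(conv, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_bases)

    def forward(self, x):                              # x: (batch, 200) base indices
        h = self.embed(x).transpose(1, 2)              # (batch, emb, 200)
        h = torch.relu(self.conv(h)).transpose(1, 2)   # (batch, 200, conv)
        out, _ = self.lstm(h)
        return self.head(out[:, -1, :])                # logits for the next base

model = CnnBiLstm()
window = torch.randint(0, 4, (8, 200))                 # 8 random 200 bp contexts
print(model(window).shape)                             # torch.Size([8, 4])
```
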
19 pages, 2028 KB  
Article
RSSI-Based Localization of Smart Mattresses in Hospital Settings
by Yeh-Liang Hsu, Chun-Hung Yi, Shu-Chiung Lee and Kuei-Hua Yen
J. Low Power Electron. Appl. 2026, 16(1), 4; https://doi.org/10.3390/jlpea16010004 - 14 Jan 2026
Viewed by 119
Abstract
(1) Background: In hospitals, mattresses are often relocated for cleaning or patient transfer, leading to mismatches between actual and recorded bed locations. Manual updates are time-consuming and error-prone, requiring an automatic localization system that is cost-effective and easy to deploy to ensure traceability and reduce nursing workload. (2) Purpose: This study presents a pragmatic, large-scale implementation and validation of a BLE-based localization system using RSSI measurements. The goal was to achieve reliable room-level identification of smart mattresses by leveraging existing hospital infrastructure. (3) Results: The system showed stable signals in the complex hospital environment, with a 12.04 dBm mean gap between primary and secondary rooms, accurately detecting mattress movements and restoring location confidence. Nurses reported easier operation, reduced manual checks, and improved accuracy, though occasional mismatches occurred when receivers were offline. (4) Conclusions: The RSSI-based system demonstrates a feasible and scalable model for real-world asset tracking. Future upgrades include receiver health monitoring, watchdog restarts, and enhanced user training to improve reliability and usability. (5) Method: RSSI–distance relationships were characterized under different partition conditions to determine parameters for room differentiation. To evaluate real-world scalability, a field validation involving 266 mattresses in 101 rooms over 42 h tested performance, along with relocation tests and nurse feedback. Full article
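RSSI-distance characterization of the kind described is usually based on the log-distance path-loss model, RSSI(d) = RSSI(d0) − 10·n·log10(d/d0). A minimal sketch follows; the reference RSSI, path-loss exponent, and the 12 dB-style primary/secondary gap check are illustrative values, not the deployment's calibrated parameters.

```python
# Log-distance path-loss sketch for RSSI-based room discrimination.
import math

def expected_rssi(d, rssi_1m=-55.0, n=2.5):
    """Expected RSSI (dBm) at distance d metres under the log-distance model."""
    return rssi_1m - 10.0 * n * math.log10(d)

def primary_room(measured):
    """Pick the receiver (room) with the strongest smoothed RSSI."""
    return max(measured, key=measured.get)

measured = {"room_101": -58.2, "room_102": -70.6, "corridor": -75.1}  # dBm, hypothetical
best = primary_room(measured)
gap = measured[best] - sorted(measured.values())[-2]
print(best, f"primary-secondary gap = {gap:.1f} dB")
print(f"expected RSSI at 3 m: {expected_rssi(3.0):.1f} dBm")
```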

18 pages, 317 KB  
Article
Whole-Process Agricultural Production Chain Management and Land Productivity: Evidence from Rural China
by Qilin Liu, Guangcai Xu, Jing Gong and Junhong Chen
Agriculture 2026, 16(2), 206; https://doi.org/10.3390/agriculture16020206 - 13 Jan 2026
Viewed by 240
Abstract
As agricultural labor shifted toward non-farm sectors and the farming population aged, innovative production arrangements became essential to sustain land productivity. While partial agricultural production chain management (PAPM) was widespread, the productivity impact of whole-process agricultural production chain management (WAPM)—a comprehensive model integrating all production stages—remained empirically underexplored. Using nationally representative panel data from the China Labor-force Dynamics Survey (CLDS, 2014–2018) for grain-producing households, this study estimates the differential impacts of WAPM and PAPM with a two-way fixed-effects (TWFE) model, supplemented by propensity score matching (PSM) as a robustness check. The results show that WAPM significantly enhanced land productivity. Notably, the effect size of WAPM (coefficient: 0.486) is substantially larger than that of PAPM (coefficient: 0.214), indicating that systematic integration of service chains offers superior efficiency gains over fragmented outsourcing. Mechanism analysis suggests that WAPM improves productivity primarily by alleviating labor constraints and mitigating the disadvantages of small-scale farming. Furthermore, heterogeneity analysis demonstrated that these benefits are amplified in major grain-producing regions and hilly areas. These findings support policies that facilitate a transition from single-link outsourcing toward whole-process integrated service provision. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
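The propensity score matching robustness check mentioned above can be illustrated generically: estimate propensity scores from observables with a logistic regression, match each treated household to its nearest-score control, and compare outcomes. The covariates, outcome, and matching choices below are synthetic placeholders, not the CLDS variables or the paper's exact matching design.

```python
# Generic propensity-score-matching sketch (logit scores + 1-nearest-neighbour
# matching on the score), used as a robustness check alongside panel estimates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 5))                        # household covariates (synthetic)
treat = (X[:, 0] + rng.normal(0, 1, 3000) > 0.5).astype(int)
y = 0.4 * treat + X[:, 0] + rng.normal(0, 1, 3000)    # outcome, e.g. log land productivity

pscore = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[control].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_control = control[idx[:, 0]]

att = (y[treated] - y[matched_control]).mean()
print(f"ATT estimate after matching: {att:.3f}")
```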

23 pages, 2166 KB  
Article
Course-Oriented Knowledge Service-Based AI Teaching Assistant System for Higher Education Sustainable Development Demand
by Ling Wang, Tingkai Wang, Tie Hua Zhou and Zehuan Liu
Sustainability 2026, 18(2), 807; https://doi.org/10.3390/su18020807 - 13 Jan 2026
Viewed by 156
Abstract
With the advancement of artificial intelligence and educational informatization, there is a growing demand for intelligent teaching assistance systems in universities. Focusing on the university “Algorithms” course in the computer science department, this study develops a multi-terminal collaborative knowledge service system, Course-Oriented Knowledge Service–Based AI Teaching Assistant System (CKS-AITAS), which consists of a PC terminal and a mobile terminal, where the PC terminal integrates functions including knowledge graph, semantic retrieval, intelligent question-answering, and knowledge recommendation, while the mobile terminal enables classroom check-in and teaching interaction, thus forming a closed-loop platform for teaching organization, resource acquisition, and knowledge inquiry. For the document retrieval module, paragraph-level semantic modeling of textbook content is conducted using Word2Vec, combined with approximate nearest neighbor indexing, and this module achieves an MRR@10 of 0.641 and an average query time of 0.128 s, balancing accuracy and efficiency; the intelligent question-answering module, based on a self-built course FAQ dataset, is trained via the BERT model to enable question matching and answer retrieval, achieving an accuracy rate of 86.3% and an average response time of 0.31 s. Overall, CKS-AITAS meets the core teaching needs of the course, provides an AI-empowered solution for university teaching, and boasts promising application prospects in facilitating education sustainability. Full article
(This article belongs to the Special Issue Sustainable Digital Education: Innovations in Teaching and Learning)
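The retrieval metric quoted above (MRR@10 = 0.641) is the mean, over queries, of the reciprocal rank of the first relevant paragraph within the top 10 results (0 when none is relevant). A small computation sketch, with placeholder ranked lists standing in for the Word2Vec + approximate-nearest-neighbor retrieval output:

```python
# MRR@10: mean over queries of 1/rank of the first relevant result in the top 10.
def mrr_at_k(ranked_lists, relevant, k=10):
    total = 0.0
    for qid, ranking in ranked_lists.items():
        for rank, doc in enumerate(ranking[:k], start=1):
            if doc in relevant[qid]:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

ranked = {"q1": ["p7", "p3", "p9"], "q2": ["p2", "p5", "p1"], "q3": ["p4", "p8", "p6"]}
gold = {"q1": {"p3"}, "q2": {"p1"}, "q3": {"p0"}}
print(mrr_at_k(ranked, gold))   # (1/2 + 1/3 + 0) / 3 ≈ 0.278
```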

32 pages, 1950 KB  
Article
Association of Circulating Irisin with Insulin Resistance and Metabolic Risk Markers in Prediabetic and Newly Diagnosed Type 2 Diabetes Patients
by Daniela Denisa Mitroi Sakizlian, Lidia Boldeanu, Diana Clenciu, Adina Mitrea, Ionela Mihaela Vladu, Alina Elena Ciobanu Plasiciuc, Mohamed-Zakaria Assani and Daniela Ciobanu
Int. J. Mol. Sci. 2026, 27(2), 787; https://doi.org/10.3390/ijms27020787 - 13 Jan 2026
Viewed by 114
Abstract
Circulating irisin, a myokine implicated in energy expenditure and adipose tissue regulation, has been increasingly studied as a potential biomarker of metabolic dysfunction. This study evaluated the relationship between serum irisin and metabolic indices, including the atherogenic index of plasma (AIP), the lipid accumulation product (LAP), and hypertriglyceridemic-waist (HTGW) phenotype in individuals with prediabetes (PreDM) and newly diagnosed type 2 diabetes mellitus (T2DM). A total of 138 participants (48 PreDM, 90 T2DM) were assessed for anthropometric, glycemic, and lipid parameters. Serum irisin levels were measured by enzyme-linked immunosorbent assay (ELISA) and correlated with insulin resistance indices (Homeostatic Model Assessment of Insulin Resistance (HOMA-IR), Quantitative Insulin Sensitivity Check Index (QUICKI)), glycemic control (glycosylated hemoglobin A1c (HbA1c)), and composite lipid markers (total triglycerides-to-high-density lipoprotein cholesterol (TG/HDL-C)). Group differences were evaluated using non-parametric tests; two-way ANOVA assessed interactions between phenotypes and markers; multiple linear regression (MLR) and logistic regression models explored independent associations with metabolic indices and HTGW; receiver operating characteristic (ROC) analyses compared global and stratified model performance. Serum irisin was significantly lower in T2DM than in PreDM (median 140.4 vs. 230.7 ng/mL, p < 0.0001). Irisin levels remained comparable between males and females in both groups. Post hoc analysis shows that lipid indices and irisin primarily distinguish HTGW phenotypes, especially in T2DM. In both groups, irisin correlated inversely with HOMA-IR, AIP, and TG/HDL-C, and positively with QUICKI, indicating a possible compensatory role in early insulin resistance. MLR analyses revealed no independent relationship between irisin and either AIP or LAP in PreDM, while in T2DM, waist circumference remained the strongest negative predictor of irisin. Logistic regression identified age, male sex, and HbA1c as independent predictors of the HTGW phenotype, while irisin contributed modestly to overall model discrimination. ROC curves demonstrated good discriminative performance (AUC = 0.806 for global; 0.794 for PreDM; 0.813 for T2DM), suggesting comparable predictive accuracy across glycemic stages. In conclusion, irisin levels decline from prediabetes to overt diabetes and are inversely linked to lipid accumulation and insulin resistance but do not independently predict the HTGW phenotype. These findings support irisin’s role as an integrative indicator of metabolic stress rather than a stand-alone biomarker. Incorporating irisin into multi-parameter metabolic panels may enhance early detection of cardiometabolic risk in dysglycemic populations. Full article
(This article belongs to the Special Issue Molecular Diagnosis and Treatments of Diabetes Mellitus: 2nd Edition)

9 pages, 2602 KB  
Data Descriptor
A Comprehensive Dataset and Workflow for Building Large-Scale, Highly Oxidized Graphene Oxide Models
by Merve Fedai, Albert L. Kwansa and Yaroslava G. Yingling
Data 2026, 11(1), 18; https://doi.org/10.3390/data11010018 - 13 Jan 2026
Viewed by 268
Abstract
Graphene (GRA) and graphene oxide (GO) have drawn significant attention in materials science, chemistry, and nanotechnology because of their tunable physicochemical properties and wide range of potential uses in biomedical and environmental applications. Building reliable, large-scale molecular models of GRA and GO is essential for molecular simulations of wetting, adsorption, and catalytic behavior. However, current methods often struggle to generate large, chemically consistent sheets at high oxidation levels. In addition, the resulting structures are frequently incompatible across different simulation packages. This work introduces a step-by-step protocol with custom Tool Command Language (Tcl) and modified Python version 3.12 scripts for building large-scale, AMBER-compatible GO structures with oxidation levels from 0% to 68%. The workflow applies a systematic surface modification strategy combined with post-processing and atom-type assignment routines to ensure chemical accuracy and force field consistency. The dataset includes fifteen MOL2 format files of 20 × 20 nm2 GO sheets, ranging from pristine to highly oxidized surfaces, each validated through oxidation-ratio analysis and structural integrity checks. Together, the dataset and protocol support the design of scalable and chemically reliable GO molecular models for molecular dynamics simulations. Full article

11 pages, 264 KB  
Article
A Cross-Sectional Assessment of Oral Health and Quality of Life Among Dental Patients at a Public Special Care Center in Greece: A Cross-Sectional Study
by Eirini Thanasi, Maria Antoniadou, Petros Galanis and Vasiliki Kapaki
Hygiene 2026, 6(1), 4; https://doi.org/10.3390/hygiene6010004 - 12 Jan 2026
Viewed by 262
Abstract
Background: Despite its crucial role in overall health, oral health is frequently overlooked within healthcare systems, partly due to the misconception that oral diseases are neither life-threatening nor directly disabling. This perception has led to an underestimation of the psychological, social, and economic burden associated with oral diseases. The present study aimed to assess oral health status and oral health-related quality of life among dental patients attending a public Special Care Center in Greece. Methods: A cross-sectional study was conducted among 400 dental patients aged 18 years and older who visited a public Special Care Center for a routine check-up or a dental problem between September and October 2024. Data were collected through personal interviews and clinical examinations after informed consent was obtained. Oral health-related quality of life was evaluated using the Oral Health Impact Profile-14 (OHIP-14) and the Oral Impacts on Daily Performance (OIDP) questionnaires. Categorical variables were presented as absolute and relative frequencies, while quantitative variables were summarized as mean, standard deviation, median, minimum, and maximum. Normality was assessed using the Kolmogorov–Smirnov test. Bivariate analyses and multivariate linear regression models were performed, with statistical significance set at p < 0.05. Statistical analyses were conducted using IBM SPSS 23.0. Results: The majority of participants were female (56.3%) with a mean age of 50.4 years (SD = 14.9). Overall oral health-related quality of life was moderate (OHIP-14: Mean = 21.0, SD = 14.8; OIDP: Mean = 14.0, SD = 12.8). Patients who attended the center due to a dental problem reported significantly poorer oral health outcomes than those attending routine check-ups (p < 0.001). Poorer self-rated oral health, having ≥12 missing teeth, prosthetic restoration, and foreign nationality were significantly associated with worse oral health-related quality of life. Conclusions: Dental patients attending the Special Care Center demonstrated moderate oral health status, which was associated with psychological distress, physical disability, and social limitations. These findings underline the need for targeted public oral health interventions, especially for vulnerable population groups. Full article
(This article belongs to the Section Public Health and Preventive Medicine)
21 pages, 1073 KB  
Article
Near-Optimal Decoding Algorithm for Color Codes Using Population Annealing
by Fernando Martínez-García, Francisco Revson F. Pereira and Pedro Parrado-Rodríguez
Entropy 2026, 28(1), 91; https://doi.org/10.3390/e28010091 - 12 Jan 2026
Viewed by 235
Abstract
The development and use of large-scale quantum computers relies on integrating quantum error-correcting (QEC) schemes into the quantum computing pipeline. A fundamental part of the QEC protocol is the decoding of the syndrome to identify a recovery operation with a high success rate. In this work, we implement a decoder that finds the recovery operation with the highest success probability by mapping the decoding problem to a spin system and using Population Annealing to estimate the free energy of the different error classes. We study the decoder performance on a 4.8.8 color code lattice under different noise models, including code capacity with bit-flip and depolarizing noise, and phenomenological noise, which considers noisy measurements, with performance reaching near-optimal thresholds for bit-flip and depolarizing noise, and the highest reported threshold for phenomenological noise. This decoding algorithm can be applied to a wide variety of stabilizer codes, including surface codes and quantum Low-Density Parity Check (qLDPC) codes. Full article
(This article belongs to the Special Issue Coding Theory and Its Applications)
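Population annealing itself, the statistical-mechanics routine the decoder relies on, can be sketched on a toy Ising chain: a population of configurations is cooled through an inverse-temperature schedule, resampled by Boltzmann reweighting at each step, and a free-energy estimate is accumulated from the reweighting normalizations. This is a generic illustration only, not the paper's mapping of color-code decoding onto a spin system or its error-class free-energy comparison.

```python
# Toy population annealing on a 1-D Ising chain: reweight/resample a population
# of spin configurations through a beta schedule and accumulate log Z.
import numpy as np

rng = np.random.default_rng(0)
N, R = 32, 2000                                    # spins per replica, population size
pop = rng.choice([-1, 1], size=(R, N))

def energy(s):                                     # nearest-neighbour Ising energy
    return -np.sum(s[:, :-1] * s[:, 1:], axis=1)

betas = np.linspace(0.0, 1.0, 21)
log_Z = N * np.log(2.0)                            # Z at beta = 0 is 2^N
for b0, b1 in zip(betas[:-1], betas[1:]):
    w = np.exp(-(b1 - b0) * energy(pop))           # reweighting factors
    log_Z += np.log(w.mean())
    idx = rng.choice(R, size=R, p=w / w.sum())     # resample proportionally to weights
    pop = pop[idx]
    # A couple of Metropolis sweeps at the new temperature to decorrelate replicas.
    for _ in range(2):
        site = rng.integers(0, N, size=R)
        flipped = pop.copy()
        flipped[np.arange(R), site] *= -1
        dE = energy(flipped) - energy(pop)
        accept = rng.random(R) < np.exp(-b1 * dE)
        pop[accept] = flipped[accept]

print(f"free energy per spin at beta = 1: {-log_Z / (betas[-1] * N):.3f}")
```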
