Search Results (5,234)

Search Parameters:
Keywords = label generation

36 pages, 12414 KB  
Article
A Replication-Competent Flavivirus Genome with a Stable GFP Insertion at the NS1-NS2A Junction
by Pavel Tarlykov, Bakytkali Ingirbay, Dana Auganova, Tolganay Kulatay, Viktoriya Keyer, Sabina Atavliyeva, Maral Zhumabekova, Arman Abeev and Alexandr V. Shustov
Biology 2026, 15(3), 220; https://doi.org/10.3390/biology15030220 (registering DOI) - 24 Jan 2026
Abstract
The flavivirus NS1 protein is a component of the viral replication complex and plays diverse, yet poorly understood, roles in the viral life cycle. To enable real-time visualization of the developing replication organelle and biochemical analysis of tagged NS1 and its interacting partners, we engineered a replication-competent yellow fever virus (YFV) replicon encoding a C-terminal fusion of NS1 with green fluorescent protein (NS1–GFP). The initial variant was non-viable in the absence of trans-complementation with wild-type NS1; however, viability was partially restored through the introduction of co-adaptive mutations in GFP (Q204R/A206V) and NS4A (M108L). Subsequent cell culture adaptation generated a 17-nucleotide frameshift within the NS1–GFP linker, resulting in a more flexible and less hydrophobic linker sequence. The optimized genome, in the form of a replicon, replicates in packaging cells that produce YFV structural proteins, as well as in naive BHK-21 cells. In the packaging cells, the adapted NS1–GFP replicon produces titers of infectious particles of approximately 10^6 FFU/mL and is genetically stable over five passages. The expressed NS1–GFP fusion protein localizes to the endoplasmic reticulum and co-fractionates with detergent-resistant heavy membranes, a hallmark of flavivirus replication organelles. This NS1–GFP replicon provides a novel platform for studying NS1 functions and can be further adapted for proximity-labeling strategies aimed at identifying the still-unknown protease responsible for NS1–NS2A cleavage. Full article
13 pages, 811 KB  
Article
Trends in Antipsychotic Drug Use in the United States, 2000–2016
by Nisrine Haddad, Nawal Farhat, Jennifer Go, Yue Chen, Christopher A. Gravel, Franco Momoli, Donald R. Mattison, Douglas McNair, Abdallah Alami and Daniel Krewski
Pharmacy 2026, 14(1), 14; https://doi.org/10.3390/pharmacy14010014 (registering DOI) - 24 Jan 2026
Abstract
This study evaluated long-term trends in the prevalence of use of atypical and typical antipsychotic drugs (APDs), both as classes of drugs and as individual drugs, among adult inpatients in the United States (US). The Health Facts® database developed by Cerner Corporation was used to analyze the prevalence of APD use among adult inpatients aged 18 years or older who were administered at least one antipsychotic medication order during hospitalization between 1 January 2000 and 31 December 2016. The prevalence of APD use was standardized by age, sex, race, and census region. Typical and atypical antipsychotic treatment patterns in the US differed over this period. While the use of atypical APDs increased overall, the use of typical antipsychotic medications decreased, but remained more prevalent. Overall, haloperidol and prochlorperazine were the two most administered antipsychotic medications throughout the study period. From 2000 to 2011, prochlorperazine and haloperidol were the first- and second-most prescribed typical APDs, respectively; haloperidol became the most administered antipsychotic of this class as of 2012. Quetiapine was the most administered atypical antipsychotic medication, followed by risperidone and olanzapine until 2014, after which olanzapine was the second-most administered atypical APD. There was a notable decline in the use of atypical antipsychotic medications between 2005 and 2008, which may reflect the impact of the Food and Drug Administration’s warnings and the American Diabetes Association’s consensus position, but only for a short time. The usage patterns observed in this study support existing evidence of substantial off-label use of antipsychotic drugs in the US. Full article
(This article belongs to the Topic Optimization of Drug Utilization and Medication Adherence)

16 pages, 3865 KB  
Article
Data-Augmented Deep Learning for Downhole Depth Sensing and Validation
by Si-Yu Xiao, Xin-Di Zhao, Tian-Hao Mao, Yi-Wei Wang, Yu-Qiao Chen, Hong-Yun Zhang, Jian Wang, Jun-Jie Wang, Shuang Liu, Tu-Pei Chen and Yang Liu
Sensors 2026, 26(3), 775; https://doi.org/10.3390/s26030775 (registering DOI) - 23 Jan 2026
Abstract
Accurate downhole depth measurement is essential for oil and gas well operations, directly influencing reservoir contact, production efficiency, and operational safety. Collar correlation using a casing collar locator (CCL) is fundamental for precise depth calibration. While neural networks have achieved significant progress in collar recognition, preprocessing methods for such applications remain underdeveloped. Moreover, the limited availability of real well data poses substantial challenges for training neural network models that require extensive datasets. This paper presents a system integrated into a downhole toolstring for CCL log acquisition to facilitate dataset construction. Comprehensive preprocessing methods for data augmentation are proposed, and their effectiveness is evaluated using baseline neural network models. Through systematic experimentation across diverse configurations, the contribution of each augmentation method is analyzed. Results demonstrate that standardization, label distribution smoothing (LDS), and random cropping are fundamental prerequisites for model training, while label smoothing regularization (LSR), time scaling, and multiple sampling significantly enhance model generalization capabilities. Incorporating the proposed augmentation methods into the two baseline models results in maximum F1 score improvements of 0.027 and 0.024 for the TAN and MAN models, respectively. Furthermore, applying these techniques yields F1 score gains of up to 0.045 for the TAN model and 0.057 for the MAN model compared to prior studies. Performance evaluation on real CCL waveforms confirms the effectiveness and practical applicability of our approach. This work addresses the existing gaps in data augmentation methodologies for training casing collar recognition models under CCL data-limited conditions, and provides a technical foundation for the future automation of downhole operations. Full article
(This article belongs to the Special Issue Intelligent Sensors and Signal Processing in Industry)
27 pages, 5594 KB  
Article
Conditional Tabular Generative Adversarial Network Based Clinical Data Augmentation for Enhanced Predictive Modeling in Chronic Kidney Disease Diagnosis
by Princy Randhawa, Veerendra Nath Jasthi, Kumar Piyush, Gireesh Kumar Kaushik, Malathy Batamulay, S. N. Prasad, Manish Rawat, Kiran Veernapu and Nithesh Naik
BioMedInformatics 2026, 6(1), 6; https://doi.org/10.3390/biomedinformatics6010006 (registering DOI) - 22 Jan 2026
Abstract
The lack of clinical data for chronic kidney disease (CKD) prediction frequently results in model overfitting and inadequate generalization to novel samples. This research mitigates this constraint by utilizing a Conditional Tabular Generative Adversarial Network (CTGAN) to enhance a constrained CKD dataset sourced from the University of California, Irvine (UCI) Machine Learning Repository. The CTGAN model was trained to produce realistic synthetic samples that preserve the statistical and feature distributions of the original dataset. Multiple machine learning models, such as AdaBoost, Random Forest, Gradient Boosting, and K-Nearest Neighbors (KNN), were assessed on both the original and enhanced datasets with incrementally increasing degrees of synthetic data dilution. AdaBoost attained 100% accuracy on the original dataset, signifying considerable overfitting; however, the model exhibited enhanced generalization and stability with the CTGAN-augmented data. The occurrence of 100% test accuracy in several models should not be interpreted as realistic clinical performance. Instead, it reflects the limited size, clean structure, and highly separable feature distributions of the UCI CKD dataset. Similar behavior has been reported in multiple previous studies using this dataset. Such perfect accuracy is a strong indication of overfitting and limited generalizability, rather than feature or label leakage. This observation directly motivates the need for controlled data augmentation to introduce variability and improve model robustness. The dataset with the greatest dilution, comprising 2000 synthetic cases, attained a test accuracy of 95.27% utilizing a stochastic gradient boosting approach. Ensemble learning techniques, particularly gradient boosting and random forest, regularly surpassed conventional models like KNN in terms of predictive accuracy and resilience.
The results demonstrate that CTGAN-based data augmentation introduces critical variability, diminishes model bias, and serves as an effective regularization technique. This method provides a viable alternative for reducing overfitting and improving predictive modeling accuracy in data-deficient medical fields, such as chronic kidney disease diagnosis. Full article

21 pages, 46330 KB  
Article
Bridging the Sim2Real Gap in UAV Remote Sensing: A High-Fidelity Synthetic Data Framework for Vehicle Detection
by Fuping Liao, Yan Liu, Wei Xu, Xingqi Wang, Gang Liu, Kun Yang and Jiahao Li
Remote Sens. 2026, 18(2), 361; https://doi.org/10.3390/rs18020361 - 21 Jan 2026
Abstract
Unmanned Aerial Vehicle (UAV) imagery has emerged as a critical data source in remote sensing, playing an important role in vehicle detection for intelligent traffic management and urban monitoring. Deep learning–based detectors rely heavily on large-scale, high-quality annotated datasets; however, collecting and labeling real-world UAV data are both costly and time-consuming. Owing to its controllability and scalability, synthetic data has become an effective supplement to address the scarcity of real data. Nevertheless, the significant domain gap between synthetic data and real data often leads to substantial performance degradation during real-world deployment. To address this challenge, this paper proposes a high-fidelity synthetic data generation framework designed to reduce the Sim2Real gap. First, UAV oblique photogrammetry is utilized to reconstruct real-world 3D models, ensuring geometric and textural authenticity; second, diversified rendering strategies that simulate real-world illumination and weather variations are adopted to cover a wide range of environmental conditions; finally, an automated ground-truth generation algorithm based on semantic masks is developed to achieve pixel-level precision and cost-efficient annotation. Based on this framework, we construct a synthetic dataset named UAV-SynthScene. Experimental results show that multiple mainstream detectors trained on UAV-SynthScene achieve competitive performance when evaluated on real data, while significantly enhancing robustness in long-tail distributions and improving generalization on real datasets. Full article
(This article belongs to the Special Issue Advances in Deep Learning Approaches: UAV Data Analysis)

17 pages, 1195 KB  
Review
Meat Analog Products: Current Worldwide Scenario and Future Perspectives in Consumption and Regulation
by Tatiana Barbieri Cochlar, Ziane da Conceição das Mercês, Natalia Maldaner Salvadori, Sabrina Melo Evangelista, Virgílio José Strasburg and Viviani Ruffo de Oliveira
Foods 2026, 15(2), 376; https://doi.org/10.3390/foods15020376 - 20 Jan 2026
Abstract
Interest in plant-based diets has grown considerably in different regions of the world. However, the lack of regulation for meat analogs may mislead consumers by suggesting that these products are the same as the meat they are replacing. Therefore, this study aims to analyze the current global scenario of meat analogs, discuss consumption changes and their regulation, and point out future perspectives for the sector. A narrative literature review was performed using scientific papers from the Virtual Health Library (BVS), LILACS, PubMed (NIH), Embase, Web of Science, Scopus, and official documents. Included studies were aligned with the research theme, concentrating on countries with regulations for plant-based analog products and those lacking or pursuing such regulations. Additionally, studies were selected based on the following criteria: original or review studies from different countries; papers discussing meat analogs in terms of consumption, sensory attributes, market dynamics, sustainability, regulation, or food safety; availability of full text; and publication dates ranging from 2015 to 2025. The data reveal that most of the assessed nations still lack specific regulations for meat analog products, adopting general labeling and naming standards that range from flexible approaches to strict restrictions. To conclude, the article highlights that meat substitutes are emerging as promising and sustainable options; however, their true consolidation is conditioned on the existence of more defined regulatory frameworks, increased consumer confidence, and market conditions that favor their large-scale adoption. Full article
(This article belongs to the Section Food Security and Sustainability)

32 pages, 8079 KB  
Article
Daytime Sea Fog Detection in the South China Sea Based on Machine Learning and Physical Mechanism Using Fengyun-4B Meteorological Satellite
by Jie Zheng, Gang Wang, Wenping He, Qiang Yu, Zijing Liu, Huijiao Lin, Shuwen Li and Bin Wen
Remote Sens. 2026, 18(2), 336; https://doi.org/10.3390/rs18020336 - 19 Jan 2026
Abstract
Sea fog is a major meteorological hazard that severely disrupts maritime transportation and economic activities in the South China Sea. As China’s next-generation geostationary meteorological satellite, Fengyun-4B (FY-4B) supplies continuous observations that are well suited for sea fog monitoring, yet a satellite-specific recognition method has been lacking. A key obstacle is the radiometric inconsistency between the Advanced Geostationary Radiation Imager (AGRI) sensors on FY-4A and FY-4B, compounded by the cessation of Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) observations, which prevents direct transfer of fog labels. To address these challenges and fill this research gap, we propose a machine learning framework that integrates cross-satellite radiometric recalibration and physical mechanism constraints for robust daytime sea fog detection. First, we innovatively apply a radiation recalibration transfer technique based on the radiative transfer model to normalize FY-4A/B radiances and, together with Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/fog classification products and ERA5 reanalysis, construct a highly consistent joint training set of FY-4A/B for the winter-spring seasons since 2019. Secondly, to enhance the model’s physical performance, we incorporate key physical parameters related to the sea fog formation process (such as temperature inversion, near-surface humidity, and wind field characteristics) as physical constraints, and combine them with multispectral channel sensitivity and the brightness temperature (BT) standard deviation that characterizes texture smoothness, resulting in an optimized 13-dimensional feature matrix. Using this, we optimize the sea fog recognition model parameters of decision tree (DT), random forest (RF), and support vector machine (SVM) with grid search and particle swarm optimization (PSO) algorithms. 
The validation results show that the RF model outperforms others with the highest overall classification accuracy (0.91) and probability of detection (POD, 0.81) that surpasses prior FY-4A-based work for the South China Sea (POD 0.71–0.76). More importantly, this study demonstrates that the proposed FY-4B framework provides reliable technical support for operational, continuous sea fog monitoring over the South China Sea. Full article
(This article belongs to the Section Atmospheric Remote Sensing)

21 pages, 5051 KB  
Article
High-Temperature Gelation and Structural Characterisation of Commercial Yellow Pea, Faba Bean, and Mungbean Protein–Starch Systems
by Niorie Moniharapon, Minqian Zhu, Lucinda Daborn and Sushil Dhital
Gels 2026, 12(1), 89; https://doi.org/10.3390/gels12010089 - 19 Jan 2026
Abstract
The heating of plant proteins at high temperatures is often associated with phase separation due to the aggregation of protein fractions, resulting in weak or discontinuous gels in liquid processing systems. This study examined the high-temperature gelation behaviour of commercial yellow pea, faba bean, and mungbean protein isolates and evaluated how different levels of dry-fractionated starch substitution tailor viscosity development and final gel strength. To characterise structural changes during heating, pasting behaviour was evaluated at 95 °C and 120 °C using a high-temperature Rapid Visco Analyser, while gel strength, temperature-ramp rheology, and thermal transitions were measured using a texture analyser, rheometer, and Differential Scanning Calorimetry. At 95 °C, all systems showed controlled pasting behaviour, with yellow pea exhibiting moderate viscosity development and clear recovery during cooling, mungbean generating the highest peak viscosity, and faba bean forming the strongest elastic network and gel structure. At 120 °C, yellow pea showed reduced stability, whereas faba bean and mungbean retained higher viscosity during heating. Starch addition improved the viscosity stability and gel strength across all proteins by limiting excessive aggregation and supporting network formation. These findings clarify how protein type and starch substitution affect high-temperature gelation, supporting the development of a heat-stable, clean-label plant-based gel system. Full article
(This article belongs to the Special Issue Gels: Diversity of Structures and Applications in Food Science)

26 pages, 3132 KB  
Article
An Unsupervised Cloud-Centric Intrusion Diagnosis Framework Using Autoencoder and Density-Based Learning
by Suresh K. S, Thenmozhi Elumalai, Radhakrishnan Rajamani, Anubhav Kumar, Balamurugan Balusamy, Sumendra Yogarayan and Kaliyaperumal Prabu
Future Internet 2026, 18(1), 54; https://doi.org/10.3390/fi18010054 - 19 Jan 2026
Abstract
Cloud computing environments generate high-dimensional, large-scale, and highly dynamic network traffic, making intrusion diagnosis challenging due to evolving attack patterns, severe traffic imbalance, and limited availability of labeled data. To address these challenges, this study presents an unsupervised, cloud-centric intrusion diagnosis framework that integrates autoencoder-based representation learning with density-based attack categorization. A dual-stage autoencoder is trained exclusively on benign traffic to learn compact latent representations and to identify anomalous flows using reconstruction-error analysis, enabling effective anomaly detection without prior attack labels. The detected anomalies are subsequently grouped using density-based learning to uncover latent attack structures and support fine-grained multiclass intrusion diagnosis under varying attack densities. Experiments conducted on the large-scale CSE-CIC-IDS2018 dataset demonstrate that the proposed framework achieves an anomaly detection accuracy of 99.46%, with high recall and low false-negative rates in the optimal latent-space configuration. The density-based classification stage achieves an overall multiclass attack classification accuracy of 98.79%, effectively handling both majority and minority attack categories. Clustering quality evaluation reports a Silhouette Score of 0.9857 and a Davies–Bouldin Index of 0.0091, indicating strong cluster compactness and separability. Comparative analysis against representative supervised and unsupervised baselines confirms the framework’s scalability and robustness under highly imbalanced cloud traffic, highlighting its suitability for future Internet cloud security ecosystems. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)

38 pages, 4273 KB  
Article
Transformer-Model-Based Automatic Aquifer Generalization Using Borehole Logs: A Case Study in a Mining Area in Xingtai, Hebei Province, China
by Yuanze Du, Hongrui Luo, Yihui Wang, Xinrui Li and Yingwang Zhao
Appl. Sci. 2026, 16(2), 983; https://doi.org/10.3390/app16020983 - 18 Jan 2026
Abstract
Generalized aquifers are widely used in various fields, such as groundwater use, mine water prevention and control, and geothermal energy. This paper presents a transformer-model-based automatic aquifer generalization method using borehole logs in scenarios with scarce experimental parameters. Relying only on basic borehole data, the method used an agent-assisted approach to extract and clean key lithological and coordinate information, which was then fused using a dual embedding mechanism. The model leveraged multi-head self-attention to calculate attention weights between the target stratum and its adjacent strata, capturing the potential contextual correlations in aquifer potential across strata. The resulting deep feature vectors from the transformer’s encoder were fed into a classification head to predict aquifer potential labels. Evaluation results demonstrated a model accuracy of 0.86, significantly outperforming the random classification baseline in precision, recall, the F1-score, and the kappa coefficient. Full article

17 pages, 4792 KB  
Article
A Deep Learning-Based Graphical User Interface for Predicting Corneal Ectasia Scores from Raw Optical Coherence Tomography Data
by Maziar Mirsalehi and Achim Langenbucher
Diagnostics 2026, 16(2), 310; https://doi.org/10.3390/diagnostics16020310 - 18 Jan 2026
Abstract
Background/Objectives: Keratoconus, a condition in which the cornea becomes thinner and steeper, can cause visual problems, particularly when it is progressive. Early diagnosis is important for preserving visual acuity. Raw data, unlike preprocessed data, are unaffected by software modifications. They retain their native structure across versions, providing consistency for analytical purposes. The objective of this study was to design a deep learning-based graphical user interface for predicting the corneal ectasia score using raw optical coherence tomography data. Methods: The graphical user interface was developed using Tkinter, a Python library for building graphical user interfaces. The user is allowed to select raw data from the cornea/anterior segment optical coherence tomography Casia2, which is generated in the 3dv format, from the local system. To view the predicted corneal ectasia score, the user must determine whether the selected 3dv file corresponds to the left or right eye. Extracted optical coherence tomography images are cropped, resized to 224 × 224 pixels and processed by the modified EfficientNet-B0 convolutional neural network to predict the corneal ectasia score. The predicted corneal ectasia score value is displayed along with a diagnosis: ‘No detectable ectasia pattern’ or ‘Suspected ectasia’ or ‘Clinical ectasia’. Performance metric values were rounded to four decimal places, and the mean absolute error value was rounded to two decimal places. Results: The modified EfficientNet-B0 obtained a mean absolute error of 6.65 when evaluated on the test dataset. For the two-class classification, it achieved an accuracy of 87.96%, a sensitivity of 82.41%, a specificity of 96.69%, a PPV of 97.52% and an F1 score of 89.33%. For the three-class classification, it attained a weighted-average F1 score of 84.95% and an overall accuracy of 84.75%. 
Conclusions: The graphical user interface outputs numerical ectasia scores, which convey more information than purely categorical labels. The graphical user interface enables consistent diagnostics, regardless of software updates, by using raw data from the Casia2. The successful use of raw optical coherence tomography data indicates that raw, rather than preprocessed, optical coherence tomography data can be used for diagnosing keratoconus. Full article
(This article belongs to the Special Issue Diagnosis of Corneal and Retinal Diseases)

17 pages, 591 KB  
Article
The Intricacy of Consuming Fast-Fashion Clothing: The Role of Guilt and Sustainability Values
by Judith Cavazos-Arroyo and Rogelio Puente-Díaz
Behav. Sci. 2026, 16(1), 138; https://doi.org/10.3390/bs16010138 - 18 Jan 2026
Abstract
The consumption of clothes creates paradoxes in which values, motives, and emotions interact to generate consumption experiences. To test some of these interactions, we conducted three correlational studies, studies 1, 2, and 3, one experiment, study 4, and one qualitative study, study 5. Study 1 found negative relationships between sustainability values and materialism and positive relationships between sustainable values and the preference for experiential purchases. Study 2 found positive relationships between two components of the slow-fashion movement, equity and exclusiveness, and guilt, and a negative relationship with functionality, another component of slow fashion. Study 3 found an indirect relationship between sustainable values and guilt through their positive and significant relationship with increased awareness of the environmental impact of the fast-fashion industry, supporting a mediation model. Study 4 found that participants were more likely, regardless of whether the purchase of clothing was labeled as fast fashion or not, to experience pride than guilt when recalling recent past purchases. Last, in study 5, we found that consumers buy clothes to look good and pay attention to quality and value without significant concerns for environmental issues. The implications for consumer behavior are discussed. Full article

15 pages, 16477 KB  
Article
Defect Classification Dataset and Algorithm for Magnetic Random Access Memory
by Hui Chen and Jianyi Yang
Mathematics 2026, 14(2), 323; https://doi.org/10.3390/math14020323 - 18 Jan 2026
Abstract
Defect categorization is essential to product quality assurance during the production of magnetic random access memory (MRAM). Nevertheless, traditional defect detection techniques still struggle in large-scale deployments: labeled examples with complicated defect shapes are scarce, which results in inadequate identification accuracy. To overcome these problems, we create the MARMset dataset, which consists of 39,822 images covering 14 common defect types for MRAM defect detection and classification. Furthermore, we present a baseline framework (GAGBnet) for MRAM defect classification, comprising a global attention module (GAM) and an attention-guided block (AGB). First, the GAM is introduced to enhance the model’s feature extraction capability. Second, inspired by the feature enhancement strategy, the AGB incorporates an attention-guided mechanism during feature fusion to remove redundant information and focus on critical features. Finally, the experimental results show that the average accuracy of this method on MARMset reaches 92.90%. In addition, we test on the NEU-CLS dataset to evaluate cross-dataset generalization, achieving an average accuracy of 98.60%. Full article
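The abstract describes the GAM only at a high level, but the general pattern it names — pool features globally, score channels, and reweight the feature map so salient channels dominate — can be sketched as below. This is an illustrative channel-attention sketch under our own assumptions, not the paper's actual GAGBnet architecture; the projection matrix `w` stands in for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(feat, w):
    """Channel-attention sketch: pool spatially, score channels, reweight.

    feat: (C, H, W) feature map; w: (C, C) projection (random stand-in here
    for learned weights).
    """
    pooled = feat.mean(axis=(1, 2))      # global average pool -> (C,)
    scores = softmax(w @ pooled)         # per-channel attention weights, sum to 1
    return feat * scores[:, None, None]  # reweighted feature map, same shape

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))
w = rng.normal(size=(8, 8))
out = global_attention(feat, w)
print(out.shape)  # (8, 4, 4)
```

In a trained network the reweighting would sit inside a larger block (convolutions before, fusion after), with `w` learned by backpropagation rather than sampled at random.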
(This article belongs to the Section E1: Mathematics and Computer Science)

23 pages, 1099 KB  
Article
Effect of Additive Removal on the Physicochemical Properties of Gluten-Free Bread
by Ramón Torres-Pérez, Marta Maravilla Siguero-Tudela, Tania Doménech, Purificación García-Segovia, Javier Martínez-Monzó and Marta Igual
Foods 2026, 15(2), 338; https://doi.org/10.3390/foods15020338 - 16 Jan 2026
Abstract
The growing demand for clean-label gluten-free bread is driving a reduction in additives, although their technological roles are not yet fully understood. This study evaluated the effect of progressively removing monocalcium phosphate, sodium bicarbonate, and mono- and diglycerides (MDG) on the quality of gluten-free bread during storage. Four formulations were prepared: a reference (RF) containing all additives, and three reduced-additive versions without monocalcium phosphate (FA), without monocalcium phosphate and sodium bicarbonate (FB), or without any additives (FC). Specific volume, moisture, water activity, crumb structure, color, and texture were assessed on days 1, 8, 15, and 22. Additive removal significantly affected bread quality: the formulation without leavening agents (FB) showed the lowest specific volume (≈2.8 cm3/g) and the highest crumb hardness (≈38 N), whereas the additive-free formulation (FC) achieved the highest specific volume (≈3.3 cm3/g) and a crumb structure comparable to the reference bread, with a higher void fraction (≈28%). During storage, all breads exhibited increasing hardness, although FC did not stale faster than RF, likely due to its higher specific volume after baking. The results confirm that sodium bicarbonate and monocalcium phosphate are essential for gas generation and structural development, while removal of MDG improved loaf volume without intensifying deterioration. Full article
(This article belongs to the Section Grain)

22 pages, 570 KB  
Article
Machines Prefer Humans as Literary Authors: Evaluating Authorship Bias in Large Language Models
by Marco Rospocher, Massimo Salgaro and Simone Rebora
Information 2026, 17(1), 95; https://doi.org/10.3390/info17010095 - 16 Jan 2026
Abstract
Automata and artificial intelligence (AI) have long occupied a central place in cultural and artistic imagination, and the recent proliferation of AI-generated artworks has intensified debates about authorship, creativity, and human agency. Empirical studies show that audiences often perceive AI-generated works as less authentic or emotionally resonant than human creations, with authorship attribution strongly shaping esthetic judgments. Yet little attention has been paid to how AI systems themselves evaluate creative authorship. This study investigates how large language models (LLMs) evaluate literary quality under different framings of authorship—Human, AI, or Human+AI collaboration. Using a questionnaire-based experimental design, we prompted four instruction-tuned LLMs (ChatGPT 4, Gemini 2, Gemma 3, and LLaMA 3) to read and assess three short stories in Italian, originally generated by ChatGPT 4 in the narrative style of Roald Dahl. For each story × authorship condition × model combination, we collected 100 questionnaire completions, yielding 3600 responses in total. Across esthetic, literary, and inclusiveness dimensions, the stated authorship systematically conditioned model judgments: identical stories were consistently rated more favorably when framed as human-authored or human–AI co-authored than when labeled as AI-authored, revealing a robust negative bias toward AI authorship. Model-specific analyses further indicate distinctive evaluative profiles and inclusiveness thresholds across proprietary and open-source systems. Our findings extend research on attribution bias into the computational realm, showing that LLM-based evaluations reproduce human-like assumptions about creative agency and literary value. We publicly release all materials to facilitate transparency and future comparative work on AI-mediated literary evaluation. Full article
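The study's factorial design (3 stories × 3 authorship framings × 4 models × 100 questionnaire completions) can be checked with a simple enumeration. The labels below are taken from the abstract; the actual prompting and scoring step is omitted:

```python
from itertools import product

stories = ["story_1", "story_2", "story_3"]    # three Dahl-style stories
authorship = ["Human", "AI", "Human+AI"]       # stated-authorship framings
models = ["ChatGPT 4", "Gemini 2", "Gemma 3", "LLaMA 3"]
n_completions = 100                            # questionnaire runs per cell

# One cell per story x authorship x model combination
cells = list(product(stories, authorship, models))
total = len(cells) * n_completions
print(len(cells), total)  # 36 3600
```

The 36 cells × 100 completions recover the 3600 responses reported in the abstract; in the real pipeline each cell would trigger a model call with the story framed under the stated authorship condition.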
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)
