Search Results (1,158)

Search Parameters:
Keywords = fitness classification

17 pages, 1576 KiB  
Article
Research on the Optimization Method of Injection Molding Process Parameters Based on the Improved Particle Swarm Optimization Algorithm
by Zhenfa Yang, Xiaoping Lu, Lin Wang, Lucheng Chen and Yu Wang
Processes 2025, 13(8), 2491; https://doi.org/10.3390/pr13082491 - 7 Aug 2025
Abstract
Optimization of injection molding process parameters is essential for improving product quality and production efficiency. Traditional methods, which rely heavily on operator experience, often result in inconsistencies, high time consumption, high defect rates, and suboptimal energy consumption. In this study, an improved particle swarm optimization (IPSO) algorithm was proposed, integrating dynamic inertia weight adjustment, adaptive acceleration coefficients, and position constraints to address premature convergence and enhance global search capabilities. A dual-model architecture was implemented: a constraint validation mechanism based on a support vector machine (SVM) was enforced at each iteration to ensure stepwise quality compliance, while a fitness function derived from extreme gradient boosting (XGBoost) was formulated to minimize cycle time as the optimization objective. The results demonstrated that the average injection cycle time was reduced by 9.41% while ensuring that the product remained qualified. The SVM and XGBoost models achieved high performance metrics (accuracy: 0.92; R²: 0.93; RMSE: 1.05), confirming their robustness in quality classification and cycle time prediction. This method provides a systematic and data-driven solution for multi-objective optimization in injection molding, significantly improving production efficiency and energy utilization.
(This article belongs to the Section Manufacturing Processes and Systems)
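
The abstract outlines the algorithmic ingredients (dynamic inertia weight, adaptive acceleration coefficients, position constraints, an SVM feasibility gate, and an XGBoost fitness surrogate). A minimal Python sketch of such a loop is shown below; the parameter schedules, bounds, and stand-in objective are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an "improved PSO" loop: linearly decaying inertia,
# adaptive acceleration coefficients, box constraints, and a per-iteration
# feasibility check (standing in for the SVM quality classifier) wrapped
# around a surrogate fitness (standing in for the XGBoost cycle-time model).
import numpy as np

def ipso_minimize(fitness, feasible, bounds, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    for t in range(n_iter):
        w = 0.9 - 0.5 * t / n_iter                 # dynamic inertia weight
        c1 = 2.5 - 2.0 * t / n_iter                # adaptive cognitive coefficient
        c2 = 0.5 + 2.0 * t / n_iter                # adaptive social coefficient
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # position constraint
        for i, p in enumerate(x):
            if not feasible(p):                    # SVM-style quality gate
                continue
            f = fitness(p)                         # XGBoost-style cycle-time surrogate
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = p, f
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage with stand-in models (a sphere objective, every point feasible).
best_x, best_f = ipso_minimize(lambda p: float(np.sum(p ** 2)),
                               lambda p: True,
                               bounds=[(-5, 5)] * 4)
```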

20 pages, 1265 KiB  
Article
Validation of the Player Personality and Dynamics Scale
by Ayose Lomba Perez, Juan Carlos Martín-Quintana, Jesus B. Alonso-Hernandez and Iván Martín-Rodríguez
Appl. Sci. 2025, 15(15), 8714; https://doi.org/10.3390/app15158714 - 6 Aug 2025
Abstract
This study presents the validation of the Player Personality and Dynamics Scale (PPDS), designed to identify player profiles in educational gamification contexts with narrative elements. Through a sample of 635 participants, a questionnaire was developed and applied, covering sociodemographic data, lifestyle habits, gaming practices, and a classification system of 40 items on a six-point Likert scale. The results of the factor analysis confirm a structure of five factors: Toxic Profile, Joker Profile, Tryhard Profile, Aesthetic Profile, and Coacher Profile, with high fit and reliability indices (RMSEA = 0.06; CFI = 0.95; TLI = 0.91). The resulting classification enables the design of personalized gamified experiences that enhance learning and interaction in the classroom, highlighting the importance of understanding players’ motivations to better adapt educational dynamics. Applying this scale fosters meaningful learning through the creation of narratives tailored to students’ individual preferences.
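
For readers who want to see the mechanics, a minimal sketch of a five-factor analysis over simulated 40-item Likert responses follows, using the factor_analyzer package as an assumed stand-in for the authors' pipeline; confirmatory fit indices such as RMSEA, CFI, and TLI would come from a separate confirmatory model, which is omitted here.

```python
# Minimal factor-analysis sketch for a Likert-item scale, assuming a
# DataFrame `responses` with one column per item (here simulated).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 7, size=(635, 40)),
                         columns=[f"item{i}" for i in range(1, 41)])

fa = FactorAnalyzer(n_factors=5, rotation="oblimin")  # five hypothesised profiles
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
variance = fa.get_factor_variance()                   # variance explained per factor
print(loadings.round(2).head())
```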

14 pages, 719 KiB  
Article
Recursive Interplay of Family and Biological Dynamics: Adults with Type 1 Diabetes Mellitus Under the Spotlight
by Helena Jorge, Bárbara Regadas Correia, Miguel Castelo-Branco and Ana Paula Relvas
Diabetology 2025, 6(8), 81; https://doi.org/10.3390/diabetology6080081 - 6 Aug 2025
Abstract
Objectives: Diabetes Mellitus involves demanding challenges that interfere with family functioning and routines. In turn, the family and social context impacts individual glycemic control. This study aims to identify this recursive interplay, i.e., the mutual influences of family systems and diabetes management. Design: Data were collected through a cross-sectional design comparing patients, aged 22–55, with and without metabolic control. Methods: Participants filled out a set of self-report measures covering sociodemographic, clinical, and family systems assessment. Patients (n = 91) were also invited to describe their perception of how disease management interferes with family functioning. We first examined the extent to which family variables grouped the dataset, to determine whether there were similarities and dissimilarities that fit our initial classification of diabetic groups. Results: Cluster analysis identified a two-cluster solution, validating the initial classification of two groups of patients: 49 with metabolic control (MC) and 42 without metabolic control (NoMC). Independent-sample tests suggested statistically significant differences between groups in the family subscales of family difficulties and family communication (p < 0.05). Binary logistic regression shed light on predictors of lack of metabolic control in four models: sociodemographic data, clinical data, SCORE-15/Congruence Scale, and eating behavior. Furthermore, the groups differed in family support and in the level and sources of family conflict caused by diabetes management issues. Considering only patients who had cohabited with a partner for more than one year (N = 44), NoMC patients scored lower on marital functioning in all categories (p < 0.05). Discussion: The family-chronic illness interaction plays a significant role in a patient’s adherence to treatment. This study highlights the Standards of Medical Care in Diabetes with regard to considering caregivers and family members in diabetes care.
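
A minimal sketch of the two-step analysis described (a two-cluster solution over family-system variables, followed by logistic regression on metabolic control) is given below; the column names, simulated data, and use of KMeans are assumptions standing in for the study's actual clustering and modeling choices.

```python
# Sketch of a two-cluster grouping over family-system variables, then a
# logistic model of metabolic control. Data and columns are illustrative.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "score15_difficulties": rng.normal(size=91),
    "score15_communication": rng.normal(size=91),
    "congruence": rng.normal(size=91),
    "metabolic_control": rng.integers(0, 2, size=91),  # 1 = controlled (MC)
})

X = StandardScaler().fit_transform(df[["score15_difficulties",
                                       "score15_communication",
                                       "congruence"]])
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# How well does the data-driven grouping line up with MC / NoMC?
print(pd.crosstab(df["cluster"], df["metabolic_control"]))

# Logistic model of metabolic control from family-system variables.
logit = LogisticRegression().fit(X, df["metabolic_control"])
print(dict(zip(["difficulties", "communication", "congruence"],
               logit.coef_[0].round(2))))
```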

23 pages, 8569 KiB  
Article
Evidential K-Nearest Neighbors with Cognitive-Inspired Feature Selection for High-Dimensional Data
by Yawen Liu, Yang Zhang, Xudong Wang and Xinyuan Qu
Big Data Cogn. Comput. 2025, 9(8), 202; https://doi.org/10.3390/bdcc9080202 - 6 Aug 2025
Abstract
The Evidential K-Nearest Neighbor (EK-NN) classifier has demonstrated robustness in handling incomplete and uncertain data; however, its application in high-dimensional big data for feature selection, such as genomic datasets with tens of thousands of gene features, remains underexplored. Our proposed Granular–Elastic Evidential K-Nearest Neighbor (GEK-NN) approach addresses this gap. In the context of big data, GEK-NN integrates an Elastic Net within the Genetic Algorithm’s fitness function to efficiently sift through vast amounts of data, identifying relevant feature subsets. This process mimics the human cognitive behavior of filtering and refining information, similar to concepts in cognitive computing. A granularity metric is further employed to optimize subset size, maximizing its impact. GEK-NN consists of two crucial phases. Initially, an Elastic Net-based feature evaluation is conducted to pinpoint relevant features from the high-dimensional data. Subsequently, granularity-based optimization refines the subset size, adapting to the complexity of big data. Before application to genomic big data, experiments on UCI datasets demonstrated the feasibility and effectiveness of GEK-NN. By using an Evidence Theory framework, GEK-NN overcomes feature-selection challenges in both low-dimensional UCI datasets and high-dimensional genomic big data, significantly enhancing pattern recognition and classification accuracy. Comparative analyses with existing EK-NN feature-selection methods, using both UCI and high-dimensional gene datasets, underscore GEK-NN’s superiority in handling big data for feature selection and classification. These results indicate that GEK-NN not only enriches EK-NN applications but also offers a cognitive-inspired solution for complex gene data analysis, effectively tackling high-dimensional feature-selection challenges in the realm of big data.
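
A rough sketch of the core idea (a genetic search over feature subsets whose fitness uses Elastic Net screening plus a size penalty as a crude granularity term) follows; the data, parameters, and fitness definition are illustrative assumptions, not the GEK-NN code.

```python
# Illustrative Elastic Net-guided genetic search over feature subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Reward subsets a K-NN classifies well, lightly penalising subset size
    (a crude 'granularity' term), after Elastic Net coefficient screening."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -np.inf
    enet = ElasticNet(alpha=0.05).fit(X[:, idx], y)
    keep = idx[np.abs(enet.coef_) > 1e-3]            # Elastic Net screening
    if keep.size == 0:
        return -np.inf
    acc = cross_val_score(KNeighborsClassifier(), X[:, keep], y, cv=3).mean()
    return acc - 0.001 * keep.size

pop = rng.integers(0, 2, size=(20, X.shape[1]))       # random initial population
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    flip = rng.random(children.shape) < 0.02          # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best)[:10], "...")
```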

29 pages, 12050 KiB  
Article
PolSAR-SFCGN: An End-to-End PolSAR Superpixel Fully Convolutional Generation Network
by Mengxuan Zhang, Jingyuan Shi, Long Liu, Wenbo Zhang, Jie Feng, Jin Zhu and Boce Chu
Remote Sens. 2025, 17(15), 2723; https://doi.org/10.3390/rs17152723 - 6 Aug 2025
Abstract
Polarimetric Synthetic Aperture Radar (PolSAR) image classification is one of the most important applications in remote sensing. Effective superpixel generation approaches can improve the efficiency of the subsequent classification task and suppress the influence of speckle noise to some extent. Most classical PolSAR superpixel generation approaches rely on manually extracted features, and some consider only pseudocolor images; they do not make full use of polarimetric information and do not necessarily produce good enough superpixels. Deep learning methods can extract effective deep features, but they are difficult to combine with superpixel generation in a truly end-to-end manner. To address these issues, this study proposes an end-to-end fully convolutional superpixel generation network for PolSAR images. It integrates the extraction of polarimetric features and the generation of PolSAR superpixels into a single step: superpixels are generated directly from deep polarimetric features, with no traditional clustering process, which improves both the quality and the efficiency of superpixel generation. Experimental results on various PolSAR datasets show that the proposed method achieves impressive superpixel segmentation by fitting the real boundaries of different types of ground objects effectively and efficiently. It also achieves excellent classification performance when connected to a very simple classification network, which helps improve the efficiency of subsequent PolSAR image classification tasks.
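
A minimal PyTorch sketch of the underlying idea (a fully convolutional network that maps multi-channel polarimetric input to per-pixel soft associations with the nine surrounding superpixel grid cells) is shown below; channel counts and architecture are assumptions for illustration, not PolSAR-SFCGN itself.

```python
# Generic fully convolutional superpixel-association sketch for PolSAR input.
import torch
import torch.nn as nn

class SuperpixelFCN(nn.Module):
    def __init__(self, in_channels=9):           # e.g., elements of the coherency matrix
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.assoc = nn.Conv2d(64, 9, 3, padding=1)   # 9-way association logits

    def forward(self, x):
        q = self.assoc(self.features(x))
        return torch.softmax(q, dim=1)            # soft pixel-to-superpixel assignment

model = SuperpixelFCN()
dummy = torch.randn(1, 9, 128, 128)               # one PolSAR patch
print(model(dummy).shape)                         # torch.Size([1, 9, 128, 128])
```
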
16 pages, 7134 KiB  
Article
The Impact of an Object’s Surface Material and Preparatory Actions on the Accuracy of Optical Coordinate Measurement
by Danuta Owczarek, Ksenia Ostrowska, Jerzy Sładek, Adam Gąska, Wiktor Harmatys, Krzysztof Tomczyk, Danijela Ignjatović and Marek Sieja
Materials 2025, 18(15), 3693; https://doi.org/10.3390/ma18153693 - 6 Aug 2025
Abstract
Optical coordinate measurement is a universal technique that aligns with the rapid development of industrial technologies and new materials. Nevertheless, can this technique be consistently effective when applied to the precise measurement of all types of materials? As shown in this article, an analysis of optical measurement systems reveals that some materials cause difficulties during the scanning process. This article details the matting process, which, as demonstrated, yields lower measurement uncertainty than the pre-matting state, and identifies materials for which applying a matting spray significantly improves measurement quality. The authors propose a classification of materials into easy-to-scan and hard-to-scan groups, along with specific procedures to improve measurements, especially for the latter. Tests were conducted in an accredited Laboratory of Coordinate Metrology using an articulated arm with a laser probe. Measured objects included spheres made of ceramic, tungsten carbide (including a matte finish), aluminum oxide, titanium nitride-coated steel, and photopolymer resin, with reference diameters established by a high-precision Leitz PMM 12106 coordinate measuring machine. Diameters were determined from point clouds obtained via optical measurements using the best-fit method, both before and after matting. Color measurements using a spectrocolorimeter supplemented this study to assess the effect of matting on surface color. The results revealed correlations between material type and measurement accuracy.
(This article belongs to the Section Optical and Photonic Materials)
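
The diameters come from a best-fit sphere over the scanned point cloud; a self-contained least-squares sphere fit on synthetic data is sketched below to illustrate that step (the data and noise level are assumptions).

```python
# Least-squares ("best-fit") sphere from a point cloud, the kind of fit used
# to recover a sphere diameter from scanned points; the data here are synthetic.
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. Returns (center, radius)."""
    p = np.asarray(points, dtype=float)
    A = np.column_stack([2 * p, np.ones(len(p))])   # from |x|^2 = 2 c.x + (r^2 - |c|^2)
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, t = sol[:3], sol[3]
    radius = np.sqrt(t + center @ center)
    return center, radius

# Synthetic noisy scan of a 25 mm diameter sphere.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
cloud = np.array([10.0, -4.0, 2.5]) + 12.5 * dirs + rng.normal(scale=0.01, size=dirs.shape)

center, radius = fit_sphere(cloud)
print(f"fitted diameter: {2 * radius:.4f} mm")      # close to 25.00 mm
```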

17 pages, 545 KiB  
Article
Concordance Index-Based Comparison of Inflammatory and Classical Prognostic Markers in Untreated Hepatocellular Carcinoma
by Natalia Afonso-Luis, Irene Monescillo-Martín, Joaquín Marchena-Gómez, Pau Plá-Sánchez, Francisco Cruz-Benavides and Carmen Rosa Hernández-Socorro
J. Clin. Med. 2025, 14(15), 5514; https://doi.org/10.3390/jcm14155514 - 5 Aug 2025
Abstract
Background/Objectives: Inflammation-based markers have emerged as potential prognostic tools in hepatocellular carcinoma (HCC), but comparative data with classical prognostic factors in untreated HCC are limited. This study aimed to evaluate and compare the prognostic performance of inflammatory and conventional markers using Harrell’s concordance index (C-index). Methods: This retrospective study included 250 patients with untreated HCC. Prognostic variables included age, BCLC stage, Child–Pugh classification, Milan criteria, MELD score, AFP, albumin, Charlson comorbidity index, and the inflammation-based markers neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), monocyte-to-lymphocyte ratio (MLR), Systemic Inflammation Response Index (SIRI), and Systemic Immune-inflammation Index (SIII). Survival was analyzed using Cox regression. Predictive performance was assessed using the C-index, Akaike Information Criterion (AIC), and likelihood ratio tests. Results: Among the classical markers, BCLC showed the highest predictive performance (C-index: 0.717), while NLR ranked highest among the inflammatory markers (C-index: 0.640), above the MELD score and Milan criteria. In multivariate analysis, NLR ≥ 2.3 remained an independent predictor of overall survival (HR: 1.787; 95% CI: 1.264–2.527; p < 0.001), along with BCLC stage, albumin, Charlson index, and Milan criteria. Including NLR in the model modestly improved the C-index (from 0.781 to 0.794) but significantly improved model fit (Δ–2LL = 10.75; p = 0.001; lower AIC). Conclusions: NLR is an accessible, cost-effective, and independent prognostic marker for overall survival in untreated HCC. It shows discriminative power comparable to or greater than most conventional predictors and may complement classical stratification tools for HCC.
(This article belongs to the Section General Surgery)
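
Harrell's C-index is the comparison metric throughout; a small self-contained implementation on toy data is sketched below (the real study computed it over Cox models of the listed covariates).

```python
# Harrell's concordance index: among comparable patient pairs, the fraction
# where the patient assigned higher risk is the one who experiences the event
# earlier. Toy data; the risk score could be NLR or a Cox linear predictor.
import numpy as np

def harrell_c(time, event, risk):
    """time: follow-up times; event: 1 = death observed; risk: higher = worse."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant = permissible = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                      # pairs are anchored on an observed event
        comparable = (time > time[i]) | ((time == time[i]) & (event == 0))
        permissible += comparable.sum()
        concordant += (risk[i] > risk[comparable]).sum() \
                      + 0.5 * (risk[i] == risk[comparable]).sum()
    return concordant / permissible

time  = [5, 8, 12, 3, 20, 15]
event = [1, 0, 1, 1, 0, 1]
risk  = [2.1, 0.5, 1.0, 3.0, 0.2, 0.8]
print(round(harrell_c(time, event, risk), 3))
```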

24 pages, 3291 KiB  
Article
Machine Learning Subjective Opinions: An Application in Forensic Chemistry
by Anuradha Akmeemana and Michael E. Sigman
Algorithms 2025, 18(8), 482; https://doi.org/10.3390/a18080482 - 4 Aug 2025
Viewed by 134
Abstract
Simulated data created in silico using a previously reported method were sampled by bootstrapping to generate data sets for training multiple copies of an ensemble learner (i.e., a machine learning (ML) method). The posterior probabilities of class membership obtained by applying the ensemble of ML models to previously unseen validation data were fitted to a beta distribution. The shape parameters of the fitted distribution were used to calculate the subjective opinion of sample membership in one of two mutually exclusive classes. The subjective opinion consists of belief, disbelief, and uncertainty masses. A subjective opinion for each validation sample allows identification of high-uncertainty predictions. The projected probabilities of the validation opinions were used to calculate log-likelihood ratio scores and generate receiver operating characteristic (ROC) curves from which an opinion-supported decision can be made. Three very different ML models, linear discriminant analysis (LDA), random forest (RF), and support vector machines (SVM), were applied to the two-state classification problem in the analysis of forensic fire debris samples. For each ML method, a set of 100 ML models was trained on data sets bootstrapped from 60,000 in silico samples. The impact of training data set size on opinion uncertainty and ROC area under the curve (AUC) was studied. The median uncertainty for the validation data was smallest for the LDA models and largest for the SVM models, and it continually decreased as the size of the training data set increased for all ML methods. The AUC for ROC curves based on projected probabilities was largest for the RF model and smallest for the LDA method. The ROC AUC was statistically unchanged for LDA at training data sets exceeding 200 samples; however, the AUC increased with increasing sample size for the RF and SVM methods. The SVM method, the slowest to train, was limited to a maximum of 20,000 training samples. All three ML methods showed increasing performance when the validation data was limited to higher ignitable liquid contributions. An ensemble of 100 RF models, each trained on 60,000 in silico samples, performed the best, with a median uncertainty of 1.39 × 10⁻² and an ROC AUC of 0.849 for all validation samples.
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))
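
A minimal sketch of the central step (fitting a beta distribution to an ensemble's posterior probabilities for one sample and mapping the shape parameters to belief, disbelief, and uncertainty masses) follows; it uses the standard subjective-logic mapping with prior weight W = 2 and base rate a = 0.5, and the simulated ensemble outputs are stand-ins.

```python
# Beta fit over one sample's 100 ensemble probabilities, then conversion of
# the shape parameters to a binomial subjective opinion (b, d, u) and the
# projected probability used for LLR / ROC scoring.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
probs = np.clip(rng.normal(0.72, 0.08, size=100), 1e-3, 1 - 1e-3)  # simulated ensemble

alpha, beta, _, _ = stats.beta.fit(probs, floc=0, fscale=1)  # fix support to [0, 1]

W, a = 2.0, 0.5                                  # non-informative prior weight, base rate
r = max(alpha - W * a, 0.0)                      # pseudo-evidence for the class
s = max(beta - W * (1 - a), 0.0)                 # pseudo-evidence against the class
belief      = r / (r + s + W)
disbelief   = s / (r + s + W)
uncertainty = W / (r + s + W)
projected   = belief + a * uncertainty

print(f"b={belief:.3f} d={disbelief:.3f} u={uncertainty:.3f} P={projected:.3f}")
```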

27 pages, 7810 KiB  
Article
Mutation Interval-Based Segment-Level SRDet: Side Road Detection Based on Crowdsourced Trajectory Data
by Ying Luo, Fengwei Jiao, Longgang Xiang, Xin Chen and Meng Wang
ISPRS Int. J. Geo-Inf. 2025, 14(8), 299; https://doi.org/10.3390/ijgi14080299 - 31 Jul 2025
Viewed by 217
Abstract
Accurate side road detection is essential for traffic management, urban planning, and vehicle navigation. However, existing research mainly focuses on road network construction, lane extraction, and intersection identification, while fine-grained side road detection remains underexplored. Therefore, this study proposes a road segment-level side road detection method based on crowdsourced trajectory data. First, considering the geometric and dynamic characteristics of trajectories, SRDet introduces a trajectory lane-change pattern recognition method based on mutation intervals to distinguish the heterogeneity of lane-change behaviors between main and side roads. Secondly, combining geometric features with spatial statistical theory, SRDet constructs multimodal features for trajectories and road segments, and proposes a potential side road segment classification model based on random forests to achieve precise detection of side road segments. Finally, based on mutation intervals and potential side road segments, SRDet utilizes density peak clustering to identify main and side road access points, completing the fitting of side roads. Experiments were conducted using 2021 Beijing trajectory data. The results show that SRDet achieves precision and recall rates of 84.6% and 86.8%, respectively. This demonstrates the superior performance of SRDet in side road detection across different areas, providing support for the precise updating of urban road navigation information.
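
A minimal sketch of the segment-classification step (multimodal per-segment features fed to a random forest that flags potential side-road segments) is shown below; the feature names and simulated data are assumptions, not SRDet's feature set.

```python
# Random-forest classification of candidate side-road segments from
# per-segment geometry and trajectory statistics (all columns illustrative).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
segments = pd.DataFrame({
    "parallel_offset_m":  rng.uniform(0, 15, 500),   # lateral offset from the main road
    "heading_std_deg":    rng.uniform(0, 20, 500),
    "speed_mean_kmh":     rng.uniform(10, 80, 500),
    "lane_change_rate":   rng.uniform(0, 1, 500),    # e.g., from mutation-interval detection
    "trajectory_density": rng.uniform(0, 100, 500),
})
is_side_road = ((segments["parallel_offset_m"] > 6)
                & (segments["speed_mean_kmh"] < 45)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, segments, is_side_road, cv=5, scoring="f1").mean())
```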

24 pages, 4103 KiB  
Article
SARS-CoV-2 Remdesivir Exposure Leads to Different Evolutionary Pathways That Converge in Moderate Levels of Drug Resistance
by Carlota Fernandez-Antunez, Line A. Ryberg, Kuan Wang, Long V. Pham, Lotte S. Mikkelsen, Ulrik Fahnøe, Katrine T. Hartmann, Henrik E. Jensen, Kenn Holmbeck, Jens Bukh and Santseharay Ramirez
Viruses 2025, 17(8), 1055; https://doi.org/10.3390/v17081055 - 29 Jul 2025
Viewed by 424
Abstract
Various SARS-CoV-2 remdesivir resistance-associated substitutions (RAS) have been reported, but a comprehensive comparison of their resistance levels is lacking. We identified novel RAS and performed head-to-head comparisons with known RAS in Vero E6 cells. A remdesivir escape polyclonal virus exhibited a 3.6-fold increase in remdesivir EC50 and mutations throughout the genome, including substitutions in nsp12 (E796D) and nsp14 (A255S). However, in reverse-genetics infectious assays, viruses harboring both these substitutions exhibited only a slight decrease in remdesivir susceptibility (1.3-fold increase in EC50). The nsp12-E796D substitution did not impair viral fitness (Vero E6 cells or Syrian hamsters) and was reported in a remdesivir-treated COVID-19 patient. In replication assays, a subgenomic replicon containing nsp12-E796D+nsp14-A255S led to a 16.1-fold increase in replication under remdesivir treatment. A comparison with known RAS showed that S759A, located in the active site of nsp12, conferred the highest remdesivir resistance (106.1-fold increase in replication). Nsp12-RAS V166A/L, V792I, E796D or C799F, all adjacent to the active site, caused intermediate resistance (2.0- to 11.5-fold), whereas N198S, D484Y, or E802D, located farther from the active site, showed no resistance (≤2.0-fold). In conclusion, our classification system, correlating replication under remdesivir treatment with RAS location in nsp12, shows that most nsp12-RAS cause moderate resistance.
(This article belongs to the Special Issue Viral Resistance)

28 pages, 5373 KiB  
Article
Transfer Learning Based on Multi-Branch Architecture Feature Extractor for Airborne LiDAR Point Cloud Semantic Segmentation with Few Samples
by Jialin Yuan, Hongchao Ma, Liang Zhang, Jiwei Deng, Wenjun Luo, Ke Liu and Zhan Cai
Remote Sens. 2025, 17(15), 2618; https://doi.org/10.3390/rs17152618 - 28 Jul 2025
Viewed by 315
Abstract
Existing deep learning-based Airborne Laser Scanning (ALS) point cloud semantic segmentation methods require a large amount of labeled data for training, which is not always feasible in practice, and insufficient training data may lead to over-fitting. To address this issue, we propose a novel Multi-branch Feature Extractor (MFE) and a three-stage transfer learning strategy that conducts pre-training on multi-source ALS data and transfers the model to another dataset with few samples, thereby improving the model’s generalization ability and reducing the need for manual annotation. The proposed MFE is based on a novel multi-branch architecture integrating a Neighborhood Embedding Block (NEB) and a Point Transformer Block (PTB); it aims to extract heterogeneous features (e.g., geometric features, reflectance features, and internal structural features) by leveraging the attributes contained in ALS point clouds. To address model transfer, a three-stage strategy was developed: (1) a pre-training subtask pre-trains the proposed MFE when the source domain consists of multi-source ALS data, overcoming parameter differences; (2) a domain adaptation subtask aligns cross-domain feature distributions between the source and target domains; and (3) an incremental learning subtask enables continuous learning of novel categories in the target domain while avoiding catastrophic forgetting. Experiments were conducted with a source domain consisting of the DALES and Dublin datasets and a target domain consisting of the ISPRS benchmark dataset. The results show that the proposed method achieved the highest OA of 85.5% and an average F1 score of 74.0% using only 10% of the training samples, meaning the proposed framework can reduce manual annotation by 90% while maintaining competitive classification accuracy.
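
A minimal PyTorch sketch of the transfer pattern described (pre-train a point-cloud feature extractor, then freeze it and train only a new classification head on the small target set) follows; the PointNet-style per-point MLP, checkpoint path, and class count are illustrative assumptions, not the paper's MFE.

```python
# Generic freeze-the-backbone transfer sketch for per-point classification.
import torch
import torch.nn as nn

class PointFeatureExtractor(nn.Module):
    def __init__(self, in_dim=6):                  # e.g., x, y, z, intensity, returns...
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU())
    def forward(self, pts):                        # (B, N, in_dim) -> (B, N, 128)
        return self.mlp(pts)

backbone = PointFeatureExtractor()
# Pre-training on the source domain would happen here, e.g.
# backbone.load_state_dict(torch.load("pretrained_mfe.pt"))  # hypothetical checkpoint

for p in backbone.parameters():                    # keep the source-domain features
    p.requires_grad = False

num_target_classes = 9                             # e.g., ISPRS benchmark categories
head = nn.Linear(128, num_target_classes)          # new per-point classification head
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

pts = torch.randn(2, 4096, 6)                      # a tiny fake target batch
labels = torch.randint(0, num_target_classes, (2, 4096))
logits = head(backbone(pts))                       # (B, N, C)
loss = criterion(logits.reshape(-1, num_target_classes), labels.reshape(-1))
loss.backward()
optimizer.step()
```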

18 pages, 1687 KiB  
Article
Beyond Classical AI: Detecting Fake News with Hybrid Quantum Neural Networks
by Volkan Altıntaş
Appl. Sci. 2025, 15(15), 8300; https://doi.org/10.3390/app15158300 - 25 Jul 2025
Viewed by 231
Abstract
The advent of quantum computing has introduced new opportunities for enhancing classical machine learning architectures. In this study, we propose a novel hybrid model, the HQDNN (Hybrid Quantum–Deep Neural Network), designed for the automatic detection of fake news. The model integrates classical fully connected neural layers with a parameterized quantum circuit, enabling the processing of textual data within both classical and quantum computational domains. To assess its effectiveness, we conducted experiments on the widely used LIAR dataset utilizing Term Frequency–Inverse Document Frequency (TF-IDF) features, as well as transformer-based DistilBERT embeddings. The experimental results demonstrate that the HQDNN achieves a superior recall performance—92.58% with TF-IDF and 94.40% with DistilBERT—surpassing traditional machine learning models such as Logistic Regression, Linear SVM, and Multilayer Perceptron. Additionally, we compare the HQDNN with SetFit, a recent CPU-efficient few-shot transformer model, and show that while SetFit achieves higher precision, the HQDNN significantly outperforms it in recall. Furthermore, an ablation experiment confirms the critical contribution of the quantum component, revealing a substantial drop in performance when the quantum layer is removed. These findings highlight the potential of hybrid quantum–classical models as effective and compact alternatives for high-sensitivity classification tasks, particularly in domains such as fake news detection.
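
A hedged sketch of a hybrid quantum-classical classifier in this spirit (classical dense layers feeding a small parameterized quantum circuit via PennyLane's TorchLayer) is shown below; the qubit count, circuit template, and layer sizes are assumptions, not the HQDNN architecture.

```python
# Hybrid quantum-classical text classifier sketch: dense layers compress a
# TF-IDF vector to the qubit register size, a variational circuit transforms
# it, and a final linear layer produces fake/real logits.
import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))           # encode classical features
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, weight_shapes={"weights": (2, n_qubits, 3)})

model = torch.nn.Sequential(
    torch.nn.Linear(5000, 64), torch.nn.ReLU(),   # e.g., a 5000-dim TF-IDF vector in
    torch.nn.Linear(64, n_qubits),                # compress to the qubit register size
    qlayer,                                       # quantum feature transform
    torch.nn.Linear(n_qubits, 2),                 # fake / real logits
)

x = torch.randn(8, 5000)                          # a batch of TF-IDF vectors
print(model(x).shape)                             # torch.Size([8, 2])
```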

12 pages, 236 KiB  
Article
Should an Anesthesiologist Be Interested in the Patient’s Personality? Relationship Between Personality Traits and Preoperative Anesthesia Scales of Patients Enrolled for a Hip Replacement Surgery
by Jakub Grabowski, Agnieszka Maryniak, Dariusz Kosson and Marcin Kolacz
J. Clin. Med. 2025, 14(15), 5227; https://doi.org/10.3390/jcm14155227 - 24 Jul 2025
Viewed by 260
Abstract
Background: Preparing patients for surgery involves assessing the patient’s somatic health, for example with the American Society of Anesthesiologists (ASA) scale or the Revised Cardiac Risk Index (RCRI), known as the Lee index. This process usually ignores mental functioning (personality and anxiety), which is known to influence health. The purpose of this study is to analyze whether there is a relationship between personality traits (the Big Five model and trait anxiety) and the anesthesia scales (ASA scale, Lee index) used for the preoperative evaluation of patients. Methods: The study group comprised 102 patients (59 women, 43 men) scheduled for hip replacement surgery. Patients completed two psychological questionnaires: the NEO-FFI (NEO Five-Factor Inventory) and the X-2 STAI (State-Trait Anxiety Inventory) sheet. Next, the presence and possible strength of the relationship between personality traits and demographic and medical variables were analyzed using Spearman’s rho rank correlation coefficient. Results: Patients with a high severity of trait anxiety are classified higher on the ASA scale (rs = 0.359; p < 0.001). Neuroticism, defined according to the Big Five model, significantly correlates with scales of preoperative patient assessment: the ASA classification (rs = 0.264; p < 0.001) and the Lee index (rs = 0.202; p = 0.044). A hierarchical regression model was created to test the possibility of predicting ASA scores from personality. It explained more than 34% of the variance and was a good fit to the data (p < 0.05). The controlled variables of age and gender accounted for more than 23% of the variance; personality indicators (trait anxiety, neuroticism) accounted for slightly more than an additional 11%. Trait anxiety (Beta = 0.293) proved to be a better predictor than neuroticism (Beta = 0.054). Conclusions: These results indicate that including personality screening in the preoperative patient evaluation might help introduce a more individualized approach to patients, which could result in better surgical outcomes.
(This article belongs to the Special Issue Perioperative Anesthesia: State of the Art and the Perspectives)
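
A minimal sketch of the two analyses described (Spearman correlations between trait scores and ASA class, then a hierarchical regression adding personality on top of age and gender) follows; the simulated data and column names are assumptions, not the study's dataset.

```python
# Spearman correlation plus two-step (hierarchical) OLS regression on toy data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(45, 85, 102),
    "gender": rng.integers(0, 2, 102),
    "trait_anxiety": rng.normal(40, 9, 102),
    "neuroticism": rng.normal(22, 7, 102),
})
df["asa"] = np.clip((df["age"] / 30 + df["trait_anxiety"] / 40
                     + rng.normal(0, 0.5, 102)).round(), 1, 4)

rho, p = spearmanr(df["trait_anxiety"], df["asa"])
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")

# Step 1: demographics only; step 2: add personality indicators.
m1 = sm.OLS(df["asa"], sm.add_constant(df[["age", "gender"]])).fit()
m2 = sm.OLS(df["asa"], sm.add_constant(df[["age", "gender",
                                           "trait_anxiety", "neuroticism"]])).fit()
print(f"R² step 1 = {m1.rsquared:.3f}, R² step 2 = {m2.rsquared:.3f}")
```
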
19 pages, 1667 KiB  
Article
Mapping the Literature on Short-Selling in Financial Markets: A Lexicometric Analysis
by Nitika Sharma, Sridhar Manohar, Bruce A. Huhmann and Yam B. Limbu
Int. J. Financial Stud. 2025, 13(3), 135; https://doi.org/10.3390/ijfs13030135 - 23 Jul 2025
Viewed by 525
Abstract
This study provides a comprehensive assessment and synthesis of the literature on short-selling. It performs a lexicometric analysis, providing a quantitative review of 1093 peer-reviewed journal articles to identify and illustrate the main themes in short-selling research. Almost half the published literature on short-selling is thematically clustered around portfolio management techniques. Other key themes involve short-selling as it relates to risk management, strategic management, and market irregularities. Descending hierarchical classification examines the overall structure of the textual corpus of the short-selling literature and the relationships between its key terms. Similarity analysis reveals that the short-selling literature is highly concentrated, with most conceptual groups closely aligned and fitting into overlapping or conceptually similar areas. Some notable groups highlight prior short-selling studies of market dynamics, behavioral factors, technological advancements, and regulatory frameworks, which can serve as a foundation for market regulators to make more informed decisions that enhance overall market stability. Additionally, this study proposes a conceptual framework in which short-selling can be either a driver or an outcome by integrating the literature on its antecedents, consequences, explanatory variables, and boundary conditions. Finally, it suggests directions for future research.
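
As a rough analogue of a lexicometric pipeline, the sketch below vectorizes a toy corpus, clusters it hierarchically into themes, and lists the top terms per theme; the corpus and clustering choices are assumptions, not the descending hierarchical classification actually applied to the 1093 articles.

```python
# Toy theme extraction: TF-IDF vectors, hierarchical clustering, top terms.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "short selling constraints and portfolio management strategies",
    "risk management around short interest announcements",
    "market irregularities, manipulation, and short sellers",
    "regulatory frameworks restricting short sales during crises",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray()

labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
terms = np.array(vec.get_feature_names_out())
for c in sorted(set(labels)):
    top = terms[X[labels == c].mean(axis=0).argsort()[::-1][:5]]
    print(f"theme {c}: {', '.join(top)}")
```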

13 pages, 2828 KiB  
Article
Wafer Defect Image Generation Method Based on Improved Styleganv3 Network
by Jialin Zou, Hongcheng Wang and Jiajin Zhong
Micromachines 2025, 16(8), 844; https://doi.org/10.3390/mi16080844 - 23 Jul 2025
Viewed by 319
Abstract
This paper investigates training a generator model on a limited dataset such that it fits the distribution of the original dataset, improving the reconstruction ability for wafer datasets. High-fidelity wafer defect image generation remains challenging due to limited real data and the poor physical authenticity of existing methods. We propose an enhanced StyleGANv3 framework with two key innovations: (1) a Heterogeneous Kernel Fusion Unit (HKFU) enabling multi-scale defect feature refinement via spatiotemporal attention and dynamic gating; and (2) a Dynamic Adaptive Attention Module (DAAM) that adaptively boosts discriminator sensitivity. Experiments on the Mixtype-WM38 and MIR-WM811K datasets demonstrate state-of-the-art performance, achieving FID scores of 25.20 and 28.70 alongside SDS values of 36.00 and 35.45. The proposed method helps alleviate the problem of limited datasets and contributes to data preparation for downstream classification and detection tasks.
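
Since the abstract does not specify the module internals, the sketch below shows a generic dynamic attention gate of the kind the DAAM suggests (channel attention modulated by an input-dependent gating strength) as an assumed illustration, not the paper's definition.

```python
# Generic dynamic channel-attention gate for a discriminator feature map.
import torch
import torch.nn as nn

class DynamicAttentionGate(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.strength = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        w = self.excite(self.squeeze(x))          # per-channel attention weights
        t = self.strength(self.squeeze(x))        # input-dependent gating strength
        return x * (1 + t * (w - 0.5))            # softly re-weight feature maps

feat = torch.randn(2, 64, 32, 32)                 # e.g., a discriminator feature map
print(DynamicAttentionGate(64)(feat).shape)       # torch.Size([2, 64, 32, 32])
```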