Search Results (210)

Search Parameters:
Keywords = tree professionals

23 pages, 602 KB  
Review
Artificial Intelligence in Obesity Prevention
by Golbarg Shabani Jafarabadi and Luca Busetto
Healthcare 2025, 13(24), 3262; https://doi.org/10.3390/healthcare13243262 - 12 Dec 2025
Abstract
Background/Objectives: Obesity is a complex disorder that causes further health issues linked to several chronic diseases, such as cancer, diabetes, metabolic syndrome, and cardiovascular diseases; thus, it is critical to identify and diagnose obesity as soon as possible. Traditional methods, such as anthropometric measures, have long been the standard, although recent advances in artificial intelligence (AI) offer new opportunities for prediction models; as a result, AI has become an essential tool in obesity research. This study provides a comprehensive analysis of the research on the impact of AI on obesity prevention. Methods: The researchers performed a scoping review of studies using AI to assess and predict obesity, searching PubMed, Scopus, Web of Science, and Google Scholar from February 2009 to July 2025. The researchers compiled and organized the employed AI approaches to find connections, patterns, and trends that could guide further research and the application of machine learning algorithms for advanced data analytics. Results: Clinical professionals in obesity medicine may find chatbots valuable as a source of clinical and scientific knowledge and for creating standard operating procedures and policies. According to the findings, AI models can be used to identify clinically significant patterns of obesity or the connections between specific factors and weight outcomes. Moreover, the application of deep learning and machine learning approaches, such as logistic regression, decision trees, and artificial neural networks, appears to have yielded new insight into data, particularly in terms of obesity prediction. Conclusions: This work aims to contribute to a better understanding of obesity detection. While more studies are needed, AI offers solutions to modern challenges in obesity prediction. Full article

27 pages, 1622 KB  
Article
Detecting Burnout Among Undergraduate Computing Students with Supervised Machine Learning
by Eldar Yeskuatov, Lee Kien Foo and Sook-Ling Chua
Healthcare 2025, 13(23), 3182; https://doi.org/10.3390/healthcare13233182 - 4 Dec 2025
Abstract
Background: Academic burnout significantly impacts students’ cognitive and psychological well-being and may result in adverse behavioral changes. Effective and timely detection of burnout in the student population is crucial, as it enables educational institutions to mobilize necessary support systems and implement intervention strategies. However, current survey-based detection methods are susceptible to response biases and administrative overhead. This study investigated the feasibility of detecting academic burnout symptoms using machine learning trained exclusively on university records, eliminating reliance on psychological surveys. Methods: We developed models to detect three burnout dimensions—exhaustion, cynicism, and low professional efficacy. Five machine learning algorithms (i.e., logistic regression, support vector machine, naive Bayes, decision tree, and extreme gradient boosting) were trained using features engineered from administrative data. Results: Results demonstrated considerable variability across burnout dimensions. Models achieved the highest performance for exhaustion detection, with logistic regression obtaining an F1 score of 68.4%. Cynicism detection showed moderate performance, while professional efficacy detection had the lowest performance. Conclusions: Our findings showed that automated detection using passively collected university records is feasible for identifying signs of exhaustion and cynicism. The modest performance highlights the challenges of capturing psychological constructs through administrative data alone, providing a foundation for future research in unobtrusive student burnout detection. Full article
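The F1 score reported above is the harmonic mean of precision and recall. As a minimal illustration of the metric (toy labels, not the study's data), a stdlib-only sketch:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels (1 = burnout)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labels for four students: one true case is missed (a false negative)
print(f1_score([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8
```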

14 pages, 498 KB  
Article
Are Countermovement Jump Variables Indicators of Injury Risk in Professional Soccer Players? A Machine Learning Approach
by Jorge Pérez-Contreras, Rodrigo Villaseca-Vicuña, Juan Francisco Loro-Ferrer, Felipe Inostroza-Ríos, Ciro José Brito, Hugo Cerda-Kohler, Alejandro Bustamante-Garrido, Fernando Muñoz-Hinrichsen, Felipe Hermosilla-Palma, David Ulloa-Díaz, Pablo Merino-Muñoz and Esteban Aedo-Muñoz
Appl. Sci. 2025, 15(23), 12721; https://doi.org/10.3390/app152312721 - 1 Dec 2025
Abstract
Background: Muscle injuries are among the main problems in professional soccer, affecting player availability and team performance. Countermovement jump (CMJ) variables have been proposed as indicators of injury risk and for detecting strength imbalances, although their use is less explored than isokinetic assessments. Unlike previous studies based solely on linear statistics, this research integrates biomechanical data with machine learning approaches, providing a novel perspective for injury prediction in elite soccer. Objective: To examine the association between CMJ variables and muscle injury risk during a competitive season, considering injury incidence and effective playing minutes. It was hypothesized that specific CMJ asymmetries would be associated with a higher injury risk, and that machine learning algorithms could accurately classify players according to their injury status. Methods: Forty-one professional soccer players (18 women, 23 men) from national league teams (Chile) were assessed during preseason using force platforms. Non-contact muscle injuries and playing minutes were recorded over 10 months after the CMJ evaluations. Analyses included two-way ANOVA (sex × injury status) and machine learning algorithms (Logistic Regression, Decision Tree, K-Nearest Neighbors [KNN], Random Forest, Gradient Boosting [GB]). Results: Significant sex differences were observed in most variables (p < 0.05 and ηp² > 0.11), except peak force and peak power asymmetry. For injury status, only peak force asymmetry differed, while sex × injury interactions were found in peak power and left peak power. KNN (Accuracy = 87%, 95% CI = 71% to 96%) and GB (Accuracy = 84%, 95% CI = 68% to 94%) achieved the best classification performance between injured and non-injured players. Conclusions: CMJ variables did not show consistent statistical differences between injured and non-injured groups. However, machine learning models, particularly KNN and GB, demonstrated high predictive accuracy, suggesting that injuries are a complex phenomenon characterized by non-linear patterns. These findings highlight the potential of combining CMJ with machine learning approaches for functional monitoring and early detection of injury risk, though validation in larger cohorts is required before establishing clinical thresholds and preventive applications. Full article
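KNN, the best-performing classifier above, predicts a player's status by majority vote among the nearest training examples. A stdlib-only sketch with hypothetical two-feature data (jump height, asymmetry) — an illustration of the algorithm, not the study's model:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (jump height cm, asymmetry %) points with injury labels
train = [((30.0, 2.0), "healthy"), ((31.0, 3.0), "healthy"),
         ((29.5, 2.5), "healthy"), ((24.0, 9.0), "injured"),
         ((25.0, 8.0), "injured")]
print(knn_predict(train, (29.0, 3.0), k=3))  # healthy
```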

17 pages, 2069 KB  
Review
Impact of Planting Depth on Urban Tree Health and Survival
by Jamie Lim, Kelly S. Allen, Candace B. Powning and Richard W. Harper
Forests 2025, 16(12), 1788; https://doi.org/10.3390/f16121788 - 28 Nov 2025
Abstract
Deep planting of young trees—defined as the burial of the root collar below soil grade—is widely recognized by practitioners as an improper technique that can impair tree development and establishment. Despite this knowledge, research has shown that urban trees are frequently planted too deeply. To better understand the impacts of planting depth on the urban forest, we conducted a literature review of peer-reviewed and professional studies relevant to the effects of planting depth in urban trees. Most studies reported effects on tree establishment (34%), growth (23%), and root development (22%). A general conclusion across reviewed articles was evident: trees planted too deeply exhibited higher mortality, slower establishment, and reduced growth, primarily due to poor root development. Effects of planting depth were also species-specific—Norway Maple (Acer platanoides L.), Turkish Hazel (Corylus colurna L.), White Ash (Fraxinus americana L.), and Green Ash (Fraxinus pennsylvanica Marshall) showed minimal differences in performance when deeply planted, while Baldcypress (Taxodium distichum (L.) Rich.), which tolerates anoxic conditions, performed better at or below grade than when planted above grade, although these studies measured the effects of planting depth against only a limited set of parameters. We also compiled a reference table that links tree species to their performance based on planting depth. These findings highlight the critical role of planting depth in shaping root architecture and long-term success, emphasizing the need for adherence to best practices concerning proper planting, tree maintenance (e.g., mulching), and production in the nursery. Full article
(This article belongs to the Special Issue Growing the Urban Forest: Building Our Understanding)

23 pages, 1062 KB  
Article
Mxplainer: Explain and Learn Insights by Imitating Mahjong Agents
by Lingfeng Li, Yunlong Lu, Yongyi Wang, Qifan Zheng and Wenxin Li
Algorithms 2025, 18(12), 738; https://doi.org/10.3390/a18120738 - 24 Nov 2025
Abstract
People need to internalize the skills of AI agents to improve their own capabilities. Our paper focuses on Mahjong, a multiplayer game involving imperfect information and requiring effective long-term decision-making amidst randomness and hidden information. Through the efforts of AI researchers, several impressive Mahjong AI agents have already achieved performance levels comparable to those of professional human players; however, these agents are often treated as black boxes from which few insights can be gleaned. This paper introduces Mxplainer, a parameterized search algorithm that can be converted into an equivalent neural network to learn the parameters of black-box agents. Experiments on both human and AI agents demonstrate that Mxplainer achieves a top-three action prediction accuracy of over 92% and 90%, respectively, while providing faithful and interpretable approximations that outperform decision-tree methods (34.8% top-three accuracy). This enables Mxplainer to deliver both strategy-level insights into agent characteristics and actionable, step-by-step explanations for individual decisions. Full article
(This article belongs to the Collection Algorithms for Games AI)
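The top-three accuracy cited above measures how often the true action appears among a model's three highest-ranked predictions. A stdlib-only sketch of the metric with hypothetical Mahjong action names (illustrative, not the paper's evaluation code):

```python
def top_k_accuracy(ranked_preds, true_actions, k=3):
    """Fraction of decisions whose true action appears in the model's top-k list."""
    hits = sum(1 for ranked, truth in zip(ranked_preds, true_actions)
               if truth in ranked[:k])
    return hits / len(true_actions)

# Hypothetical ranked action lists for four Mahjong decisions
ranked = [["discard_3m", "chi", "pon"],
          ["pon", "discard_9p", "kan"],
          ["riichi", "discard_1z", "chi"],
          ["discard_5s", "pon", "chi"]]
truth = ["chi", "kan", "riichi", "kan"]
print(top_k_accuracy(ranked, truth))  # 0.75
```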

17 pages, 655 KB  
Article
Emotional Intelligence, Creativity, and Subjective Well-Being: Their Implication for Academic Success in Higher Education
by Presentación Ángeles Caballero García, Sara Sánchez Ruiz and Alexander Constante Amores
Educ. Sci. 2025, 15(11), 1562; https://doi.org/10.3390/educsci15111562 - 19 Nov 2025
Abstract
Professional skills training and academic success are key challenges for contemporary educational systems, particularly within higher education. The labour market increasingly demands well-prepared graduates with specific competencies that are still insufficiently embedded in university curricula. In this context, acquiring new professional skills becomes a decisive factor for students’ employability and competitiveness. At the same time, academic success remains a crucial indicator of educational quality, and its improvement is an urgent priority for universities. In response to these demands, our study evaluates cognitive-emotional competencies—emotional intelligence, creativity, and subjective well-being—in a sample of 300 university students from the Community of Madrid (Spain), analysing their influence on academic success with the aim of enhancing it. A non-experimental, cross-sectional research design was employed, using standardised self-report measures (TMMS-24, CREA, SHS, OHI, SLS, and OLS), innovative data mining algorithms (Random Forest and decision trees), and binary logistic regression techniques. The results highlight the importance of creativity, life satisfaction, and emotional attention in predicting academic success, with creativity showing the strongest discriminative power among the variables studied. These findings reinforce the need to integrate emotional and creative development into university curricula, promoting competency-based educational models that enhance training quality and students’ academic outcomes. Full article
(This article belongs to the Section Higher Education)

24 pages, 1123 KB  
Article
Democratizing Machine Learning: A Practical Comparison of Low-Code and No-Code Platforms
by Luis Giraldo and Sergio Laso
Mach. Learn. Knowl. Extr. 2025, 7(4), 141; https://doi.org/10.3390/make7040141 - 7 Nov 2025
Abstract
The growing use of machine learning (ML) and artificial intelligence across sectors has shown strong potential to improve decision-making processes. However, the adoption of ML by non-technical professionals remains limited due to the complexity of traditional development workflows, which often require software engineering and data science expertise. In recent years, low-code and no-code (LC/NC) platforms have emerged as promising solutions to democratize ML by abstracting many of the technical tasks typically involved in software engineering pipelines. This paper investigates whether these platforms can offer a viable alternative for making ML accessible to non-expert users. Beyond predictive performance, this study also evaluates usability, setup complexity, the transparency of automated workflows, and cost management under realistic “out-of-the-box” conditions. This multidimensional perspective provides insights into the practical viability of LC/NC tools in real-world contexts. The comparative evaluation was conducted using three leading cloud-based tools: Amazon SageMaker Canvas, Google Cloud Vertex AI, and Azure Machine Learning Studio. These tools employ ensemble-based learning algorithms such as Gradient Boosted Trees, XGBoost, and Random Forests. Unlike traditional ML workflows that require extensive software engineering knowledge and manual optimization, these platforms enable domain experts to build predictive models through visual interfaces. The findings show that all platforms achieved high accuracy, with consistent identification of key features. Google Cloud Vertex AI was the most user-friendly, SageMaker Canvas offered a highly visual interface with some setup complexity, and Azure Machine Learning delivered the best model performance with a steeper learning curve. Cost transparency also varied considerably, with Google Cloud and Azure providing clearer safeguards against unexpected charges compared to SageMaker Canvas. Full article

26 pages, 720 KB  
Review
Ethical Bias in AI-Driven Injury Prediction in Sport: A Narrative Review of Athlete Health Data, Autonomy and Governance
by Zbigniew Waśkiewicz, Kajetan J. Słomka, Tomasz Grzywacz and Grzegorz Juras
AI 2025, 6(11), 283; https://doi.org/10.3390/ai6110283 - 1 Nov 2025
Abstract
The increasing use of artificial intelligence (AI) in athlete health monitoring and injury prediction presents both technological opportunities and complex ethical challenges. This narrative review critically examines 24 empirical and conceptual studies focused on AI-driven injury forecasting systems across diverse sports disciplines, including professional, collegiate, youth, and Paralympic contexts. Applying an IMRAD framework, the analysis identifies five dominant ethical concerns: privacy and data protection, algorithmic fairness, informed consent, athlete autonomy, and long-term data governance. While studies commonly report the effectiveness of AI models—such as those employing decision trees, neural networks, and explainability tools like SHAP and HiPrCAM—few offer robust ethical safeguards or athlete-centered governance structures. Power asymmetries persist between athletes and institutions, with limited recognition of data ownership, transparency, and the right to contest predictive outputs. The findings highlight that ethical risks vary by sport type and competitive level, underscoring the need for sport-specific frameworks. Recommendations include establishing enforceable data rights, participatory oversight mechanisms, and regulatory protections to ensure that AI systems align with principles of fairness, transparency, and athlete agency. Without such frameworks, the integration of AI in sports medicine risks reinforcing structural inequalities and undermining the autonomy of those it intends to support. Full article

18 pages, 1933 KB  
Article
Clinical Application of Machine Learning Models for Early-Stage Chronic Kidney Disease Detection
by Hasnain Iftikhar, Atef F. Hashem, Moiz Qureshi and Paulo Canas Rodrigues
Diagnostics 2025, 15(20), 2610; https://doi.org/10.3390/diagnostics15202610 - 16 Oct 2025
Abstract
Background/Objectives: Chronic kidney disease (CKD) is a progressive condition that affects the body’s ability to remove waste and regulate fluid and electrolytes. Early detection is crucial for delaying disease progression and initiating timely interventions. Machine learning (ML) techniques have emerged as powerful tools for automating disease diagnosis and prognosis. This study aims to evaluate the predictive performance of individual and ensemble ML algorithms for the early classification of CKD. Methods: A clinically annotated dataset was utilized to categorize patients into CKD and non-CKD groups. The models investigated included Logistic Regression, Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Ridge Classifier, Naïve Bayes, K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Ensemble learning strategies. A systematic preprocessing pipeline was implemented, and model performance was assessed using accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC). Results: The empirical findings reveal that ML-based classifiers achieved high predictive accuracy in CKD detection. Ensemble learning methods outperformed individual models in terms of robustness and generalization, indicating their potential in clinical decision-making contexts. Conclusions: The study demonstrates the efficacy of ML-based frameworks for early CKD prediction, offering a scalable, interpretable, and accurate clinical decision support approach. The proposed methodology supports timely diagnosis and can assist healthcare professionals in improving patient outcomes. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
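The ensemble strategies that outperformed individual models above can be as simple as hard voting: each base classifier casts one vote per patient and the majority wins. A stdlib-only sketch with hypothetical predictions (an illustration of the idea, not the paper's pipeline):

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each inner list is one model's labels for all patients."""
    per_patient = zip(*predictions)  # regroup by patient
    return [Counter(votes).most_common(1)[0][0] for votes in per_patient]

# Three hypothetical base classifiers voting CKD (1) / non-CKD (0) for four patients
model_a = [1, 0, 1, 0]
model_b = [1, 1, 1, 0]
model_c = [0, 0, 1, 0]
print(majority_vote([model_a, model_b, model_c]))  # [1, 0, 1, 0]
```

With an odd number of voters, a single unstable model is outvoted, which is one intuition for the robustness gain reported.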

11 pages, 1477 KB  
Commentary
Pharmacotherapy of Demodex-Associated Blepharitis: Current Trends and Future Perspectives
by Aleksandra Czępińska-Myszura, Małgorzata Maria Kozioł and Beata Rymgayłło-Jankowska
Pharmacy 2025, 13(5), 148; https://doi.org/10.3390/pharmacy13050148 - 15 Oct 2025
Abstract
Demodex-associated blepharitis (DAB) is a common condition in our society. Patients report not only uncomfortable and bothersome symptoms but also decreased self-esteem and confidence. Because of its nonspecific signs, pharmacists are often the first healthcare professionals patients consult. What is most concerning is that DAB can cause serious complications within the eye, such as dry eye syndrome, corneal scarring, or recurrent styes and chalazia. Therefore, we aimed to compile both standard and innovative therapies and compare their effectiveness and safety. As shown, standard methods remain the recommended approach. Alongside antiparasitic agents such as metronidazole or ivermectin, education and improved eyelid hygiene are crucial. However, in recent years, promising new treatments for Demodex blepharitis have emerged, such as Lotilaner Ophthalmic Solution 0.25%, which has shown positive results in clinical trials. Mechanical techniques, including Intense Pulsed Light (IPL) therapy and eyelid peeling procedures such as Blepharoexfoliation, have also demonstrated promise. Due to the notable effects of tea tree oil, studies have explored the lethal effects of other essential oils, such as sage, peppermint, and bergamot oils. These are just a few of the interesting examples discussed in this paper. Full article

16 pages, 571 KB  
Article
Lightweight Statistical and Texture Feature Approach for Breast Thermogram Analysis
by Ana P. Romero-Carmona, Jose J. Rangel-Magdaleno, Francisco J. Renero-Carrillo, Juan M. Ramirez-Cortes and Hayde Peregrina-Barreto
J. Imaging 2025, 11(10), 358; https://doi.org/10.3390/jimaging11100358 - 13 Oct 2025
Abstract
Breast cancer is the most commonly diagnosed cancer in women globally and represents the leading cause of mortality related to malignant tumors. Currently, healthcare professionals are focused on developing and implementing innovative techniques to improve the early detection of this disease. Thermography, studied as a complementary method to traditional approaches, captures infrared radiation emitted by tissues and converts it into data about skin surface temperature. During tumor development, angiogenesis occurs, increasing blood flow to support tumor growth, which raises the surface temperature in the affected area. Automatic classification techniques have been explored to analyze thermographic images and develop an optimal classification tool to identify thermal anomalies. This study aims to design a concise description using statistical and texture features to accurately classify thermograms as control or highly probable to be cancer (with thermal anomalies). The importance of employing a short description lies in facilitating interpretation by medical professionals. In contrast, a characterization based on a large number of variables could make it more challenging to identify which values differentiate the thermograms between groups, thereby complicating the explanation of results to patients. A maximum accuracy of 91.97% was achieved by applying only seven features and using a Coarse Decision Tree (DT) classifier and robust Machine Learning (ML) model, which demonstrated competitive performance compared with previously reported studies. Full article
(This article belongs to the Section Medical Imaging)

9 pages, 1084 KB  
Proceeding Paper
Heart Disease Prediction Using ML
by Abdul Rehman Ilyas, Sabeen Javaid and Ivana Lucia Kharisma
Eng. Proc. 2025, 107(1), 124; https://doi.org/10.3390/engproc2025107124 - 10 Oct 2025
Abstract
The term heart disease refers to a wide range of conditions that impact the heart and blood vessels. It continues to be a major global cause of morbidity and mortality. The narrowing or blockage of blood vessels, which can result in major medical events like heart attacks, angina (chest pain), or strokes, is a common issue linked to heart disease. In order to lower the risk of serious complications and facilitate prompt medical intervention, early diagnosis and prediction are essential. This study developed predictive models that can precisely identify people at risk by applying a variety of machine learning algorithms to a structured dataset on heart disease. Blood pressure, cholesterol, age, gender, and other health-related indicators are among the 13 essential characteristics that make up the dataset. Numerous machine learning models such as Naïve Bayes, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree, Random Forest, and others were trained using these features. Using the RapidMiner platform, which offered a visual environment for data preprocessing, model training, and performance analysis, all models were created and assessed. The best-performing model was the Naïve Bayes classifier, which achieved an accuracy rate of 90% after extensive testing and comparison of performance metrics such as accuracy, precision, and recall. This outcome shows how well the model can predict heart disease in actual clinical settings. By supporting individualized health recommendations, enabling early diagnosis, and facilitating timely treatment, the effective application of such models can significantly benefit patients and healthcare professionals. Furthermore, heart disease incidence can be considerably decreased by identifying and addressing modifiable risk factors such as high blood pressure, elevated cholesterol, smoking, diabetes, and physical inactivity. In summary, machine learning has the potential to improve the identification and treatment of heart-related disorders. This study highlights the value of data-driven methods in healthcare and indicates that incorporating predictive models into standard medical procedures may enhance patient outcomes, lower healthcare expenses, and improve public health administration. Full article
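Gaussian naïve Bayes, the best performer above, models each feature per class as an independent normal distribution and picks the class with the highest posterior. A stdlib-only sketch on hypothetical (age, cholesterol) rows — a toy illustration, not the study's RapidMiner model:

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Per-class feature means, variances, and class priors."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    stats = {}
    for c, rows in by_class.items():
        cols = list(zip(*rows))
        means = [sum(col) / len(col) for col in cols]
        var = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9  # smoothing
               for col, m in zip(cols, means)]
        stats[c] = (means, var, len(rows) / len(X))
    return stats

def predict_gnb(stats, x):
    """Return the class maximizing log prior + sum of Gaussian log-likelihoods."""
    def log_post(c):
        means, var, prior = stats[c]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, var):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(stats, key=log_post)

# Hypothetical (age, cholesterol) rows; 1 = heart disease
X = [(63, 280), (58, 260), (61, 270), (40, 190), (35, 180), (44, 200)]
y = [1, 1, 1, 0, 0, 0]
model = fit_gnb(X, y)
print(predict_gnb(model, (60, 265)))  # 1
```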

14 pages, 823 KB  
Article
Preparedness for the Digital Transition in Healthcare: Insights from an Italian Sample of Professionals
by Valentina Elisabetta Di Mattei, Gaia Perego, Francesca Milano, Federica Cugnata, Chiara Brombin, Antonio Catarinella, Francesca Gatti, Lavinia Bellamore Dettori, Jennifer Tuzii and Elena Bottinelli
Healthcare 2025, 13(20), 2556; https://doi.org/10.3390/healthcare13202556 - 10 Oct 2025
Abstract
Background: The digital transition is reshaping healthcare systems through the adoption of telemedicine and electronic health records (EHRs). While these innovations enhance efficiency and access, their implementation unfolds within overstretched organizational settings characterized by workforce shortages, bureaucratic demands, and heightened psychosocial risks. Burnout, impostor syndrome, and the quality of organizational support have thus become pivotal constructs in understanding healthcare professionals’ digital preparedness. Methods: A cross-sectional online survey was conducted among 111 professionals employed at two San Donato Group facilities in Bologna, Italy. The battery included socio-demographic and occupational data, perceptions of digitalization, and validated instruments: the Maslach Burnout Inventory (MBI), the Clance Impostor Phenomenon Scale (CIPS), and the Work Organization Assessment Questionnaire (WOAQ). Descriptive analyses were complemented by Classification and Regression Trees (CART) to identify predictors of perceived digital preparedness. Results: Most respondents (88%) acknowledged the relevance of digitalization, yet 18% felt unprepared, especially women and administrative staff. Burnout levels were high, with 51% reporting emotional exhaustion, most notably among nurses and female participants. Impostor syndrome affected 43% of the sample, with nurses exhibiting the highest prevalence. CART analysis identified emotional exhaustion, impostor syndrome, and age as principal discriminators of digital preparedness. Conclusions: Our findings highlight the role of emotional exhaustion, impostor syndrome, and age in shaping perceived digital preparedness, underscoring the need for tailored training and supportive practices to ensure a sustainable digital transition. Full article
25 pages, 2876 KB  
Article
Prediction of the Injury Severity of Accidents at Work: A New Approach to Analysis of Already Existing Statistical Data
by Szymon Ordysiński
Appl. Sci. 2025, 15(19), 10666; https://doi.org/10.3390/app151910666 - 2 Oct 2025
Viewed by 1200
Abstract
This article presents a novel statistical approach for analyzing occupational accident data from the ESAW database, aiming to improve the evaluation and prediction of accident severity among specific groups of employees. The proposed method combines univariate and multivariate analytical techniques (effect size measures and classification tree methods: CHAID and CART) to identify employee groups that are both statistically robust and meaningfully distinct. The resulting model is based on six key variables describing employee and workplace characteristics, enabling accurate prediction of accident severity within these groups. The model demonstrates high reliability, achieving over 80% accuracy in a binary classification (high vs. low risk), making it a valuable tool for risk management and proactive safety planning. The findings have both theoretical and practical implications. Theoretically, the model’s strong predictive performance suggests that accident severity is not random but follows identifiable patterns linked to underlying risk factors that go beyond standard occupational or economic classifications. Practically, the model allows for a more detailed and effective categorization of work environments into high- and low-risk classes, and can support safety professionals, managers, and policymakers in identifying more precisely the employee groups that are prone to severe accidents. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
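The binary high/low-risk classification the abstract describes can be sketched as follows. This is a synthetic illustration: the six predictor columns and the severity rule are placeholders, not the ESAW variables the article actually selects.

```python
# Hedged sketch: CART-style binary severity classification (high vs. low risk)
# on synthetic accident records. Feature encodings and the toy severity rule
# are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 2000
X = rng.integers(0, 5, size=(n, 6))       # six categorically coded features
y = (X[:, 0] + X[:, 3] >= 5).astype(int)  # toy rule: two features drive severity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=6).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"binary accuracy: {acc:.2f}")
```

Held-out accuracy above 80% in a setup like this mirrors the kind of binary performance the article reports, though the real model's variables and thresholds come from the ESAW analysis, not from this sketch.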
30 pages, 401 KB  
Systematic Review
Explainable Artificial Intelligence and Machine Learning for Air Pollution Risk Assessment and Respiratory Health Outcomes: A Systematic Review
by Israel Edem Agbehadji and Ibidun Christiana Obagbuwa
Atmosphere 2025, 16(10), 1154; https://doi.org/10.3390/atmos16101154 - 1 Oct 2025
Viewed by 2329
Abstract
Air pollution is a leading environmental risk that causes respiratory morbidity and mortality. The increasing availability of high-resolution environmental data and air pollution-related health cases has accelerated the use of machine learning (ML) models to estimate environmental exposure–response relationships, forecast health risks, and inform policy and practical interventions. Unfortunately, many ML models are opaque, in the sense that it is unclear how they combine various data inputs to reach a decision, which limits their trust and use in clinical settings. Explainable artificial intelligence (xAI) offers the techniques needed to build transparent and interpretable models. This systematic review explores online data repositories through the lens of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline to synthesize articles from 2020 to 2025. Inclusion and exclusion criteria narrowed the search to a final selection of 92 articles, which were thoroughly reviewed by independent researchers to reduce bias in article assessment. In addition, the ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) domain strategy helped further reduce possible risk in the article assessment and improve its reproducibility. The findings reveal a growing adoption of ML techniques such as random forests, XGBoost, parallel lightweight diagnosis models, and deep neural networks for health risk prediction, with SHAP (SHapley Additive exPlanations) emerging as the dominant technique for model interpretability. The extremely randomized tree (ERT) technique demonstrated optimal performance but lacks explainability. Remaining limitations of these models include generalizability, data availability, and policy translation. The review also finds limited research on the integration of LIME (Local Interpretable Model-Agnostic Explanations) into current ML models and recommends that future research focus on causal-xAI-ML models. Finally, the use of such models for respiratory health decisions should be complemented by a medical professional’s judgment. Full article
(This article belongs to the Section Air Quality and Health)
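The feature-attribution idea behind SHAP can be illustrated without the third-party `shap` package by using permutation importance, a simpler model-agnostic stand-in available in scikit-learn. The exposure features and the toy outcome below are assumptions for illustration, not data from the reviewed studies.

```python
# Hedged sketch: interpreting a tree-ensemble respiratory-risk model.
# Permutation importance stands in for SHAP here (SHAP itself would use the
# third-party `shap` package); features and the outcome rule are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 1500
pm25 = rng.uniform(5, 80, n)   # assumed PM2.5 exposure
no2 = rng.uniform(10, 60, n)   # assumed NO2 exposure
age = rng.integers(18, 85, n)  # irrelevant to the toy outcome below
X = np.column_stack([pm25, no2, age])
# Toy outcome: respiratory risk driven mainly by PM2.5
y = (pm25 + 0.3 * no2 > 50).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, imp in zip(["pm25", "no2", "age"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

As with SHAP summary plots in the reviewed studies, the point is to rank which inputs actually drive the model's risk predictions; here PM2.5 dominates by construction.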