Search Results (3,419)

Search Parameters:
Keywords = method of expert evaluations

24 pages, 978 KB  
Article
Multicriteria Optimization of Nanocellulose-Reinforced Polyvinyl Alcohol and Pyrrolidone Hydrogels
by Nuno Costa, João Lourenço, Joana Cabalú, Ana Branco and Célio G. Figueiredo-Pina
Sustainability 2025, 17(21), 9905; https://doi.org/10.3390/su17219905 (registering DOI) - 6 Nov 2025
Abstract
Developing new materials for human cartilage replacement is a hot research topic. These materials have multiple properties of interest, so selecting a new material (hydrogel) is a multi-attribute decision-making problem. A case study illustrates the application of a structured approach and tools to solve this type of problem. Ten hydrogels, most of which are new formulations, were evaluated based on three attributes. The weights assigned to the attributes were identified using three methods from the literature, in addition to those previously assigned by an expert. Since the hydrogel properties showed some variability, Monte Carlo simulations were carried out using triangular distributions. Ten thousand decision matrices were built, and 10,000 rankings were generated by each of the ten multicriteria decision-making methods employed in this study. Ranking similarity was evaluated through the PS index, whose values ensure the consistency and reliability of the results achieved. Rank acceptability and pairwise indexes were used to identify the most promising hydrogels. Two hydrogels were identified as the most promising for further study, for any of the four sets of weights used. Both are annealed nanocellulose-reinforced polyvinyl alcohol and pyrrolidone hydrogels. The robustness of this result is supported by the values of the acceptability and pairwise indexes. Full article
(This article belongs to the Special Issue Decision-Making in Sustainable Management)
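The Monte Carlo step this abstract describes can be sketched in a few lines. The sketch below is not the authors' code: it draws attribute values from triangular distributions, scores invented alternatives with a simple weighted sum (one of many MCDM methods), and estimates a rank-acceptability-style index as the share of simulated decision matrices in which each alternative ranks first. All names, weights, and triples are hypothetical.

```python
import random

random.seed(0)

# Three hypothetical hydrogels, two attributes each: (low, mode, high)
alternatives = {
    "H1": [(0.2, 0.5, 0.8), (0.4, 0.6, 0.7)],
    "H2": [(0.3, 0.4, 0.9), (0.5, 0.7, 0.8)],
    "H3": [(0.1, 0.3, 0.6), (0.6, 0.8, 0.9)],
}
weights = [0.6, 0.4]  # assumed attribute weights

def sample_score(attrs):
    """Draw one decision-matrix row and score it by weighted sum (SAW)."""
    return sum(w * random.triangular(lo, hi, mode)
               for w, (lo, mode, hi) in zip(weights, attrs))

# Count how often each alternative ranks first across simulated matrices
first = {name: 0 for name in alternatives}
for _ in range(10_000):
    scores = {name: sample_score(a) for name, a in alternatives.items()}
    first[max(scores, key=scores.get)] += 1

acceptability = {n: c / 10_000 for n, c in first.items()}
```

An alternative that ranks first in a large share of the 10,000 simulated matrices is robust to the stated attribute variability, which is the intuition behind the rank-acceptability indexes mentioned above.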
29 pages, 2362 KB  
Article
Numerical Aggregation and Evaluation of High-Dimensional Multi-Expert Decisions Based on Triangular Intuitionistic Fuzzy Modeling
by Yanshan Qian, Junda Qiu, Jiali Tang, Chuanan Li and Senyuan Chen
Math. Comput. Appl. 2025, 30(6), 123; https://doi.org/10.3390/mca30060123 (registering DOI) - 6 Nov 2025
Abstract
To address the challenges of high-dimensional complexity and increasing heterogeneity in expert opinions, this study proposes a novel numerical aggregation model for multi-expert decision making based on triangular intuitionistic fuzzy numbers (TIFNs) and the Plant Growth Simulation Algorithm (PGSA). The proposed framework transforms experts’ fuzzy preference information into five-dimensional geometric vectors and employs the PGSA to perform global optimization, thereby yielding an optimized collective decision matrix. To comprehensively evaluate the aggregation performance, several quantitative indicators—such as weighted Hamming distance, correlation sum, information intuition energy, and weighted correlation coefficient—are introduced to assess the results from the perspectives of consensus, stability, and informational strength. Extensive numerical experiments and comparative analyses demonstrate that the proposed method significantly improves expert consensus reliability and aggregation robustness, achieving higher decision accuracy than conventional approaches. This framework provides a scalable and generalizable tool for high-dimensional fuzzy group decision making, offering promising potential for complex real-world applications. Full article
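One evaluation idea in this abstract, measuring expert disagreement over triangular intuitionistic fuzzy numbers (TIFNs), can be illustrated numerically. The sketch below treats each TIFN as a five-dimensional vector (a, b, c, membership, non-membership) and uses a simple normalized Hamming distance; this particular distance form and all numbers are illustrative, not the paper's definitions.

```python
def tifn_distance(u, v):
    """Mean absolute difference over the five TIFN components."""
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

# Two hypothetical expert opinions: (a, b, c, membership, non-membership)
expert_1 = (0.2, 0.4, 0.6, 0.8, 0.1)
expert_2 = (0.3, 0.5, 0.6, 0.7, 0.2)

d = tifn_distance(expert_1, expert_2)
```

Aggregation schemes like the one in the paper seek a collective vector minimizing such distances to all experts; the PGSA is one global optimizer for that search.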
23 pages, 2566 KB  
Article
An AHP-ME-IOWA Model for Assessing National Space Technology Scientific and Technological Strength: A Case Study of the United States
by Yingying Chen, Zhenqiang Qi, Jinzhao Li and Yuting Zhu
Entropy 2025, 27(11), 1141; https://doi.org/10.3390/e27111141 (registering DOI) - 6 Nov 2025
Abstract
Space technology, a frontier of global scientific innovation, is crucial for competitive edges and national technological innovation. Amid intensified international competition and rapid technological change, scientifically evaluating a country’s Scientific and Technological Strength in Space Technology (STSST) is vital. This study proposes a novel model, the “Analytic Hierarchy Process-Maximum Entropy-Induced Ordered Weighted Average (AHP-ME-IOWA)” model, for the assessment of STSST. First, an STSST assessment indicator system is developed with four sub-dimensions: scientific research, industrial operation, innovation output, and policy resources. Second, the AHP model is used to convert experts’ qualitative judgments on indicator importance into initial individual weight vectors. Subsequently, the IOWA operator is employed to aggregate these individual weight vectors, thereby mitigating the impact of outliers and enhancing the robustness of the weights. Specifically, the weights are reordered using the cosine similarity between each expert’s weight vector and the temporary group mean as the induced value. Position weights are then determined via the ME method, and consensus weights are derived through re-aggregation. A systematic evaluation of the United States’ STSST was conducted using this method. The results show that the United States achieved a comprehensive STSST score of 8.73 (out of 10), which is in line with the actual situation, thereby providing empirical validation for the proposed method. Full article
(This article belongs to the Section Multidisciplinary Applications)
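Two building blocks named in this abstract are easy to illustrate: deriving AHP weights from a pairwise comparison matrix (here via the common row-geometric-mean approximation, not necessarily the paper's exact variant) and ordering expert weight vectors by cosine similarity to the group mean, as in an IOWA-style induced reordering. The judgment matrix and expert vectors are invented.

```python
import math

def ahp_weights(pairwise):
    """Row geometric mean of a reciprocal judgment matrix, normalized."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def cosine(u, v):
    """Cosine similarity between two weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# One expert's hypothetical 3x3 reciprocal comparison matrix
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)

# Induced ordering: rank expert vectors by similarity to the group mean
experts = [w, [0.6, 0.25, 0.15], [0.5, 0.3, 0.2]]
mean = [sum(col) / len(experts) for col in zip(*experts)]
ordered = sorted(experts, key=lambda v: cosine(v, mean), reverse=True)
```

In the paper's pipeline, position weights (from the ME method) would then be applied to `ordered` to form the consensus vector; that step is omitted here.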
21 pages, 6016 KB  
Article
Statistical Learning Improves Classification of Limestone Provenance
by Rok Brajkovič and Klemen Koselj
Heritage 2025, 8(11), 464; https://doi.org/10.3390/heritage8110464 (registering DOI) - 6 Nov 2025
Abstract
Determining the lithostratigraphic provenance of limestone artefacts is challenging. We addressed the issue by analysing Roman stone artefacts, where traditional petrological methods had previously failed to identify the provenance of 72% of the products due to the predominance of micrite limestone. We applied statistical classification methods to 15 artefacts using linear discriminant analysis, decision trees, random forest, and support vector machines. The latter achieved the highest accuracy, with 73% of the samples classified to the same stratigraphic member as determined by the expert. We improved classification reliability, and evaluated it, by aggregating the results of different classifiers for each stone product. Combining the aggregated results with additional evidence from paleontological data or precise optical microscopy leads to successful provenance determination. After a few samples were reassigned in this procedure, a support vector machine correctly classified 87% of the samples. Strontium isotope ratios (87Sr/86Sr) proved particularly effective as provenance indicators. We successfully assigned all stone products to local sources across four lithostratigraphic members, thereby confirming local patterns of stone use by the Romans. We provide guidance for the future use of statistical learning in provenance determination. Our integrated approach, combining geological and statistical expertise, provides a robust framework for challenging provenance determinations. Full article
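The aggregation step described here, combining the labels assigned by several classifiers per artefact, is classically done by majority vote. The sketch below is a generic illustration under that assumption (the paper does not specify its aggregation rule in this abstract); classifier names and member labels are invented.

```python
from collections import Counter

def majority_vote(labels):
    """Most common label; ties resolve by first-seen order."""
    return Counter(labels).most_common(1)[0][0]

# Per-artefact predictions from four hypothetical classifiers
votes = {
    "artefact_1": ["MemberA", "MemberA", "MemberB", "MemberA"],
    "artefact_2": ["MemberC", "MemberB", "MemberC", "MemberC"],
}
consensus = {name: majority_vote(v) for name, v in votes.items()}
```

A unanimous or near-unanimous vote can also serve as a confidence signal, flagging artefacts whose provenance should be cross-checked against paleontological or microscopy evidence.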
22 pages, 1020 KB  
Article
Spherical Fuzzy CRITIC–ARAS Framework for Evaluating Flow Experience in Metaverse Fashion Retail
by Adnan Veysel Ertemel, Nurdan Tümbek Tekeoğlu and Ayşe Karayılan
Processes 2025, 13(11), 3578; https://doi.org/10.3390/pr13113578 - 6 Nov 2025
Abstract
The Metaverse—an evolving convergence of virtual and physical realities—has emerged as a transformative platform, particularly within the fashion and retail industries. Its immersive nature aligns closely with the principles of flow theory, which describes the optimal psychological state of deep engagement and enjoyment. This study investigates the dynamics of fashion retail shopping in the Metaverse through a novel multi-criteria decision-making (MCDM) framework. Specifically, it integrates the CRITIC and ARAS methods within a spherical fuzzy environment to address decision-making under uncertainty. Flow theory is employed as the theoretical lens, with its dimensions serving as evaluation criteria. By incorporating spherical fuzzy sets, the model accommodates expert uncertainty more effectively. The findings provide empirical insights into the relative importance of flow constructs in shaping immersive consumer experiences in Metaverse-based retail environments. This study offers both theoretical contributions to the literature on digital consumer behavior and practical implications for fashion brands navigating immersive virtual ecosystems. Sensitivity analyses and comparative validation further demonstrate the robustness of the proposed framework. Full article
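The CRITIC half of the framework is straightforward to sketch in its crisp (non-fuzzy) form: each criterion's weight is its standard deviation times its total "conflict" with the other criteria (one minus pairwise correlation), normalized. The paper uses a spherical fuzzy extension; the score matrix below is invented.

```python
import statistics

X = [  # rows = alternatives, columns = criteria, already scaled to 0..1
    [0.8, 0.4, 0.6],
    [0.5, 0.9, 0.3],
    [0.6, 0.7, 0.8],
    [0.2, 0.5, 0.7],
]
cols = list(zip(*X))

def pearson(a, b):
    """Pearson correlation between two criterion columns."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Information content C_j = sigma_j * sum_k (1 - r_jk), then normalize
sd = [statistics.pstdev(c) for c in cols]
info = [sd[j] * sum(1 - pearson(cols[j], cols[k]) for k in range(len(cols)))
        for j in range(len(cols))]
weights = [c / sum(info) for c in info]
```

Criteria that vary a lot and disagree with the others carry more information and thus more weight; ARAS (or here, its spherical fuzzy variant) would then rank alternatives using these weights.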
22 pages, 1071 KB  
Article
Development and Validation of a Questionnaire to Evaluate AI-Generated Summaries for Radiologists: ELEGANCE (Expert-Led Evaluation of Generative AI Competence and ExcelleNCE)
by Yuriy A. Vasilev, Anton V. Vladzymyrskyy, Olga V. Omelyanskaya, Yulya A. Alymova, Dina A. Akhmedzyanova, Yuliya F. Shumskaya, Maria R. Kodenko, Ivan A. Blokhin and Roman V. Reshetnikov
AI 2025, 6(11), 287; https://doi.org/10.3390/ai6110287 - 5 Nov 2025
Abstract
Background/Objectives: Large language models (LLMs) are increasingly considered for use in radiology, including the summarization of patient medical records to support radiologists in processing large volumes of data under time constraints. This task requires not only accuracy and completeness but also clinical applicability. Automatic metrics and general-purpose questionnaires fail to capture these dimensions, and no standardized tool currently exists for the expert evaluation of LLM-generated summaries in radiology. Here, we aimed to develop and validate such a tool. Methods: Items for the questionnaire were formulated and refined through focus group testing with radiologists. Validation was performed on 132 LLM-generated summaries of 44 patient records, each independently assessed by radiologists. Criterion validity was evaluated through known-group differentiation and construct validity through confirmatory factor analysis. Results: The resulting seven-item instrument, ELEGANCE (Expert-Led Evaluation of Generative AI Competence and Excellence), demonstrated excellent internal consistency (Cronbach’s α = 0.95). It encompasses seven dimensions: relevance, completeness, applicability, falsification, satisfaction, structure, and correctness of language and terminology. Confirmatory factor analysis supported a two-factor structure (content and form), with strong fit indices (RMSEA = 0.079, CFI = 0.989, TLI = 0.982, SRMR = 0.029). Criterion validity was confirmed by significant between-group differences (p < 0.001). Conclusions: ELEGANCE is the first validated tool for expert evaluation of LLM-generated medical record summaries for radiologists, providing a standardized framework to ensure quality and clinical utility. Full article
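The internal-consistency statistic reported for ELEGANCE, Cronbach's alpha, is simple to compute: the number of items scaled by one minus the ratio of summed item variances to the variance of total scores. The ratings below are a made-up toy matrix for a seven-item questionnaire, so the resulting alpha will differ from the paper's 0.95.

```python
import statistics

ratings = [  # rows = rated summaries, columns = the seven items
    [5, 4, 5, 4, 5, 5, 4],
    [3, 3, 4, 3, 3, 4, 3],
    [4, 4, 4, 5, 4, 4, 4],
    [2, 3, 2, 2, 3, 2, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(rows[0])
    item_vars = [statistics.variance(col) for col in zip(*rows)]
    total_var = statistics.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(ratings)
```

Values above roughly 0.9 are conventionally read as excellent internal consistency, which is the benchmark the abstract cites.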
19 pages, 2457 KB  
Article
A Logic Tensor Network-Based Neurosymbolic Framework for Explainable Diabetes Prediction
by Semanto Mondal, Antonino Ferraro, Fabiano Pecorelli and Giuseppe De Pietro
Appl. Sci. 2025, 15(21), 11806; https://doi.org/10.3390/app152111806 - 5 Nov 2025
Abstract
Neurosymbolic AI is an emerging paradigm that combines neural network learning capabilities with the structured reasoning capacity of symbolic systems. Although machine learning has achieved cutting-edge outcomes in diverse fields, including healthcare, agriculture, and environmental science, it has potential limitations. Machine learning and neural models excel at identifying intricate data patterns, yet they often lack transparency, depend on large labelled datasets, and face challenges with logical reasoning and tasks that require explainability. These challenges reduce their reliability in high-stakes applications such as healthcare. To address these limitations, we propose a hybrid framework that integrates symbolic knowledge expressed in First-Order Logic into neural learning via a Logic Tensor Network (LTN). In this framework, expert-defined medical rules are embedded as logical axioms with learnable thresholds. As a result, the model gains predictive power, interpretability, and explainability through reasoning over the logical rules. We have utilized this neurosymbolic method for predicting diabetes by employing the Pima Indians Diabetes Dataset. Our experimental setup evaluates the LTN-based model against several conventional methods, including Support Vector Machines (SVM), Logistic Regression (LR), K-Nearest Neighbors (K-NN), Random Forest Classifiers (RF), Naive Bayes (NB), and a Standalone Neural Network (NN). The findings demonstrate that the neurosymbolic framework not only surpasses traditional models in predictive accuracy but also offers improved explainability and robustness. Notably, the LTN-based neurosymbolic framework achieves an excellent balance between recall and precision, along with a higher AUC-ROC score. These results underscore its potential for trustworthy medical diagnostics. This work highlights how integrating symbolic reasoning with data-driven models can bridge the gap between explainability, interpretability, and performance, offering a promising direction for AI systems in domains where both accuracy and explainability are critical. Full article
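The core LTN idea, predicates that return truth degrees in [0, 1] and rules evaluated with fuzzy connectives, can be illustrated numerically. The sigmoid predicate, the Łukasiewicz implication, and the glucose threshold below are illustrative choices, not the paper's exact configuration; in an actual LTN the threshold would be a learnable parameter.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def high_glucose(x, threshold=125.0, scale=0.1):
    """Soft truth degree of 'glucose level x is high' (threshold learnable)."""
    return sigmoid(scale * (x - threshold))

def implies(p, q):
    """Lukasiewicz implication: min(1, 1 - p + q)."""
    return min(1.0, 1 - p + q)

# Satisfaction of the axiom 'high glucose -> diabetic' for one patient
p_high = high_glucose(160.0)
p_diabetic = 0.9                    # hypothetical neural model output
sat = implies(p_high, p_diabetic)
```

Training an LTN maximizes the aggregate satisfaction of all such axioms over the data, which is how the expert rules constrain the neural predictions.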
16 pages, 6124 KB  
Article
FPGA-Parallelized Digital Filtering for Real-Time Linear Envelope Detection of Surface Electromyography Signal on cRIO Embedded System
by Abdelouahad Achmamad, Atman Jbari and Nourdin Yaakoubi
Sensors 2025, 25(21), 6770; https://doi.org/10.3390/s25216770 - 5 Nov 2025
Abstract
Surface electromyography (sEMG) signal processing has been studied extensively for many years, with the main objective of providing pertinent information to medical experts to help them make correct interpretations and medical diagnoses. Beyond its clinical relevance, sEMG plays a critical role in human–machine interface systems by monitoring skeletal muscle activity through analysis of the signal’s amplitude envelope. Achieving accurate envelope detection, however, demands a robust and efficient signal processing pipeline. This paper presents the implementation of an optimized processing framework for the real-time linear envelope detection of sEMG signals. The proposed pipeline comprises three main stages, namely data acquisition, full-wave rectification, and low-pass filtering, where the deterministic execution time of the algorithm on the FPGA (98 ns per sample) is two orders of magnitude faster than the data acquisition sample interval (200 µs), guaranteeing real-time performance. The entire algorithm is designed for deployment on the FPGA core of a CompactRIO embedded controller, with emphasis on achieving high accuracy while minimizing hardware resource consumption. For this purpose, a parallel second-order structure of the Butterworth low-pass (LP) filter is proposed. The designed filter is tested and compared in practice to the conventional method, the moving average (MAV) filter. The mean square error (MSE) is used as the metric for performance evaluation. The analysis shows that the proposed LP filter achieves a lower MSE and uses fewer hardware resources than the MAV filter. Furthermore, the comparative analysis and results show that the proposed LP filter is a valid and reliable method for linear envelope detection. Full article
(This article belongs to the Section Electronic Sensors)
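The rectify-then-low-pass pipeline described above can be sketched in software. The sketch below is not the FPGA implementation: it uses a single direct-form biquad with textbook bilinear-transform Butterworth coefficients (the paper uses a parallel second-order realization), and the 6 Hz cutoff, 5 kHz sample rate, and test tone are hypothetical.

```python
import math

def butter2_lowpass(fc, fs):
    """Biquad coefficients for a 2nd-order Butterworth low-pass
    (bilinear transform, Q = 1/sqrt(2)); a common textbook form."""
    k = math.tan(math.pi * fc / fs)
    q = 1 / math.sqrt(2)
    norm = 1 / (1 + k / q + k * k)
    b = (k * k * norm, 2 * k * k * norm, k * k * norm)
    a = (2 * (k * k - 1) * norm, (1 - k / q + k * k) * norm)
    return b, a

def envelope(x, fc=6.0, fs=5000.0):
    """Full-wave rectification followed by the low-pass filter."""
    (b0, b1, b2), (a1, a2) = butter2_lowpass(fc, fs)
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        r = abs(s)                      # full-wave rectification
        out = b0 * r + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, r                  # shift the delay line
        y2, y1 = y1, out
        y.append(out)
    return y

# 1 s of a 50 Hz sEMG-like tone at fs = 5 kHz (hypothetical values)
sig = [math.sin(2 * math.pi * 50 * t / 5000) for t in range(5000)]
env = envelope(sig)
```

For a rectified sine the envelope settles near its mean of 2/π ≈ 0.64; on an FPGA the same difference equation maps to a fixed number of multiply-accumulates per sample, which is what makes the 98 ns deterministic execution time achievable.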
17 pages, 898 KB  
Article
One Step Closer to Conversational Medical Records: ChatGPT Parses Psoriasis Treatments from EMRs
by Jonathan Shapiro, Mor Atlas, Sharon Baum, Felix Pavlotsky, Aviv Barzilai, Rotem Gershon, Romi Gleicher and Itay Cohen
J. Clin. Med. 2025, 14(21), 7845; https://doi.org/10.3390/jcm14217845 - 5 Nov 2025
Abstract
Background: Large Language Models (LLMs), such as ChatGPT, are increasingly applied in medicine for summarization, clinical decision support, and diagnostic assistance, including recent work in dermatology. Previous AI and NLP models in dermatology have mainly focused on lesion classification, diagnostic support, and patient education, while extracting structured treatment information from unstructured dermatology records remains underexplored. We evaluated ChatGPT-4o’s ability to identify psoriasis treatments from free-text documentation, compared with expert annotations. Methods: In total, 94 electronic medical records (EMRs) of patients diagnosed with psoriasis were analyzed. ChatGPT-4o extracted treatments used for psoriasis from each unstructured clinical note. Its output was compared to manually curated reference annotations by expert dermatologists. A total of 83 treatments, including topical agents, systemic medications, biologics, phototherapy, and procedural interventions, were evaluated. Performance metrics included recall, precision, F1-score, specificity, accuracy, Cohen’s Kappa, and Area Under the Curve (AUC). Analyses were conducted at the individual-treatment level and grouped into pharmacologic categories. Results: ChatGPT-4o demonstrated strong performance, with recall of 0.91, precision of 0.96, F1-score of 0.94, specificity of 0.99, and accuracy of 0.99. Agreement with expert annotations was high (Cohen’s Kappa = 0.93; AUC = 0.98). Group-level analysis confirmed these results, with the highest performance in biologics and methotrexate (F1 = 1.00) and lower recall in categories with vague documentation, such as systemic corticosteroids and antihistamines. Conclusions: Our study highlights the potential of LLMs to extract psoriasis treatment information from unstructured clinical documentation and structure it for research and decision support. The model performed best with well-defined, commonly used treatments. Full article
(This article belongs to the Section Dermatology)
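The evaluation metrics this study reports (recall, precision, F1, accuracy, Cohen's Kappa) all follow from a binary confusion matrix. The sketch below computes them on invented treatment-mention labels (1 = treatment present in the note), not the study's data.

```python
# Hypothetical gold annotations vs. model extractions for ten note-treatment pairs
gold = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
pred = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

tp = sum(g and p for g, p in zip(gold, pred))
fp = sum((not g) and p for g, p in zip(gold, pred))
fn = sum(g and (not p) for g, p in zip(gold, pred))
tn = len(gold) - tp - fp - fn

recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / len(gold)

# Cohen's kappa: observed agreement corrected for chance agreement
p_o = accuracy
p_yes = ((tp + fn) / len(gold)) * ((tp + fp) / len(gold))
p_no = ((fp + tn) / len(gold)) * ((fn + tn) / len(gold))
kappa = (p_o - p_yes - p_no) / (1 - p_yes - p_no)
```

Kappa is the most informative of these when classes are imbalanced, as with rarely documented treatments, because raw accuracy is inflated by the many true negatives.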
26 pages, 759 KB  
Article
From Price to Value: Implementing Best Value Procurement in the Czech Public Sector—A Case Study with Survey Insights
by Petr Marvan and Vít Hromádka
Buildings 2025, 15(21), 3981; https://doi.org/10.3390/buildings15213981 - 4 Nov 2025
Abstract
This paper explores the implementation of the Best Value Approach in public procurement, particularly in construction projects, with a focus on its application at Brno University of Technology. This study addresses the need for qualitative evaluation criteria in supplier selection to improve project outcomes and mitigate risks. The key problem addressed in this paper is the use of qualitative methods in selecting suitable contractors for public contracts. As the main methods, a descriptive mixed-methods study that includes a narrative overview and two descriptive cross-sectional surveys were adopted. Drawing on theoretical foundations such as Information Measurement Theory and the Kashiwagi Solution Model, this paper outlines the principles and processes of BVA, including its emphasis on transparency, expert-driven decision-making, and risk management. The results show that BVA enhances procurement quality by reducing reliance on lowest-price criteria, encouraging realistic pricing, and fostering deeper bidder engagement. The surveys reveal growing interest in qualitative methods but also highlight limited awareness and experience with BVA in the Czech Republic. Pilot projects confirmed the method’s effectiveness and informed procedural refinements. This paper concludes that successful BVA implementation requires a paradigm shift, leadership support, education, and continuous improvement. BVA principles offer tools for cultivating transparency, efficiency, and quality in public procurement. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
13 pages, 259 KB  
Article
Social Media’s Impact on Public Awareness of the Effects of Dietary Habits and Fluid Consumption on Kidney Stone Formation: A Cross-Sectional Study
by Mansour Alnazari, Omar Ayidh Alotaibi, Abdulaziz Ali Alharbi, Saad Mohammed Alharthi, Ahmed H. Al-wadani, Muteb Obaid Alharthi, Bassam Abdulaziz Alosaimi, Abdulaziz Mohammed Alrasheed, Suliman Ahmed Albedaiwi, Turki Dibas Alharbi, Shahad Adel Alhemaid, Huda Yousef Alhashem, Wesam Khan and Emad Rajih
Healthcare 2025, 13(21), 2795; https://doi.org/10.3390/healthcare13212795 - 4 Nov 2025
Abstract
Background: Renal stone disease is a common urological condition considered to be greatly affected by lifestyle, dietary practices, and hydration status. With the rapid advancement and remarkable rise in digital communication, social media has become an important source of health information. However, little is known about its effects on raising public awareness of dietary and fluid-related risk factors for kidney stone formation, particularly in Middle Eastern populations. Aim: We aimed to evaluate the impact of social media platforms on public awareness of dietary habits and fluid consumption in relation to kidney stone prevention. Methods: A cross-sectional survey was applied to 980 adults with varying demographic characteristics. Data on social media use, dietary and fluid knowledge, and attitudes toward kidney stone prevention were collected through structured questionnaires. Statistical analyses, including regression and mediation models, were employed to identify predictors of awareness and explore pathways linking social media use to knowledge and attitudes. Results: Among the 980 participants (mean age = 29.9 ± 11 years; 55.4% males), 69.9% held university degrees, and 7.2% had a history of kidney stones. The overall awareness of kidney stone prevention varied, with most of the participants recognizing the protective role of adequate hydration (67%) and the adverse impact of soft-drink consumption (73.2%), while knowledge of dietary contributors such as animal protein and tea was limited. Greater knowledge and more appropriate attitudes were associated with older age, female gender, following healthcare professionals, and engagement with medical websites, YouTube, and TikTok. Mediation analysis revealed that social media influenced awareness indirectly through improvements in knowledge and attitudes. Conclusions: This study reveals that the digital environment shapes both public knowledge of and attitudes toward kidney stone prevention, though critical knowledge gaps persist regarding complex dietary factors. Mediation analysis indicated that the digital influence is likely channeled through improvements in knowledge and attitudes. We emphasize that source credibility is paramount; relying on official medical websites and following health professionals were the most effective strategies for boosting awareness. Therefore, expert-led educational strategies must be integrated into public health protocols. Full article
23 pages, 4947 KB  
Article
Graded Evaluation and Optimal Scheme Selection of Mine Rock Diggability Based on the Multidimensional Cloud Model
by Shibin Yao, Xiaoyuan Li, Jian Zhou and Manoj Khandelwal
Machines 2025, 13(11), 1019; https://doi.org/10.3390/machines13111019 - 3 Nov 2025
Abstract
With the advancement of mining technologies, the evaluation of rock diggability has become a critical research topic for ensuring both safety and efficiency in mining operations. This study establishes a comprehensive evaluation system for mine rock diggability and proposes corresponding grading criteria. For the determination of indicator weights, a combination of subjective and objective methods is employed, integrating expert knowledge and data characteristics to identify optimal weights, thereby providing a reliable basis for comprehensive evaluation. The single-indicator cloud model effectively mitigates the difficulties associated with defining transitional values between adjacent intervals. The multidimensional cloud model, by considering the interactions among indicators, enables the optimization of indicator interactions and enhances the interpretability of diggability grades. Comparison with the Diggability Index (DI) method shows a high consistency between the two approaches (R2 = 0.991). The absolute accuracy of diggability levels reaches 74%, while the accuracy based on cloud model fuzzy evaluation reaches 100%, demonstrating the effectiveness of the cloud model in handling transitional intervals and capturing uncertainty. This study provides a novel methodology and theoretical foundation for the scientific evaluation of mine rock diggability, offering practical guidance for reasonable grading, optimization of mining parameters, and interpretation of diggability levels in engineering practice. Full article
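The cloud model named in this abstract handles fuzzy interval boundaries via three numeric characteristics: expectation (Ex), entropy (En), and hyper-entropy (He). The sketch below is a standard one-dimensional forward normal cloud generator from the cloud-model literature, not the authors' multidimensional implementation; the Ex/En/He values are hypothetical.

```python
import math
import random

random.seed(1)

def cloud_drops(ex, en, he, n=1000):
    """Generate n (x, membership) drops for a normal cloud C(Ex, En, He)."""
    drops = []
    for _ in range(n):
        en_i = random.gauss(en, he)     # entropy perturbed by hyper-entropy
        x = random.gauss(ex, abs(en_i))
        mu = math.exp(-((x - ex) ** 2) / (2 * en_i ** 2))
        drops.append((x, mu))
    return drops

# Hypothetical grade boundary: indicator values near 50 with fuzzy spread
drops = cloud_drops(ex=50.0, en=5.0, he=0.5)
```

Because membership falls off smoothly rather than cutting off at an interval edge, the model captures exactly the transitional-value ambiguity between adjacent diggability grades that the abstract highlights.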
29 pages, 943 KB  
Article
A Linguistic q-Rung Orthopair ELECTRE II Algorithm for Fuzzy Multi-Criteria Ontology Ranking
by Ameeth Sooklall and Jean Vincent Fonou-Dombeu
Big Data Cogn. Comput. 2025, 9(11), 277; https://doi.org/10.3390/bdcc9110277 - 3 Nov 2025
Abstract
In recent years, interest in the application of ontologies in various domains of knowledge has grown significantly. Ontologies are widely used in a myriad of areas, such as artificial intelligence, data integration, knowledge management, and the semantic web, to name but a few. However, despite the widespread adoption, there exist a range of problems associated with ontologies, such as the complexity and cognitive challenges associated with ontology engineering, design, and development. One of the solutions to these challenges is to reuse existing ontologies rather than developing new ontologies afresh for new applications. The reuse of ontologies that describe a knowledge domain is a complex task consisting of many aspects. One of the key aspects involves ranking ontologies to aid in their selection. Various techniques have been proposed for this task, but many of them fall short in their expressiveness and ability to capture the cognitive aspects of human-like decision-making processes. Furthermore, much of the existing research focuses on an objective approach to ontology ranking, but it is unquestionable that a wide range of aspects pertaining to the quality of an ontology simply cannot be captured in a quantitative manner. Existing ranking models fail to provide a robust and flexible canvas for facilitating qualitative ontology ranking and selection for reuse. To address the aforementioned shortcomings of existing ontology ranking approaches, this study proposes a novel algorithm for ranking ontologies that extends the Elimination and Choice Translating Reality (ELECTRE) multi-criteria decision-making method with the Linguistic q-Rung Orthopair Fuzzy Set (Lq-ROFS-ELECTRE II), allowing the expression of uncertainty in a more robust and precise manner. The new Lq-ROFS-ELECTRE II algorithm was applied to rank a set of 19 ontologies of the machine learning (ML) domain. 
The ML ontologies were evaluated using a set of seven qualitative criteria extracted from the OntoMetric framework. The proposed Lq-ROFS-ELECTRE II algorithm was then applied to rank the 19 ontologies against the seven criteria. The resulting ranking was compared with the quantitative ranking of the same 19 ontologies produced by the traditional ELECTRE II algorithm, confirming the validity of the ranking produced by the proposed Lq-ROFS-ELECTRE II algorithm and its effectiveness for ontology ranking. Furthermore, a comparative analysis of the proposed Lq-ROFS-ELECTRE II against existing MCDM methods and other fuzzy ELECTRE II methods demonstrated its superior modeling capabilities, which allow more natural evaluations from subject experts in real-world applications and give the decision-maker greater flexibility in expressing preferences. These capabilities make the Lq-ROFS-ELECTRE II algorithm applicable not only to ontology ranking but to any decision-making scenario involving multiple conflicting criteria under uncertainty. Full article
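The outranking core that the proposed method builds on can be illustrated with the classical, crisp ELECTRE II-style concordance/discordance step. The sketch below is not the authors' Lq-ROFS extension; the three hypothetical ontologies, their scores, weights, and thresholds are all made up for illustration (all criteria assumed benefit-type):

```python
# Minimal sketch of the concordance/discordance step underlying classical
# ELECTRE II, the crisp method that the paper extends with linguistic
# q-rung orthopair fuzzy sets. Data and thresholds are illustrative.

def concordance(a, b, weights):
    """Share of total weight on criteria where alternative a is at least
    as good as alternative b (benefit-type criteria)."""
    total = sum(weights)
    return sum(w for x, y, w in zip(a, b, weights) if x >= y) / total

def discordance(a, b):
    """Largest normalized amount by which b beats a on any criterion."""
    diffs = [y - x for x, y in zip(a, b)]
    spread = max(max(abs(d) for d in diffs), 1e-12)
    return max(max(diffs), 0.0) / spread

# Three hypothetical ontologies scored on three criteria (higher = better).
scores = {
    "O1": [0.8, 0.6, 0.9],
    "O2": [0.7, 0.8, 0.5],
    "O3": [0.4, 0.5, 0.6],
}
weights = [0.5, 0.3, 0.2]

# a outranks b when concordance is high enough and discordance low enough.
c_thresh, d_thresh = 0.6, 0.5
for a in scores:
    for b in scores:
        if a == b:
            continue
        c = concordance(scores[a], scores[b], weights)
        d = discordance(scores[a], scores[b])
        if c >= c_thresh and d <= d_thresh:
            print(f"{a} outranks {b} (C={c:.2f}, D={d:.2f})")
```

With these illustrative numbers, O1 outranks both O2 and O3, and O2 outranks O3, yielding the partial order O1 > O2 > O3; ELECTRE II proper derives strong and weak outranking relations from several such threshold pairs.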

14 pages, 4248 KB  
Article
Effect of Additional Aluminum Filtration on the Image Quality in Cone Beam Computed Tomographic Studies of Equine Distal Limbs Using Visual Grading Characteristics Analysis: A Pilot Study
by Luca Papini, Mathieu de Preux, Frederik Pauwels, Joris Missotten and Elke Van der Vekens
Vet. Sci. 2025, 12(11), 1051; https://doi.org/10.3390/vetsci12111051 - 2 Nov 2025
Abstract
(1) Background: Cone beam computed tomography (CBCT) is increasingly used in equine practice to diagnose musculoskeletal injuries, including fractures of the distal limb. However, limited detail in the thick cortical bone of the metacarpus/metatarsus hinders accurate diagnosis. In human medicine, the addition of aluminum filters (AF) has enhanced image quality while reducing radiation exposure. This study aimed to evaluate the effect of AF on image quality in CBCT scans of equine distal limbs. (2) Methods: Adult equine cadaver limbs were scanned with a mobile CBCT unit using varying tube currents (10–100 mA) and AF thicknesses (13–25 mm). Two independent experts assessed image quality using a four-point visual grading scale, focusing on cortical bone detail and artifacts. (3) Results: Higher tube currents generally improved image quality, but none of the filters was beneficial for the metacarpal/metatarsal regions. For the proximal phalanx, thicker AF (19–25 mm) improved image quality without significantly increasing the required tube current. (4) Conclusions: With the O-arm® CBCT system, the optimal balance between image quality and radiation exposure for equine distal limbs was a tube current of 50 or 64 mA without filtration for the metacarpus/metatarsus, while 50 mA with a 19–25 mm AF provided the best image quality for the proximal phalanx. Full article
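Visual grading characteristics (VGC) analysis compares the distributions of ordinal image quality ratings under two conditions, and the area under the VGC curve can be estimated nonparametrically with the Mann-Whitney statistic over all cross-pairs of scores. The sketch below uses invented four-point ratings, not the study's data:

```python
# Illustrative VGC-style comparison: AUC estimated as the probability
# that a rating from protocol A exceeds one from protocol B, counting
# ties as half. 0.5 means the protocols are indistinguishable; values
# above 0.5 favor protocol A. Ratings below are made up (four-point
# scale, higher = better perceived quality).

def vgc_auc(scores_a, scores_b):
    """P(a > b) + 0.5 * P(a == b) over all cross-pairs of ratings."""
    wins = ties = 0
    for a in scores_a:
        for b in scores_b:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_a) * len(scores_b))

filtered   = [3, 4, 3, 4, 2, 3]   # hypothetical scores with an AF
unfiltered = [2, 3, 2, 3, 2, 2]   # hypothetical scores without an AF

print(f"VGC AUC = {vgc_auc(filtered, unfiltered):.3f}")
```

An AUC near 0.8, as in this toy example, would indicate that readers rated the filtered images higher more often than not; a confidence interval would normally be obtained by bootstrapping over cases and readers.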

24 pages, 2940 KB  
Article
Driving Green Through Lean: A Structured Causal Analysis of Lean Practices in Automotive Sustainability
by Matteo Ferrazzi and Alberto Portioli-Staudacher
Eng 2025, 6(11), 296; https://doi.org/10.3390/eng6110296 - 1 Nov 2025
Abstract
The urgent global challenge of environmental sustainability has intensified interest in integrating Lean Management practices with environmental objectives, particularly within the automotive industry, a sector known for both innovation and high environmental impact. This study investigates the systemic relationships between 16 lean practices and three environmental performance metrics: energy consumption, CO2 emissions, and waste generation. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) methodology, data were collected from seven lean experts in the Italian automotive industry to model the cause-effect dynamics among the selected practices. The analysis revealed that certain practices, such as Total Productive Maintenance (TPM), just-in-time (JIT), and one-piece flow, consistently act as influential drivers across all environmental objectives. Conversely, practices like Statistical Process Control (SPC) and Total Quality Management (TQM) were identified as highly dependent, delivering their full benefits only when preceded by foundational practices. The results suggest a strategic three-step implementation roadmap tailored to each environmental goal, providing decision-makers with actionable guidance for sustainable transformation. This study contributes to the literature by offering a structured perspective on lean and environmental sustainability in the context of the Italian automotive sector. The research is supported by a data-driven method to prioritize practices based on their systemic influence and contextual effectiveness. Full article
(This article belongs to the Section Chemical, Civil and Environmental Engineering)
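The cause-effect split that DEMATEL produces can be sketched with the crisp version of the method that underlies the fuzzy variant used in the paper: normalize an aggregated direct-influence matrix, compute the total-relation matrix T = D(I - D)^-1, and classify each factor as cause or effect by the sign of its relation score. The 3x3 matrix, the factor names, and the resulting grouping below are illustrative, not the study's data:

```python
# Rough sketch of crisp DEMATEL. A is an aggregated expert judgment
# matrix (0 = no influence .. 4 = very high influence); factor i is a
# "cause" when it sends more influence than it receives (r - c > 0).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_inv(A):
    """Gauss-Jordan inverse for a small square matrix."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def dematel(A):
    n = len(A)
    s = max(sum(row) for row in A)           # normalize by largest row sum
    D = [[x / s for x in row] for row in A]
    I_minus_D = [[(1.0 if i == j else 0.0) - D[i][j] for j in range(n)]
                 for i in range(n)]
    T = mat_mul(D, mat_inv(I_minus_D))       # total-relation matrix
    r = [sum(T[i][j] for j in range(n)) for i in range(n)]  # influence given
    c = [sum(T[i][j] for i in range(n)) for j in range(n)]  # influence received
    return [(r[i] + c[i], r[i] - c[i]) for i in range(n)]   # (prominence, relation)

# Hypothetical aggregated judgments for three practices.
factors = ["TPM", "JIT", "SPC"]
A = [[0, 3, 2],
     [2, 0, 3],
     [1, 1, 0]]

for name, (prom, rel) in zip(factors, dematel(A)):
    group = "cause" if rel > 0 else "effect"
    print(f"{name}: prominence={prom:.2f}, relation={rel:+.2f} ({group})")
```

With these invented judgments, TPM and JIT land in the cause group and SPC in the effect group, mirroring the kind of driver/dependent distinction the abstract reports; the fuzzy variant replaces the crisp entries of A with fuzzy numbers aggregated across experts before defuzzification.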
