Search Results (38)

Search Parameters:
Keywords = goal question metric

28 pages, 3315 KiB  
Article
Cloud Security Assessment: A Taxonomy-Based and Stakeholder-Driven Approach
by Abdullah Abuhussein, Faisal Alsubaei, Vivek Shandilya, Fredrick Sheldon and Sajjan Shiva
Information 2025, 16(4), 291; https://doi.org/10.3390/info16040291 - 4 Apr 2025
Viewed by 446
Abstract
Cloud adoption necessitates relinquishing data control to cloud service providers (CSPs), involving diverse stakeholders with varying security and privacy (S&P) needs and responsibilities. Building upon previously published work, this paper addresses the persistent challenge of a lack of standardized, transparent methods for consumers to select and quantify appropriate S&P measures. This work introduces a stakeholder-centric methodology to identify and address S&P challenges, enabling stakeholders to assess their cloud service protection capabilities. The primary contribution lies in the development of new classifications and updated considerations, along with tailored S&P features designed to accommodate specific service models, deployment models, and stakeholder roles. This novel approach shifts from data or infrastructure perspectives to comprehensively account for S&P issues arising from stakeholder interactions and conflicts. A prototype framework, utilizing a rule-based taxonomy and the Goal–Question–Metric (GQM) method, recommends essential S&P attributes. Multi-criteria decision-making (MCDM) is employed to measure protection levels and facilitate benchmarking. The evaluation of the implemented prototype demonstrates the framework’s effectiveness in recommending and consistently measuring security features. This work aims to reduce consumer apprehension regarding cloud migration, improve transparency between consumers and CSPs, and foster competitive transparency among CSPs. Full article
(This article belongs to the Special Issue Internet of Things (IoT) and Cloud/Edge Computing)
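To make the combination of the Goal–Question–Metric method and multi-criteria decision-making concrete, the sketch below shows one possible shape of such a recommendation and scoring step. It is a hypothetical illustration only: the goal, questions, attribute names, weights, and measured scores are invented placeholders, not the taxonomy or values used in the paper.

```python
# Hypothetical sketch of a Goal-Question-Metric (GQM) breakdown combined with a
# simple weighted-sum multi-criteria score; attribute names, weights, and scores
# are illustrative placeholders, not values from the paper.
gqm = {
    "goal": "Assess data confidentiality offered by a CSP for a SaaS deployment",
    "questions": {
        "Is customer data encrypted at rest?": {"metric": "encryption_at_rest", "weight": 0.4},
        "Is key management under customer control?": {"metric": "customer_managed_keys", "weight": 0.3},
        "Are access logs available to the consumer?": {"metric": "access_log_availability", "weight": 0.3},
    },
}

# Example CSP measurements on a 0..1 scale (placeholder data).
measurements = {"encryption_at_rest": 1.0, "customer_managed_keys": 0.5, "access_log_availability": 0.8}

def weighted_protection_score(gqm_tree, values):
    """Aggregate per-metric scores into one protection level (simple weighted sum)."""
    return sum(q["weight"] * values[q["metric"]] for q in gqm_tree["questions"].values())

print(f"Protection level for goal '{gqm['goal']}': {weighted_protection_score(gqm, measurements):.2f}")
```

A weighted sum is only the simplest MCDM aggregation; the framework described in the abstract may well use a different scheme.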

23 pages, 5045 KiB  
Article
Urban Geography Compression Patterns: Non-Euclidean and Fractal Viewpoints
by Daniel A. Griffith and Sandra Lach Arlinghaus
AppliedMath 2025, 5(1), 9; https://doi.org/10.3390/appliedmath5010009 - 21 Jan 2025
Viewed by 1219
Abstract
The intersection of fractals, non-Euclidean geometry, spatial autocorrelation, and urban structure offers valuable theoretical and practical application insights, which echoes the overarching goal of this paper. Its research question asks about connections between graph theory adjacency matrix eigenfunctions and certain non-Euclidean grid systems; its explorations reflect accompanying synergistic influences on modern urban design. A Minkowski metric with an exponent between one and two bridges Manhattan and Euclidean spaces, supplying an effective tool in these pursuits. This model coalesces with urban fractal dimensions, shedding light on network density and human activity compression. Unlike Euclidean geometry, which assumes unique shortest paths, Manhattan geometry better represents human movements that typically follow multiple equal-length network routes instead of unfettered straight-line paths. Applying these concepts to urban spatial models, like the Burgess concentric ring conceptualization, reinforces the need for fractal analyses in urban studies. Incorporating a fractal perspective into eigenvector methods, particularly those affiliated with spatial autocorrelation, provides a deeper understanding of urban structure and dynamics, enlightening scholars about city evolution and functions. This approach enhances geometric understanding of city layouts and human behavior, offering insights into urban planning, network density, and human activity flows. Blending theoretical and applied concepts renders a clearer picture of the complex patterns shaping urban spaces. Full article
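For readers unfamiliar with the bridge mentioned above, the short sketch below gives the standard Minkowski distance and shows how the exponent interpolates between Manhattan (p = 1) and Euclidean (p = 2) geometry. This is textbook reference material, not code or data from the paper.

```python
# Standard Minkowski distance: p = 1 gives the Manhattan (taxicab) metric,
# p = 2 the Euclidean metric, and 1 < p < 2 the intermediate geometries the
# abstract refers to.
def minkowski(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

a, b = (0.0, 0.0), (3.0, 4.0)
print(minkowski(a, b, 1.0))   # 7.0   Manhattan / taxicab distance
print(minkowski(a, b, 1.5))   # ~5.6  intermediate geometry
print(minkowski(a, b, 2.0))   # 5.0   Euclidean straight-line distance
```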

18 pages, 3635 KiB  
Article
Diagnostic Approach and Tool for Assessing and Increasing the Sustainability of Renewable Energy Projects
by Jing Tian, Sam Culley, Holger R. Maier, Aaron C. Zecchin and James Hopeward
Sustainability 2024, 16(24), 10871; https://doi.org/10.3390/su162410871 - 11 Dec 2024
Cited by 2 | Viewed by 1414
Abstract
The imperative of achieving net zero carbon emissions is driving the transition to renewable energy sources. However, this often leads to carbon tunnel vision by narrowly focusing on carbon metrics and overlooking broader sustainability impacts. To enable these broader impacts to be considered, we have developed a generic approach and a freely available assessment tool on GitHub that not only facilitate the high-level sustainability assessment of renewable energy projects but also indicate whether project-level decisions have positive, negative, or neutral impacts on each of the sustainable development goals (SDGs). This information highlights potential problem areas and which actions can be taken to increase the sustainability of renewable energy projects. The tool is designed to be accessible and user-friendly: it is implemented in MS Excel and requires only yes/no answers to approximately 60 diagnostic questions. The utility of the approach and tool is illustrated via three desktop case studies performed by the authors. The three illustrative case studies are located in Australia and include a large-scale solar farm, biogas production from wastewater plants, and an offshore wind farm. Results show that the case study projects impact the SDGs in different and unique ways and that different project-level decisions are most influential, highlighting the value of the proposed approach and tool to provide insight into specific projects and their sustainability implications, as well as which actions can be taken to increase project sustainability. Full article
(This article belongs to the Section Energy Sustainability)
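The abstract describes a diagnostic tool that turns yes/no answers into positive, negative, or neutral impacts on the SDGs. The sketch below illustrates that idea in miniature; the questions and SDG mappings are invented placeholders, and the actual tool is an MS Excel workbook with roughly 60 questions.

```python
# Hypothetical sketch of mapping yes/no diagnostic answers to SDG impacts
# (+1 positive, 0 neutral, -1 negative). Questions and mappings are invented
# placeholders; the real tool is an MS Excel workbook with ~60 questions.
QUESTIONS = {
    "Is agricultural land displaced by the installation?": {"SDG 2": -1, "SDG 15": -1},
    "Does the project include end-of-life recycling of components?": {"SDG 12": +1},
    "Are local communities consulted during siting decisions?": {"SDG 11": +1, "SDG 16": +1},
}

def sdg_impacts(answers):
    """Accumulate SDG impact scores from yes/no answers (True = yes)."""
    totals = {}
    for question, impacts in QUESTIONS.items():
        if answers.get(question):
            for sdg, score in impacts.items():
                totals[sdg] = totals.get(sdg, 0) + score
    return totals

example = {q: True for q in QUESTIONS}   # answer "yes" to every question
print(sdg_impacts(example))
```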

19 pages, 8890 KiB  
Article
Forgotten Ecological Corridors: A GIS Analysis of the Ditches and Hedges in the Roman Centuriation Northeast of Padua
by Tanja Kremenić, Mauro Varotto and Francesco Ferrarese
Sustainability 2024, 16(20), 8962; https://doi.org/10.3390/su16208962 - 16 Oct 2024
Cited by 1 | Viewed by 1574
Abstract
Studying historical rural landscapes beyond the archaeological and cultural significance that previous research has typically addressed is important in the context of current environmental challenges. Some historical rural landscapes, such as Roman land divisions, have persisted for more than 2000 years and may still contribute to sustainability goals. To examine this topic, the hydraulic and vegetation networks of the centuriation northeast of Padua were studied, emphasising their multiple benefits. Their length, distribution, and evolution over time (2008–2022) were vectorised and measured using available digital terrain models and orthophotographs in a geographic information system (GIS). The results revealed a significant decline in the length of water ditches and hedgerows across almost all examined areas, despite their preservation being highlighted in regional and local spatial planning documents. These findings indicate the need for a better understanding of the local dynamics driving such trends and highlight the importance of adopting a more tailored approach to their planning. This study discusses the GIS metrics utilised and, in this way, contributes to landscape monitoring and restoration actions. Finally, a multifunctional approach to the sustainable planning of this area is proposed here, one that integrates the cultural archaeological heritage in question with environmental preservation and contemporary climate adaptation and mitigation strategies. Full article

23 pages, 6853 KiB  
Review
Net-Zero Greenhouse Gas Emission Electrified Aircraft Propulsion for Large Commercial Transport
by Hao Huang and Kaushik Rajashekara
World Electr. Veh. J. 2024, 15(9), 411; https://doi.org/10.3390/wevj15090411 - 8 Sep 2024
Cited by 3 | Viewed by 2143
Abstract
Until recently, electrified aircraft propulsion (EAP) technology development has been driven by the dual objectives of reducing greenhouse gas (GHG) emissions and addressing the depletion of fossil fuels. However, the increasing severity of climate change, posing a significant threat to all life forms, has resulted in the global consensus of achieving net-zero GHG emissions by 2050. This major shift has alerted the aviation electrification industry to consider the following: What is the clear path forward for EAP technology development to support the net-zero GHG goals for large commercial transport aviation? The purpose of this paper is to answer this question. After identifying four types of GHG emissions that should be used as metrics to measure the effectiveness of each technology for GHG reduction, the paper presents three significant categories of GHG reduction efforts regarding the engine, evaluates the potential of EAP technologies within each category as well as combinations of technologies among the different categories using the identified metrics, and thus determines the path forward to support the net-zero GHG objective. Specifically, the paper underscores the need for the aviation electrification industry to adapt, adjust, and integrate its EAP technology development into the emerging new engine classes. These innovations and collaborations are crucial to accelerate net-zero GHG efforts effectively. Full article
(This article belongs to the Special Issue Electric and Hybrid Electric Aircraft Propulsion Systems)

20 pages, 1255 KiB  
Article
Training from Zero: Forecasting of Radio Frequency Machine Learning Data Quantity
by William H. Clark and Alan J. Michaels
Telecom 2024, 5(3), 632-651; https://doi.org/10.3390/telecom5030032 - 18 Jul 2024
Viewed by 1815
Abstract
The data used during training in any given application space are directly tied to the performance of the system once deployed. While there are many other factors that are attributed to producing high-performance models based on the Neural Scaling Law within Machine Learning, there is no doubt that the data used to train a system provide the foundation from which to build. One of the underlying heuristics used within the Machine Learning space is that having more data leads to better models, but there is no easy answer to the question, “How much data is needed to achieve the desired level of performance?” This work examines a modulation classification problem in the Radio Frequency domain space, attempting to answer the question of how many training data are required to achieve a desired level of performance, but the procedure readily applies to classification problems across modalities. The ultimate goal is to determine an approach that requires the lowest amount of data collection to better inform a more thorough collection effort to achieve the desired performance metric. By focusing on forecasting the performance of the model rather than the loss value, this approach allows for a greater intuitive understanding of data volume requirements. While this approach will require an initial dataset, the goal is to allow for the initial data collection to be orders of magnitude smaller than what is required for delivering a system that achieves the desired performance. An additional benefit of the techniques presented here is that the quality of different datasets can be numerically evaluated and tied together with the quantity of data, and ultimately, the performance of the architecture in the problem domain. Full article
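One common way to forecast performance from a small pilot dataset, in the spirit of the neural scaling laws mentioned above, is to fit a saturating power law to a few accuracy measurements and extrapolate. The sketch below assumes that functional form and uses invented numbers; it is not the paper's procedure.

```python
# Minimal sketch: fit a saturating power law acc(N) = a - b * N**(-c) to a few
# pilot measurements and forecast how much data a target accuracy would need.
# The functional form and all numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit, brentq

def scaling_law(n, a, b, c):
    return a - b * np.power(n, -c)

# Placeholder pilot results: dataset sizes vs. observed classification accuracy.
sizes = np.array([1e3, 3e3, 1e4, 3e4])
accs  = np.array([0.52, 0.61, 0.70, 0.76])

params, _ = curve_fit(scaling_law, sizes, accs, p0=[0.9, 5.0, 0.3], maxfev=10000)

target = 0.85
# Solve scaling_law(N) = target numerically for N (assuming the fitted ceiling allows it).
needed = brentq(lambda n: scaling_law(n, *params) - target, 1e3, 1e9)
print(f"Fitted params a={params[0]:.3f}, b={params[1]:.3f}, c={params[2]:.3f}")
print(f"Forecast: ~{needed:,.0f} examples for {target:.0%} accuracy")
```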

17 pages, 1065 KiB  
Article
Evaluating Quantized Llama 2 Models for IoT Privacy Policy Language Generation
by Bhavani Malisetty and Alfredo J. Perez
Future Internet 2024, 16(7), 224; https://doi.org/10.3390/fi16070224 - 26 Jun 2024
Cited by 1 | Viewed by 2948
Abstract
Quantized large language models are large language models (LLMs) optimized for model size while preserving their efficacy. They can be executed on consumer-grade computers without the powerful hardware of dedicated servers needed to execute regular (non-quantized) LLMs. Because of their ability to summarize, answer questions, and provide insights, LLMs are being used to analyze large texts/documents. One such type of large text/document is the Internet of Things (IoT) privacy policy, which specifies how smart home gadgets, health-monitoring wearables, and personal voice assistants (among others) collect and manage consumer/user data on behalf of the Internet companies providing these services. Even though privacy policies are important, they are difficult to comprehend due to their length and how they are written, which makes them attractive for analysis using LLMs. This study evaluates how well quantized LLMs model the language of privacy policies, with the aim of eventually using them to transform IoT privacy policies into simpler, more usable formats and thus aid comprehension. While the long-term goal is to achieve this usable transformation, our work focuses on evaluating quantized LLMs applied to IoT privacy policy language. In particular, we study 4-bit, 5-bit, and 8-bit quantized versions of the large language model Meta AI version 2 (Llama 2) and the base Llama 2 model (zero-shot, without fine-tuning) under different metrics and prompts to determine how well these quantized versions model the language of IoT privacy policy documents by completing and generating privacy policy text. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)

26 pages, 8034 KiB  
Article
Unraveling the Impact of Class Imbalance on Deep-Learning Models for Medical Image Classification
by Carlos J. Hellín, Alvaro A. Olmedo, Adrián Valledor, Josefa Gómez, Miguel López-Benítez and Abdelhamid Tayebi
Appl. Sci. 2024, 14(8), 3419; https://doi.org/10.3390/app14083419 - 18 Apr 2024
Cited by 9 | Viewed by 4336
Abstract
The field of image analysis with artificial intelligence has grown exponentially thanks to the development of neural networks. One of its most promising areas is medical diagnosis through lung X-rays, which are crucial for diseases like pneumonia, which can be mistaken for other conditions. Despite medical expertise, precise diagnosis is challenging, and this is where well-trained algorithms can assist. However, working with medical images presents challenges, especially when datasets are limited and unbalanced. Strategies to balance these classes have been explored, but understanding their local impact and how they affect model evaluation is still lacking. This work aims to analyze how a class imbalance in a dataset can significantly influence the informativeness of metrics used to evaluate predictions. It demonstrates that class separation in a dataset impacts trained models and is a strategy deserving more attention in future research. To achieve these goals, classification models using artificial and deep neural networks implemented in the R environment are developed. These models are trained using a set of publicly available images related to lung pathologies. All results are validated using metrics obtained from the confusion matrix to verify the impact of data imbalance on the performance of medical diagnostic models. The results raise questions about the procedures used to group classes in many studies, aiming to achieve class balance in imbalanced data and open new avenues for future research to investigate the impact of class separation in datasets with clinical pathologies. Full article
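A small numeric illustration of the issue discussed above: on an imbalanced test set, overall accuracy can look strong while the minority (pathological) class is largely missed. The confusion-matrix counts below are invented for illustration.

```python
# Illustrative confusion-matrix arithmetic (made-up counts, not the paper's data):
# with 90% "normal" and 10% "pneumonia" images, a model can score high accuracy
# while missing most of the minority class.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score

# 900 normal images, 100 pneumonia images; the model predicts "normal" far too often.
y_true = np.array([0] * 900 + [1] * 100)
y_pred = np.array([0] * 880 + [1] * 20 + [0] * 70 + [1] * 30)

print("accuracy:", accuracy_score(y_true, y_pred))           # 0.91, looks good
print("recall (pneumonia):", recall_score(y_true, y_pred))   # 0.30, clearly poor
print("F1 (pneumonia):", f1_score(y_true, y_pred))           # 0.40
```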

26 pages, 1502 KiB  
Article
Robustness Assessment of AI-Based 2D Object Detection Systems: A Method and Lessons Learned from Two Industrial Cases
by Anne-Laure Wozniak, Sergio Segura and Raúl Mazo
Electronics 2024, 13(7), 1368; https://doi.org/10.3390/electronics13071368 - 4 Apr 2024
Viewed by 2076
Abstract
The reliability of AI-based object detection models has gained interest with their increasing use in safety-critical systems and the development of new regulations on artificial intelligence. To meet the need for robustness evaluation, several authors have proposed methods for testing these models. However, applying these methods in industrial settings can be difficult, and several challenges have been identified in practice in the design and execution of tests. There is, therefore, a need for clear guidelines for practitioners. In this paper, we propose a method and guidelines for assessing the robustness of AI-based 2D object detection systems, based on the Goal Question Metric approach. The method defines the overall robustness testing process and a set of recommended metrics to be used at each stage of the process. We developed and evaluated the method through action research cycles, based on two industrial cases and feedback from practitioners. Thus, the resulting method addresses issues encountered in practice. A qualitative evaluation of the method by practitioners was also conducted to provide insights that can guide future research on the subject. Full article
(This article belongs to the Special Issue AI Test)
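As a minimal illustration of the kind of robustness indicator such a testing process might track, the sketch below summarises detector robustness as the relative drop of mAP@0.5 between clean and perturbed test images. The perturbations and numbers are placeholders, not the metrics recommended in the paper.

```python
# Minimal sketch (not the paper's recommended metric set): robustness is often
# summarised as the relative drop of a detection metric (here mAP@0.5) between
# clean images and several perturbed versions of the same test set.
# All numbers are placeholders.
clean_map = 0.72
perturbed_map = {"gaussian_noise": 0.61, "motion_blur": 0.55, "brightness": 0.68, "fog": 0.58}

drops = {name: (clean_map - m) / clean_map for name, m in perturbed_map.items()}
for name, drop in sorted(drops.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>15}: mAP drop {drop:.1%}")
print(f"{'mean':>15}: mAP drop {sum(drops.values()) / len(drops):.1%}")
```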

22 pages, 2095 KiB  
Article
A Method and Metrics to Assess the Energy Efficiency of Smart Working
by Lucia Cattani, Anna Magrini and Anna Chiari
Buildings 2024, 14(3), 741; https://doi.org/10.3390/buildings14030741 - 9 Mar 2024
Cited by 2 | Viewed by 2302
Abstract
The paper discusses the energy efficiency of smart working (SW) as a solution to traditional work-approach issues, with a focus on evaluating benefits for both employers and employees. Remote working, while offering environmental advantages such as reduced commuting and office space use, poses challenges in assessing its true impact. The study presents results from a dynamic analysis of a real residential building, typical of an architectural style widespread in northern Italy, revealing a 15% average increase in energy consumption when all work tasks are performed from home. To address concerns about the environmental impact of SW, the research proposes a method and metrics for evaluation. Four novel indices (SWEET, SEE, SSEE, and 4E) are introduced, providing a structured approach to assess the energy efficiency of SW initiatives. The paper outlines a methodology for data gathering and metric application, aiming to acquire quantitative insights and mitigate disparities arising from a transfer of burdens to employees. This contribution not only represents a ground-breaking methodology but also addresses an unresolved research question concerning the evaluation of the actual energy efficiency of smart working implementations for both employers and employees. The results underscore the importance of understanding the nuances of SW’s impact on household energy usage and its broader implications for sustainability goals. Full article

18 pages, 408 KiB  
Article
Static Malware Analysis Using Low-Parameter Machine Learning Models
by Ryan Baker del Aguila, Carlos Daniel Contreras Pérez, Alejandra Guadalupe Silva-Trujillo, Juan C. Cuevas-Tello and Jose Nunez-Varela
Computers 2024, 13(3), 59; https://doi.org/10.3390/computers13030059 - 23 Feb 2024
Cited by 13 | Viewed by 5200
Abstract
Recent advancements in cybersecurity threats and malware have brought into question the safety of modern software and computer systems. As a direct result of this, artificial intelligence-based solutions have been on the rise. The goal of this paper is to demonstrate the efficacy of memory-optimized machine learning solutions for the task of static analysis of software metadata. The study comprises an evaluation and comparison of the performance metrics of three popular machine learning solutions: artificial neural networks (ANNs), support vector machines (SVMs), and gradient boosting machines (GBMs). The study provides insights into the effectiveness of memory-optimized machine learning solutions when detecting previously unseen malware. We found that ANNs show the best performance, achieving 93.44% accuracy in classifying programs as either malware or legitimate, even under extreme memory constraints. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
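The sketch below sets up the three model families compared in the study on synthetic feature vectors standing in for static software metadata. Dataset, features, and hyperparameters are placeholders rather than the study's configuration.

```python
# Minimal sketch of the three model families compared in the paper, trained on
# synthetic feature vectors standing in for static software metadata. Dataset,
# features and hyperparameters are placeholders, not the study's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "ANN (small MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "GBM": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {model.score(X_te, y_te):.3f}")
```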

21 pages, 1062 KiB  
Article
Validation of Instruments for the Improvement of Interprofessional Education through Educational Management: An Internet of Things (IoT)-Based Machine Learning Approach
by Mustafa Mohamed, Fahriye Altinay, Zehra Altinay, Gokmen Dagli, Mehmet Altinay and Mutlu Soykurt
Sustainability 2023, 15(24), 16577; https://doi.org/10.3390/su152416577 - 6 Dec 2023
Cited by 2 | Viewed by 1916
Abstract
Educational management is the combination of human and material resources that supervises, plans, and responsibly executes an educational system with outcomes and consequences. However, when seeking improvements in interprofessional education and collaborative practice through the management of health professions, educational modules face significant obstacles and challenges. The primary goal of this study was to analyse data collected from discussion sessions and feedback from respondents concerning interprofessional education (IPE) management modules. Thus, this study used an explanatory and descriptive design to obtain responses from the selected group via a self-administered questionnaire and semi-structured interviews, and the results were limited to averages, i.e., frequency distributions and summary statistics. The results of this study reflect the positive responses from both subgroups and strongly support the further implementation of IPE in various aspects and continuing to improve and develop it. Four different artificial intelligence (AI) techniques were used to model interprofessional education improvement through educational management, using 20 questions from the questionnaire as the variables (19 input variables and 1 output variable). The modelling performance of the nonlinear and linear models could reliably predict the output in both the calibration and validation phases when considering the four performance metrics. These models were shown to be reliable tools for evaluating and modelling interprofessional education through educational management. Gaussian process regression (GPR) outperformed all the models in both the training and validation stages. Full article
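As a minimal sketch of the best-performing technique named above, the code below fits a Gaussian process regressor to 19 synthetic input variables and one output, mirroring the variable counts in the abstract. The data, kernel, and split are generic assumptions, not the study's setup.

```python
# Minimal sketch of Gaussian process regression with 19 input variables and one
# output, mirroring the variable counts in the abstract. Data are synthetic and
# the kernel choice is a generic default, not the study's configuration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(120, 19))                  # e.g. 19 Likert-style questionnaire items
y = X[:, :5].mean(axis=1) + rng.normal(0, 0.1, 120)    # placeholder response variable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X_tr, y_tr)
print("validation R^2:", round(gpr.score(X_te, y_te), 3))
```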

25 pages, 3442 KiB  
Article
The Human-Centredness Metric: Early Assessment of the Quality of Human-Centred Design Activities
by Olga Sankowski and Dieter Krause
Appl. Sci. 2023, 13(21), 12090; https://doi.org/10.3390/app132112090 - 6 Nov 2023
Cited by 2 | Viewed by 2259
Abstract
Human-centred design as a research field is characterised by multidisciplinarity and a wide variety of similar methods. Previous research attempted to classify existing methods into groups and categories, e.g., according to the degree of user involvement. The research question here is the following: How can human-centredness be measured and evaluated based on resulting product concepts? The goal of the paper is to present and apply a new metric, the Human-Centredness Metric (HCM), for the early estimation of the quality of any human-centred activity based on the four goals of human-centred design. The HCM was employed, using a 4-point Likert scale, to evaluate 16 concepts covering four different everyday products; the concepts were created by four students applying three different human-centred design methods, and the first concept was created without the application of any additional human-centred design method. The results illuminated trends regarding the impact of additional human-centred design methods on the HCM score. However, statistical significance remained elusive, potentially due to a series of limitations such as concept complexity, the small number of concepts, and the early developmental stage. The study’s limitations underscore the need for refined items and expanded samples to better gauge the impact of human-centred methods on product development. Full article
(This article belongs to the Section Mechanical Engineering)
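The abstract does not give the HCM's item set or aggregation rule, so the sketch below simply averages 4-point Likert ratings over four placeholder goal dimensions to show what an early, lightweight scoring step could look like. It is a hypothetical stand-in, not the published metric.

```python
# Hypothetical aggregation of 4-point Likert ratings into a single HCM-style score.
# The abstract states the metric is based on four goals of human-centred design
# but does not give the items or the aggregation rule, so a plain mean over four
# placeholder goal dimensions is used here purely for illustration.
def hcm_score(ratings_per_goal):
    """Mean of 4-point Likert ratings (1 = goal not addressed ... 4 = fully addressed)."""
    assert len(ratings_per_goal) == 4 and all(1 <= r <= 4 for r in ratings_per_goal)
    return sum(ratings_per_goal) / 4

# One rating per goal dimension for a single product concept (placeholder values).
print(hcm_score([3, 2, 4, 3]))  # -> 3.0
```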

20 pages, 8979 KiB  
Article
Modeling Theory of Mind in Dyadic Games Using Adaptive Feedback Control
by Ismael T. Freire, Xerxes D. Arsiwalla, Jordi-Ysard Puigbò and Paul Verschure
Information 2023, 14(8), 441; https://doi.org/10.3390/info14080441 - 4 Aug 2023
Cited by 3 | Viewed by 2873
Abstract
A major challenge in cognitive science and AI has been to understand how intelligent autonomous agents might acquire and predict the behavioral and mental states of other agents in the course of complex social interactions. How does such an agent model the goals, beliefs, and actions of other agents it interacts with? What are the computational principles to model a Theory of Mind (ToM)? Deep learning approaches to address these questions fall short of a better understanding of the problem. In part, this is due to the black-box nature of deep networks, wherein computational mechanisms of ToM are not readily revealed. Here, we consider alternative hypotheses seeking to model how the brain might realize a ToM. In particular, we propose embodied and situated agent models based on distributed adaptive control theory to predict the actions of other agents in five different game-theoretic tasks (Harmony Game, Hawk-Dove, Stag Hunt, Prisoner’s Dilemma, and Battle of the Exes). Our multi-layer control models implement top-down predictions from adaptive to reactive layers of control and bottom-up error feedback from reactive to adaptive layers. We test cooperative and competitive strategies among seven different agent models (cooperative, greedy, tit-for-tat, reinforcement-based, rational, predictive, and internal agents). We show that, compared to pure reinforcement-based strategies, probabilistic learning agents modeled on rational, predictive, and internal phenotypes perform better in game-theoretic metrics across tasks. The outlined autonomous multi-agent models might capture systems-level processes underlying a ToM and suggest architectural principles of ToM from a control-theoretic perspective. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)
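To ground the game-theoretic tasks listed above, the sketch below plays an iterated Prisoner's Dilemma between two simple rule-based agents. The payoffs are standard textbook values, and the agents are deliberately naive; they are not the distributed-adaptive-control models proposed in the paper.

```python
# Minimal sketch of one of the dyadic tasks named in the abstract: an iterated
# Prisoner's Dilemma with a tit-for-tat agent against an always-defect agent.
# Payoff values are standard textbook numbers; the agents here are simple rules,
# not the paper's distributed-adaptive-control models.
PAYOFF = {  # (my_move, their_move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]   # copy the opponent's last move

def always_defect(history):
    return "D"

def play(agent_a, agent_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = agent_a(history_a), agent_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
```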

15 pages, 3170 KiB  
Article
Conformer-Based Dental AI Patient Clinical Diagnosis Simulation Using Korean Synthetic Data Generator for Multiple Standardized Patient Scenarios
by Kangmin Kim, Chanjun Chun and Seong-Yong Moon
Bioengineering 2023, 10(5), 615; https://doi.org/10.3390/bioengineering10050615 - 19 May 2023
Cited by 3 | Viewed by 2476
Abstract
The goal of clinical practice education is to develop the ability to apply theoretical knowledge in a clinical setting and to foster growth as a professional healthcare provider. One effective method of achieving this is through the utilization of Standardized Patients (SP) in education, which familiarizes students with real patient interviews and allows educators to assess their clinical performance skills. However, SP education faces challenges such as the cost of hiring actors and the shortage of professional educators to train them. In this paper, we aim to alleviate these issues by utilizing deep learning models to replace the actors. We employ the Conformer model for the implementation of the AI patient, and we develop a Korean SP scenario data generator to collect data for training responses to diagnostic questions. Our Korean SP scenario data generator is devised to generate SP scenarios based on the provided patient information, using pre-prepared questions and answers. In the AI patient training process, two types of data are employed: common data and personalized data. The common data are employed to develop natural general conversation skills, while personalized data, from the SP scenario, are utilized to learn specific clinical information relevant to a patient’s role. Based on these data, to evaluate the learning efficiency of the Conformer structure, a comparison was conducted with the Transformer using the BLEU score and WER as evaluation metrics. Experimental results showed that the Conformer-based model demonstrated a 3.92% and 6.74% improvement in BLEU and WER performance compared to the Transformer-based model, respectively. The dental AI patient for SP simulation presented in this paper has the potential to be applied to other medical and nursing fields, provided that additional data collection processes are conducted. Full article
(This article belongs to the Section Biosignal Processing)
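The two evaluation metrics named above can be computed as in the sketch below, using NLTK for BLEU and jiwer for WER on a toy reference/hypothesis pair. The example sentences are invented English placeholders; the paper evaluates Korean standardized-patient dialogue.

```python
# Small sketch of the two evaluation metrics named in the abstract, computed for
# a toy reference/hypothesis pair. Example sentences are invented; the paper
# evaluates Korean standardized-patient dialogue, not this English placeholder.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from jiwer import wer

reference  = "the pain started three days ago in my lower left molar"
hypothesis = "the pain started two days ago in my lower molar"

bleu = sentence_bleu(
    [reference.split()], hypothesis.split(),
    smoothing_function=SmoothingFunction().method1,   # avoid zero n-gram counts on short sentences
)
print(f"BLEU: {bleu:.3f}")
print(f"WER:  {wer(reference, hypothesis):.3f}")
```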