Journal Description
Informatics
Informatics is an international, peer-reviewed, open access journal on information and communication technologies, human–computer interaction, and social informatics, and is published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- High Visibility: indexed in Scopus, ESCI (Web of Science), dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Communication)
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 33 days after submission; acceptance to publication takes 5.7 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.4 (2023); 5-Year Impact Factor: 3.1 (2023)
Latest Articles
The Influence of Stakeholder Involvement in the Adoption of Digital Technologies in the UK Construction Industry
Informatics 2024, 11(4), 97; https://doi.org/10.3390/informatics11040097 - 9 Dec 2024
Abstract
This study explored stakeholder involvement practice in the digitalisation of the construction industry in the UK, and its influence on the adoption of digital technologies. A qualitative interpretive method was followed, using a case study approach to collect data. Thematic analysis of twenty-four semi-structured interviews and sixty survey responses, conducted with different digital technology adoption actors in the construction industry, allowed the identification of six final themes depicting the influence of stakeholder involvement in the adoption of digital technologies. The findings indicate that the influence of stakeholder involvement is a function of its embeddedness in an organisation's digitalisation approach. This embeddedness, or lack thereof, dictates how the stakeholder landscape is planned and managed, and how communication between and with stakeholder groups occurs. This is the foundation of digitalisation value creation among stakeholders. The approach is prone to digitalisation limitations and intrinsic determinants of adoption, both of which can be positively impacted through better stakeholder involvement practices. Stakeholder involvement practices are therefore catalytic to the subsequent behaviour change for digital technology adoption and the extent to which digital technologies become adopted. This paper contextualises stakeholder involvement in the adoption of digital technologies in the construction industry, highlighting the catalytic influence of stakeholder involvement embeddedness in the complex digitalisation activity system and its interplay with industry-specific practices and other digital technology adoption determinants.
Full article
Open Access Review
Transforming Service Quality in Healthcare: A Comprehensive Review of Healthcare 4.0 and Its Impact on Healthcare Service Quality
by Karam Al-Assaf, Zied Bahroun and Vian Ahmed
Informatics 2024, 11(4), 96; https://doi.org/10.3390/informatics11040096 - 2 Dec 2024
Abstract
This systematic review investigates the transformative impact of Healthcare 4.0 (HC4.0) technologies on healthcare service quality (HCSQ), focusing on their potential to enhance healthcare delivery while addressing critical challenges. This study reviewed 168 peer-reviewed articles from the Scopus database, published between 2005 and 2023. The selection process used clearly defined inclusion and exclusion criteria to identify studies focusing on advanced technologies such as artificial intelligence (AI), the Internet of Things (IoT), and big data analytics. Rayyan software facilitated systematic organization and duplicate removal, while manual evaluation ensured relevance and quality. The findings highlight HC4.0’s potential to improve service delivery, patient outcomes, and operational efficiencies but also reveal challenges, including interoperability, ethical concerns, and access disparities for underserved populations. The results were synthesized descriptively, uncovering key patterns and thematic insights while acknowledging heterogeneity across studies. Limitations include the absence of a formal risk-of-bias assessment and the diversity of methodologies, which precluded quantitative synthesis. This review emphasizes the need for future research on integration frameworks, ethical guidelines, and equitable access policies to realize HC4.0’s transformative potential. No external funding was received, and no formal protocol was registered.
Full article
(This article belongs to the Special Issue Health Informatics: Feature Review Papers)
Open Access Article
Identification of Cancer Stem Cell (CSC)-Associated Genes, Prognostic Value, and Candidate Drugs as Modulators of CSC-Associated Signaling in Carcinomas Through a Multiomics Data Analysis Approach
by Pallabi Mondal, Poulami Singh, Krishna Mahanti and Sankar Bhattacharyya
Informatics 2024, 11(4), 95; https://doi.org/10.3390/informatics11040095 - 29 Nov 2024
Abstract
Background: Cancer stem cells (CSCs) are a small subpopulation of cancer cells that have the potential for self-renewal and a strong proliferative capacity, and sustain tumorigenesis capabilities. The ability of CSCs to escape immune responses makes them a primary source of functionally altered, immune-resistant, chemoresistant, aggressive tumor cells. These characteristics determine the potential advantage of targeting CSCs for the treatment of solid tumors. Method: First, we downloaded different gene expression datasets of CSCs from the NCBI-GEO (National Center for Biotechnology Information–Gene Expression Omnibus) database and identified common genes by using a suitable Venn tool. Subsequently, we explored the prognostic significance of the particular genes in particular cancers and analyzed the expression of these genes at the protein level in human solid tumors by using KM plotter (Kaplan–Meier plotter) and the HPA (Human Protein Atlas) database, respectively. Finally, using a comparative toxicogenomic database, we selected several important drugs or chemicals. Result: From this study, we identified APOC1 as a common upregulated gene in breast cancer and SLC44A5 and CAV2 as common up- and downregulated genes in lung cancer. In ovarian cancer, PRRG4 is a commonly upregulated gene, and ADCY7, AKAP12, TPM2, and FLNC are commonly downregulated genes. These genes also show prognostic significance in the respective cancers. Several drugs capable of targeting the expression or signaling network of the designated CSC genes were also identified, which may contribute to CSC-targeted cancer therapy. Conclusion: Our study suggests a need for more in-depth experimental investigations to determine the actual functional activity and the mechanism of action of these CSC-associated genes.
Full article
(This article belongs to the Section Health Informatics)
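The gene-screening step described above, intersecting differentially expressed gene lists from several GEO datasets as a Venn tool does, reduces to set intersection. A minimal sketch in Python with hypothetical accession names and gene symbols, not the study's actual lists:

```python
# Sketch: find genes shared across differentially expressed gene (DEG) lists
# from several hypothetical GEO datasets, as a Venn-diagram tool would.
deg_lists = {
    "GSE_A": {"APOC1", "TPM2", "CAV2", "BRCA1"},
    "GSE_B": {"APOC1", "CAV2", "FLNC", "TP53"},
    "GSE_C": {"APOC1", "CAV2", "AKAP12"},
}

# Genes present in every dataset (the Venn "core" region).
common = set.intersection(*deg_lists.values())
print("Common genes:", sorted(common))  # -> ['APOC1', 'CAV2']

# Genes unique to one dataset (an outer Venn region), e.g. GSE_A.
only_a = deg_lists["GSE_A"] - deg_lists["GSE_B"] - deg_lists["GSE_C"]
print("GSE_A only:", sorted(only_a))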
Open Access Article
Does Fun Matter? Using Chatbots for Customer Services
by Tai Ming Wut, Elaine Ah-heung Chan and Helen Shun-mun Wong
Informatics 2024, 11(4), 94; https://doi.org/10.3390/informatics11040094 - 27 Nov 2024
Abstract
Chatbots are widely used in customer service contexts today. People use chatbots for pragmatic reasons, like checking delivery status and refund policies. The purpose of this paper is to investigate the factors affecting user experience and a chatbot's service quality that influence user satisfaction and electronic word-of-mouth. A survey was conducted in July 2024 to collect responses in Hong Kong about users' perceptions of chatbots. Contrary to previous literature, entertainment and warmth perception were not associated with user experience and service quality. Social presence was associated with user experience, but not service quality. Competence was relevant to both user experience and service quality, revealing important implications for digital marketers and brands adopting chatbots to enhance their service quality.
Full article
Open Access Article
A Gap Analysis Framework for an Open Data Portal Assessment Based on Data Provision and Consumption Activities
by Sahaporn Sripramong, Chutiporn Anutariya, Patipat Tumsangthong, Theerawat Wutthitasarn and Marut Buranarach
Informatics 2024, 11(4), 93; https://doi.org/10.3390/informatics11040093 - 27 Nov 2024
Abstract
An Open Government Data (OGD) portal assessment is necessary to track and monitor the progress of an OGD initiative and to drive improvement. Although OGD benchmarks typically focus on assessing and ranking OGD portals, few have been developed specifically for internal process improvement within a portal. This paper proposes a gap analysis framework to support the Plan–Do–Check–Act (PDCA) cycle in guiding OGD portal improvement. The framework adopts the Importance–Performance Analysis (IPA) to identify gaps in an OGD portal, measuring the performance and importance of the portal based on data provision and consumption activities. Several factors related to these activities are examined, including dataset creation, updates, views, searches, high-value datasets, resource formats, and user data requests. Gap analysis results can help to identify the current situation of different areas of the portal and their gaps in achieving the objectives. A case study of the Data.go.th portal was conducted to exemplify and validate the framework's adoption. The analysis results of the case study revealed existing patterns of relationships between data provision and consumption activities that can guide the improvement of similar OGD portals.
Full article
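For readers unfamiliar with IPA, the quadrant logic the framework adopts can be shown in a few lines. A minimal sketch with hypothetical importance/performance scores for illustrative portal activities, not the Data.go.th results:

```python
# Sketch: classic IPA quadrant classification for open-data-portal activities.
# Scores are hypothetical 0-1 values, not results from the paper.
factors = {
    "dataset updates":    (0.9, 0.4),   # (importance, performance)
    "dataset views":      (0.8, 0.8),
    "resource formats":   (0.3, 0.7),
    "user data requests": (0.2, 0.2),
}

imp_mean = sum(i for i, _ in factors.values()) / len(factors)
perf_mean = sum(p for _, p in factors.values()) / len(factors)

def quadrant(importance: float, performance: float) -> str:
    if importance >= imp_mean:
        return "Concentrate here (gap)" if performance < perf_mean else "Keep up the good work"
    return "Possible overkill" if performance >= perf_mean else "Low priority"

for name, (imp, perf) in factors.items():
    print(f"{name:18s} -> {quadrant(imp, perf)}")
```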
Open Access Article
WordDGA: Hybrid Knowledge-Based Word-Level Domain Names Against DGA Classifiers and Adversarial DGAs
by Sarojini Selvaraj and Rukmani Panjanathan
Informatics 2024, 11(4), 92; https://doi.org/10.3390/informatics11040092 - 26 Nov 2024
Abstract
Botnets employ a Domain Generation Algorithm (DGA) to generate the domain names used for the communication link between the C&C server and the bots. A DGA can generate pseudo-random AGDs (algorithmically generated domains) regularly, which offers a handy method for detecting bots contacting the C&C server. Unlike current DGA detection methods, AGDs can be identified with lightweight, promising technology. DGAs can prolong the life of a malware operation, improving its profitability. Recent research on the sensitivity of deep learning to various adversarial DGAs has sought to enhance DGA detection techniques. Existing detectors use character- and word-level classifiers, and hybrid-level classifiers may detect and classify AGDs generated by DGAs; adversarial DGAs, in turn, can significantly diminish the effectiveness of such DGA classifiers. This work introduces WordDGA, a hybrid RCNN-BiLSTM-based adversarial DGA with strong anti-detection capabilities based on NLP and cWGAN, which offers word- and hybrid-level evasion techniques. It initially models the semantic relationships between benign and DGA domains by constructing a prediction model with a hybrid RCNN-BiLSTM network. To optimize the similarity between benign and DGA domain names, it modifies phrases from each input domain using the prediction model to detect DGA family categorizations. The experimental results reveal that dodging numerous word-level and hybrid-level DGA classifiers with the training and testing sets improves word repetition rate, domain collision rate, attack success rate, and detection rate, indicating the usefulness of cWGAN-based oversampling in the face of adversarial DGAs.
Full article
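The word-level idea at the heart of WordDGA, building domains from dictionary words so they resemble benign names rather than random strings, can be illustrated with a toy generator. This sketch is a deliberately simple stand-in; the paper's cWGAN- and RCNN-BiLSTM-based generator is far more sophisticated, and the wordlist and seeding here are invented for illustration:

```python
# Sketch: a toy word-level DGA that concatenates dictionary words into
# plausible-looking domains, seeded by date as classic DGAs are.
import random
from datetime import date

WORDLIST = ["cloud", "secure", "mail", "update", "portal", "data", "sync", "net"]

def word_dga(day: date, count: int = 5, tld: str = ".com") -> list[str]:
    rng = random.Random(day.toordinal())  # date-derived seed: bot and C&C agree
    return ["".join(rng.sample(WORDLIST, 2)) + tld for _ in range(count)]

print(word_dga(date(2024, 11, 26)))
# e.g. ['securedata.com', 'mailsync.com', ...] -- harder for character-level
# classifiers to flag than random strings like 'xkqjv3h2.com'
```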
Open Access Review
Exploring the Adoption of Robotics in Teaching and Learning in Higher Education Institutions
by Samkelisiwe Purity Phokoye, Ayogeboh Epizitone, Ntando Nkomo, Peggy Pinky Mthalane, Smangele Pretty Moyane, Mbalenhle Marcia Khumalo, Mthokozisi Luthuli and Nombuso Phamela Zondi
Informatics 2024, 11(4), 91; https://doi.org/10.3390/informatics11040091 - 26 Nov 2024
Abstract
Artificial intelligence (AI) has become a prevalent part of many businesses, including higher education. AI is progressively gaining traction as an instrumental engagement tool in higher education institutions (HEIs). The premise underlying this trend is the potential of robots to foster enhanced student engagement and, consequently, elevate academic performance. Considering this development, HEIs must probe deeper into the possible adoption of robotics in educational practices. This paper aims to conduct a comprehensive exploration of the adoption of robotics in teaching and learning in the higher education space. To provide a holistic perspective, this study poses three questions: what factors influence robotics uptake in HEIs, how robots can be integrated to improve teaching and learning in HEIs, and what the perceived benefits of robotics implementation in teaching and learning are. A bibliometric analysis and comprehensive review methodology were employed in this study to provide an in-depth assessment of the development, significance, and implications of robotics in HEIs. This dual approach offers a robust evaluation of robotics as a pivotal element for the enhancement of teaching and learning practices. The study's findings uncover the increasing adoption of robotics within the higher education sphere. They also identify the challenges encountered during adoption, ranging from technical hurdles to educational adjustments. Furthermore, this paper offers guidelines for various stakeholders for the effective integration of robotics into higher education.
Full article
Open Access Review
Advances and Challenges in Low-Resource-Environment Software Systems: A Survey
by Abayomi Agbeyangi and Hussein Suleman
Informatics 2024, 11(4), 90; https://doi.org/10.3390/informatics11040090 - 25 Nov 2024
Abstract
A low-resource environment is constrained in terms of resources such as network availability and the computing power of available devices. In such environments, it is arguably more difficult to set up new software systems, maintain existing software, and migrate between software systems. This paper presents a survey of software systems for low-resource environments to highlight the challenges (social and technical) and concepts involved. A qualitative methodology is employed, consisting of an extensive literature review and a comparative analysis of selected software systems. The literature covers academic and non-academic sources, focusing on identifying software solutions that address specific challenges in low-resource environments. The selected software systems are categorized based on their ability to overcome challenges such as limited technical skills, device constraints, and socio-cultural issues. The study reveals that despite noteworthy progress, unresolved challenges persist, necessitating further attention to enable the optimal performance of software systems in low-resource environments.
Full article
Open Access Article
Hybrid Machine Learning for Stunting Prevalence: A Novel Comprehensive Approach to Its Classification, Prediction, and Clustering Optimization in Aceh, Indonesia
by Novia Hasdyna, Rozzi Kesuma Dinata, Rahmi and T. Irfan Fajri
Informatics 2024, 11(4), 89; https://doi.org/10.3390/informatics11040089 - 21 Nov 2024
Abstract
Stunting remains a significant public health issue in Aceh, Indonesia, and is influenced by various socio-economic and environmental factors. This study aims to address key challenges in accurately classifying stunting prevalence, predicting future trends, and optimizing clustering methods to support more effective interventions. To this end, we propose a novel hybrid machine learning framework that integrates classification, predictive modeling, and clustering optimization. Support Vector Machines (SVM) with Radial Basis Function (RBF) and Sigmoid kernels were employed to improve the classification accuracy, with the RBF kernel outperforming the Sigmoid kernel, achieving an accuracy rate of 91.3% compared with 85.6%. This provides a more reliable tool for identifying high-risk populations. Furthermore, linear regression was used for predictive modeling, yielding a low Mean Squared Error (MSE) of 0.137, demonstrating robust predictive accuracy for future stunting prevalence. Finally, the clustering process was optimized using a weighted-product approach to enhance the efficiency of K-Medoids. This optimization reduced the number of iterations from seven to three and improved the Calinski–Harabasz Index from 85.2 to 93.7. This comprehensive framework not only enhances the classification, prediction, and clustering of results but also delivers actionable insights for targeted public health interventions and policymaking aimed at reducing stunting in Aceh.
Full article
(This article belongs to the Section Health Informatics)
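The kernel comparison reported above can be reproduced in outline with scikit-learn. A minimal sketch on synthetic stand-in data; the features, labels, and resulting scores are illustrative, not the Aceh stunting dataset or the paper's 91.3%/85.6% figures:

```python
# Sketch: compare SVM kernels (RBF vs sigmoid), as in the paper's framework,
# on synthetic stand-in data rather than the Aceh stunting dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ("rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))  # scale, then fit
    clf.fit(X_tr, y_tr)
    print(f"{kernel:8s} accuracy: {clf.score(X_te, y_te):.3f}")
```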
Open Access Article
Influencing Mechanism of Signal Design Elements in Complex Human–Machine System: Evidence from Eye Movement Data
by Siu Shing Man, Wenbo Hu, Hanxing Zhou, Tingru Zhang and Alan Hoi Shou Chan
Informatics 2024, 11(4), 88; https://doi.org/10.3390/informatics11040088 - 21 Nov 2024
Abstract
In today's rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research aimed to examine the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can draw participants' attention towards peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize continuous manual tracking tasks, utilizing focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. Moreover, the average pupil diameter tends to diminish with the use of visual pre-cues or visual stimuli but expands during auditory or tactile stimuli or pre-cue modes. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. Finally, this paper underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task.
Full article
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)
Open Access Article
Estimation of Mango Fruit Production Using Image Analysis and Machine Learning Algorithms
by Liliana Arcila-Diaz, Heber I. Mejia-Cabrera and Juan Arcila-Diaz
Informatics 2024, 11(4), 87; https://doi.org/10.3390/informatics11040087 - 16 Nov 2024
Abstract
Mango production is fundamental to the agricultural economy, generating income and employment in various communities. Accurate estimation of production optimizes harvest planning and logistics; traditional manual methods are inefficient and prone to errors. Machine learning, by handling large volumes of data, emerges as an innovative solution to enhance the precision of mango production estimation. This study presents an analysis of mango fruit detection using machine learning algorithms, specifically YOLO version 8 and Faster R-CNN. It employs a dataset consisting of 212 original images, annotated with a total of 9604 labels, which was expanded to include 2449 additional images and 116,654 annotations. This significant increase in dataset size notably enhances the robustness and generalization capacity of the model. The YOLO-trained model achieves an accuracy of 96.72%, a recall of 77.4%, and an F1 score of 86%, compared to the Faster R-CNN results of 98.57%, 63.80%, and 77.46%, respectively. YOLO demonstrates greater efficiency, being faster to train, consuming less memory, and utilizing fewer CPU resources. Furthermore, this study developed a web application with a user interface that facilitates the uploading of images of sample mango trees. The YOLO-trained model detects the fruits of each tree in the representative sample and uses extrapolation techniques to estimate the total number of fruits across the entire population of mango trees.
Full article
(This article belongs to the Section Machine Learning)
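The detect-and-extrapolate pipeline can be sketched with the Ultralytics YOLOv8 API. The weights file, image paths, and orchard size below are placeholders, since the authors' trained model and data are not reproduced here:

```python
# Sketch: count detected mangoes on sample-tree images with a YOLOv8 model,
# then extrapolate to the orchard. Weights, paths, and counts are placeholders.
from ultralytics import YOLO

model = YOLO("mango_yolov8.pt")          # hypothetical trained weights
sample_images = ["tree_01.jpg", "tree_02.jpg", "tree_03.jpg"]

counts = []
for img in sample_images:
    result = model(img)[0]               # one Results object per image
    counts.append(len(result.boxes))     # one bounding box per detected fruit

mean_per_tree = sum(counts) / len(counts)
total_trees = 1200                       # hypothetical orchard size
print(f"Estimated total fruit: {mean_per_tree * total_trees:.0f}")
```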
Open Access Review
The Use of Artificial Intelligence to Analyze the Exposome in the Development of Chronic Diseases: A Review of the Current Literature
by Stefania Isola, Giuseppe Murdaca, Silvia Brunetto, Emanuela Zumbo, Alessandro Tonacci and Sebastiano Gangemi
Informatics 2024, 11(4), 86; https://doi.org/10.3390/informatics11040086 - 12 Nov 2024
Abstract
The “Exposome” is a concept that indicates the set of exposures to which a human is subjected during their lifetime. These factors influence the health state of individuals and can drive the development of Noncommunicable Diseases (NCDs). Artificial Intelligence (AI) allows one to analyze large amounts of data in a short time. As such, several authors have used AI to study the relationship between the exposome and chronic diseases. Under such premises, this study reviews the use of AI in analyzing the exposome to understand its role in the development of chronic diseases, focusing on how AI can identify patterns in exposure-related data and support prevention strategies. To achieve this, we carried out a search on multiple databases, including PubMed, ScienceDirect, and SCOPUS, from 1 January 2019 to 31 May 2023, using the MeSH terms (exposome) and (‘Artificial Intelligence’ OR ‘Machine Learning’ OR ‘Deep Learning’) to identify relevant studies on this topic. After completing the identification, screening, and eligibility assessment, a total of 18 studies were included in this literature review. According to the search, most authors used supervised or unsupervised machine learning models to study the role of multiple exposure factors in the risk of developing cardiovascular, metabolic, and chronic respiratory diseases. In some more recent studies, authors also used deep learning. Furthermore, exposome analysis is useful for studying the risk of developing neuropsychiatric disorders or evaluating pregnancy outcomes and child growth. Understanding the role of the exposome is pivotal to overcoming the classic concept of a single exposure/disease. The application of AI allows one to analyze multiple environmental risks and their combined effects on health conditions. In the future, AI could be helpful in the prevention of chronic diseases, providing new diagnostic, therapeutic, and follow-up strategies.
Full article
Open Access Article
Modeling Zika Virus Disease Dynamics with Control Strategies
by Mlyashimbi Helikumi, Paride O. Lolika, Kimulu Ancent Makau, Muli Charles Ndambuki and Adquate Mhlanga
Informatics 2024, 11(4), 85; https://doi.org/10.3390/informatics11040085 - 11 Nov 2024
Abstract
In this research, we formulated a fractional-order model for the transmission dynamics of Zika virus, incorporating three control strategies: health education campaigns, the use of insecticides, and preventive measures. We conducted a theoretical analysis of the model, obtaining the disease-free equilibrium and the basic reproduction number, and analyzing the existence and uniqueness of the model solution. Additionally, we performed model parameter estimation using real data on Zika virus cases reported in Colombia. We found that the fractional-order model provided a better fit to the real data than the classical integer-order model. A sensitivity analysis of the basic reproduction number was conducted using computed partial rank correlation coefficients to assess the impact of each parameter on Zika virus transmission. Furthermore, we performed numerical simulations to determine the effect of memory on the spread of Zika virus. The simulation results showed that the order of the derivatives significantly impacts the dynamics of the disease. We also assessed the effect of the control strategies through simulations, concluding that the proposed interventions have the potential to significantly reduce the spread of Zika virus in the population.
Full article
(This article belongs to the Section Health Informatics)
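Computationally, a fractional-order model replaces ordinary derivatives with memory-carrying ones; a common discretization is the Grünwald–Letnikov scheme, in which each step weights the whole past trajectory. A minimal sketch for a generic SIR-type system, with illustrative parameters rather than the authors' Zika model and its controls:

```python
# Sketch: explicit Grunwald-Letnikov scheme for a fractional SIR-type system.
# Illustrative parameters only; not the paper's Zika model or its controls.
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Weights c_j = (-1)^j * binom(alpha, j) via the standard recurrence."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def frac_sir(alpha=0.9, beta=0.4, gamma=0.1, h=0.1, steps=600):
    c = gl_weights(alpha, steps)
    y = np.zeros((steps + 1, 3))
    y[0] = [0.99, 0.01, 0.0]                      # S, I, R fractions
    for n in range(1, steps + 1):
        S, I, R = y[n - 1]
        f = np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])
        memory = (c[1:n + 1, None] * y[n - 1::-1]).sum(axis=0)  # past states
        y[n] = h**alpha * f - memory              # GL update with memory term
    return y

traj = frac_sir()
print("Peak infected fraction:", traj[:, 1].max())
```

With alpha = 1 the weights reduce to c_1 = -1 and c_j = 0 for j >= 2, recovering the classical Euler step, which is one way to see how the derivative order controls the memory effect the abstract refers to.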
Open Access Case Report
Can ChatGPT Support Clinical Coding Using the ICD-10-CM/PCS?
by Bernardo Nascimento Teixeira, Ana Leitão, Generosa Nascimento, Adalberto Campos-Fernandes and Francisco Cercas
Informatics 2024, 11(4), 84; https://doi.org/10.3390/informatics11040084 - 7 Nov 2024
Abstract
Introduction: With the growing development and adoption of artificial intelligence in healthcare and across other sectors of society, various user-friendly and engaging tools to support research have emerged, such as chatbots, notably ChatGPT. Objective: To investigate the performance of ChatGPT as an assistant to medical coders using the ICD-10-CM/PCS. Methodology: We conducted a prospective exploratory study over 6 months between 2023 and 2024. A total of 150 clinical cases coded using the ICD-10-CM/PCS, extracted from technical coding books, were systematically randomized. All cases were translated into Portuguese (the native language of the authors) and English (the native language of the ICD-10-CM/PCS). The clinical cases varied in complexity regarding the number of diagnoses and procedures, as well as the nature of the clinical information. Each case was input into the 2023 free version of ChatGPT. The coding obtained from ChatGPT was analyzed by a senior medical auditor/coder and compared with the expected results. Results: ChatGPT's share of correct codes was approximately 29 percentage points higher for diagnoses than for procedures, showing greater proficiency in diagnostic codes. The accuracy rate was similar across the two languages, at 31.0% and 31.9%. The error rate in procedure codes was almost four times higher than that in diagnostic codes. The incidence of missing information was slightly more than double in diagnoses compared to procedures. Additionally, there was a statistically significant excess of codes unrelated to the clinical information, which was higher for procedures and nearly identical in both languages under study. Conclusion: Given the ease of access to these tools, this investigation serves as an awareness factor, demonstrating that ChatGPT can assist the medical coder in directed research. However, it does not replace technical validation in this process. Therefore, further developments of this tool are necessary to increase the quality and reliability of the results.
Full article
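The auditor comparison, scoring suggested codes as correct, missing, or unrelated, reduces to set arithmetic over code lists. A minimal sketch with hypothetical ICD-10-CM/PCS codes, not cases from the study:

```python
# Sketch: score LLM-suggested ICD-10-CM/PCS codes against a gold standard,
# mirroring the study's error categories. Codes here are hypothetical.
gold = {"E11.9", "I10", "0DTJ4ZZ"}          # coder's expected codes
suggested = {"E11.9", "I10", "J45.909"}     # codes returned by the chatbot

correct = gold & suggested                  # matched codes
missing = gold - suggested                  # gold codes the model omitted
unrelated = suggested - gold                # codes not supported by the case

print(f"correct:   {sorted(correct)}  ({len(correct)/len(gold):.0%} of gold)")
print(f"missing:   {sorted(missing)}")
print(f"unrelated: {sorted(unrelated)}")
```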
Open Access Article
Web Traffic Anomaly Detection Using Isolation Forest
by Wilson Chua, Arsenn Lorette Diamond Pajas, Crizelle Shane Castro, Sean Patrick Panganiban, April Joy Pasuquin, Merwin Jan Purganan, Rica Malupeng, Divine Jessa Pingad, John Paul Orolfo, Haron Hakeen Lua and Lemuel Clark Velasco
Informatics 2024, 11(4), 83; https://doi.org/10.3390/informatics11040083 - 5 Nov 2024
Abstract
As companies increasingly undergo digital transformation, the value of their data assets rises, making them even more attractive targets for hackers. The large volume of weblogs warrants the use of advanced classification methodologies so that cybersecurity specialists can identify web traffic anomalies. This study implements Isolation Forest, an unsupervised machine learning methodology, for the identification of anomalous and non-anomalous web traffic. The publicly available weblogs dataset from an e-commerce website underwent data preparation through a systematic pipeline of processes involving data ingestion, data type conversion, data cleaning, and normalization. This led to the addition of derived columns in the training set and a manually labeled testing set that was then used to compare the anomaly detection performance of the Isolation Forest model with that of cybersecurity experts. The Isolation Forest model was implemented using the Python Scikit-learn library and exhibited a superior Accuracy of 93%, Precision of 95%, Recall of 90%, and F1-Score of 92%. Through appropriate data preparation, model development, model implementation, and model evaluation, this study shows that Isolation Forest can be a viable solution for accurate web traffic anomaly detection.
Full article
(This article belongs to the Section Machine Learning)
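A minimal scikit-learn sketch of the approach the study names: IsolationForest fit on numeric weblog features. The feature columns, traffic distributions, and contamination rate below are illustrative assumptions, not the study's pipeline:

```python
# Sketch: unsupervised web-traffic anomaly detection with IsolationForest,
# the scikit-learn model the study names. Features/rates are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in weblog features: [requests/min, bytes transferred, distinct URLs]
normal = rng.normal([30, 5_000, 10], [5, 1_000, 3], size=(950, 3))
attacks = rng.normal([300, 50_000, 200], [50, 10_000, 40], size=(50, 3))
X = np.vstack([normal, attacks])

model = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = model.predict(X)                  # +1 = normal, -1 = anomaly
print(f"Flagged {np.sum(labels == -1)} of {len(X)} requests as anomalous")
```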
Open Access Article
Perceptions of AI Integration in the UAE’s Creative Sector
by Asma Hassouni and Noha Mellor
Informatics 2024, 11(4), 82; https://doi.org/10.3390/informatics11040082 - 4 Nov 2024
Abstract
This study explores perceptions of artificial intelligence (AI) within the creative sector of the United Arab Emirates (UAE), based on 13 semi-structured interviews and a survey of 224 media professionals and their stakeholders. The findings indicate considerable enthusiasm surrounding AI's potential to augment creativity and drive operational efficiency, a perspective the study's participants share. However, there are also apprehensions regarding job displacement and the necessity for strategic upskilling. Participants generally regard AI as an unavoidable technological influence that demands adaptation and seamless integration into daily workflows. The study underscores the disparity between the UAE's government-led digital transformation objectives and the actual implementation within organizations, highlighting the urgent need for cohesive strategic alignment. The findings caution that the absence of clear directives and strategic planning may precipitate a new digital schism, impeding progress in the sector.
Full article
(This article belongs to the Section Human-Computer Interaction)
Open Access Systematic Review
Early Estimation in Agile Software Development Projects: A Systematic Mapping Study
by José Gamaliel Rivera Ibarra, Gilberto Borrego and Ramón R. Palacio
Informatics 2024, 11(4), 81; https://doi.org/10.3390/informatics11040081 - 4 Nov 2024
Abstract
Estimating during the early stages is crucial for determining feasibility and conducting the budgeting and planning of agile software development (ASD) projects. However, due to the characteristics of ASD and the limited initial information, these estimates are often complicated and inaccurate. This study aims to systematically map the literature to identify the most used estimation techniques; the reasons for their selection; the input artifacts, predictors, and metrics associated with these techniques; as well as research gaps in early-stage estimation in ASD. The study was based on the guidelines proposed by Kitchenham for systematic literature reviews in software engineering; a review protocol was defined with research questions and criteria for the selection of empirical studies. Results show that data-driven techniques are preferred, to reduce the biases and inconsistencies of expert-driven techniques. Most selected studies do not mention input artifacts, and software size is the most commonly used predictor. Machine learning-based techniques use publicly available data, but these often contain records of old projects predating the agile movement. The study highlights the need for tools supporting estimation activities and identifies key areas for future research, such as evaluating hybrid approaches and creating datasets of recent projects with sufficient contextual information and standardized metrics.
Full article
Open Access Article
Enhancing Visible Light Communication Channel Estimation in Complex 3D Environments: An Open-Source Ray Tracing Simulation Framework
by Véronique Georlette, Nicolas Vallois, Véronique Moeyaert and Bruno Quoitin
Informatics 2024, 11(4), 80; https://doi.org/10.3390/informatics11040080 - 31 Oct 2024
Abstract
Estimating the optical power distribution in a room in order to assess the performance of a visible light communication (VLC) system is nothing new. It can be estimated using a Monte Carlo optical ray tracing algorithm that sums the contribution of each ray on the reception plane. Until now, research on indoor applications has focused on rectangular parallelepipedic rooms with single-textured walls. This article presents a new open-source simulator that handles more complex rooms by analysing them through a 3D STL (stereolithography) model. The paper describes the new tool in detail, covering the material used, the software architecture, and the ray tracing algorithm; validates it against the literature; and presents new use cases. To the best of our knowledge, this simulator is the only free and open-source ray tracing analysis tool for complex 3D rooms in VLC research. In particular, it can study any room shape, such as an octagon or an L-shape. The user can control the number of emitters, their orientation, and especially the number of rays emitted and reflected. The final results are detailed heat maps enabling the visualization of the optical power distribution across any 3D room. This tool is innovative both visually (using 3D models) and mathematically (estimating the coverage of a VLC system).
Full article
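For orientation, the quantity such ray tracers accumulate per ray generalizes the textbook line-of-sight Lambertian channel gain. A minimal sketch of that single-link formula with illustrative geometry; the simulator itself additionally traces reflections through arbitrary STL-described rooms:

```python
# Sketch: textbook line-of-sight (LOS) Lambertian VLC channel gain for one
# emitter-receiver pair. Illustrative parameters; the paper's ray tracer
# additionally sums reflected rays in arbitrary STL-described 3D rooms.
import math

def los_gain(d, phi, psi, half_angle_deg=60.0, area=1e-4, fov_deg=70.0):
    """H(0) = (m+1)*A / (2*pi*d^2) * cos^m(phi) * cos(psi), psi within FOV."""
    if psi > math.radians(fov_deg):
        return 0.0                                   # outside receiver FOV
    m = -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))
    return (m + 1) * area / (2 * math.pi * d**2) * math.cos(phi)**m * math.cos(psi)

p_tx = 1.0                                           # transmit optical power, W
d = 2.5                                              # emitter-receiver distance, m
phi = psi = math.radians(20)                         # irradiance/incidence angles
print(f"Received LOS power: {p_tx * los_gain(d, phi, psi):.2e} W")
```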
Open Access Article
Blockchain Technology in K-12 Computer Science Education?!
by Rupert Gehrlein and Andreas Dengel
Informatics 2024, 11(4), 79; https://doi.org/10.3390/informatics11040079 - 30 Oct 2024
Abstract
Blockchain technology and its applications, such as cryptocurrencies or non-fungible tokens, represent significant advancements in computer science. Alongside its transformative potential, human interaction with blockchain has led to notable negative implications, including cybersecurity vulnerabilities, the high energy consumption of mining activities, environmental impacts, and the prevalence of economic fraud and high-risk financial products. Considering the expanding range of blockchain applications, there is interest in exploring its integration into K-12 education. To this end, this paper examines existing and documented attempts through a systematic literature review. Although the findings are quantitatively limited, they reveal initial concepts and ideas.
Full article
Open Access Article
Educational Roles and Scenarios for Large Language Models: An Ethnographic Research Study of Artificial Intelligence
by Nikša Alfirević, Darko Rendulić, Maja Fošner and Ajda Fošner
Informatics 2024, 11(4), 78; https://doi.org/10.3390/informatics11040078 - 29 Oct 2024
Abstract
This paper reviews the theoretical background and potential applications of Large Language Models (LLMs) in educational processes and academic research. Utilizing a novel digital ethnographic approach, we engaged in iterative research with OpenAI's ChatGPT-4 and Google's Gemini Ultra, two advanced commercial LLMs. The methodology treated the LLMs as research participants, emphasizing AI-guided perspectives and their envisioned roles in educational settings. Our findings identify potential LLM roles in educational and research processes, and we discuss the associated challenges, including potential biases in decision-making and AI as a possible source of discrimination and conflicts of interest. In addition to the practical implications, we use the qualitative results to advise on relevant topics for future research.
Full article
Topics
Topic in Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Topic in Brain Sciences, Healthcare, Informatics, IJERPH, JCM, Reports
Applications of Virtual Reality Technology in Rehabilitation
Topic Editors: Jorge Oliveira, Pedro Gamito
Deadline: 30 June 2025
Topic in Sustainability, World, Informatics
The Applications of Artificial Intelligence in Tourism
Topic Editors: Angelica Lo Duca, Jose Berengueres
Deadline: 31 August 2025
Topic in Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2025
Special Issues
Special Issue in Informatics
The Smart Cities Continuum via Machine Learning and Artificial Intelligence
Guest Editors: Augusto Neto, Roger Immich
Deadline: 31 December 2024
Special Issue in Informatics
AI for the People: An Ubuntu Approach to Transforming Health, Education, and Economic Landscapes
Guest Editors: Lufuno Makhado, Takalani Samuel Mashau, Nombulelo Sepeng
Deadline: 31 May 2025