Information, Volume 15, Issue 5 (May 2024) – 19 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 1814 KiB  
Article
The Impact of Immersive Virtual Reality on Knowledge Acquisition and Adolescent Perceptions in Cultural Education
by Athanasios Christopoulos, Maria Styliou, Nikolaos Ntalas and Chrysostomos Stylios
Information 2024, 15(5), 261; https://doi.org/10.3390/info15050261 - 03 May 2024
Viewed by 101
Abstract
Understanding local history is fundamental to fostering a comprehensive global viewpoint. As technological advances shape our pedagogical tools, Virtual Reality (VR) stands out for its potential educational impact. Though its promise in educational settings is widely acknowledged, especially in science, technology, engineering and mathematics (STEM) fields, there is a noticeable decrease in research exploring VR’s efficacy in arts. The present study examines the effects of VR-mediated interventions on cultural education. In greater detail, secondary school adolescents (N = 52) embarked on a journey into local history through an immersive 360° VR experience. As part of our research approach, we conducted pre- and post-intervention assessments to gauge participants’ grasp of the content and further distributed psychometric instruments to evaluate their reception of VR as an instructional approach. The analysis indicates that VR’s immersive elements enhance knowledge acquisition but the impact is modulated by the complexity of the subject matter. Additionally, the study reveals that a tailored, context-sensitive, instructional design is paramount for optimising learning outcomes and mitigating educational inequities. This work challenges the “one-size-fits-all” approach to educational VR, advocating for a more targeted instructional approach. Consequently, it emphasises the need for educators and VR developers to collaboratively tailor interventions that are both culturally and contextually relevant. Full article
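Pre-/post-intervention designs like the one above are commonly summarized with absolute and normalized (Hake) learning gains. A minimal sketch with hypothetical scores, not the study's data:

```python
# Hypothetical pre/post knowledge scores on a 0-100 scale (illustrative only).
pre = [42, 55, 38, 61, 47]
post = [68, 70, 52, 75, 66]

# Absolute gain and Hake's normalized gain g = (post - pre) / (100 - pre).
gains = [b - a for a, b in zip(pre, post)]
norm_gains = [(b - a) / (100 - a) for a, b in zip(pre, post)]

mean_gain = sum(gains) / len(gains)
mean_norm_gain = sum(norm_gains) / len(norm_gains)
print(mean_gain, round(mean_norm_gain, 3))
```

A paired significance test on the same score pairs would then assess whether the measured gain is statistically reliable.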
33 pages, 48967 KiB  
Article
Medical Support Vehicle Location and Deployment at Mass Casualty Incidents
by Miguel Medina-Perez, Giovanni Guzmán, Magdalena Saldana-Perez and Valeria Karina Legaria-Santiago
Information 2024, 15(5), 260; https://doi.org/10.3390/info15050260 - 03 May 2024
Viewed by 99
Abstract
Anticipating and planning for the urgent response to large-scale disasters is critical to increasing the probability of survival in these events. These incidents present various challenges that complicate the response, such as unfavorable weather conditions, difficulties in accessing affected areas, and the geographical spread of the victims. Furthermore, local socioeconomic factors, such as inadequate prevention education, limited disaster resources, and insufficient coordination between public and private emergency services, can complicate these situations. In large-scale emergencies, multiple demand points (DPs) are generally observed, which requires efforts to coordinate the strategic allocation of human and material resources in different geographical areas. Therefore, the precise management of these resources based on the specific needs of each area becomes fundamental. To address these complexities, this paper proposes a methodology that models these scenarios as a multi-objective optimization problem, focusing on the location-allocation problem of resources in Mass Casualty Incidents (MCIs). The proposed case study is Mexico City in an earthquake post-disaster scenario, using voluntary geographic information, open government data, and historical data from the 19 September 2017 earthquake. It is assumed that the resources that require optimal location and allocation are ambulances, which focus on medical issues that affect the survival of victims. The designed solution involves the use of a metaheuristic optimization technique, along with a parameter tuning technique, to find configurations that perform well on different instances of the problem, i.e., different hypothetical scenarios that can be used as a reference for future possible situations. Finally, the objective is to present the different solutions graphically, accompanied by relevant information to facilitate the decision-making process of the authorities responsible for the practical implementation of these solutions. Full article
(This article belongs to the Special Issue Telematics, GIS and Artificial Intelligence)
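The paper tackles location-allocation with a multi-objective metaheuristic; as a much-simplified illustration of the location side of the problem, the sketch below greedily places two ambulances at candidate sites to maximize covered demand. All coordinates, weights, and the radius are made up, not taken from the paper:

```python
import math

# Demand points as (x, y, victims) and candidate ambulance sites;
# R is the coverage radius. All values are illustrative.
demand = [(0, 0, 10), (1, 0, 5), (5, 5, 8), (6, 5, 7), (10, 0, 3)]
sites = [(0, 0), (5, 5), (10, 0), (3, 3)]
R = 2.0

def covered(site):
    """Indices of demand points within radius R of a site."""
    return {i for i, (x, y, _) in enumerate(demand)
            if math.dist(site, (x, y)) <= R}

chosen, remaining = [], set(range(len(demand)))
for _ in range(2):  # place two ambulances
    best = max(sites, key=lambda s: sum(demand[i][2]
                                        for i in covered(s) & remaining))
    chosen.append(best)
    remaining -= covered(best)
print(chosen)
```

A real MCI model adds competing objectives (response time, capacity, equity), which is why the paper turns to metaheuristics rather than a greedy rule.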
21 pages, 5773 KiB  
Article
Enhanced Fault Detection in Bearings Using Machine Learning and Raw Accelerometer Data: A Case Study Using the Case Western Reserve University Dataset
by Krish Kumar Raj, Shahil Kumar, Rahul Ranjeev Kumar and Mauro Andriollo
Information 2024, 15(5), 259; https://doi.org/10.3390/info15050259 - 02 May 2024
Viewed by 248
Abstract
This study introduces a novel approach for fault classification in bearing components utilizing raw accelerometer data. By employing various neural network models, including deep learning architectures, we bypass the traditional preprocessing and feature-extraction stages, streamlining the classification process. Utilizing the Case Western Reserve University (CWRU) bearing dataset, our methodology demonstrates remarkable accuracy, particularly in deep learning networks such as three convolutional neural network (CNN) variants, which achieve above 98% accuracy across various loading levels and establish a new benchmark in fault-detection efficiency. Notably, data exploration through principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) provided valuable insights into feature relationships and patterns, aiding in effective fault detection. This research not only proves the efficacy of neural network classifiers in handling raw data but also opens avenues for more straightforward yet effective diagnostic methods in machinery health monitoring. These findings suggest significant potential for real-world applications, offering a faster yet reliable alternative to conventional fault-classification techniques. Full article
(This article belongs to the Section Information Applications)
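Working from raw accelerometer data as described still requires segmenting each record into fixed-length windows that feed the networks directly. A sketch of that windowing step; the window and step sizes here are illustrative, not the paper's settings:

```python
import numpy as np

def segment(signal, win=2048, step=1024):
    """Slice a raw vibration signal into overlapping fixed-length
    windows; each window is one network input, with no hand-crafted
    features extracted."""
    n = (len(signal) - win) // step + 1
    return np.stack([signal[i * step:i * step + win] for i in range(n)])

sig = np.sin(np.linspace(0, 100, 12000))  # stand-in for a CWRU record
X = segment(sig)
print(X.shape)  # (10, 2048): 10 windows of 2048 samples each
```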
22 pages, 931 KiB  
Article
A Hybrid MCDM Approach Using the BWM and the TOPSIS for a Financial Performance-Based Evaluation of Saudi Stocks
by Abdulrahman T. Alsanousi, Ammar Y. Alqahtani, Anas A. Makki and Majed A. Baghdadi
Information 2024, 15(5), 258; https://doi.org/10.3390/info15050258 - 02 May 2024
Viewed by 307
Abstract
This study presents a hybrid multicriteria decision-making approach for evaluating stocks in the Saudi Stock Market. The objective is to provide investors and stakeholders with a robust evaluation methodology to inform their investment decisions. With a market value of USD 2.89 trillion in September 2022, the Saudi Stock Market is of significant importance for the country’s economy. However, navigating the complexities of stock market performance poses investment challenges. This study employs the best–worst method (BWM) and the technique for order preference by similarity to ideal solution (TOPSIS) to address these challenges. Utilizing data from the Saudi Stock Market (Tadawul), this study evaluates stock performance based on financial criteria, including return on equity, return on assets, net profit margin, and asset turnover. The findings reveal valuable insights, particularly in the banking sector, which exhibited the highest net profit margin ratios among sectors. The hybrid multicriteria decision-making-based approach enhances investment decisions. This research provides a foundation for future investigations, facilitating a deeper exploration and analysis of additional aspects of the Saudi Stock Market’s performance. The developed methodology and findings have implications for investors and stakeholders, aiding their investment decisions and maximizing returns. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis II)
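TOPSIS itself is compact: normalize the decision matrix, weight it (in the hybrid approach, the weights come from the BWM step), and rank alternatives by closeness to the ideal solution. A sketch with made-up stock data, not the paper's Tadawul figures:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows) on criteria (columns); benefit[j] is
    True when larger values are better for criterion j."""
    R = X / np.linalg.norm(X, axis=0)   # vector-normalize each column
    V = R * weights                     # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)      # closeness coefficient in [0, 1]

# Hypothetical stocks scored on ROE, ROA, net profit margin, asset turnover.
X = np.array([[0.15, 0.08, 0.22, 0.9],
              [0.10, 0.05, 0.30, 1.2],
              [0.05, 0.02, 0.10, 0.6]])
w = np.array([0.4, 0.2, 0.3, 0.1])      # e.g., criterion weights from BWM
scores = topsis(X, w, np.array([True, True, True, True]))
print(scores.argmax())                  # index of the best-ranked stock
```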
21 pages, 11491 KiB  
Article
FIWARE-Compatible Smart Data Models for Satellite Imagery and Flood Risk Assessment to Enhance Data Management
by Ioannis-Omiros Kouloglou, Gerasimos Antzoulatos, Georgios Vosinakis, Francesca Lombardo, Alberto Abella, Marios Bakratsas, Anastasia Moumtzidou, Evangelos Maltezos, Ilias Gialampoukidis, Eleftherios Ouzounoglou, Stefanos Vrochidis, Angelos Amditis, Ioannis Kompatsiaris and Michele Ferri
Information 2024, 15(5), 257; https://doi.org/10.3390/info15050257 - 02 May 2024
Viewed by 185
Abstract
The increasing rate of adoption of innovative technological achievements, along with the penetration of Next Generation Internet (NGI) technologies and Artificial Intelligence (AI) in the water sector, is leading to a shift towards a Water-Smart Society. New challenges have emerged in terms of data interoperability, sharing, and trustworthiness due to the rapidly increasing volume of heterogeneous data generated by multiple technologies. Hence, there is a need for efficient harmonization and smart modeling of the data to foster advanced AI analytical processes, which will lead to efficient water data management. The main objective of this work is to propose two Smart Data Models focusing on the modeling of satellite imagery data and flood risk assessment processes. The utilization of these models reinforces the fusion and homogenization of diverse information and data, facilitating the adoption of AI technologies for flood mapping and monitoring. Furthermore, a holistic framework is developed and evaluated via qualitative and quantitative performance indicators, revealing the efficacy of the proposed models in real cases. The framework is based on well-known technologies compatible with the NGSI-LD standard, which can be easily customized and applied to support water data management processes effectively. Full article
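An NGSI-LD entity pairs an id/type with Property and GeoProperty attributes and a JSON-LD @context. The sketch below shows that general shape for a satellite-derived flood observation; the attribute names are illustrative assumptions, not the paper's published Smart Data Models:

```python
import json

# Illustrative NGSI-LD-style entity; attribute names are assumptions.
entity = {
    "id": "urn:ngsi-ld:SatelliteImagery:example-001",
    "type": "SatelliteImagery",
    "observedAt": {"type": "Property", "value": "2024-05-02T10:00:00Z"},
    "floodRiskLevel": {"type": "Property", "value": "high"},
    "location": {
        "type": "GeoProperty",
        "value": {"type": "Point", "coordinates": [12.33, 45.43]},
    },
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}
payload = json.dumps(entity)  # ready to send to an NGSI-LD context broker
print(json.loads(payload)["type"])
```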
16 pages, 906 KiB  
Article
Epileptic Seizure Detection from Decomposed EEG Signal through 1D and 2D Feature Representation and Convolutional Neural Network
by Shupta Das, Suraiya Akter Mumu, M. A. H. Akhand, Abdus Salam and Md Abdus Samad Kamal
Information 2024, 15(5), 256; https://doi.org/10.3390/info15050256 - 02 May 2024
Viewed by 183
Abstract
Electroencephalogram (EEG) has emerged as the most favorable source for recognizing brain disorders like epileptic seizure (ES) using deep learning (DL) methods. This study investigated the well-performed EEG-based ES detection method by decomposing EEG signals. Specifically, empirical mode decomposition (EMD) decomposes EEG signals into six intrinsic mode functions (IMFs). Three distinct features, namely, fluctuation index, variance, and ellipse area of the second order difference plot (SODP), were extracted from each of the IMFs. The feature values from all EEG channels were arranged in two composite feature forms: a 1D (i.e., unidimensional) form and a 2D image-like form. For ES recognition, the convolutional neural network (CNN), the most prominent DL model for 2D input, was considered for the 2D feature form, and a 1D version of CNN was employed for the 1D feature form. The experiment was conducted on a benchmark CHB-MIT dataset as well as a dataset prepared from the EEG signals of ES patients from Prince Hospital Khulna (PHK), Bangladesh. The 2D feature-based CNN model outperformed the other 1D feature-based models, showing an accuracy of 99.78% for CHB-MIT and 95.26% for PHK. Furthermore, the cross-dataset evaluations also showed favorable outcomes. Therefore, the proposed method with 2D composite feature form can be a promising ES detection method. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
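The three per-IMF features are simple to compute. The sketch below uses one common definition of the SODP ellipse area (the 95% confidence ellipse of the second-order difference plot); the paper's exact variant may differ:

```python
import numpy as np

def imf_features(x):
    """Fluctuation index, variance, and SODP ellipse area for one IMF."""
    fluct = np.mean(np.abs(np.diff(x)))        # fluctuation index
    var = np.var(x)
    X, Y = np.diff(x)[:-1], np.diff(x)[1:]     # SODP coordinates
    sx2, sy2, sxy = np.mean(X**2), np.mean(Y**2), np.mean(X * Y)
    D = np.sqrt((sx2 + sy2) ** 2 - 4 * (sx2 * sy2 - sxy**2))
    a = 1.7321 * np.sqrt(sx2 + sy2 + D)        # major semi-axis
    b = 1.7321 * np.sqrt(sx2 + sy2 - D)        # minor semi-axis
    return fluct, var, np.pi * a * b           # ellipse area

rng = np.random.default_rng(0)
imf = rng.standard_normal(1024)                # stand-in for one EMD IMF
f, v, area = imf_features(imf)
print(f > 0, 0.5 < v < 1.5, area > 0)
```

Stacking these three values for each of the six IMFs across all channels yields the 1D and 2D composite feature forms the abstract describes.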
17 pages, 2692 KiB  
Article
Proactive Agent Behaviour in Dynamic Distributed Constraint Optimisation Problems
by Brighter Agyemang, Fenghui Ren and Jun Yan
Information 2024, 15(5), 255; https://doi.org/10.3390/info15050255 - 02 May 2024
Viewed by 231
Abstract
In multi-agent systems, the Dynamic Distributed Constraint Optimisation Problem (D-DCOP) framework is pivotal, allowing for the decomposition of global objectives into agent constraints. Proactive agent behaviour is crucial in such systems, enabling agents to anticipate future changes and adapt accordingly. Existing approaches, like Proactive Dynamic DCOP (PD-DCOP) algorithms, often necessitate a predefined environment model. We address the problem of enabling proactive agent behaviour in D-DCOPs where the dynamics model of the environment is unknown. Specifically, we propose an approach where agents learn local autoregressive models from observations, predicting future states to inform decision-making. To achieve this, we present a temporal experience-sharing message-passing algorithm that leverages dynamic agent connections and a distance metric to collate training data. Our approach outperformed baseline methods in a search-and-extinguish task using the RoboCup Rescue Simulator, achieving lower total building damage. The experimental results align with prior work on the significance of decision-switching costs and demonstrate improved performance when the switching cost is combined with a learned model. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)
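The core idea of learning a local model of the environment from observations can be illustrated with the simplest autoregressive case, AR(1): fit the coefficient by least squares on an observed state trace, then predict the next state. The paper's models and experience-sharing algorithm are richer; this shows only the principle:

```python
import numpy as np

rng = np.random.default_rng(2)
true_phi = 0.8
states = [0.0]
for _ in range(500):                 # simulated environment trace
    states.append(true_phi * states[-1] + rng.normal(0.0, 0.1))
x = np.array(states)

# Least-squares AR(1) fit: x[t+1] ~ phi * x[t].
phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
next_state = phi * x[-1]             # one-step-ahead prediction
print(round(float(phi), 2))
```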
17 pages, 4720 KiB  
Article
MortalityMinder: Visualization and AI Interpretations of Social Determinants of Premature Mortality in the United States
by Karan Bhanot, John S. Erickson and Kristin P. Bennett
Information 2024, 15(5), 254; https://doi.org/10.3390/info15050254 - 30 Apr 2024
Viewed by 234
Abstract
MortalityMinder enables healthcare researchers, providers, payers, and policy makers to gain actionable insights into where and why premature mortality rates due to all causes, cancer, cardiovascular disease, and deaths of despair rose between 2000 and 2017 for adults aged 25–64. MortalityMinder is designed as an open-source web-based visualization tool that enables interactive analysis and exploration of social, economic, and geographic factors associated with mortality at the county level. We provide case studies to illustrate how MortalityMinder finds interesting relationships between health determinants and deaths of despair. We also demonstrate how GPT-4 can help translate statistical results from MortalityMinder into actionable insights to improve population health. When combined with MortalityMinder results, GPT-4 provides hypotheses on why socio-economic risk factors are associated with mortality, how they might be causal, and what actions could be taken related to the risk factors to improve outcomes with supporting citations. We find that GPT-4 provided plausible and insightful answers about the relationship between social determinants and mortality. Our work is a first step towards enabling public health stakeholders to automatically discover and visualize relationships between social determinants of health and mortality based on available data and explain and transform these into meaningful results using artificial intelligence. Full article
(This article belongs to the Special Issue Interactive Machine Learning and Visual Data Mining)
18 pages, 3172 KiB  
Article
Transformer-Based Approach to Pathology Diagnosis Using Audio Spectrogram
by Mohammad Tami, Sari Masri, Ahmad Hasasneh and Chakib Tadj
Information 2024, 15(5), 253; https://doi.org/10.3390/info15050253 - 30 Apr 2024
Viewed by 285
Abstract
Early detection of infant pathologies by non-invasive means is a critical aspect of pediatric healthcare. Audio analysis of infant crying has emerged as a promising method to identify various health conditions without direct medical intervention. In this study, we present a cutting-edge machine learning model that employs audio spectrograms and transformer-based algorithms to classify infant crying into distinct pathological categories. Our innovative model bypasses the extensive preprocessing typically associated with audio data by exploiting the self-attention mechanisms of the transformer, thereby preserving the integrity of the audio’s diagnostic features. When benchmarked against established machine learning and deep learning models, our approach demonstrated a remarkable 98.69% accuracy, 98.73% precision, 98.71% recall, and an F1 score of 98.71%, surpassing the performance of both traditional machine learning and convolutional neural network models. This research not only provides a novel diagnostic tool that is scalable and efficient but also opens avenues for improving pediatric care through early and accurate detection of pathologies. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
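The model's input is an audio spectrogram, i.e., the time-frequency "image" a transformer attends over. A minimal log-magnitude spectrogram sketch; the FFT and hop sizes are illustrative, not the paper's settings:

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude STFT: rows are frequency bins, columns are frames."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(mag).T

sr = 8000
t = np.arange(sr) / sr
cry = np.sin(2 * np.pi * 440 * t)   # stand-in for a 1 s cry recording
S = log_spectrogram(cry)
print(S.shape)                      # (129, 61): 129 bins, 61 frames
```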
24 pages, 9723 KiB  
Article
On the Generalizability of Machine Learning Classification Algorithms and Their Application to the Framingham Heart Study
by Nabil Kahouadji
Information 2024, 15(5), 252; https://doi.org/10.3390/info15050252 - 29 Apr 2024
Viewed by 439
Abstract
The use of machine learning algorithms in healthcare can amplify social injustices and health inequities. While the exacerbation of biases can occur and be compounded during problem selection, data collection, and outcome definition, this research pertains to the generalizability impediments that occur during the development and post-deployment of machine learning classification algorithms. Using the Framingham coronary heart disease data as a case study, we show how to effectively select a probability cutoff to convert a regression model for a dichotomous variable into a classifier. We then compare the sampling distribution of the predictive performance of eight machine learning classification algorithms under four stratified training/testing scenarios to test their generalizability and their potential to perpetuate biases. We show that both extreme gradient boosting and support vector machine are flawed when trained on an unbalanced dataset. We then show that the double discriminant scoring of type 1 and 2 is the most generalizable with respect to the true positive and negative rates, respectively, as it consistently outperforms the other classification algorithms, regardless of the training/testing scenario. Finally, we introduce a methodology to extract an optimal variable hierarchy for a classification algorithm and illustrate it on the overall, male and female Framingham coronary heart disease data. Full article
(This article belongs to the Special Issue 2nd Edition of Data Science for Health Services)
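One standard way to pick the probability cutoff that turns a fitted probability model into a classifier is to scan candidate cutoffs and maximize Youden's J = TPR + TNR - 1, which balances the two rates on unbalanced data. This is a generic criterion sketched on synthetic scores, not necessarily the paper's exact procedure:

```python
import numpy as np

def best_cutoff(y_true, p, grid=np.linspace(0.05, 0.95, 19)):
    """Return the cutoff maximizing Youden's J = TPR + TNR - 1."""
    best_c, best_j = 0.5, -1.0
    for c in grid:
        pred = (p >= c).astype(int)
        tpr = np.mean(pred[y_true == 1] == 1)  # true positive rate
        tnr = np.mean(pred[y_true == 0] == 0)  # true negative rate
        if tpr + tnr - 1 > best_j:
            best_c, best_j = c, tpr + tnr - 1
    return best_c, best_j

y = np.array([0] * 80 + [1] * 20)                # unbalanced labels
p = np.concatenate([np.linspace(0.0, 0.4, 80),   # scores for negatives
                    np.linspace(0.3, 0.9, 20)])  # scores for positives
c, j = best_cutoff(y, p)
print(round(float(c), 2), round(float(j), 3))
```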
18 pages, 873 KiB  
Article
Navigating Market Sentiments: A Novel Approach to Iron Ore Price Forecasting with Weighted Fuzzy Time Series
by Flavio Mauricio da Cunha Souza, Geraldo Pereira Rocha Filho, Frederico Gadelha Guimarães, Rodolfo I. Meneguette and Gustavo Pessin
Information 2024, 15(5), 251; https://doi.org/10.3390/info15050251 - 29 Apr 2024
Viewed by 275
Abstract
The global iron ore price is influenced by numerous factors, thus showcasing a complex interplay among them. The collective expectations of market participants over time shape the variations and trends within the iron ore price time series. Consequently, devising a robust forecasting model for the volatility of iron ore prices, as well as for other assets connected to this commodity, is critical for guiding future investments and decision-making processes in mining companies. Within this framework, the integration of artificial intelligence techniques, encompassing both technical and fundamental analyses, is aimed at developing a comprehensive, autonomous hybrid system for decision support, which is specialized in iron ore asset management. This approach not only enhances the accuracy of predictions but also supports strategic planning in the mining sector. Full article
17 pages, 1166 KiB  
Article
Resource Allocation and Pricing in Energy Harvesting Serverless Computing Internet of Things Networks
by Yunqi Li and Changlin Yang
Information 2024, 15(5), 250; https://doi.org/10.3390/info15050250 - 29 Apr 2024
Viewed by 348
Abstract
This paper considers a resource allocation problem involving servers and mobile users (MUs) operating in a serverless edge computing (SEC)-enabled Internet of Things (IoT) network. Each MU has a fixed budget, and each server is powered by the grid and has energy harvesting (EH) capability. Our objective is to maximize the revenue of the operator that operates the said servers and the number of resources purchased by the MUs. We propose a Stackelberg game approach, where servers and MUs act as leaders and followers, respectively. We prove the existence of a Stackelberg game equilibrium and develop an iterative algorithm to determine the final game equilibrium price. Simulation results show that the proposed scheme is efficient in terms of the SEC’s profit and MU’s demand. Moreover, both MUs and SECs gain benefits from renewable energy. Full article
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing)
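The leader-follower structure can be illustrated with a one-server, one-MU toy model: the follower's best-response demand is linear in price, and the leader ascends its revenue gradient to the equilibrium price. The paper's game and iterative algorithm are more general; the numbers here are illustrative:

```python
def stackelberg_price(a=10.0, b=2.0, eta=0.01, iters=5000):
    """Leader sets price p; the follower's best-response demand is
    d(p) = max(0, a - b*p). Gradient ascent on revenue p*d(p)
    converges to the analytic equilibrium price a / (2b)."""
    p = 0.1
    for _ in range(iters):
        d = max(0.0, a - b * p)                   # follower's response
        grad = a - 2 * b * p if d > 0 else 0.0    # d(revenue)/dp
        p += eta * grad
    return p

p_star = stackelberg_price()
print(round(p_star, 3))  # a/(2b) = 2.5 for the defaults above
```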
17 pages, 2973 KiB  
Article
Predicting the Conversion from Mild Cognitive Impairment to Alzheimer’s Disease Using an Explainable AI Approach
by Gerasimos Grammenos, Aristidis G. Vrahatis, Panagiotis Vlamos, Dean Palejev, Themis Exarchos and for the Alzheimer’s Disease Neuroimaging Initiative
Information 2024, 15(5), 249; https://doi.org/10.3390/info15050249 - 28 Apr 2024
Viewed by 394
Abstract
Mild Cognitive Impairment (MCI) is a cognitive state frequently observed in older adults, characterized by significant alterations in memory, thinking, and reasoning abilities that extend beyond typical cognitive decline. It is worth noting that around 10–15% of individuals with MCI are projected to develop Alzheimer’s disease, effectively positioning MCI as an early stage of Alzheimer’s. In this study, a novel approach is presented involving the utilization of eXtreme Gradient Boosting to predict the onset of Alzheimer’s disease during the MCI stage. The methodology entails utilizing data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Through the analysis of longitudinal data, spanning from the baseline visit to the 12-month follow-up, a predictive model was constructed. The proposed model calculates, over a 36-month period, the likelihood of progression from MCI to Alzheimer’s disease, achieving an accuracy rate of 85%. To further enhance the precision of the model, this study implements feature selection using the Recursive Feature Elimination technique. Additionally, the Shapley method is employed to provide insights into the model’s decision-making process, thereby augmenting the transparency and interpretability of the predictions. Full article
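The feature-selection step can be sketched with scikit-learn's Recursive Feature Elimination wrapped around a gradient-boosted classifier (used here as a stand-in for XGBoost), on synthetic data in place of ADNI features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))              # stand-in for ADNI features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outcome driven by 2 features

# Recursive Feature Elimination: repeatedly drop the least important
# feature according to the boosted model until 3 remain.
rfe = RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=3)
rfe.fit(X, y)
selected = np.flatnonzero(rfe.support_)
print(selected)
```

In the full pipeline, a Shapley-value explainer would then be fitted on the reduced model to attribute each prediction to the surviving features.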
16 pages, 4187 KiB  
Article
An Omnidirectional Image Super-Resolution Method Based on Enhanced SwinIR
by Xiang Yao, Yun Pan and Jingtao Wang
Information 2024, 15(5), 248; https://doi.org/10.3390/info15050248 - 28 Apr 2024
Viewed by 259
Abstract
For the significant distortion problem caused by the special projection method of equi-rectangular projection (ERP) images, this paper proposes an omnidirectional image super-resolution algorithm model based on position information transformation, taking SwinIR as the base. By introducing a space position transformation module that supports deformable convolution, the image preprocessing process is optimized to reduce the distortion effects in the polar regions of the ERP image. Meanwhile, by introducing deformable convolution in the deep feature extraction process, the model’s adaptability to local deformations of images is enhanced. Experimental results on publicly available datasets have shown that our method outperforms SwinIR, with an average improvement of over 0.2 dB in WS-PSNR and over 0.030 in WS-SSIM for ×4 pixel upscaling. Full article
(This article belongs to the Section Artificial Intelligence)
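WS-PSNR differs from plain PSNR only in weighting each pixel's squared error by the cosine of its latitude, compensating for the over-sampling of polar rows in ERP images. A sketch for 8-bit grayscale images:

```python
import numpy as np

def ws_psnr(ref, img):
    """Weighted-to-spherically-uniform PSNR for ERP images (8-bit range)."""
    h, w = ref.shape
    lat = (np.arange(h) + 0.5 - h / 2) * np.pi / h    # row latitudes
    wgt = np.repeat(np.cos(lat)[:, None], w, axis=1)  # cos-latitude weights
    mse = np.sum(wgt * (ref - img) ** 2) / np.sum(wgt)
    return 10 * np.log10(255.0 ** 2 / mse)

ref = np.full((64, 128), 128.0)
img = ref + 4.0                      # uniform error of 4 gray levels
print(round(ws_psnr(ref, img), 2))   # ~36.09 dB, same as plain PSNR here
```

For a spatially uniform error the weights cancel, so WS-PSNR equals plain PSNR; the metrics diverge when errors concentrate near the poles.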
14 pages, 295 KiB  
Article
Quantum Information: Systems, Their States, and the Use of Variances
by Alain Deville and Yannick Deville
Information 2024, 15(5), 247; https://doi.org/10.3390/info15050247 - 25 Apr 2024
Viewed by 423
Abstract
Quantum information mobilizes the description of quantum systems, their states, and their behavior. According to a measurement postulate introduced by von Neumann in 1932, if a quantum system has been prepared in two different mixed states represented by the same density operator ρ, these preparations are said to have led to the same mixture. For more than 50 years, there has been a lack of consensus about this postulate. In a 2011 article, considering variances of spin components, Fratini and Hayrapetyan tried to show that this postulate is unjustified. The aim of the present paper is to discuss major points in this 2011 article and in their reply to a 2012 paper by Bodor and Diosi claiming that their analysis was irrelevant. Facing some ambiguities or inconsistencies in the 2011 paper and in the reply, we first try to guess their aim, establish results useful in this context, and finally discuss the use or misuse of several concepts implied in this debate. Full article
52 pages, 3960 KiB  
Review
A Critical Analysis of Deep Semi-Supervised Learning Approaches for Enhanced Medical Image Classification
by Kaushlesh Singh Shakya, Azadeh Alavi, Julie Porteous, Priti K, Amit Laddi and Manojkumar Jaiswal
Information 2024, 15(5), 246; https://doi.org/10.3390/info15050246 - 24 Apr 2024
Viewed by 359
Abstract
Deep semi-supervised learning (DSSL) is a machine learning paradigm that blends supervised and unsupervised learning techniques to improve the performance of various models in computer vision tasks. Medical image classification plays a crucial role in disease diagnosis, treatment planning, and patient care. However, obtaining labeled medical image data is often expensive and time-consuming for medical practitioners, leading to limited labeled datasets. DSSL techniques aim to address this challenge, particularly in various medical image tasks, to improve model generalization and performance. DSSL models leverage both the labeled information, which provides explicit supervision, and the unlabeled data, which can provide additional information about the underlying data distribution. This offers a practical solution to the resource-intensive demands of data annotation and enhances the model’s ability to generalize across diverse and previously unseen data landscapes. The present study provides a critical review of various DSSL approaches and their effectiveness and challenges in enhancing medical image classification tasks. The study categorizes DSSL techniques into six classes: consistency regularization methods, deep adversarial methods, pseudo-learning methods, graph-based methods, multi-label methods, and hybrid methods. Further, a comparative analysis of the performance of the six methods is conducted using existing studies, which have employed metrics such as accuracy, sensitivity, specificity, AUC-ROC, and F1 score to evaluate DSSL methods on different medical image datasets. Additionally, dataset challenges, such as heterogeneity, limited labeled data, and model interpretability, are discussed and highlighted in the context of DSSL for medical image classification. The current review provides future directions and considerations to help researchers further address these challenges and take full advantage of these methods in clinical practice. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
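As a rough illustration of one family the review covers, pseudo-label-based methods turn a model’s confident predictions on unlabeled images into training labels. The sketch below is a minimal, hypothetical rendering of that selection step only; the function name, the 0.9 threshold, and the toy probabilities are assumptions for illustration, not taken from the paper:

```python
# Hypothetical sketch of pseudo-label selection, a core step in one DSSL
# family: predictions on unlabeled data become labels when confident enough.
def pseudo_label(probs, threshold=0.9):
    """Return (sample_index, class_index) pairs for unlabeled samples whose
    maximum predicted class probability meets the confidence threshold."""
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected

# Example: three unlabeled samples with softmax outputs over two classes.
preds = [[0.97, 0.03], [0.55, 0.45], [0.08, 0.92]]
print(pseudo_label(preds))  # only the confident samples are kept
```

In practice the retained pairs would be appended to the labeled set and the model retrained, iterating as confidence grows.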
16 pages, 931 KiB  
Article
Learning Circuits and Coding with Arduino Board in Higher Education Using Tangible and Graphical User Interfaces
by Sokratis Tselegkaridis, Theodosios Sapounidis and Dimitrios Papakostas
Information 2024, 15(5), 245; https://doi.org/10.3390/info15050245 - 24 Apr 2024
Viewed by 409
Abstract
The Arduino board has penetrated educational settings across various levels. The subject can be taught by (a) using real components on breadboards, (b) using prefabricated modular boards that snap together, or (c) utilizing computer simulations. Yet it is unknown which interface offers a more effective learning experience. Therefore, this experimental study compares the effectiveness of these interfaces in a series of three laboratory exercises involving 110 university students, who were divided into three groups: (a) the first group used a tangible user interface, implementing circuits on breadboards, (b) the second group also used a tangible interface but with modular boards, and (c) the third group used a graphical user interface to simulate circuits in Tinkercad. For each laboratory exercise, students completed both pretests and posttests. They also provided feedback through five Likert-type attitude questions about their experiences. For data analysis, t-tests, ANOVA, and ANCOVA, along with bootstrapping and principal component analysis, were employed. The results suggest that participants who used the graphical user interface reported an enhanced understanding of how components interconnect in microcontroller circuits, while students with previous experience in microcontroller labs found the circuit creation process easier than students without such experience. Full article
(This article belongs to the Special Issue Human–Computer Interaction in Smart Cities)
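The pretest/posttest comparisons mentioned above rest on the paired t statistic. As a sketch of that computation (the scores below are invented for illustration and are not the study’s data):

```python
import math

def paired_t(pre, post):
    """t statistic for paired pretest/posttest scores: mean of the paired
    differences divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative scores for six hypothetical students.
pre = [4, 5, 6, 5, 4, 6]
post = [6, 7, 7, 6, 6, 8]
print(round(paired_t(pre, post), 3))
```

The resulting statistic would then be compared against a t distribution with n − 1 degrees of freedom, which is what a statistics package does behind the scenes.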
21 pages, 4649 KiB  
Article
Immersive Storytelling in Social Virtual Reality for Human-Centered Learning about Sensitive Historical Events
by Athina Papadopoulou, Stylianos Mystakidis and Avgoustos Tsinakos
Information 2024, 15(5), 244; https://doi.org/10.3390/info15050244 - 23 Apr 2024
Viewed by 1056
Abstract
History is a subject that students often find uninspiring in school education. This paper explores the application of social VR metaverse platforms, combined with interactive, nonlinear web platforms designed for immersive storytelling, to support learning about a sensitive historical event, namely the Asia Minor Catastrophe. The goal was to design an alternative method of learning history and to investigate whether it would engage students and foster their independence. A mixed-methods research design was applied: thirty-four (n = 34) adult participants engaged with the interactive book and VR space over the course of three weeks. After an online workshop, feedback was collected from participants through a custom questionnaire. The quantitative questionnaire data were analyzed statistically in IBM SPSS, while the qualitative responses were coded thematically. The study reveals that the two tools can enhance history education by increasing student engagement, interaction, and understanding; participants appreciated the immersive and participatory nature of the material, and the study concludes that these technologies have the potential to promote active participation and engagement. Full article
20 pages, 6685 KiB  
Article
Improving the Classification of Unexposed Potsherd Cavities by Means of Preprocessing
by Randy Cahya Wihandika, Yoonji Lee, Mahendra Data, Masayoshi Aritsugi, Hiroki Obata and Israel Mendonça
Information 2024, 15(5), 243; https://doi.org/10.3390/info15050243 - 23 Apr 2024
Viewed by 401
Abstract
The preparation of raw images for subsequent analysis, known as image preprocessing, is a crucial step that can boost the performance of an image classification model. Although deep learning has succeeded at image classification without handcrafted features, certain studies underscore the continued significance of image preprocessing for enhanced performance during training. Nonetheless, this task is often demanding and requires high-quality images to train a classification model effectively. The quality of the training images, along with other factors, impacts the classification model’s performance, and insufficient image quality can lead to suboptimal classification. Achieving high-quality training images, in turn, requires effective image preprocessing techniques. In this study, we perform exploratory experiments aimed at improving a classification model of unexposed potsherd cavity images via image preprocessing pipelines. These pipelines are evaluated on two distinct image sets: a laboratory-made, experimental image set containing archaeological images with controlled lighting and background conditions, and a Jōmon–Yayoi image set containing images of real-world pottery from the Jōmon through the Yayoi periods under varying conditions. The best accuracies obtained on the experimental images and the more challenging Jōmon–Yayoi images are 90.48% and 78.13%, respectively. The analysis and experimentation conducted in this study demonstrate a noteworthy improvement in performance metrics over the established baseline benchmark. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
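To give a flavor of what a preprocessing step in such a pipeline can look like, the sketch below applies min–max contrast normalization, rescaling pixel intensities to [0, 1]. This is a generic, hypothetical example; the paper’s actual pipelines are not reproduced here, and the function name and toy image are assumptions:

```python
def preprocess(image):
    """Min-max contrast normalization: linearly rescale all pixel
    intensities in a 2D image (list of rows) to the range [0, 1]."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    scale = (hi - lo) or 1  # avoid division by zero on flat images
    return [[(p - lo) / scale for p in row] for row in image]

# Toy 2x2 grayscale image with intensities in [0, 255].
img = [[50, 100], [150, 200]]
print(preprocess(img))  # intensities now span [0, 1]
```

Real pipelines typically chain several such steps (denoising, contrast enhancement, background removal) before the images reach the classifier.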