Journal Description
Analytics
is an international, peer-reviewed, open access journal on methodologies, technologies, and applications of analytics, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.7 days after submission; acceptance to publication takes 5.6 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Analytics is a companion journal of Mathematics.
Latest Articles
Advancements in Predictive Maintenance: A Bibliometric Review of Diagnostic Models Using Machine Learning Techniques
Analytics 2024, 3(4), 493-507; https://doi.org/10.3390/analytics3040028 - 10 Dec 2024
Abstract
This bibliometric review investigates the advancements in machine learning techniques for predictive maintenance, focusing on the use of Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) for fault detection in wheelset axle bearings. Using data from Scopus and Web of Science, the review analyses key trends, influential publications, and significant contributions to the field from 2000 to 2024. The findings highlight the performance of ANNs in handling large datasets and modelling complex, non-linear relationships, as well as the high accuracy of SVMs in fault classification tasks, particularly with small-to-medium-sized datasets. However, the study also identifies several limitations, including the dependency on high-quality data, significant computational resource requirements, limited model adaptability, interpretability challenges, and practical implementation complexities. This review provides valuable insights for researchers and engineers, guiding the selection of appropriate diagnostic models and highlighting opportunities for future research. Addressing the identified limitations is crucial for the broader adoption and effectiveness of machine learning-based predictive maintenance strategies across various industrial contexts.
Full article
Open AccessArticle
NPI-WGNN: A Weighted Graph Neural Network Leveraging Centrality Measures and High-Order Common Neighbor Similarity for Accurate ncRNA–Protein Interaction Prediction
by
Fatemeh Khoushehgir, Zahra Noshad, Morteza Noshad and Sadegh Sulaimany
Analytics 2024, 3(4), 476-492; https://doi.org/10.3390/analytics3040027 - 2 Dec 2024
Abstract
Predicting ncRNA–protein interactions (NPIs) is essential for understanding regulatory roles in cellular processes and disease mechanisms, yet experimental methods are costly and time-consuming. In this study, we propose NPI-WGNN, a novel weighted graph neural network model designed to enhance NPI prediction by incorporating topological insights from graph structures. Our approach introduces a bipartite version of the high-order common neighbor (HOCN) similarity metric to assign edge weights in an ncRNA–protein network, refining node embeddings via weighted node2vec. We further enrich these embeddings with centrality measures, such as degree and Katz centralities, to capture network hierarchy and connectivity. To optimize prediction accuracy, we employ a hybrid GNN architecture that combines graph convolutional network (GCN), graph attention network (GAT), and GraphSAGE layers, each contributing unique advantages: GraphSAGE offers scalability, GCN provides a global structural perspective, and GAT applies dynamic neighbor weighting. An ablation study confirms the complementary strengths of these layers, showing that their integration improves predictive accuracy and robustness across varied graph complexities. Experimental results on three benchmark datasets demonstrate that NPI-WGNN outperforms state-of-the-art methods, achieving up to 96.1% accuracy, 97.5% sensitivity, and an F1-score of 0.96, positioning it as a robust and accurate framework for ncRNA–protein interaction prediction.
Full article
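The bipartite edge-weighting idea can be illustrated with a plain common-neighbor count; this is a first-order simplification sketched from the abstract, not the authors' HOCN metric, which extends the idea to higher-order neighborhoods. All names here are illustrative.

```python
from collections import defaultdict

def bipartite_edge_weights(edges):
    """Weight each ncRNA-protein edge by a bipartite common-neighbor
    count: proteins shared between the ncRNA and the other ncRNAs that
    bind the same protein. A simplified stand-in for HOCN."""
    rna_nbrs = defaultdict(set)   # ncRNA   -> proteins it interacts with
    prot_nbrs = defaultdict(set)  # protein -> ncRNAs that bind it
    for r, p in edges:
        rna_nbrs[r].add(p)
        prot_nbrs[p].add(r)

    weights = {}
    for r, p in edges:
        shared = set()
        for other in prot_nbrs[p] - {r}:
            shared |= rna_nbrs[other] & rna_nbrs[r]
        # 1 + count keeps every observed interaction at positive weight
        weights[(r, p)] = 1 + len(shared)
    return weights
```

In the full model, such weights would feed the weighted node2vec step before the GCN/GAT/GraphSAGE stack.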
Open AccessArticle
Breast Cancer Classification Using Fine-Tuned SWIN Transformer Model on Mammographic Images
by
Oluwatosin Tanimola, Olamilekan Shobayo, Olusogo Popoola and Obinna Okoyeigbo
Analytics 2024, 3(4), 461-475; https://doi.org/10.3390/analytics3040026 - 11 Nov 2024
Abstract
Breast cancer is the most prevalent cancer among women and has become one of the foremost causes of death among women globally. Early detection plays a significant role in administering personalized treatment and improving patient outcomes. Mammography procedures are often used to detect early-stage cancer cells. While valuable, traditional mammography has limitations: the potential for false positives and negatives, patient discomfort, and radiation exposure. There is therefore a need for more accurate breast cancer detection techniques, which has led to exploring the potential of machine learning in the classification of diagnostic images due to its efficiency and accuracy. This study conducted a comparative analysis of pre-trained CNNs (ResNet50 and VGG16) and vision transformers (ViT-base and the SWIN transformer), along with a ViT-base model trained from scratch, to classify mammographic breast cancer images into benign and malignant cases. The SWIN transformer exhibited superior performance, with 99.9% accuracy and a precision of 99.8%. These findings demonstrate the ability of deep learning to accurately classify mammographic breast cancer images for the diagnosis of breast cancer, leading to improvements in patient outcomes.
Full article
Open AccessArticle
Modified Bayesian Information Criterion for Item Response Models in Planned Missingness Test Designs
by
Alexander Robitzsch
Analytics 2024, 3(4), 449-460; https://doi.org/10.3390/analytics3040025 - 8 Nov 2024
Abstract
The Bayesian information criterion (BIC) is a widely used statistical tool originally derived for fully observed data. The BIC formula includes the sample size and the number of estimated parameters in the penalty term. However, not all variables are available for every subject in planned missingness designs. This article demonstrates that a modified BIC, tailored for planned missingness designs, outperforms the original BIC. The modification adjusts the penalty term by using the average number of estimable parameters per subject rather than the total number of model parameters. This new criterion was successfully applied to item response theory models in two simulation studies. We recommend that future studies utilizing planned missingness designs adopt the modified BIC formula proposed here.
Full article
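The penalty adjustment reads naturally as replacing the total parameter count with the per-subject average of estimable parameters; the sketch below encodes that reading of the abstract and is our formula, not code from the paper.

```python
import math

def bic_standard(loglik, n_params, n_subjects):
    # Original BIC: the penalty counts all model parameters.
    return -2.0 * loglik + n_params * math.log(n_subjects)

def bic_modified(loglik, params_per_subject, n_subjects):
    # Modified BIC for planned missingness designs: the penalty uses
    # the average number of parameters estimable from each subject's
    # administered items instead of the total parameter count.
    avg_params = sum(params_per_subject) / len(params_per_subject)
    return -2.0 * loglik + avg_params * math.log(n_subjects)
```

Because each subject sees only a subset of the items, the average is below the total, so the modified criterion penalizes model size less severely than the original BIC.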
Open AccessArticle
Adaptive Weighted Multiview Kernel Matrix Factorization and Its Application in Alzheimer’s Disease Analysis
by
Yarui Cao and Kai Liu
Analytics 2024, 3(4), 439-448; https://doi.org/10.3390/analytics3040024 - 4 Nov 2024
Abstract
Recent advancements in technology and equipment have provided us with opportunities to better analyze Alzheimer’s disease (AD), as we can now collect and employ data from different imaging and genetic modalities that may potentially enhance predictive performance. To perform better clustering in AD analysis, in this paper we propose a novel model that leverages data from all the different modalities/views and can learn the weight of each view adaptively. Unlike previous vanilla non-negative matrix factorization, which assumes the data are linearly separable, we propose a simple yet efficient method based on kernel matrix factorization, which not only is able to deal with non-linear data structures but also achieves better prediction accuracy. Experimental results on the ADNI dataset demonstrate the effectiveness of the proposed method, indicating promising prospects for kernel methods in AD analysis.
Full article
Open AccessArticle
Electric Vehicle Sentiment Analysis Using Large Language Models
by
Hemlata Sharma, Faiz Ud Din and Bayode Ogunleye
Analytics 2024, 3(4), 425-438; https://doi.org/10.3390/analytics3040023 - 1 Nov 2024
Abstract
Sentiment analysis is a technique used to understand the public’s opinion towards an event, product, or organization. For example, sentiment analysis can be used to understand positive or negative opinions or attitudes towards electric vehicle (EV) brands. This provides companies with valuable insight into the public’s opinion of their products and brands. In the field of natural language processing (NLP), transformer models have shown great performance compared to traditional machine learning algorithms. However, these models have not been explored extensively in the EV domain. EV companies are becoming significant competitors in the automotive industry and are projected to cover up to 30% of the United States light vehicle market by 2030. In this study, we present a comparative study of large language models (LLMs), including bidirectional encoder representations from transformers (BERT), robustly optimised BERT (RoBERTa), and a generalised autoregressive pre-training method (XLNet), using Lucid Motors and Tesla Motors YouTube datasets. The results evidenced that LLMs such as BERT and its variants are effective off-the-shelf algorithms for sentiment analysis, specifically when fine-tuned. Furthermore, our findings present the need for domain adaptation whilst utilizing LLMs. Finally, the experimental results showed that RoBERTa achieved consistent performance across the EV datasets, with an F1 score of at least 92%.
Full article
Open AccessArticle
The Analyst’s Hierarchy of Needs: Grounded Design Principles for Tailored Intelligence Analysis Tools
by
Antonio E. Girona, James C. Peters, Wenyuan Wang and R. Jordan Crouser
Analytics 2024, 3(4), 406-424; https://doi.org/10.3390/analytics3040022 - 29 Oct 2024
Abstract
Intelligence analysis involves gathering, analyzing, and interpreting vast amounts of information from diverse sources to generate accurate and timely insights. Tailored tools hold great promise in providing individualized support, enhancing efficiency, and facilitating the identification of crucial intelligence gaps and trends where traditional tools fail. The effectiveness of tailored tools depends on an analyst’s unique needs and motivations, as well as the broader context in which they operate. This paper describes a series of focus discovery exercises that revealed a distinct hierarchy of needs for intelligence analysts. This reflection on the balance between competing needs is of particular value in the context of intelligence analysis, where the compartmentalization required for security can make it difficult to ground design patterns in stakeholder values. We hope that this study will enable the development of more effective tools, supporting the well-being and performance of intelligence analysts as well as the organizations they serve.
Full article
(This article belongs to the Special Issue Advances in Applied Data Science: Bridging Theory and Practice)
Open AccessArticle
Directed Topic Extraction with Side Information for Sustainability Analysis
by
Maria Osipenko
Analytics 2024, 3(3), 389-405; https://doi.org/10.3390/analytics3030021 - 11 Sep 2024
Abstract
Topic analysis represents each document in a text corpus in a low-dimensional latent topic space. In some cases, the desired topic representation is subject to specific requirements or guidelines constituting side information. For instance, sustainability-aware investors might be interested in automatically assessing aspects of firm sustainability based on the textual content of its corporate reports, focusing on the established 17 UN sustainability goals. The main corpus consists of the corporate report texts, while the texts containing the definitions of the 17 UN sustainability goals represent the side information. Under the assumption that both text corpora share a common low-dimensional subspace, we propose representing them in such a space via directed topic extraction using matrix co-factorization. Both the main and the side text corpora are first represented as term–context matrices, which are then jointly decomposed into word–topic and topic–context matrices. The word–topic matrix is common to both text corpora, whereas the topic–context matrices contain specific representations in the shared topic space. A nuisance parameter, which allows us to shift the focus between the error minimization of individual factorization terms, controls the extent to which the side information is taken into account. With our approach, documents from the main and the side corpora can be related to each other in the resulting latent topic space. That is, the corporate reports are represented in the same latent topic space as the descriptions of the 17 UN sustainability goals, enabling a structured automatic sustainability assessment of the textual report’s content. We provide an algorithm for such directed topic extraction and propose techniques for visualizing and interpreting the results.
Full article
(This article belongs to the Special Issue Business Analytics and Applications)
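In symbols (our notation, inferred from the abstract rather than copied from the paper), the joint decomposition minimizes a co-factorization objective with a shared word–topic factor:

```latex
\min_{W,\,H_m,\,H_s}\;
\lVert X_m - W H_m \rVert_F^2
\;+\;
\lambda\, \lVert X_s - W H_s \rVert_F^2
```

where $X_m$ and $X_s$ are the term–context matrices of the main and side corpora, $W$ is the word–topic matrix common to both, $H_m$ and $H_s$ are the corpus-specific topic–context matrices, and $\lambda$ is the nuisance parameter that shifts the focus between the two factorization errors.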
Open AccessArticle
SIMEX-Based and Analytical Bias Corrections in Stocking–Lord Linking
by
Alexander Robitzsch
Analytics 2024, 3(3), 368-388; https://doi.org/10.3390/analytics3030020 - 6 Aug 2024
Cited by 1
Abstract
Stocking–Lord (SL) linking is a popular linking method for group comparisons based on dichotomous item responses. This article proposes a bias correction technique based on the simulation extrapolation (SIMEX) method for SL linking in the 2PL model in the presence of uniform differential item functioning (DIF). The SIMEX-based method is compared to analytical bias correction methods for SL linking. A simulation study showed that SIMEX-based SL linking performed best, is easy to implement, and can be adapted to other linking methods straightforwardly.
Full article
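SIMEX itself is generic: re-estimate the target quantity on data with extra simulated error at several inflation levels lam >= 0, model the estimate as a function of lam, and extrapolate back to lam = -1 (no measurement error). Below is a minimal sketch with linear extrapolation; applied work, including linking studies, often fits a quadratic extrapolant instead.

```python
import random

def _linefit(xs, ys):
    # Ordinary least-squares line y = a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def simex(estimate, add_noise, data, lambdas=(0.5, 1.0, 1.5, 2.0), reps=50, seed=0):
    """Generic SIMEX: 'estimate' maps a dataset to a scalar; 'add_noise'
    returns the data with simulated error inflated to level lam."""
    rng = random.Random(seed)
    xs, ys = [0.0], [estimate(data)]
    for lam in lambdas:
        sims = [estimate(add_noise(data, lam, rng)) for _ in range(reps)]
        xs.append(lam)
        ys.append(sum(sims) / len(sims))
    a, b = _linefit(xs, ys)
    return a - b  # extrapolate the fitted trend to lam = -1
```

For SL linking, `estimate` would compute the linking constant from DIF-perturbed item parameters; both callbacks here are placeholders supplied by the user.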
Open AccessArticle
Comparative Analysis of Nature-Inspired Metaheuristic Techniques for Optimizing Phishing Website Detection
by
Thomas Nagunwa
Analytics 2024, 3(3), 344-367; https://doi.org/10.3390/analytics3030019 - 6 Aug 2024
Abstract
The increasing number, frequency, and sophistication of phishing website-based attacks necessitate the development of robust solutions for detecting phishing websites to enhance the overall security of cyberspace. Drawing inspiration from natural processes, nature-inspired metaheuristic techniques have been proven to be efficient in solving complex optimization problems in diverse domains. Following these successes, this research paper aims to investigate the effectiveness of metaheuristic techniques, particularly Genetic Algorithms (GAs), Differential Evolution (DE), and Particle Swarm Optimization (PSO), in optimizing the hyperparameters of machine learning (ML) algorithms for detecting phishing websites. Using multiple datasets, six ensemble classifiers were trained on each dataset and their hyperparameters were optimized using each metaheuristic technique. As a baseline for assessing performance improvement, the classifiers were also trained with the default hyperparameters. To validate the genuine impact of the techniques over the use of default hyperparameters, we conducted statistical tests on the accuracy scores of all the optimized classifiers. The results show that the GA is the most effective technique, by improving the accuracy scores of all the classifiers, followed by DE, which improved four of the six classifiers. PSO was the least effective, improving only one classifier. It was also found that GA-optimized Gradient Boosting, LGBM and XGBoost were the best classifiers across all the metrics in predicting phishing websites, achieving peak accuracy scores of 98.98%, 99.24%, and 99.47%, respectively.
Full article
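As an illustration of the overall approach (not the paper's implementation), here is a minimal genetic algorithm over a discrete hyperparameter grid; `score` would wrap the cross-validated accuracy of a classifier, and all defaults are arbitrary.

```python
import random

def genetic_search(score, space, pop_size=8, generations=10, mut_rate=0.2, seed=0):
    """Maximise 'score' over configurations drawn from 'space', a dict
    mapping each hyperparameter name to its list of candidate values."""
    rng = random.Random(seed)
    keys = sorted(space)
    pop = [{k: rng.choice(space[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice((a[k], b[k])) for k in keys}  # uniform crossover
            if rng.random() < mut_rate:                          # point mutation
                k = rng.choice(keys)
                child[k] = rng.choice(space[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=score)
```

DE and PSO follow the same evaluate-and-update loop but replace crossover and mutation with vector differences and velocity updates, respectively.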
Open AccessArticle
A Longitudinal Tree-Based Framework for Lapse Management in Life Insurance
by
Mathias Valla
Analytics 2024, 3(3), 318-343; https://doi.org/10.3390/analytics3030018 - 5 Aug 2024
Cited by 1
Abstract
Developing an informed lapse management strategy (LMS) is critical for life insurers to improve profitability and gain insight into the risk of their global portfolio. Prior research in actuarial science has shown that targeting policyholders by maximising their individual customer lifetime value is more advantageous than targeting all those likely to lapse. However, most existing lapse analyses do not leverage the variability of features and targets over time. We propose a longitudinal LMS framework, utilising tree-based models for longitudinal data, such as left-truncated and right-censored (LTRC) trees and forests, as well as mixed-effect tree-based models. Our methodology provides time-informed insights, leading to increased precision in targeting. Our findings indicate that the use of longitudinally structured data significantly enhances the precision of models in predicting lapse behaviour, estimating customer lifetime value, and evaluating individual retention gains. The implementation of mixed-effect random forests enables the production of time-varying predictions that are highly relevant for decision-making. This paper contributes to the field of lapse analysis for life insurers by demonstrating the importance of exploiting the complete past trajectory of policyholders, which is often available in insurers’ information systems but has yet to be fully utilised.
Full article
(This article belongs to the Special Issue Business Analytics and Applications)
Open AccessArticle
Enhancing Talent Recruitment in Business Intelligence Systems: A Comparative Analysis of Machine Learning Models
by
Hikmat Al-Quhfa, Ali Mothana, Abdussalam Aljbri and Jie Song
Analytics 2024, 3(3), 297-317; https://doi.org/10.3390/analytics3030017 - 15 Jul 2024
Abstract
In the competitive field of business intelligence, optimizing talent recruitment through data-driven methodologies is crucial for better decision-making. This study compares the effectiveness of various machine learning models to improve recruitment accuracy and efficiency. Using the recruitment data from a major Yemeni organization (2019–2022), we evaluated models including K-Nearest Neighbors, Logistic Regression, Support Vector Machine, Naive Bayes, Decision Trees, Random Forest, Gradient Boosting Classifier, AdaBoost Classifier, and Neural Networks. Hyperparameter tuning and cross-validation were used for optimization. The Random Forest model achieved the highest accuracy (92.8%), followed by Neural Networks (92.6%) and Gradient Boosting Classifier (92.5%). These results suggest that advanced machine learning models, particularly Random Forest and Neural Networks, can significantly enhance the recruitment processes in business intelligence systems. This study provides valuable insights for recruiters, advocating for the integration of sophisticated machine learning techniques in talent acquisition strategies.
Full article
Open AccessCommunication
Modeling Sea Level Rise Using Ensemble Techniques: Impacts on Coastal Adaptation, Freshwater Ecosystems, Agriculture and Infrastructure
by
Sambandh Bhusan Dhal, Rishabh Singh, Tushar Pandey, Sheelabhadra Dey, Stavros Kalafatis and Vivekvardhan Kesireddy
Analytics 2024, 3(3), 276-296; https://doi.org/10.3390/analytics3030016 - 5 Jul 2024
Abstract
Sea level rise (SLR) is a crucial indicator of climate change, primarily driven by greenhouse gas emissions and the subsequent increase in global temperatures. The impact of SLR, however, varies regionally due to factors such as ocean bathymetry, resulting in distinct shifts across different areas compared to the global average. Understanding the complex factors influencing SLR across diverse spatial scales, along with the associated uncertainties, is essential. This study focuses on the East Coast of the United States and Gulf of Mexico, utilizing historical SLR data from 1993 to 2023. To forecast SLR trends from 2024 to 2103, a weighted ensemble model comprising SARIMAX, LSTM, and exponential smoothing models was employed. Additionally, using historical greenhouse gas data, an ensemble of LSTM models was used to predict real-time SLR values, achieving a testing loss of 0.005. Furthermore, conductance and dissolved oxygen (DO) values were assessed for the entire forecasting period, leveraging forecasted SLR trends to evaluate the impacts on marine life, agriculture, and infrastructure.
Full article
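The abstract does not state how the SARIMAX, LSTM, and exponential-smoothing forecasts were weighted, so the sketch below shows only the generic fixed-weight combination such an ensemble reduces to; the model names in the example are placeholders.

```python
def weighted_ensemble(forecasts, weights):
    """Combine per-model forecast series into one series.
    forecasts: {model: [y_1, ..., y_H]}; weights: {model: w >= 0}.
    Weights are normalised to sum to one."""
    total = sum(weights.values())
    horizon = len(next(iter(forecasts.values())))
    return [
        sum(weights[m] * forecasts[m][t] for m in forecasts) / total
        for t in range(horizon)
    ]
```

In practice, the weights would be chosen on a validation window, e.g. inversely proportional to each model's recent forecast error.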
Open AccessArticle
TaskFinder: A Semantics-Based Methodology for Visualization Task Recommendation
by
Darius Coelho, Bhavya Ghai, Arjun Krishna, Maria Velez-Rojas, Steve Greenspan, Serge Mankovski and Klaus Mueller
Analytics 2024, 3(3), 255-275; https://doi.org/10.3390/analytics3030015 - 4 Jul 2024
Abstract
Data visualization has entered the mainstream, and numerous visualization recommender systems have been proposed to assist visualization novices, as well as busy professionals, in selecting the most appropriate type of chart for their data. Given a dataset and a set of user-defined analytical tasks, these systems can make recommendations based on expert coded visualization design principles or empirical models. However, the need to identify the pertinent analytical tasks beforehand still exists and often requires domain expertise. In this work, we aim to automate this step with TaskFinder, a prototype system that leverages the information available in textual documents to understand domain-specific relations between attributes and tasks. TaskFinder employs word vectors as well as a custom dependency parser along with an expert-defined list of task keywords to extract and rank associations between tasks and attributes. It pairs these associations with a statistical analysis of the dataset to filter out tasks irrelevant given the data. TaskFinder ultimately produces a ranked list of attribute–task pairs. We show that the number of domain articles needed to converge to a recommendation consensus is bounded for our approach. We demonstrate our TaskFinder over multiple domains with varying article types and quantities.
Full article
Open AccessArticle
Customer Sentiments in Product Reviews: A Comparative Study with GooglePaLM
by
Olamilekan Shobayo, Swethika Sasikumar, Sandhya Makkar and Obinna Okoyeigbo
Analytics 2024, 3(2), 241-254; https://doi.org/10.3390/analytics3020014 - 18 Jun 2024
Cited by 2
Abstract
In this work, we evaluated the efficacy of Google’s Pathways Language Model (GooglePaLM) in analyzing sentiments expressed in product reviews. Although conventional Natural Language Processing (NLP) techniques such as the rule-based Valence Aware Dictionary for Sentiment Reasoning (VADER) and the long-sequence Bidirectional Encoder Representations from Transformers (BERT) model are effective, they frequently encounter difficulties when dealing with intricate linguistic features like sarcasm and contextual nuances commonly found in customer feedback. We performed sentiment analysis on Amazon’s fashion review datasets using the VADER, BERT, and GooglePaLM models, respectively, and compared the results based on evaluation metrics such as precision, recall, accuracy, correct positive prediction, and correct negative prediction. We used the default values of the VADER and BERT models and slightly fine-tuned GooglePaLM with a temperature of 0.0 and an N-value of 1. We observed that GooglePaLM performed best, with correct positive and negative prediction values of 0.91 and 0.93, respectively, followed by BERT and VADER. We concluded that large language models surpass traditional rule-based systems for natural language processing tasks.
Full article
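The abstract's "correct positive prediction" and "correct negative prediction" read most naturally as the per-class recall rates (TPR and TNR); that interpretation, and the function below, are ours rather than definitions from the paper.

```python
def sentiment_metrics(y_true, y_pred):
    """Binary confusion-matrix metrics; label 1 = positive sentiment."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "correct_positive": tp / (tp + fn) if tp + fn else 0.0,  # TPR
        "correct_negative": tn / (tn + fp) if tn + fp else 0.0,  # TNR
    }
```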
Open AccessFeature PaperArticle
Improving the Giant-Armadillo Optimization Method
by
Glykeria Kyrou, Vasileios Charilogis and Ioannis G. Tsoulos
Analytics 2024, 3(2), 225-240; https://doi.org/10.3390/analytics3020013 - 10 Jun 2024
Cited by 1
Abstract
Global optimization is widely applied at present to a variety of practical and scientific problems. In this context, evolutionary techniques form a widely used group of methods. A relatively new evolutionary technique in this direction is Giant-Armadillo Optimization, which is based on the hunting strategy of giant armadillos. In this paper, modifications to this technique are proposed, such as the periodic application of a local minimization method as well as the use of modern termination techniques based on statistical observations. The proposed modifications have been tested on a wide series of test functions available in the relevant literature and compared against other evolutionary methods.
Full article
Open AccessEditorial
Beyond the ROC Curve: The IMCP Curve
by
Jesus S. Aguilar-Ruiz
Analytics 2024, 3(2), 221-224; https://doi.org/10.3390/analytics3020012 - 27 May 2024
Cited by 2
Abstract
The ROC curve [...]
Full article
Open AccessArticle
Interconnected Markets: Unveiling Volatility Spillovers in Commodities and Energy Markets through BEKK-GARCH Modelling
by
Tetiana Paientko and Stanley Amakude
Analytics 2024, 3(2), 194-220; https://doi.org/10.3390/analytics3020011 - 16 Apr 2024
Abstract
Food commodities and energy bills have experienced rapid undulating movements and price hikes globally in recent times. This spurred this study to examine the possibility that shocks arising from fluctuations in one market spill over to the other, and to determine how the spillovers varied over time. The data were daily-frequency prices of grains and energy products from 1 July 2019 to 31 December 2022, as quoted in the markets. This period was chosen to capture the COVID pandemic and the Russian–Ukrainian war as events that could impact volatility. Returns were calculated in spreadsheets and subjected to ADF stationarity tests, co-integration tests, and full BEKK-GARCH estimation. The results revealed a prolonged association between returns in the energy markets and food commodity market returns. Both markets were found to have volatility persistence individually, and a time-varying bidirectional transmission of volatility across the markets was found. No lagged-effects spillover was found from one market to the other. The findings confirm that shocks emanating from fluctuations in energy markets impact the volatility of prices in food commodity markets and vice versa, but this impact occurs immediately after the shocks arise, on the same day the variation occurs.
(This article belongs to the Special Issue Business Analytics and Applications)
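The return-calculation and stationarity step described in the abstract can be sketched in Python. Everything below is illustrative: the prices are a simulated random walk (not the paper's market quotes), and the check is an unaugmented Dickey-Fuller regression, a simplification of the full ADF and BEKK-GARCH pipeline the authors use.

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated daily prices as a geometric random walk (stand-in for actual quotes).
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 500)))
log_returns = np.diff(np.log(prices))  # r_t = ln(P_t / P_{t-1})

# Unaugmented Dickey-Fuller check: regress delta r_t on r_{t-1};
# a strongly negative t-statistic suggests the return series is stationary.
dr = np.diff(log_returns)              # length 498
lag = log_returns[:-1]                 # length 498, aligned with dr
X = np.column_stack([np.ones_like(lag), lag])
beta, *_ = np.linalg.lstsq(X, dr, rcond=None)
resid = dr - X @ beta
s2 = resid @ resid / (len(dr) - X.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se
print(f"DF t-statistic on the lagged level: {t_stat:.2f}")
```

For i.i.d. simulated returns the t-statistic is far below the Dickey-Fuller critical values, i.e. the unit-root hypothesis is rejected, as one would expect before fitting a GARCH-family model.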
Open Access Article
Learner Engagement and Demographic Influences in Brazilian Massive Open Online Courses: Aprenda Mais Platform Case Study
by Júlia Marques Carvalho da Silva, Gabriela Hahn Pedroso, Augusto Basso Veber and Úrsula Gomes Rosa Maruyama
Analytics 2024, 3(2), 178-193; https://doi.org/10.3390/analytics3020010 - 3 Apr 2024
Abstract
This paper explores the dynamics of student engagement and demographic influences in Massive Open Online Courses (MOOCs). The study analyzes multiple facets of Brazilian MOOC participation, including re-enrollment patterns, course completion rates, and the impact of demographic characteristics on learning outcomes. Using survey data and statistical analyses from the public Aprenda Mais Platform, this study reveals that MOOC learners exhibit a strong tendency toward continuous learning, with a majority re-enrolling in subsequent courses within a short timeframe. The average completion rate across courses is 42.14%, with learners maintaining consistent academic performance. Demographic factors, notably race/color and disability, are found to influence enrollment and completion rates, underscoring the importance of inclusive educational practices. Geographical location impacts students’ decisions to enroll in and complete courses, highlighting the necessity for region-specific educational strategies. The research concludes that a diverse array of factors, including content interest, personal motivation, and demographic attributes, shape student engagement in MOOCs. These insights are vital for educators and course designers in creating effective, inclusive, and engaging online learning experiences.
(This article belongs to the Special Issue New Insights in Learning Analytics)
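The completion-rate figure reported in the abstract is a simple ratio over enrollment records. A minimal sketch of how per-course and overall rates could be tallied; the records and field names below are invented for illustration and are not the Aprenda Mais schema:

```python
from collections import defaultdict

# Hypothetical enrollment records; field names are illustrative only.
enrollments = [
    {"learner": "a", "course": "Python Basics", "completed": True},
    {"learner": "b", "course": "Python Basics", "completed": False},
    {"learner": "a", "course": "Data Analysis", "completed": True},
    {"learner": "c", "course": "Data Analysis", "completed": False},
    {"learner": "d", "course": "Data Analysis", "completed": False},
]

totals, done = defaultdict(int), defaultdict(int)
for e in enrollments:
    totals[e["course"]] += 1
    done[e["course"]] += int(e["completed"])

rates = {c: done[c] / totals[c] for c in totals}
overall = sum(done.values()) / sum(totals.values())
print(rates)              # per-course completion rates
print(f"{overall:.2%}")   # overall completion rate -> 40.00%
```

The same aggregation extends naturally to grouping by demographic attributes (race/color, disability, region) to reproduce the kind of breakdowns the study reports.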
Open Access Article
Optimal Matching with Matching Priority
by Massimo Cannas and Emiliano Sironi
Analytics 2024, 3(1), 165-177; https://doi.org/10.3390/analytics3010009 - 19 Mar 2024
Abstract
Matching algorithms are commonly used to build comparable subsets (matchings) in observational studies. When a complete matching is not possible, some units must necessarily be excluded from the final matching. This may bias the final estimates comparing the two populations, and thus it is important to reduce the number of drops to avoid unsatisfactory results. Greedy matching algorithms may not reach the maximum matching size, thus dropping more units than necessary. Optimal matching algorithms do ensure a maximum matching size, but they implicitly assume that all units have the same matching priority. In this paper, we propose a matching strategy which is order optimal in the sense that it finds a maximum matching size which is consistent with a given matching priority. The strategy is based on an order-optimal matching algorithm originally proposed in connection with assignment problems by D. Gale. When a matching priority is given, the algorithm ensures that the discarded units have the lowest possible matching priority. We discuss the algorithm’s complexity and its relation with classic optimal matching. We illustrate its use with a simulation and a case study comparing female and male executives.
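One way to realize the "maximum matching consistent with a priority order" idea is to run augmenting-path bipartite matching (Kuhn's algorithm) over treated units in priority order: once a unit is matched it never becomes unmatched, so any unavoidable drops fall on the lowest-priority units. This is a generic sketch of that property, not Gale's algorithm as presented in the paper, and the toy compatibility graph below is invented:

```python
def priority_matching(edges, n_right, priority):
    """edges[u]: right-side nodes compatible with left-side unit u;
    priority: left-side units, highest matching priority first."""
    match_right = [None] * n_right  # right node -> matched left unit

    def try_augment(u, seen):
        # Standard augmenting-path search (Kuhn's algorithm).
        for v in edges[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] is None or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    matched = []
    for u in priority:           # high-priority units are matched first
        if try_augment(u, set()):
            matched.append(u)
    return matched, match_right

# Toy example: units 0 and 1 compete for control 0; unit 1 can also take control 1.
edges = {0: [0], 1: [0, 1], 2: [1]}
matched, match_right = priority_matching(edges, n_right=2, priority=[0, 1, 2])
print(matched, match_right)  # unit 2, the lowest-priority unit, is dropped
```

Because every unit gets a full augmenting-path search, the final matching has maximum size, while the priority loop guarantees the discarded units are the lowest-priority ones.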
Special Issues
Special Issue in Analytics: Visual Analytics: Techniques and Applications
Guest Editors: Katerina Vrotsou, Kostiantyn Kucher; Deadline: 31 March 2025
Special Issue in Analytics: Advances in Applied Data Science: Bridging Theory and Practice
Guest Editor: R. Jordan Crouser; Deadline: 31 March 2025
Special Issue in Analytics: Business Analytics and Applications
Guest Editors: Tatiana Ermakova, Benjamin Fabian; Deadline: 31 August 2025